Multinode installation#
Before you begin#
Before continuing, ensure that you have completed the OVA installation on each node, following the steps and preliminary requirements in Platform install OVA on a VM, and the node roles indicated in Role of each VM installation for multi-node installation.
Optionally download or extract language pack template files to support languages other than English.
Note
For a geo-redundant multinode cluster deployment with six unified nodes, there are four unified nodes in the primary site and two unified nodes in the disaster recovery (DR) site, in an active-standby setup.
The worker count (voss workers command) needs to be set on the DR nodes. Refer to:
For a two-node cluster deployment, there are two unified nodes.
Template installation and upgrade takes approximately two hours. You can follow the progress on the Admin Portal transaction list.
It is strongly recommended that customer end-users are not allowed the same level of administrator access as the restricted groups of provider and customer administrators. This is why separate Self-service and Administrator web proxies should be used.
Systems with Self-service-only web proxies are recommended only where the system is customer facing but the customer does not administer the system themselves.
For cluster installations, also refer to the Health Checks for Cluster Installations Guide.
If it is necessary to change an IP address of a node in a cluster, first remove it from the cluster by running the command below on the node to be changed:
cluster del <IP address of node to be changed>
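For example, assuming a hypothetical node address of 192.0.2.15 (illustrative only), the command would be:
# 192.0.2.15 is a placeholder; substitute the actual IP of the node being changed
cluster del 192.0.2.15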
Refer to Inspect the logs to troubleshoot installation for troubleshooting logs during an installation.
Before installing from release 24.2 onwards, ensure that an additional 70 GB disk has been made available for the Insights database.
See: Adding Hard Disk Space and Automate Hardware Specifications. This disk is needed for assignment to the insights-voss-sync:database mount point. See the final installation step below.
The standard tmux command should be used where indicated.
See: Using the tmux command.
Installation#
Step 1: Install VMWare tools#
Install VMware tools on each node:
Log in to each node, then run the following command:
app install vmware
Verify that vmware is running:
app list
Step 2: Add nodes to the cluster#
Prepare each node to be added to the cluster:
Select a primary unified node that will become the primary database node. The designation of primary unified node is arbitrary; the deploying administrator can pick any unified node they see fit.
On each web proxy and unified node, excluding the primary node, run:
cluster prepnode
Add nodes to the cluster.
Log in to the selected primary unified node.
Add the unified and web proxy nodes to the cluster:
cluster add <ip_addr>
Note
You don’t need to add the selected primary node to the cluster. It will automatically be added to the cluster.
Verify the list of nodes in the cluster:
cluster list
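As a minimal sketch, assuming hypothetical addresses 192.0.2.11 and 192.0.2.12 for the remaining unified nodes and 192.0.2.21 for a web proxy node, the sequence on the selected primary unified node might look like this:
# all addresses below are placeholders; substitute your own node IPs
cluster add 192.0.2.11
cluster add 192.0.2.12
cluster add 192.0.2.21
cluster list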
Step 3: Add network domain and check network#
Add the network domain (optional; only needed if a domain name is required):
From the selected primary unified node:
Configure the domain:
cluster run all network domain <domain_name>
Verify the configured network domain:
cluster run all network domain
Each node shows the domain that you configured.
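For example, assuming a hypothetical domain name of voss.example.com:
# voss.example.com is an example domain; substitute your own
cluster run all network domain voss.example.com
cluster run all network domain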
Check the network:
From the selected primary unified node, run:
cluster check
This verifies the status of the cluster, network connectivity, disk status, and NTP.
Since database weights are not yet added, you can ignore the following errors:
database: not configured
Verification of database weights should be done when the cluster check command is run during the step following the provisioning step.
If a cluster is not yet provisioned, you can ignore port 443 errors from web proxies triggered by the cluster check command.
If the cluster check command triggers errors on other ports, for example, port 27020, you can run the following command to verify that the firewall service has started:
cluster run database app start services:firewall --force
Verify the DNS configuration:
cluster run all network dns
Each node responds with the DNS server address.
Step 4: Create a restore point#
As part of the rollback procedure, ensure that a suitable restore point is obtained prior to the start of the activity, as per the guidelines for the infrastructure on which the Automate platform is deployed.
Step 5: Configure the cluster#
To configure the cluster:
Use the following command to provide a weight for each database server:
database weight add <database_ip> <priority>
Recommended weights:
For two unified nodes, weights of 40, 30 are recommended
For four unified nodes, weights of 40, 30, 20, and 10 are recommended
For six unified nodes, weights of 60, 50, 40, 30, 20, and 10 are recommended
Higher values are prioritized.
Weights used for multinode cluster deployment with four unified nodes in a geo-redundant system containing two data center infrastructures in two physical locations:
For the primary node at the primary site, specify a weight of 40
For the secondary node at the primary site, specify a weight of 30
For the secondary nodes at the DR site, specify weights of 20 and 10
Weights used for multinode cluster deployment with six unified nodes in a geo-redundant system containing two data center infrastructures in two physical locations:
For the primary node at the primary site, specify a weight of 60
For the secondary nodes at the primary site, specify weights of 50, 40, and 30
For the secondary nodes at the DR site, specify weights of 20 and 10
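As an illustration, for a four-unified-node geo-redundant cluster with hypothetical database node addresses 192.0.2.10 and 192.0.2.11 at the primary site and 192.0.2.20 and 192.0.2.21 at the DR site, the weight commands might be:
# all addresses are placeholders; substitute the IPs of your unified nodes
database weight add 192.0.2.10 40
database weight add 192.0.2.11 30
database weight add 192.0.2.20 20
database weight add 192.0.2.21 10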
Now set up the selected primary unified node as the primary node of the cluster.
Run this step in a terminal opened with the tmux command.
Run:
tmux
Run:
cluster provision
For two web proxies and four unified nodes, allow approximately two hours for the operation to complete.
When provisioning is complete, use the following command to check that each node is contactable and that the time server is running on each:
cluster check
Restart any stopped services:
cluster run <node_ip> app start
Is provisioning successful?
Yes. Type exit in the terminal to close the tmux session.
No. If there are errors, keep the tmux terminal open for troubleshooting purposes, and then contact VOSS support.
(Optional) If required, set the web weights configurations (Active-Active, Active-Standby, Single node cluster).
From the primary unified node, run the required web weight commands for the web proxy nodes.
For details, refer to Multi data center deployments and the Automate Best Practices Guide.
(Optional) If required, enable or disable Self-service or admin web services on the web proxy nodes.
Note
This may be needed for security purposes.
Commands must be run on the relevant web proxy node, and only on a cluster (not on a single node cluster system).
The commands will automatically reconfigure and restart the nginx process, so some downtime will result. Request URLs to a disabled service will redirect the user to the active service.
To disable or enable admin or Self-service web services on the web proxy node:
web service disable <selfservice|admin>
web service enable <selfservice|admin>
To list web services on the web proxy node:
web service list
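For example, to offer only the Self-service interface on a customer-facing web proxy node, a sketch of the sequence might be:
# run on the relevant web proxy node; disables the admin service there
web service disable admin
web service list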
Step 6: Create a restore point#
As part of the rollback procedure, ensure that a suitable restore point is obtained prior to the start of the activity, as per the guidelines for the infrastructure on which the Automate platform is deployed.
Step 7: Initialize the database#
To initialize the database and clear all data, run the following command on the primary unified node:
voss cleardown
This step may take some time.
To monitor progress, in a separate console on the primary unified node, you can run
either log follow upgrade_db.log or log follow voss-deviceapi/app.log.
Step 8: Import the templates#
To import the templates:
Copy the Automate template file to the primary unified node:
scp <VOSS Automate_template_file> platform@<unified_node_ip_address>:~/media
Log in to the primary unified node and install the template.
Run this step in a terminal opened with the tmux command.
Run:
tmux
Run:
app template media/<VOSS Automate_template_file>
View console message:
Deploying the template-product for VOSS Automate <<RELEASE_VERSION>> ...
At the prompt to select the product deployment type, choose an option:
Enterprise
Provider
For information on the “Insights Netflow” deployment type when installing release 24.2, contact VOSS.
Depending on the deployment type selected, at the prompt, fill out and verify the following:
A top-level administrator password:
Please enter a password for "sysadmin"
And one administrator password - depending on the deployment:
For Enterprise deployment:
Please enter a password for "entadmin"
For Provider deployment:
Please enter a password for "hcsadmin"
At installation, the password should be at least 8 characters long.
Deployment-specific artifacts are installed according to the selected product deployment type. Depending on the deployment type, the console displays one of the following messages:
"Importing EnterpriseOverlay.json"
"Importing ProviderOverlay.json"
Deployment-specific system artifacts are imported, and the following message displays:
Deployment-specific Overlay artifacts successfully imported.
Python functions are deployed.
System artifacts are imported.
At the prompt, provide administrator passwords.
The template install automatically restarts the necessary applications. On a cluster, the installation propagates changes throughout the cluster.
Review the output from the app template command, and confirm that the install message displays:
Deployment summary of UPDATED template solution (i.e. current values after installation):
-----------------------------------------------------------------------------------------
Product: [PRODUCT]
Version: [UPDATED PRODUCT RELEASE]
Iteration-version: [UPDATED ITERATION]
Platform-version: [UPDATED PLATFORM VERSION]
You can also monitor the template installation from the Admin Portal transaction list.
Are there errors?
No. As part of the rollback procedure, ensure that a suitable restore point is obtained prior to the start of the activity, as per the guidelines for the infrastructure on which the Automate platform is deployed.
Yes. The install script stops with a failure message describing the problem. Contact VOSS Support.
Step 9: (Optional) Install language template#
This step is optional and required only for installing language templates for languages other than English.
Copy the language template file to the primary unified node:
scp <language_template_file> platform@<unified_node_ip_address>:~/media
Log in to the primary unified node and install the template:
app template media/<language_template_file>
For example, to install French:
app template media/VOSS AutomateLanguagePack_fr-fr.template
There is no need to run this command on all nodes.
Step 10: (Optional) Install Automate Phone-based registration#
This step is optional and required only if the Automate Phone Based Registration Add-on is required.
If required, follow installation instructions in the Appendix of the Automate Core Feature Guide: Install the Phone Based Registration Web Service
Step 11: device/cucm/HuntPilot#
Run the following command:
voss migrate_summary_attributes device/cucm/HuntPilot
Step 12: License the installation#
From release 21.4 onwards, the deployment needs to be licensed. After installation, a 7-day grace period is available to license the product.
Obtain the required license token from VOSS.
License through the GUI or CLI:
To license through the GUI, follow steps indicated in Product License Management in the Core Feature Guide.
To license through the CLI, follow steps indicated in Product Licensing in the Automate Platform Guide.
Step 13: Mount the Insights database drive#
On each unified node, assign the insights-voss-sync:database mount point to the drive added for the Insights database prior to installation.
For example, if drives list shows the added disk as:
Unused disks:
sde
Then run the following command on each unified node where the drive has been added:
drives add sde insights-voss-sync:database
Sample output (the message below can be ignored on release 24.1):
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
$ drives add sde insights-voss-sync:database
Configuration setting "devices/scan_lvs" unknown.
Configuration setting "devices/allow_mixed_block_sizes" unknown.
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
71ad98e0-7622-49ad-9fg9-db04055e82bc
Application insights-voss-sync processes stopped.
Migrating data to new drive - this can take several minutes
Data migration complete - reassigning drive
Checking that /dev/sde1 is mounted
Checking that /dev/dm-0 is mounted
/opt/platform/apps/mongodb/dbroot
Checking that /dev/sdc1 is mounted
/backups
Application services:firewall processes stopped.
Reconfiguring applications...
Application insights-voss-sync processes started.