Modular architecture multinode installation#
Overview#
Note
A modular architecture installation is not supported for a single node cluster (“cluster of one”) topology.
Before installing from release 24.2 onwards, ensure that an additional 70 GB disk has been made available for the Insights database.
See: Adding Hard Disk Space and VOSS Automate Hardware Specifications. This disk is needed for the insights-voss-sync:database mount point. See the final installation step below.
Before you start#
Before continuing, you should have followed the OVA installation on each node according to the steps and preliminary requirements specified in: Platform install on a VM and according to the node roles as indicated in Role of each VM installation for multi-node installation. Data center names are also selected at this stage.
For example, for an 8-node modular cluster in 2 data centers:
DC1 = primary site or data center containing primary database node (highest database weight)
DC2 = disaster recovery (DR) data center
Install:
3 nodes with Database roles (2 in DC1, 1 in DC2)
3 nodes with Application roles (2 in DC1, 1 in DC2)
2 nodes with WebProxy roles (1 in DC1, 1 in DC2)
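As a quick sanity check, the example layout above can be expressed as a small inventory script. All IP addresses below are hypothetical placeholders, not values from this guide:

```shell
#!/bin/sh
# Hypothetical inventory for the 8-node, 2-data-center example.
# Format: <ip> <role> <dc> -- all IPs are placeholders.
inventory="
192.168.10.11 database DC1
192.168.10.12 database DC1
192.168.20.11 database DC2
192.168.10.21 application DC1
192.168.10.22 application DC1
192.168.20.21 application DC2
192.168.10.31 webproxy DC1
192.168.20.31 webproxy DC2
"
# Count nodes per role to confirm the layout matches the plan: 3/3/2.
db_count=$(echo "$inventory" | grep -c ' database ')
app_count=$(echo "$inventory" | grep -c ' application ')
proxy_count=$(echo "$inventory" | grep -c ' webproxy ')
echo "database=$db_count application=$app_count webproxy=$proxy_count"
# prints: database=3 application=3 webproxy=2
```

Adjust the addresses and role counts to match your own deployment plan before proceeding.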
Optionally download or extract language pack template files to support languages other than English.
Note
For typical modular geo-redundant multinode cluster deployment with 3 database and 3 application nodes, there are:
two application nodes in the primary Site
two database nodes in the primary Site
one application node in the Disaster Recovery (DR) Site
one database node in the Disaster Recovery (DR) Site
The worker count (voss workers command) needs to be set on the DR nodes. Refer to:
NAT between nodes is not allowed
If there is a firewall between nodes, then specific ports must be configured. For port configuration, refer to:
Template installation and upgrade takes approximately two hours. You can follow the progress on the Admin Portal transaction list.
It is strongly recommended not to grant customer end-users the same level of administrator access as the restricted provider and customer administrator groups. For this reason, separate Self-service and Administrator web proxies should be used.
Systems with Self-service-only web proxies are only recommended where the system is customer facing, but where the customer does not administer the system themselves.
For cluster installations, also refer to the Health Checks for Cluster Installations Guide.
If it is necessary to change the IP address of a node in a cluster, first remove it from the cluster by running the command below on the node to be changed:
cluster del <IP address of node to be changed>
Refer to Inspect the logs to troubleshoot installation for troubleshooting logs during an installation.
The standard tmux command should be used where indicated. Refer to Using the tmux command.
Install modular multinode#
Step 1: Install VMware tools and add nodes to the cluster#
Start by installing VMware tools on each node, then prepare each node to be added to the cluster, then add nodes to the cluster.
Install VMware tools on each node.
Log in to each node, then run:
app install vmware
Verify that vmware is running:
app list
Prepare each node to be added to the cluster:
Select a database node that will become the primary database node.
Note
The primary site or data center will contain the primary database node. The deploying admin user can choose any database node they prefer.
On each node, run:
cluster prepnode
Add nodes to the cluster.
Log in to the selected primary database node.
Run the following command to add the other database, application, and web proxy nodes to the cluster:
cluster add <ip_addr>
Note
You won’t need to add the selected primary database node to the cluster as it is added to the cluster automatically.
Run the following command to verify the list of nodes in the cluster:
cluster list
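The add-and-verify step above can be sketched in shell. All IP addresses are hypothetical placeholders, and the cluster add commands are printed rather than executed so the sketch can run outside a VOSS node:

```shell
#!/bin/sh
# Sketch: from the primary database node, add every other node to the
# cluster. IPs are hypothetical placeholders; "cluster add" is the
# documented platform command, printed here rather than executed.
nodes="192.168.10.12 192.168.20.11 192.168.10.21 192.168.10.22 192.168.20.21 192.168.10.31 192.168.20.31"
for ip in $nodes; do
  echo "cluster add $ip"
done
# Then verify membership on the primary node with: cluster list
```

The primary database node itself is deliberately absent from the list, since it joins the cluster automatically.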
Step 2: Add network domain and check network#
Add the network domain (optional; required only if a domain name is needed).
From the selected primary database node:
Configure the domain:
cluster run all network domain <domain_name>
Verify the configured network domain:
cluster run all network domain
Each node shows the domain you configured.
Check the network:
From the selected primary database node, run the following command to verify the status of the cluster, network connectivity, disk status, and NTP:
cluster check
Verify the DNS configuration:
cluster run all network dns
Each node responds with the DNS server address.
Create a restore point.
As part of the rollback procedure, ensure that a suitable restore point is obtained prior to the start of the activity, as per the guidelines for the infrastructure on which the Automate platform is deployed.
Step 3: Configure the cluster#
From the selected primary database node, use the following command to provide a weight for each database server:
database weight add <database_ip> <priority>
Recommended weights are as follows (the higher the value, the higher the priority):
For the primary database node at the primary site (DC1), a weight of 40 is recommended
For the secondary database node at the primary site (DC1), a weight of 30 is recommended
For the secondary database node at the disaster recovery (DR) site (DC2), a weight of 10 is recommended
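A minimal sketch of applying the recommended weights, assuming hypothetical node addresses; the database weight add commands are printed rather than executed:

```shell
#!/bin/sh
# Sketch: apply the recommended database weights (higher value = higher
# priority). IPs are hypothetical placeholders; "database weight add" is
# the documented command, printed here rather than executed.
set -- \
  "192.168.10.11 40" \
  "192.168.10.12 30" \
  "192.168.20.11 10"
for node_weight in "$@"; do
  echo "database weight add $node_weight"
done
```

Substitute your own database node addresses; the weights follow the 40/30/10 recommendation above.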
From the selected primary database node, run the provisioning step in a terminal opened with the tmux command:
Run tmux, then run:
cluster provision
For two web proxy and four database and application nodes, allow approximately two hours for the operation to complete.
When provisioning is complete, check that each node is contactable and that the time server is running on each:
cluster check
If a service is down, to restart the service, run:
cluster run <node_ip> app start
If provisioning is successful, type exit in the terminal to close the tmux session. If there are errors, keep the tmux terminal open for troubleshooting purposes, and contact VOSS support.
On each of the new application nodes, use the following command to set the queues to “2”:
voss queues 2
Once you run the command, applications are reconfigured and the voss-queue process is restarted.
(Optional) If required, set the web weights configurations (Active-Active, Active-Standby, Single node cluster).
From the primary database node, run the required web weight commands for the web proxy nodes. For details, refer to Multi data center deployments, and the Automate Best Practices Guide.
(Optional) If required, enable or disable Self-service or admin web services on the web proxy nodes. This may be needed, for example, for security purposes.
The commands must be run on the relevant web proxy node, and will automatically reconfigure and restart the nginx process. Some downtime will therefore result. Request URLs to a disabled service will redirect the user to the active service.
To disable or enable admin or Self-service web services on the web proxy node:
web service disable <selfservice|admin>
web service enable <selfservice|admin>
To list web services on the web proxy node:
web service list
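As an illustrative sketch only: which service to disable on a given proxy could be driven by a per-node role label. The label and mapping are hypothetical; web service disable is the documented command, printed here rather than executed:

```shell
#!/bin/sh
# Sketch: decide which web service to disable on a proxy from a
# hypothetical per-node role label. The command is printed, not run.
proxy_role="customer-facing"   # hypothetical label for this proxy node
case "$proxy_role" in
  customer-facing) cmd="web service disable admin" ;;
  admin-only)      cmd="web service disable selfservice" ;;
  *)               cmd="web service list" ;;
esac
echo "$cmd"
# prints: web service disable admin
```

Remember that the real command must be run on the relevant web proxy node itself, and restarts nginx.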
Create a restore point. As part of the rollback procedure, ensure that a suitable restore point is obtained prior to the start of the activity, as per the guidelines for the infrastructure on which the VOSS Automate platform is deployed.
Step 4: Initialize the database#
To initialize the database and clear all data:
On an application node, run the following command:
voss cleardown
This step may take some time. To monitor progress, in a separate console on the application node, run either:
log follow upgrade_db.log
or
log follow voss-deviceapi/app.log
Step 5: Import the templates#
To import the templates:
Run the following command to copy the Automate template file to an application node:
scp <VOSS Automate_template_file> platform@<app_node_ip_address>:~/media
Log in to this application node, and install the template. Run this step in a terminal opened with the tmux command:
tmux
app template media/<VOSS Automate_template_file>
View the console message:
Deploying the template-product for VOSS Automate <<RELEASE_VERSION>>
At the prompt, select the product deployment type, either of the following:
Enterprise
Provider
Note
Contact VOSS for details on the Insights Netflow deployment type when installing release 24.2.
At the prompt, provide your password, depending on the deployment type you chose:
Top-level admin password - fill out a password for sysadmin
Fill out a deployment-specific admin password:
For Enterprise, fill out the entadmin password
For Provider, fill out the hcsadmin password
Note
At install, password length must be at least 8 characters.
Deployment-specific artifacts are installed. View the system message that displays, either of the following:
“Importing EnterpriseOverlay.json”
“Importing ProviderOverlay.json”
Deployment-specific system artifacts are imported. View the system messages that display:
Deployment-specific Overlay artifacts successfully imported.
Python functions are deployed
System artifacts are imported.
At the prompt, provide administrator passwords.
The template install automatically restarts necessary applications. If a cluster, the installation propagates changes throughout the cluster.
Review the output from the app template commands and confirm that the install message appears:
Deployment summary of UPDATED template solution (i.e. current values after installation):
-----------------------------------------------------------------------------------------
Product: [PRODUCT]
Version: [UPDATED PRODUCT RELEASE]
Iteration-version: [UPDATED ITERATION]
Platform-version: [UPDATED PLATFORM VERSION]
You can also monitor the template installation from the Admin Portal transaction list.
Are there errors?
No. If there are no errors indicated, we recommend a suitable restore point is created as per the guidelines for the infrastructure on which the Automate platform is deployed.
Yes. The install script stops with a failure message describing the problem. Contact VOSS Support.
Step 6: (Optional) Install language templates#
This step is required only for installing language templates for languages other than English.
Copy the language template file to the selected application node:
scp <language_template_file> platform@<app_node_ip_address>:~/media
Log in to the application node and install the template:
app template media/<language_template_file>
For example, to install French:
app template media/VOSS AutomateLanguagePack_fr-fr.template
There is no need to run this command on all nodes.
Step 7: (Optional) Install Automate Phone-based Registration#
This step is optional and required only if the Automate Phone Based Registration Add-on is required. If required, follow the installation instructions in the Appendix of your Core Feature Guide: “Install the Phone Based Registration Web Service”
Step 8: Migrate device/cucm/HuntPilot summary attributes#
Run the following command:
voss migrate_summary_attributes device/cucm/HuntPilot
Step 9: License the installation#
From release 21.4 onwards, the deployment needs to be licensed. After installation, a 7-day grace period is available to license the product.
Obtain the required license token from VOSS.
License through the GUI or CLI:
License through the GUI? Follow steps indicated in Product License Management in the Core Feature Guide.
License through the CLI? Follow steps indicated in Product Licensing in the Platform Guide.
Step 10: Mount the Insights database drive#
On each database node, assign the insights-voss-sync:database mount point to the drive added for the Insights database prior to installation.
For example, if drives list shows the added disk as:
Unused disks: sde
Then run the following command on each database node where the drive has been added:
drives add sde insights-voss-sync:database
Sample output (the message below can be ignored on release 24.1):
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
$ drives add sde insights-voss-sync:database
Configuration setting "devices/scan_lvs" unknown.
Configuration setting "devices/allow_mixed_block_sizes" unknown.
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
71ad98e0-7622-49ad-9fg9-db04055e82bc
Application insights-voss-sync processes stopped.
Migrating data to new drive - this can take several minutes
Data migration complete - reassigning drive
Checking that /dev/sde1 is mounted
Checking that /dev/dm-0 is mounted
/opt/platform/apps/mongodb/dbroot
Checking that /dev/sdc1 is mounted
/backups
Application services:firewall processes stopped.
Reconfiguring applications...
Application insights-voss-sync processes started.
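The disk-assignment step above can be sketched as follows. The Unused disks line matches the example in this guide; the parsing logic is illustrative, and the drives add command is printed rather than executed:

```shell
#!/bin/sh
# Sketch: build the mount command from the "drives list" output. The
# "Unused disks" line is the example from this guide; the parsing is
# illustrative, and the command is printed rather than executed.
drives_output="Unused disks: sde"
disk=$(echo "$drives_output" | sed -n 's/^Unused disks: *//p')
echo "drives add $disk insights-voss-sync:database"
# prints: drives add sde insights-voss-sync:database
```

On a real node, the device name may differ from sde; always confirm it with drives list before assigning the mount point.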