Modular Architecture Multinode Installation#
Note
A modular architecture installation is not supported for a single node cluster (“cluster of one”) topology.
Before installing release 24.1, ensure that an additional 70 GB disk has been made available for the Insights database. See: Adding Hard Disk Space and VOSS Automate Hardware Specifications. This disk is assigned to the insights-voss-sync:database mount point. See the final installation step below.
Before You Begin#
Before continuing, you should have followed the OVA installation on each node according to the steps and preliminary requirements specified in: Create a New VM Using the Platform-Install OVA and according to the node roles as indicated in Notes on Multi-Node Installation. Data center names are also selected at this stage.
For example, for an 8-node modular cluster in 2 data centers:
DC1 = primary site or data center containing primary database node (highest database weight)
DC2 = disaster recovery (DR) data center
Install:
3 nodes with Database roles (2 in DC1, 1 in DC2)
3 nodes with Application roles (2 in DC1, 1 in DC2)
2 nodes with WebProxy roles (1 in DC1, 1 in DC2)
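As an illustration only, the example layout can be captured in a small shell table and sanity-checked before the install; all IP addresses below are hypothetical placeholders, not part of the product:

```shell
#!/bin/bash
# Hypothetical 8-node layout for the example above; columns: IP, role, data center.
layout="
192.168.1.10 database DC1
192.168.1.11 database DC1
192.168.2.10 database DC2
192.168.1.20 application DC1
192.168.1.21 application DC1
192.168.2.20 application DC2
192.168.1.30 webproxy DC1
192.168.2.30 webproxy DC2
"
# Sanity-check the role counts before starting the install.
for role in database application webproxy; do
  echo "$role: $(grep -c " $role " <<<"$layout")"
done
```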
Optionally download or extract language pack template files to support languages other than English.
Note
For typical modular geo-redundant multinode cluster deployment with 3 database and 3 application nodes, there are:
two application nodes in the primary site
two database nodes in the primary site
one application node in the disaster recovery (DR) site
one database node in the disaster recovery (DR) site
The worker count (voss workers command) needs to be set on the DR nodes. Refer to:
NAT between nodes is not allowed
If there is a firewall between nodes, then specific ports must be configured. For port configuration, refer to:
Template installation and upgrade takes approximately two hours. You can follow the progress on the Admin Portal transaction list.
It is strongly recommended that customer end users are not given the same level of administrator access as the restricted provider and customer administrator groups. For this reason, separate Self-service and Administrator web proxies should be used.
Self-service-only web proxies are recommended only where the system is customer facing but the customer does not administer the system themselves.
For cluster installations, also refer to the Health Checks for Cluster Installations Guide.
If it is necessary to change an IP address of a node in a cluster, first remove it from the cluster by running the command below on the node to be changed:
cluster del <IP address of node to be changed>
Refer to Installation Logs for troubleshooting logs during an installation.
The standard screen command should be used where indicated. See: Using the screen command.
Procedure#
Install VMware tools on each node.
Log in to each node and run app install vmware.
Verify that vmware is running: app list.
Prepare each node to be added to the cluster:
Select a database node that will become the primary database node. The primary site or data center will contain the primary database node. The deploying administrator can pick any database node that they see fit.
On each node, run cluster prepnode.
Add nodes to the cluster.
Log in to the selected primary database node.
Add the other database, application and WebProxy nodes to the cluster: cluster add <ip_addr>.
Note that you do not have to add the selected primary database node to the cluster; it is added automatically.
Verify the list of nodes in the cluster: cluster list.
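For the 8-node example, the sequence on the primary database node might look like the sketch below. The IP addresses are hypothetical placeholders, and the commands are only printed so they can be reviewed before being run on the platform CLI:

```shell
#!/bin/bash
# Hypothetical IPs of the seven non-primary nodes (replace with your own).
nodes=(
  192.168.1.11  # secondary database node, DC1
  192.168.2.10  # database node, DC2
  192.168.1.20  # application node, DC1
  192.168.1.21  # application node, DC1
  192.168.2.20  # application node, DC2
  192.168.1.30  # WebProxy node, DC1
  192.168.2.30  # WebProxy node, DC2
)
for ip in "${nodes[@]}"; do
  echo "cluster add $ip"  # run each printed command on the primary DB node
done
echo "cluster list"       # then verify cluster membership
```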
Add the network domain (optional; only required if a domain name is needed). From the selected primary database node:
Configure the domain: cluster run all network domain <domain_name>.
Verify the configured network domain: cluster run all network domain. Each node shows the domain that you configured.
Check the network:
From the selected primary database node, run cluster check to verify the status of the cluster, network connectivity, disk status and NTP.
Verify the DNS configuration: cluster run all network dns. Each node responds with the DNS server address.
Create a restore point. As part of the rollback procedure, ensure that a suitable restore point is obtained prior to the start of the activity, as per the guidelines for the infrastructure on which the VOSS Automate platform is deployed.
Configure the cluster.
From the selected primary database node, provide a weight for each database server with the database weight add <database_ip> <priority> command.
The higher the value, the higher the priority. For example, assign:
A weight of 40 for the primary database node at the primary site (DC1)
A weight of 30 for the secondary database node at the primary site (DC1)
A weight of 10 for the secondary database node at the disaster recovery (DR) site (DC2)
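As a sketch only (hypothetical IP addresses), the three weight commands for the example cluster could be generated and reviewed like this before running them on the primary database node:

```shell
#!/bin/bash
# Hypothetical database node IPs, highest weight first (replace with your own).
ips=(192.168.1.10 192.168.1.11 192.168.2.10)
weights=(40 30 10)  # primary DC1, secondary DC1, DR DC2
for i in "${!ips[@]}"; do
  # Print the command to run on the primary database node's platform CLI.
  echo "database weight add ${ips[$i]} ${weights[$i]}"
done
```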
From the selected primary database node:
It is recommended that this step is run in a terminal opened with the screen command.
Run screen.
Run cluster provision.
Allow approximately two hours for the operation to complete for a cluster with two WebProxy nodes and six database and application nodes.
When provisioning is complete, check that each node is contactable and that the time server is running on each with cluster check.
If a service is down, run cluster run <node_ip> app start to restart the service.
If provisioning is successful, the screen session can be closed by typing exit in the screen terminal. If errors occurred, keep the screen terminal open for troubleshooting purposes and contact VOSS support.
On each of the new application nodes, set the queues to 2 with the command voss queues 2.
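A sketch for applying this across the example's three application nodes; the IPs are hypothetical, SSH access as the platform user is assumed, and the commands are only printed for review:

```shell
#!/bin/bash
# Hypothetical application node IPs (replace with your own).
app_nodes=(192.168.1.20 192.168.1.21 192.168.2.20)
for ip in "${app_nodes[@]}"; do
  # Each node's platform CLI runs 'voss queues 2'; printed here, not executed.
  echo "ssh platform@$ip voss queues 2"
done
```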
Note
Applications are reconfigured and the voss-queue process is restarted.

(Optional) If required, set the web weights configurations (Active-Active, Active-Standby, Single node cluster). From the primary database node, run the required web weight commands for the Web Proxy nodes. For details, refer to Multi Data Center Deployments and the VOSS Automate Best Practices Guide.
(Optional) If required, enable or disable Self-service or admin web services on the web proxy nodes. This may for example be needed for security purposes.
The commands must be run on the relevant web proxy node. They automatically reconfigure and restart the nginx process, so some downtime will result. Request URLs to a disabled service redirect the user to the active service.
To disable or enable admin or Self-service web services on the web proxy node:
web service disable <selfservice|admin>
web service enable <selfservice|admin>
To list web services on the web proxy node:
web service list
Create a restore point. As part of the rollback procedure, ensure that a suitable restore point is obtained prior to the start of the activity, as per the guidelines for the infrastructure on which the VOSS Automate platform is deployed.
Initialize the database and clear all data. On an application node, run voss cleardown.
Note that this step may take some time. You can follow the process by running log follow upgrade_db.log or log follow voss-deviceapi/app.log in a separate console on the application node.
Import the templates.
Copy the VOSS Automate template file to an application node with the command:
scp <VOSS Automate_template_file> platform@<app_node_ip_address>:~/media
Log in to this application node and install the template. It is recommended that this step is run in a terminal opened with the screen command.
Run screen.
Run app template media/<VOSS Automate_template_file>
The console will display a message:
Deploying the template-product for VOSS Automate <<RELEASE_VERSION>> ...
When prompted to select the product deployment type, provide and confirm the deployment type:
Enterprise
Provider
In accordance with the selected deployment type, you are prompted to enter and verify:
a top-level administrator password:
Please enter a password for "sysadmin"
and one administrator password - depending on the deployment:
Enterprise :
Please enter a password for "entadmin"
Provider :
Please enter a password for "hcsadmin"
The passwords entered during installation must be at least 8 characters long.
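A quick local sanity check of a candidate password against the 8-character minimum; the password shown is a hypothetical placeholder:

```shell
#!/bin/bash
pw='Chang3Me!'  # hypothetical candidate password
if [ "${#pw}" -ge 8 ]; then
  echo "password length OK (${#pw} characters)"
else
  echo "password too short (${#pw} characters)"
fi
```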
Deployment-specific artifacts are installed according to the selected type of product deployment. A message displays according to the selected deployment type - one of:
"Importing EnterpriseOverlay.json" "Importing ProviderOverlay.json ..."
Deployment specific system artifacts are imported and a message is displayed:
Deployment-specific Overlay artifacts successfully imported.
Python functions are deployed
System artifacts are imported.
You are prompted to provide administrator passwords.
The template install automatically restarts the necessary applications. In a cluster, the installation propagates changes throughout the cluster.
Review the output from the app template commands and confirm that the install message appears:
Deployment summary of UPDATED template solution (i.e. current values after installation):
-----------------------------------------------------------------------------------------
Product: [PRODUCT]
Version: [UPDATED PRODUCT RELEASE]
Iteration-version: [UPDATED ITERATION]
Platform-version: [UPDATED PLATFORM VERSION]
You can also monitor the template installation from the Admin Portal transaction list.
If there are no errors indicated, we recommend a suitable restore point is created as per the guidelines for the infrastructure on which the VOSS Automate platform is deployed.
If there was an error, the install script stops with a failure message listing the problem. Contact VOSS Support.
Check for needed security updates by running the cluster run all security check command on the primary database node. If at least one update is required for any node, run the cluster run all security update command on the primary database node.
After the security update is successful, reboot the cluster:
From the selected primary database node, run cluster run notme system reboot. Since all services will be stopped, this takes some time.
From the selected primary database node, run system reboot. Since all services will be stopped, this takes some time.
If a node does not properly reboot but the console shows that all processes have terminated, you can manually reboot the node without any system corruption.
(Optional) Install language templates for languages other than English.
Copy the language template file to the selected application node with the command:
scp <language_template_file> platform@<app_node_ip_address>:~/media
Log in to the application node and install the template with the command:
app template media/<language_template_file>
For example, to install French:
app template media/VOSS AutomateLanguagePack_fr-fr.template
There is no need to run this command on all nodes.
(Optional) If the VOSS Automate Phone Based Registration Add-on is required, follow the installation instructions in the Appendix of your Core Feature Guide:
“Install the Phone Based Registration Web Service”
Run the following command:
voss migrate_summary_attributes device/cucm/HuntPilot
License the installation:
From release 21.4 onwards, the deployment needs to be licensed. After installation, a 7-day grace period is available to license the product.
Obtain the required license token from VOSS.
License:
To license through the GUI, follow steps indicated in Product License Management in the Core Feature Guide.
To license through the CLI, follow steps indicated in Product Licensing in the Platform Guide.
Mount the Insights database drive.
On each database node, assign the insights-voss-sync:database mount point to the drive added for the Insights database prior to installation.
For example, if drives list shows the added disk as:
Unused disks: sde
then run the command drives add sde insights-voss-sync:database on each database node where the drive has been added.
Sample output (the lvmetad warning in the output below can be ignored on release 24.1):

$ drives add sde insights-voss-sync:database
Configuration setting "devices/scan_lvs" unknown.
Configuration setting "devices/allow_mixed_block_sizes" unknown.
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
71ad98e0-7622-49ad-9fg9-db04055e82bc
Application insights-voss-sync processes stopped.
Migrating data to new drive - this can take several minutes
Data migration complete - reassigning drive
Checking that /dev/sde1 is mounted
Checking that /dev/dm-0 is mounted
/opt/platform/apps/mongodb/dbroot
Checking that /dev/sdc1 is mounted
/backups
Application services:firewall processes stopped.
Reconfiguring applications...
Application insights-voss-sync processes started.