Migrating from a 6 Node to 8 Node System
On a standard topology, migrating a clustered 6 node system (4 unified nodes and 2 WebProxy nodes) to a clustered 8 node system (6 unified nodes and 2 WebProxy nodes) requires the considerations and steps below.
Check and make a restore point of the clustered 6 node system before adding the nodes (a command sketch follows these steps):
Run cluster list to ensure the node count is correct.
Run cluster status to check that all nodes are online and that services are reported as running.
Run cluster run database cluster list to make sure all unified nodes are aware of the current cluster nodes.
Run cluster run all app status to make sure all services are running on all nodes.
Make a restore point of the entire 6 node cluster.
As part of the rollback procedure, ensure that a suitable restore point is obtained prior to the start of the activity, as per the guidelines for the infrastructure on which the VOSS Automate platform is deployed.
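The pre-migration checks above can be run in one pass from the platform CLI on the primary unified node. A minimal sketch; the trailing annotations are explanatory notes, not CLI input, and the exact output varies by release:

    cluster list                       # node count should be 6
    cluster status                     # all nodes online, services running
    cluster run database cluster list  # all unified nodes agree on cluster membership
    cluster run all app status         # all services running on every node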
Add the 2 unified nodes:
Create the new unified node - see: Create a New VM Using the Platform-Install OVA.
An extra functions file (extra_functions.py) that is installed on the existing cluster needs to be re-installed on each added unified node. Request the deploy_extra_functions_<version>.template file from VOSS Level 2 support and run the command app template deploy_extra_functions_<version>.template.
Run cluster prepnode on all nodes, including new nodes.
From the primary unified node, run cluster add <ip> for each new node (the primary node itself is not added).
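A sketch of the node-addition sequence, assuming two new unified nodes at the hypothetical addresses 192.168.100.15 and 192.168.100.16 (annotations are explanatory notes, not CLI input):

    # On each new unified node, after deployment from the OVA:
    app template deploy_extra_functions_<version>.template

    # On all nodes, existing and new:
    cluster prepnode

    # On the primary unified node only:
    cluster add 192.168.100.15
    cluster add 192.168.100.16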
Reset the cluster database weights. When nodes are removed from or added to a cluster, remove all database weights completely and add them back before provisioning to reset the configuration.
Delete all database weights in the cluster. For each IP, run database weight del <IP>.
To add the database weights back, set the highest weight on the intended primary, but always run the commands on the current primary (identified with database primary), regardless of whether the intended primary is the same node or not. During the provision process, the primary role is then transferred from the existing primary to the node with the highest weight.
Determine the current primary database node with database primary.
Run the command database weight add <IP> <numeric> on the current primary database node for each IP, giving the intended primary database node the highest value.
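A minimal sketch of the weight reset, run on the current primary database node, assuming six unified nodes at the hypothetical addresses 192.168.100.11 through 192.168.100.16 with .11 as the intended primary; the numeric weight values are illustrative assumptions, not prescribed values:

    database primary                      # confirm the current primary first

    # Remove every existing weight:
    database weight del 192.168.100.11
    database weight del 192.168.100.12
    database weight del 192.168.100.13
    database weight del 192.168.100.14
    database weight del 192.168.100.15
    database weight del 192.168.100.16

    # Add the weights back, highest on the intended primary:
    database weight add 192.168.100.11 60
    database weight add 192.168.100.12 50
    database weight add 192.168.100.13 40
    database weight add 192.168.100.14 30
    database weight add 192.168.100.15 20
    database weight add 192.168.100.16 10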
Check the cluster before provisioning:
Run cluster list to ensure the node count is correct.
Run cluster status to check that all nodes are online and that services are reported as running.
Run cluster run database cluster list to make sure all unified nodes are aware of the current cluster nodes.
Run cluster run all app status to make sure all services are running on all nodes. Fresh nodes that have not been provisioned will show the message: suspended waiting for mongo.
Run cluster provision to provision the cluster.
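A sketch of the provisioning step with a post-provision check; verifying the primary afterwards is an assumption based on the weight behaviour described above, not a documented requirement:

    cluster provision    # primary role transfers to the highest-weighted node

    # Afterwards, confirm the outcome:
    database primary     # should report the intended primary
    cluster status       # all 8 nodes online, services running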
After a successful migration, the restore point made in step 1 can be removed.