Migrating from a 6 Node to an 8 Node System

On a standard topology, migrating a clustered 6 node system (4 unified nodes and 2 WebProxy nodes) to a clustered 8 node system (6 unified nodes and 2 WebProxy nodes) requires the considerations and steps below.

  1. Check and snapshot the clustered 6 node system before adding the nodes:

    1. Run cluster list to ensure the node count is correct.
    2. Run cluster status to check that all nodes are online and all services are reported as running.
    3. Run cluster run database cluster list to make sure all unified nodes are aware of the current cluster nodes.
    4. Run cluster run all app status to make sure all services are running on all nodes.
    5. Snapshot the entire 6 node cluster.
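
     For example, the full pre-check sequence, run from the primary unified node, is shown below; at this point cluster list should still show all 6 nodes, and the expected result of each command is described in the sub-steps above:

       cluster list
       cluster status
       cluster run database cluster list
       cluster run all app status
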
  2. Add the 2 unified nodes:

    1. Create the two new unified nodes - see: Create a New VM Using the Platform-Install OVA.
    2. An extra functions file (extra_functions.py) that is installed on the existing cluster must be re-installed on each added unified node. Request the deploy_extra_functions_<version>.template file from VOSS Level 2 support and run the command app template deploy_extra_functions_<version>.template on each new node.
    3. Run cluster prepnode on all nodes, including new nodes.
    4. From the primary unified node, run cluster add <ip> for each new node (the primary node's own IP is not added). An example sequence follows this list.
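
     As a sketch, assume the two new unified nodes have the hypothetical IP addresses 192.168.100.17 and 192.168.100.18. After the VMs are created, the sequence would be:

       (on each of the two new nodes)
       app template deploy_extra_functions_<version>.template

       (on every node in the cluster, including the two new ones)
       cluster prepnode

       (on the primary unified node only)
       cluster add 192.168.100.17
       cluster add 192.168.100.18
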
  3. Reset the cluster database weights. Whenever nodes are removed from or added to a cluster, remove all database weights completely and add them back in before provisioning, so that the weight configuration is reset.

    1. Delete all database weights in the cluster. For each IP, run database weight del <IP>.

    2. Determine the current primary database node by running database primary.

      To add the database weights back in, set the weight of the intended primary node to the highest value, but always add the weights from the current primary node, regardless of whether the intended primary is the same node or not. During the provision process, the role of primary is then transferred from the existing primary to the node with the highest weight.

  4. Run the command database weight add <IP> <numeric> on the current primary database node for each IP, assigning the highest value to the intended primary database node (see the example sequence below).
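
     As a sketch, assume the six unified nodes have the hypothetical IP addresses 192.168.100.11 to 192.168.100.16, that only the four original unified nodes (.11 to .14) currently carry weights, and that 192.168.100.11 is the intended primary. The weight values themselves are illustrative; the intended primary simply gets the highest one:

       database weight del 192.168.100.11
       database weight del 192.168.100.12
       database weight del 192.168.100.13
       database weight del 192.168.100.14

       database primary

       database weight add 192.168.100.11 60
       database weight add 192.168.100.12 50
       database weight add 192.168.100.13 40
       database weight add 192.168.100.14 30
       database weight add 192.168.100.15 20
       database weight add 192.168.100.16 10

     The database weight add commands are all run on whichever node database primary reports as the current primary.
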

  5. Check the cluster before provisioning:

    1. Run cluster list to ensure the node count is correct.
    2. Run cluster status to check that all nodes are online and all services are reported as running.
    3. Run cluster run database cluster list to make sure all unified nodes are aware of the current cluster nodes.
    4. Run cluster run all app status to make sure all services are running on all nodes. Fresh nodes that have not been provisioned will show a message: suspended waiting for mongo.
  6. Run cluster provision to provision the cluster.
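
     A minimal closing sequence, again from the primary unified node, would be the check below followed by the provision run; the suspended waiting for mongo status on the two new nodes should clear once provisioning completes:

       cluster run all app status
       cluster provision
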

  7. After a successful migration, the snapshot made in step 1 can be removed.