Migrating from a 6 Node to 8 Node System
----------------------------------------

.. index:: cluster;cluster list
.. index:: cluster;cluster status
.. index:: cluster;cluster run
.. index:: cluster;cluster prepnode
.. index:: cluster;cluster add
.. index:: cluster;cluster provision

To migrate a clustered 6 node system (4 unified nodes and 2 WebProxy nodes)
to a clustered 8 node system (6 unified nodes and 2 WebProxy nodes), follow
the considerations and steps below.

1. Check and snapshot the clustered 6 node system *before* adding the nodes
   (an illustrative command sequence follows this list):

   a. Run **cluster list** to ensure the node count is correct.
   #. Run **cluster status** to check that all nodes are online and all services are reported as running.
   #. Run **cluster run database cluster list** to make sure all unified nodes are aware of the current cluster nodes.
   #. Run **cluster run all app status** to make sure all services are running on all nodes.
   #. Snapshot the entire 6 node cluster.
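
   As a minimal illustrative sketch, the pre-checks can be run in sequence
   from the command line of the primary unified node (the ``$`` prompt is a
   placeholder; output formats depend on your platform version):

   .. code-block:: console

      $ cluster list
      $ cluster status
      $ cluster run database cluster list
      $ cluster run all app status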

#. Add the 2 unified nodes:

   a. Create each new unified node - see: :ref:`create_a_new_VM_using_the_platform-install_OVA`.
   #. The extra functions file (``extra_functions.py``) that is installed
      on the existing cluster must also be installed *on each added unified node*.
      Request the ``deploy_extra_functions_<version>.template`` file from VOSS Level 2 support and
      run the command **app template deploy_extra_functions_<version>.template**.
   #. Run **cluster prepnode** *on all nodes*, including the new nodes.
   #. From the primary unified node, run **cluster add <ip>** for each new
      node; the primary does not add itself. (A sketch of this step follows
      the list.)
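
   As an illustrative sketch of this step, assuming the two new unified nodes
   have the hypothetical IP addresses 192.168.1.15 and 192.168.1.16:

   .. code-block:: console

      $ app template deploy_extra_functions_<version>.template   # on each new unified node
      $ cluster prepnode                                         # on every node, existing and new
      $ cluster add 192.168.1.15                                 # on the primary unified node only
      $ cluster add 192.168.1.16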
  
#. Reset the cluster database weights. When nodes are removed from or added to a cluster, remove all
   database weights completely and add them back *before provisioning* to reset the configuration.
   
   a. Delete all database weights in the cluster. For each IP, run **database weight del <IP>**.
   #. Add the database weights back. You set the highest weight on the intended primary, but the
      commands must always be run from the *current* primary database node, regardless of whether
      the intended primary is the same node or not. During the provision process, the role of
      primary is then transferred from the existing primary to the node with the highest weight.

      Determine the current primary database node with **database primary**.

   #. Run **database weight add <IP> <numeric>** on the current primary database node for each
      unified node IP, making the value of the intended primary database node the highest, as in
      the example below.
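
   For example, the reset might look as follows (all IP addresses and weight
   values are hypothetical; 192.168.1.11 is the intended primary and receives
   the highest weight):

   .. code-block:: console

      $ database primary                       # identify the current primary database node
      $ database weight del 192.168.1.11       # repeat for each existing unified node IP
      $ database weight del 192.168.1.12
      $ database weight del 192.168.1.13
      $ database weight del 192.168.1.14
      $ database weight add 192.168.1.11 60    # intended primary gets the highest weight
      $ database weight add 192.168.1.12 50
      $ database weight add 192.168.1.13 40
      $ database weight add 192.168.1.14 30
      $ database weight add 192.168.1.15 20    # newly added nodes
      $ database weight add 192.168.1.16 10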

#. Check the cluster before provisioning: 

   a. Run **cluster list** to ensure the node count is correct.
   #. Run **cluster status** to check that all nodes are online and all services are reported as running.
   #. Run **cluster run database cluster list** to make sure all unified nodes are aware of the current cluster nodes.
   #. Run **cluster run all app status** to make sure all services are running on all nodes.
      Fresh nodes that have not been provisioned will show a message: ``suspended waiting for mongo``.

#. Run **cluster provision** to provision the cluster.

#. After a successful migration, the snapshot made in step 1 can be removed.
