Recovery Steps (two options):
Commands should be run on an operational unified node from the DR site.
During cluster recovery, the database weights must be deleted and then re-added.
Delete the failed node's database weight from the cluster: database weight del <ip>
Run cluster del <ip> to remove the nodes at the failed primary site.
Power off the deleted node, or disable its Network Interface Card.
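As a sketch of the removal steps above, assuming two failed primary-site nodes at the hypothetical addresses 10.1.1.11 and 10.1.1.12:

```shell
# Run from an operational unified node at the DR site.
# The IP addresses below are hypothetical examples.

# Remove each failed node's database weight from the cluster:
database weight del 10.1.1.11
database weight del 10.1.1.12

# Remove the failed primary-site nodes from the cluster:
cluster del 10.1.1.11
cluster del 10.1.1.12

# Then power off each deleted node, or disable its NIC,
# so it cannot rejoin the cluster.
```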
At this point, you have two options:
Option A: provision half the cluster for a faster uptime of your DR site.
Only the DR site will then be operational after the provision. You can also optionally
add nodes to this cluster.
Option B: bring the full cluster back up at both the DR site and primary site.
You need to redeploy the primary site nodes.
Option A: provision half the cluster, optionally adding 2 more nodes to it.
If you choose to add 2 more nodes, creating a cluster with 2 application and 2 database
nodes, deploy the new nodes as follows.
Run cluster provision on the cluster without the node to be
added and then create the new application and database nodes at the required data center -
see: Create a New VM Using the Platform-Install OVA.
The Macro_Update template installed on the existing cluster needs to be re-installed on each
added application node. Request the Macro_Update_<version>.template
file from VOSS Level 2 support and
run the command app template Macro_Update_<version>.template.
Run cluster prepnode on all new nodes.
From a running database node, run cluster add <ip>, with the IP address
of each new node to add it to the existing cluster.
Re-add the database weights for the nodes in the cluster at the DR site:
Delete all database weights in the cluster of the DR site. On a selected database node, for each database node IP,
run database weight del <IP>.
Re-add all database weights in the cluster of the DR site. On each database node, for each database node IP,
run database weight add <IP> <weight>, considering the following:
For the new database node, add a database weight lower than that of the weight of the
current primary if this will be a secondary, or higher if this will be the new primary.
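For example, with two DR-site database nodes at the hypothetical addresses 10.2.1.21 (intended primary) and 10.2.1.22 (secondary), the weight reset above might look like this (the weight values are illustrative):

```shell
# On a selected database node: remove all existing weights,
# one command per database node IP.
database weight del 10.2.1.21
database weight del 10.2.1.22

# On each database node: re-add a weight for every database node IP.
# The intended primary gets the highest weight; secondaries get lower weights.
database weight add 10.2.1.21 40
database weight add 10.2.1.22 20
```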
Run database config to determine whether you have a primary database. If not, run
cluster provision primary <ip> (current primary IP). If you do have a primary database,
run only cluster provision. It is recommended
that this step is run in a terminal opened with the screen command.
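The decision in the step above can be sketched as follows (the IP address is a hypothetical example):

```shell
# Check whether a primary database currently exists:
database config

# If no primary is reported, open a screen session (so the long-running
# provision survives a dropped connection) and designate the primary:
screen
cluster provision primary 10.2.1.21

# If a primary already exists, a plain provision is sufficient:
cluster provision
```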
If an OVA file was not available for your current release and you used the most recent release OVA
for which there is an upgrade path to your release to create the new unified node, re-apply the
Delta Bundle upgrade to the cluster.
Note that the new node version mismatch in the cluster can be ignored, since this upgrade step
aligns the versions.
See: Upgrade
Check all services, nodes and weights - either individually for each node, or for the cluster
by using the commands:
cluster run all app status (make sure no services are stopped/broken -
the message ‘suspended waiting for mongo’ is normal on the fresh database nodes)
cluster run application cluster list (make sure all nodes show)
cluster run application database weight list (make sure all database nodes show correct weights)
Option B: bring the full cluster back up at both the DR site and primary site. You need to
redeploy the primary site nodes.
Deploy 5 nodes: 2 database nodes, 2 application nodes and 1 proxy node.
Run cluster provision on the cluster without the node to be added and then
create the new application, proxy and database nodes at the required data center -
see: Create a New VM Using the Platform-Install OVA.
The Macro_Update template installed on the existing cluster needs to be re-installed on each
added application node. Request the Macro_Update_<version>.template
file from VOSS Level 2 support and
run the command app template Macro_Update_<version>.template.
Run cluster prepnode on all new nodes.
Run cluster add <ip> from the current primary database node, with the IP address
of each new node to add it to the existing cluster.
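The prepare-and-add sequence above can be sketched for the five redeployed primary-site nodes (all IP addresses are hypothetical examples):

```shell
# On each of the five new nodes (2 database, 2 application, 1 proxy):
cluster prepnode

# Then, from the current primary database node, add each new node:
cluster add 10.1.1.11   # database node
cluster add 10.1.1.12   # database node
cluster add 10.1.1.13   # application node
cluster add 10.1.1.14   # application node
cluster add 10.1.1.15   # proxy node
```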
Ensure the database weights are added back:
Delete all database weights in the cluster. On a selected database node, for each database node IP,
run database weight del <IP>.
Re-add all database weights in the cluster. On each database node, for each database node IP,
run database weight add <IP> <weight>, considering the following:
For a new database node, add a database weight lower than that of the weight of the
current primary if this will be a secondary, or higher if this will be the new primary.
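The weight rule above can be expressed as a small illustrative shell helper. Note that choose_weight is hypothetical, not a platform command, and the offset of 10 is an arbitrary example:

```shell
#!/bin/sh
# Illustrative only: given the current primary's weight, pick a weight
# for a new node depending on whether it should become the primary.
choose_weight() {
  role=$1            # "primary" or "secondary"
  primary_weight=$2  # weight of the current primary
  if [ "$role" = "primary" ]; then
    echo $((primary_weight + 10))   # higher weight: node becomes the new primary
  else
    echo $((primary_weight - 10))   # lower weight: node stays a secondary
  fi
}

choose_weight primary 40    # prints 50
choose_weight secondary 40  # prints 30
```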
Since the primary database node is newly added, run cluster provision primary <ip> (current primary IP).
It is recommended that this step is run in a terminal opened with the screen command.
After provisioning, the node with the largest database weight will be the primary server.
If an OVA file was not available for your current release and you used the most recent release OVA
for which there is an upgrade path to your release to create the new unified node, re-apply the
Delta Bundle upgrade to the cluster.
Note that the new node version mismatch in the cluster can be ignored, since this upgrade step
aligns the versions.
See: Upgrade
Check all services, nodes and weights - either individually for each node, or for the cluster
by using the commands:
cluster run all app status (make sure no services are stopped/broken -
the message ‘suspended waiting for mongo’ is normal on the fresh database nodes)
cluster run application cluster list (make sure all nodes show)
cluster run application database weight list (make sure all database nodes show correct weights)
Run cluster provision primary <ip>, where <ip>
is the current primary database in the DR site.
It is recommended that this step is run in a terminal opened with the screen command. The
six node (or eight node) cluster then pulls the data from this <ip>
into the new primary database server
at the primary site.
After provisioning, the database configuration can then be checked with database config to verify
the primary database node in the primary site.
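The final provision and verification above can be sketched as follows (the IP address is a hypothetical example of the current DR-site primary):

```shell
# Open a screen session first: provisioning a six- or eight-node
# cluster is long-running and should survive a dropped connection.
screen
cluster provision primary 10.2.1.21

# After provisioning, confirm which node now holds the primary
# database at the primary site:
database config
```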
On the new app nodes, check the number of queues using voss queues and if the
number is less than 2, set the queues to 2 with voss queues 2.
Note
Applications are reconfigured and the voss-queue
process is restarted.
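The queue check above, run on each new application node, looks like this:

```shell
# Check the number of configured queues:
voss queues

# If fewer than 2 are reported, set the queue count to 2. This
# reconfigures the applications and restarts the voss-queue process.
voss queues 2
```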