Scenario: Loss of the Primary Database Server
The administrator deployed the cluster across a Primary and a DR site.
The cluster is deployed following the Installation Guide.
The example is a typical cluster deployment: 6 nodes, where 4 nodes are database servers and 2 nodes are proxy servers.
However, this scenario also applies to a cluster deployment of 8 nodes: 6 database servers and 2 proxy servers. If non-primary database servers are also lost at the primary or DR site, also follow the recovery steps for those nodes.
The design is preferably split over 2 physical data centers.
Node Failure
- Normal operations continue: the cluster is processing requests and transactions are committed successfully, up to the point where the loss of a primary database server is experienced. In this scenario, AS01[172.29.42.100] failed while transactions were running.
- Examine the cluster status by running cluster status to determine the failed state:
Data Centre: unknown
    application : unknown_172.29.42.100[172.29.42.100] (not responding)
    webproxy    : unknown_172.29.42.100[172.29.42.100] (not responding)
    database    : unknown_172.29.42.100[172.29.42.100] (not responding)

Data Centre: jhb
    application : AS02[172.29.42.101]
    webproxy    : PS01[172.29.42.102]
                  AS02[172.29.42.101]
    database    : AS02[172.29.42.101]

Data Centre: cpt
    application : AS03[172.29.21.100]
                  AS04[172.29.21.101]
    webproxy    : PS02[172.29.21.102]
                  AS03[172.29.21.100]
                  AS04[172.29.21.101]
    database    : AS03[172.29.21.100]
                  AS04[172.29.21.101]
- Some downtime occurs; this can take up to 15 minutes. To speed up recovery, restart the services: cluster run all app start.
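For example, from a surviving unified node such as AS02 (output is omitted here and varies by deployment):

platform@AS02:~$ cluster run all app start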
- The loss of the primary database server will cause an election, and the node with the highest weighting that is still running will become the primary.
- Check the weights set in the cluster configuration: database weight list
platform@AS01:~$ database weight list
172.29.21.100:
    weight: 10
172.29.21.101:
    weight: 20
172.29.42.100:
    weight: 50
172.29.42.101:
    weight: 40
- The primary node 172.29.42.100 failed and therefore node 172.29.42.101 will become the primary node after election.
- To find the primary database, run database primary.
platform@AS02:~$ database primary
172.29.42.101
At this point all transactions that are currently in flight are lost and will not recover.
The lost transactions have to be replayed or rerun.
Bulk load transactions cannot be replayed and have to be rerun. Before resubmitting a failed Bulk load job, manually clear each failed transaction that still has a Processing status after a service restart by running the following command on the primary node CLI:
voss finalize_transaction <Trans ID>
The failed transaction status then changes from Processing to Fail.
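For example, assuming a failed bulk load transaction is still shown as Processing, it could be cleared as follows from the primary node (the transaction ID below is hypothetical; use the actual ID shown for the failed transaction):

platform@AS02:~$ voss finalize_transaction 54321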
With the database server AS01[172.29.42.100] still down, replaying the failed transactions is successful.
Recovery Steps if the lost server is unrecoverable:
Generally, cluster provision must be run every time a node is deleted or added, even if it is a replacement node. It is recommended that this step is run in a terminal opened with the screen command.
Delete its database weight (database weight del <ip>), in other words database weight del 172.29.42.100
Run cluster del 172.29.42.100, because this server no longer exists. Power off the deleted node, or disable its Network Interface Card.
Run cluster provision primary 172.29.42.101 from the current primary node. It is recommended that this step is run in a terminal opened with the screen command.
This server should already have the highest weight, and its database weight can be checked with database weight list
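A minimal sketch of the preceding steps, run from the new primary AS02 (command output is omitted; the sequence assumes 172.29.42.101 remains the node to provision from):

platform@AS02:~$ database weight del 172.29.42.100
platform@AS02:~$ cluster del 172.29.42.100
platform@AS02:~$ database weight list
platform@AS02:~$ screen
platform@AS02:~$ cluster provision primary 172.29.42.101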
If all the database weights are deleted and provisioning is run again with cluster provision, the CLI message is:
‘Please select which of the database should be used as the remaining primary by running “database config”, selecting a node to sync from (any node that says primary or secondary and is in a good state, i.e. not in a ‘RECOVERING’ or ‘STARTUP’ state ) and rerun provisioning with “cluster provision primary <db server ip from commmand above>”’
A new unified node needs to be deployed. Ensure the server name, IP information and data centre name are the same as on the server that was lost.
Run cluster provision on the cluster without the node to be added and then create the new unified node - see: Create a New VM Using the Platform-Install OVA.
An extra functions file (extra_functions.py) that is installed on the existing cluster needs to be re-installed on each added unified node. Request the Macro_Update_<version>.template file from VOSS Level 2 support and run the command app template Macro_Update_<version>.template.
Run cluster prepnode on all servers.
Run cluster add <ip> from the current primary unified node, with the IP address of the new unified server, to add it to the existing cluster.
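An illustrative add, assuming the replacement node keeps the original address 172.29.42.100 as required above (output omitted):

platform@AS02:~$ cluster add 172.29.42.100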
Check the output of the commands cluster list and cluster status from an existing node. If the new node does not show up:
- Run cluster del <new node>
- Rerun the add of the node on another existing unified node, until the node shows up in cluster list and cluster status.
- Verify that the node shows up from all existing nodes. The recovery process may be time consuming.
Delete all database weights in the cluster. On a selected unified node, for each unified node IP, run database weight del <IP>.
Re-add all database weights in the cluster. On each unified node, for each unified node IP, run database weight add <IP> <weight>, considering the following:
For the new unified node, add a database weight lower than the weight of the current primary if it will be a secondary, or higher if it will be the new primary.
If the lost primary unified node release version is 18.1-V4UC-Patch-Bundle-03b and if it will be the new primary, first set its weight lower than the current primary and re-apply the patch on it:
app install media/18.1-V4UC-Patch-Bundle-03b.script --force
When done, check the database weights - either individually for each node, or for the cluster by using the command:
cluster run application database weight list
Make sure all application nodes show correct weights.
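As an illustration only, re-adding the weights so that the restored node AS01 again carries the highest weight (and will therefore take over as primary during provisioning) might look as follows; the same commands are repeated on each unified node, and the weight values are examples, not prescribed values:

platform@AS02:~$ database weight add 172.29.21.100 10
platform@AS02:~$ database weight add 172.29.21.101 20
platform@AS02:~$ database weight add 172.29.42.100 50
platform@AS02:~$ database weight add 172.29.42.101 40
platform@AS02:~$ cluster run application database weight list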
Make sure the new node is part of the cluster (run cluster list) and run cluster provision primary 172.29.42.101 from the current primary. It is recommended that this step is run in a terminal opened with the screen command.
During the provision process, the role of primary will then be transferred from the current primary to the node with the highest weight. The role transfer may take a significant amount of time, depending on the database size.
During the process, typing app status from the new primary node will still show the database as not provisioned:

mongodb v11.5.3 (2018-07-01 14:35)
 |-arbiter running
 |-database running (not provisioned)
To check the progress of the transfer, the database log can be checked: type log follow mongodb/mongodb/mongodb.log. When the transfer is complete, an entry will show sync done, as in the example below:

2018-07-09T14:09:48.639986+00:00 un1 mongod.27020[129593]: [initial sync-0] initial sync done; took 5821s.
While the primary role transfer is in progress, the system can be used, but bulk database operations should not be carried out, because the sync may fall too far behind to complete.
If an OVA file was not available for your current release, and you created the new unified node from the most recent release OVA that has an upgrade path to your release, re-apply the Delta Bundle upgrade to the cluster.
Note that the new node version mismatch in the cluster can be ignored, since this upgrade step aligns the versions.
See: Upgrade
Note
If cluster provision fails at any of the proxy nodes during provisioning, the following steps complete the cluster provisioning:
- Run database config and check that the nodes are in the STARTUP2, SECONDARY, or PRIMARY state, with correct arbiter placement.
- Log in to the web proxy on both the primary and secondary sites, and add a web weight using web weight add <ip>:443 1 for each node that should have a web weight of 1 on the respective proxy.
- Run cluster provision to mitigate the failure.
- Run cluster run all app status to check if all the services are up and running after cluster provisioning completes.
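For instance, using the addresses from this scenario, the commands might look like the following; which nodes receive a weight on which proxy depends on your design, so treat these pairings as examples only:

platform@PS01:~$ web weight add 172.29.42.100:443 1
platform@PS01:~$ web weight add 172.29.42.101:443 1
platform@PS02:~$ web weight add 172.29.21.100:443 1
platform@PS02:~$ web weight add 172.29.21.101:443 1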
Note
If the existing nodes in the cluster do not see the new incoming node after cluster add, try the following steps:
- Run cluster del <ip> from the primary node, <ip> being the IP of the new incoming node.
- Delete all database weights. Run database weight del <ip> from the primary node, <ip> being the IP of the nodes, including the new incoming node.
- Log in to any secondary node (non-primary unified node) and run cluster add <ip>, <ip> being the IP of the new incoming node.
- Re-add all database weights. Run database weight add <ip> <weight> from the same session, <ip> being the IP of the nodes, including the new incoming node.
- Use cluster run database cluster list to check if all nodes see the new incoming node inside the cluster.
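A sketch of this recovery, assuming 172.29.42.100 is the new incoming node, AS02 is the current primary, and AS03 is the secondary unified node used for the re-add (output omitted; the weights shown are examples only):

platform@AS02:~$ cluster del 172.29.42.100
platform@AS02:~$ database weight del 172.29.21.100
platform@AS02:~$ database weight del 172.29.21.101
platform@AS02:~$ database weight del 172.29.42.100
platform@AS02:~$ database weight del 172.29.42.101
platform@AS03:~$ cluster add 172.29.42.100
platform@AS03:~$ database weight add 172.29.21.100 10
platform@AS03:~$ database weight add 172.29.21.101 20
platform@AS03:~$ database weight add 172.29.42.100 50
platform@AS03:~$ database weight add 172.29.42.101 40
platform@AS03:~$ cluster run database cluster list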