Scenario: Loss of Full Cluster
Background
The administrator deployed either a single-node cluster or a cluster split across a Primary and a DR site.
The cluster is deployed by following the Installation Guide.
The example used here is a typical cluster deployment of 6 nodes: 4 database servers and 2 proxy servers.
However, this scenario also applies to a cluster deployment of 8 nodes: 6 database servers and 2 proxy servers.
The design is preferably split over 2 physical data centers.
The cluster might also span two geographically dispersed areas. In that case, the cluster has to be installed with two different site names or data center names.
Full cluster failure
In this scenario, all nodes failed while transactions were running.
At this point, all transactions that were in flight are lost and cannot be recovered. The lost transactions have to be rerun.
The cluster is no longer operational, and manual intervention is needed to recover it.
To recover the cluster, carry out the Recovery Steps.
Recovery Steps
Important
Prerequisite: a system backup exported to a remote backup location. The backup file at the remote location typically has the format <timestamp>.tar.gz. This recovery procedure will only succeed if you have a valid, recent backup to restore.
For details, considerations and specific commands at each step below, refer to the “Standalone (single-node cluster) Installation” or “Multinode Installation” topic in the Installation Guide.
Ensure all traces of the previous nodes have been removed from the VMware environment.
Deploy fresh nodes as per the original topology.
Check topologies and hardware requirements in the Installation Guide.
Multinode Cluster with Unified Nodes
Multinode Cluster Hardware Specification
For new node deployment, see: Create a New VM Using the Platform-Install OVA.
For the steps below, follow either the “Standalone (single-node cluster) Installation” or “Multinode Installation” topic in the Installation Guide:
Standalone (single-node cluster) Installation
Multinode Installation
Prepare each non-primary node for the cluster by running cluster prepnode on it.
From the primary node, add each node to the cluster using the cluster add <IP address of node> command (an example command sequence is shown after these steps).
For multi-node clusters:
On the primary node, set the database weights for each database node using the database weight add <IP address of node> <weight> command.
Restore a backup made from the highest weighted secondary database node in the original cluster.
Follow the Import steps here: Backup and Import to a New Environment.
Note: It is not necessary to run cluster provision again on the primary node. This action is included in the backup restore process.
Ensure all services are up and running:
After the restore completes, run cluster run all app status to check that all the services are up and running.
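For illustration only, the sequence below sketches these steps for a hypothetical 6-node topology: primary database node 192.168.100.3, further database nodes 192.168.100.4 to 192.168.100.6, and proxy nodes 192.168.100.7 and 192.168.100.8. All IP addresses and weight values are placeholders; substitute the values for your own deployment.

On each non-primary node, prepare the node for clustering:

    cluster prepnode

On the primary node, add the other nodes and set the database weights for the database nodes:

    cluster add 192.168.100.4
    cluster add 192.168.100.5
    cluster add 192.168.100.6
    cluster add 192.168.100.7
    cluster add 192.168.100.8
    database weight add 192.168.100.3 40
    database weight add 192.168.100.4 30
    database weight add 192.168.100.5 20
    database weight add 192.168.100.6 10

After the backup restore completes, verify the services:

    cluster run all app status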
Note
For multi-node clusters:
If cluster provision fails at any of the proxy nodes during provisioning, recover as follows (an example command sequence is shown after these steps):
Run database config and check that the nodes are in the STARTUP2, SECONDARY, or PRIMARY state, with correct arbiter placement.
Log in to the web proxy on both the primary and the secondary site, and on each proxy run web weight add <ip>:443 1 for every node that should receive a web weight of 1 on that proxy.
Run cluster provision again to mitigate the failure.
After cluster provisioning completes, run cluster run all app status to check that all the services are up and running.
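For illustration only, assuming the nodes that should receive a web weight of 1 on each proxy are 192.168.100.3 and 192.168.100.4 (all IP addresses here are placeholders), the recovery might look as follows.

On the primary node, check the database states and arbiter placement:

    database config

On the web proxy at each site, add the web weights:

    web weight add 192.168.100.3:443 1
    web weight add 192.168.100.4:443 1

On the primary node, re-run provisioning and verify the services:

    cluster provision
    cluster run all app status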
Note
For multi-node clusters:
If the existing nodes in the cluster do not see the new incoming node after cluster add, try the following steps (an example command sequence is shown after these steps):
Run cluster del <ip> from the primary node, where <ip> is the IP address of the new incoming node.
Delete all database weights: run database weight del <ip> from the primary node for each node, including the new incoming node.
Log in to any secondary node (non-primary unified node) and run cluster add <ip>, where <ip> is the IP address of the new incoming node.
Re-add all database weights: run database weight add <ip> <weight> from the same session for each node, including the new incoming node.
Run cluster run database cluster list to check that all nodes see the new incoming node in the cluster.
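For illustration only, assuming the new incoming node is 192.168.100.6 and the existing database nodes are 192.168.100.3 to 192.168.100.5 (all IP addresses and weights shown are placeholders), the sequence might look as follows.

From the primary node, remove the new node and delete all database weights:

    cluster del 192.168.100.6
    database weight del 192.168.100.3
    database weight del 192.168.100.4
    database weight del 192.168.100.5
    database weight del 192.168.100.6

From any secondary (non-primary unified) node, in the same session, re-add the node and the database weights:

    cluster add 192.168.100.6
    database weight add 192.168.100.3 40
    database weight add 192.168.100.4 30
    database weight add 192.168.100.5 20
    database weight add 192.168.100.6 10

Check that all nodes now see the new node:

    cluster run database cluster list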