Multinode Upgrade Sheet

Download Files and Check Steps
==============================

Download the VOSS files - XXX is the release number:

https://voss.portalshape.com > Downloads > VOSS Automate > XXX > Upgrade

Download the .iso and .template files.

Transfer the .iso file to the media/ folder of all nodes.

Transfer the .template file to the media/ folder of the primary node.

Two transfer options:

Either using SFTP:

sftp platform@<unified_node_hostname>

cd media

put <upgrade_iso_file>

put <upgrade_template_file>

Or using SCP:

scp <upgrade_iso_file> platform@<unified_node_ip_address>:~/media

scp <upgrade_template_file> platform@<unified_node_ip_address>:~/media
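For illustration only, a worked SCP transfer with hypothetical file names and a hypothetical node IP address (substitute your own values):

scp VOSS-Automate-XXX.iso platform@192.0.2.10:~/media

scp VOSS-Automate-XXX.template platform@192.0.2.10:~/media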

Verify that the .iso image and .template file have been copied:

ls -l media/

Verify that the checksums match the original .sha256 checksums on the SFTP server:

system checksum media/<upgrade_iso_file>

Checksum: <SHA256>

system checksum media/<upgrade_template_file>

Checksum: <SHA256>
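As an optional cross-check (a sketch only, assuming a Linux workstation that still holds the downloaded files), the reference values can be computed locally with a standard tool and compared against the Checksum: output above:

sha256sum <upgrade_iso_file>

sha256sum <upgrade_template_file>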

Security and Health Check Steps
===============================

Verify that the primary node is the active primary node at the time of upgrade

database config

Ensure that the node on which the installation will be initiated has the stateStr parameter set to PRIMARY and has the highest priority number (the highest priority number can vary, depending on the cluster layout).

Example output

<ip address>:27020:
    priority: <number>
    stateStr: PRIMARY
    storageEngine: WiredTiger

Validate the system health.

system mount - mount the upgrade ISO.

app install check_cluster - install the new version of the cluster check command.

For details, refer to the ‘Cluster Check’ topic in the Platform Guide.

cluster check - inspect the output of this command for warnings and errors. You can also use cluster check verbose to see more details, for example whether AVX is enabled. While warnings will not prevent an upgrade, it is advisable that these be resolved prior to upgrading where possible. Some warnings may be resolved by the upgrade itself.

cluster check

For troubleshooting and resolutions, also refer to the Health Checks for Cluster Installations Guide and the Platform Guide.

If any of the paths below are over 80% full, a clean-up is needed, for example to avoid the risk of logs filling up during the upgrade. Clean-up steps are indicated next to the paths:

/ - call support if over 80%

/var/log - run: log purge

/opt/platform - remove any unnecessary files from the /media directory

/tmp - reboot

On the primary unified node, verify that there are no pending security updates on any of the nodes.

Note: If you run cluster status after installing the new version of cluster check, any error message regarding a failed command can be ignored. This error message will not show after the upgrade.

Version Check Steps
===================

Customized data/Settings

If data/Settings instances have been modified, record these or export them as JSON.

The modifications can be re-applied, or the exported JSON instances can be merged, following the upgrade. See the Post Template Upgrade Steps section.

Version

Record the current version information. This is required for upgrade troubleshooting.

Log in on the Admin Portal and record the information contained in the About > Extended Version menu.

Pre-Upgrade Steps
=================

VOSS cannot guarantee that a restore point can be used to successfully restore VOSS-4-UC. If you cannot restore the application from a restore point, your only recourse is to reinstall the application.

Create a restore point as per the guidelines for the infrastructure on which the VOSS-4-UC platform is deployed.

Optional: If a backup is also required

backup add <location-name>

backup create <location-name>

For details, refer to the Platform Guide.
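A minimal usage sketch, assuming a hypothetical backup location named localbackup (location types and options are described in the Platform Guide):

backup add localbackup

backup create localbackup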

After restore point creation and before upgrading, validate the system health and check all services, nodes, and weights for the cluster:

cluster run application cluster list

Make sure all application nodes show 4 or 6 nodes.

cluster check

Inspect the output of this command for warnings and errors. You can also use cluster check verbose to see more details.

Make sure no services are stopped/broken. The message ‘suspended waiting for mongo’ is normal on the fresh unified nodes.

Check that the database weights are set. It is critical to ensure the weights are set before upgrading a cluster.

Example output:

172.29.21.240: weight: 80

172.29.21.241: weight: 70

172.29.21.243: weight: 60

172.29.21.244: weight: 50

Verify the primary node in the primary site and ensure no nodes are in the ‘recovering’ state (stateStr is not RECOVERING).

Upgrade Steps
=============

By default, the cluster upgrade is carried out in parallel on all nodes and without any backup in order to provide a fast upgrade.

For systems upgrading to 24.1 from 21.4.0 – 21.4-PB5:

The VOSS platform maintenance mode will be started automatically when the cluster upgrade command is run. This prevents any new occurrences of scheduled transactions, including the 24.1 database syncs associated with insights sync. For details on insights sync, see the Insights Analytics topic in the Platform Guide.

The cluster maintenance-mode stop command must however be run manually after the maintenance window of the upgrade – see Manually Stop the Maintenance Window.

For details on the VOSS platform maintenance mode, see the Maintenance Mode topic in the Platform Guide.

It is recommended that the upgrade steps are run in a terminal opened with the screen command.
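For reference, a minimal sketch of working with screen (standard screen behaviour, not VOSS-specific commands), in case the SSH session drops during the upgrade:

screen - open a new session before starting the upgrade; detach with Ctrl-a d if needed

screen -ls - list sessions and confirm that a detached session exists

screen -r - re-attach to the detached session and continue monitoring the upgrade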

Verify that the ISO has been uploaded to the ‘media/’ directory on each node. This will speed up the upgrade time.

On the primary unified node:

screen

cluster upgrade media/<upgrade_iso_file>

Note: The cluster upgrade command will also silently run cluster check first, and the upgrade will fail if any error conditions exist.

Note: A check for security updates will also be made, with the message ‘Checking for security updates …’. If updates are found, a message will show the number of updates and the update will be carried out. If no updates are found, the message ‘No security updates found’ shows.

Note: If the system reboots, do not carry out the next manual reboot step.

Manual reboot only if needed:

cluster run notme system reboot

If node messages ‘<node name> failed with timeout’ are displayed, these can be ignored.

system reboot

Since all services will be stopped, this takes some time.

Post-Upgrade, Security and Health Steps
=======================================

On the primary unified node, verify the cluster status:

cluster check

If any of the above commands show errors, check for further details to assist with troubleshooting:

cluster run all diag health

If the upgrade is successful, the screen session can be closed by typing exit in the screen terminal. If errors occurred, keep the screen terminal open for troubleshooting purposes and contact VOSS support.

Check for needed security updates. On the primary node, run:

cluster run all security check

Note: If the system reboots, do not carry out the next manual reboot step.

Manual reboot only if needed:

cluster run notme system reboot

If node messages ‘<node name> failed with timeout’ are displayed, these can be ignored.

system reboot

Since all services will be stopped, this takes some time.

Database Schema Upgrade Steps
=============================

It is recommended that the upgrade steps are run in a terminal opened with the screen command.

On the primary unified node:

screen

voss upgrade_db

Check cluster status

cluster check

Template Upgrade Steps
======================

It is recommended that the upgrade steps are run in a terminal opened with the screen command.

On the primary unified node:

screen

app template media/<VOSS-4-UC.template>

Review the output from the app template command and confirm that the upgrade message appears.

If no errors are indicated, make a backup or create a restore point as per the guidelines for the infrastructure on which the VOSS-4-UC platform is deployed. This restore point can be used if any required post-upgrade patches fail.

For an unsupported upgrade path, the install script stops with the message:

Upgrade failed due to unsupported upgrade path. Please log in as sysadmin and see Transaction logs for more detail.

You can restore from the backup, or roll back, i.e. revert to the restore point made before the upgrade.

If there are errors for another reason, the install script stops with a failure message listing the problem.

Contact VOSS support.

Verify the ‘extra_functions’ have the same checksum across the cluster.

cluster run application voss get_extra_functions_version -c

Post-upgrade migrations:

On a single node of the cluster, run: voss post-upgrade-migrations

Check cluster status and health

cluster status

Post Template Upgrade Steps
===========================

Import device/cucm/PhoneType

In order for a security profile to be available for a Call Manager Analog Phone, the ‘device/cucm/PhoneType’ model needs to be imported for each Unified CM.

  1. Create a Model Type List which includes the ‘device/cucm/PhoneType’ model.

  2. Add the Model Type List to all the required Unified CM Data Syncs.

  3. Execute the Data Sync for all the required Unified CMs.

Customized data/Settings

Merge the previously backed up, customized ‘data/Settings’ with the latest settings on the system, either by manually adding the differences, or by exporting the latest settings to JSON, merging in the customized changes, and importing the JSON.
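As an illustration only, assuming the pre-upgrade export and a fresh post-upgrade export of data/Settings have been saved off-box under hypothetical file names, a plain diff can highlight the customized values that still need to be merged before re-importing:

diff settings_pre_upgrade.json settings_post_upgrade.json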

Support for VG400 and VG450 Analogue Gateways

Before adding the VG400 or VG450 Gateway, the ‘device/cucm/GatewayType’ model needs to be imported for each Unified CM.

  1. Create a Model Type List which includes the ‘device/cucm/GatewayType’ model.

  2. Add the Model Type List to all the required Unified CM Data Syncs.

  3. Execute the Data Sync for all the required Unified CMs.

Verify the upgrade

Log in on the Admin Portal and check the information contained in the About > Version menu. Confirm that versions have upgraded.

The release should show ‘XXX’, matching the upgrade release.

Check that themes on all roles are set correctly.

For configurations that make use of the Northbound Billing Integration (NBI), check the service status of NBI and restart it if necessary.

Log Files and Error Checks
==========================

Inspect the output of the command line interface for upgrade errors - for example: File import failed! or Failed to execute command.

Use the log view command to view any log files indicated in the error messages. For example, if the following message appears:

For more information refer to the execution log file with log view platform/execute.log

then run: log view platform/execute.log

If it is required, for example, to send all the install log files in the install directory to an SFTP server:

log send sftp://x.x.x.x install

Log in on the Admin Portal as a system level administrator.

Go to Administration Tools > Transaction and inspect the list of transactions for errors.

Manually Stop the Maintenance Window
====================================

On the CLI:

cluster maintenance-mode stop

Running the cluster maintenance-mode stop command ends the VOSS maintenance mode that was started automatically when the cluster upgrade command was run (when upgrading to 24.1 from 21.4.0 - 21.4-PB5).

This will allow scheduled transactions to resume, including the insights sync operations added in 24.1.

For details on the VOSS platform maintenance mode, see the Maintenance Mode topic in the Platform Guide.

Licensing (outside, after Maintenance Window)
=============================================

From release 21.4 onwards, the deployment needs to be licensed. After installation, a 7-day grace period is available to license the product. Since license processing is only scheduled every hour, if you wish to license immediately, first run voss check-license from the primary unified node CLI.

voss check-license

Obtain the required license token from VOSS.

Steps for GUI and CLI:

To license through the GUI, follow steps indicated in Product License Management in the Core Feature Guide.

To license through the CLI, follow steps indicated in Product Licensing in the Platform Guide.

Mount the Insights disk (outside, after Maintenance Window)
===========================================================

On each unified node, assign the insights-voss-sync:database mount point to the drive added for the Insights database prior to upgrade.

For example, if drives list shows the added disk as: Unused disks: sde

drives add sde insights-voss-sync:database

The message below can be ignored on release 24.1:

WARNING: Failed to connect to lvmetad. Falling back to device scanning.