Upgrade Automate#
Overview#
This section provides the steps for upgrading Automate with ISO and template, for all topologies. At each step in this procedure we’ve added labels to indicate the relevant topologies:
Unified node cluster topology:

Modular node cluster topology:

Single unified node topology:

You can find out more about the Automate deployment topologies in the Automate Architecture and Hardware Specification Guide.
Before you start#
Note
For deployments with a two-node cluster with unified nodes topology and upgrading from release 24.X to the current release, contact VOSS support to carry out the commands below:
Root access:
app install nrs
Run command:
config.py --app=mongodb delete /servers_arb

Review section “Prepare for Upgrade” before proceeding.
Important
Before starting the upgrade, ensure that the hardware version of each of your virtual machines (VMs) is at least version 11, compatible with ESXi 6.0 and up, and that your host CPU supports AVX (Advanced Vector Extensions).
A cluster check command in the Automate pre-upgrade steps checks for AVX support. To ensure
that AVX support is added to the VMs, you’ll need to upgrade the compatibility of the VM in vCenter.
For the target version, before starting this upgrade, verify VMware, Cloud deployment, and application version compatibility, as indicated in the Compatibility Matrix.
Prior to maintenance window#
Prior to the maintenance window, you will need to complete the following tasks:
Verify the primary database node and application node
Download and check files
Check the version
Verify the primary database node and application node#

Note
This task is optional for a single unified node cluster topology.
To verify the primary application node, run the following command on the node:
cluster primary role application
Note
In a modular node cluster topology, the application and database are on separate nodes.
In a unified node cluster topology you will need to ensure that the database and application status is primary on the same node (the node configured as “primary”). In this case (unified node cluster), you’ll need to run the command on each node until you find the “primary” node. For example, the database node with the highest “weight” is “primary”.
The output should be true, for example:
platform@UN2:~$ cluster primary role application
is_primary: true
To verify the primary database node, run the following command on the node:
cluster primary role database
Note
In a modular node cluster topology, the application and database are on separate nodes.
In a unified node cluster topology you will need to ensure that the database and application status is “primary” on the same node (the node configured as “primary”). In this case (unified node cluster), you’ll need to run the command on each node until you find the “primary” node. For example, the database node with the highest “weight” is “primary”.
The output should be true, for example:
platform@UN1:~$ cluster primary role database
is_primary: true
Download and check files#

Note
Ensure that the .iso file is available on all nodes.
Go to the download location for VOSS files (where XXX is the major version in your upgrade path requirement, for example, 24.2, if you’re upgrading to 24.2-PB1):
https://voss.portalshape.com > Downloads > VOSS Automate > XXX > Upgrade
Download .iso and .template files.
Transfer the files to the media/ folder, using either SFTP or SCP:
Transfer the .iso file to the media/ folder of all nodes.
Transfer the .template file to the media/ folder of the primary application node.
Transfer using SFTP:
For all nodes:
sftp platform@<node_hostname>
cd media
put <upgrade_iso_file>
For the primary application node:
sftp platform@<application_node_hostname>
cd media
put <upgrade_template_file>
Transfer using SCP:
For all nodes:
scp <upgrade_iso_file> platform@<node_ip_address>:~/media
For the primary application node:
scp <upgrade_template_file> platform@<application_node_ip_address>:~/media
Verify that the .iso image and .template file copied:
ls -l media/
Verify that the original .sha256 checksums on the Download site match:
On any node, run:
cluster run all system checksum media/<upgrade_iso_file>
Note
If you have multiple nodes, run this command on only one node.
The output should be:
Checksum: <SHA256>
On the primary application node, run:
system checksum media/<upgrade_template_file>
The output should be:
Checksum: <SHA256>
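As an optional cross-check, the same comparison can be approximated on any Linux workstation with the standard sha256sum tool before the files are transferred. This is an illustrative sketch, not a VOSS platform command; the filename and the published checksum value below are placeholders, and a stand-in file is created so the sketch is self-contained:

```shell
# Placeholder name for the downloaded .iso; a stand-in file is created here
# so this sketch runs on its own.
ISO=automate-upgrade.iso
echo "example content" > "$ISO"

# Compute the local checksum (sha256sum is standard GNU coreutils).
LOCAL_SUM=$(sha256sum "$ISO" | awk '{print $1}')

# Compare it with the published .sha256 value. In practice, paste the value
# from the .sha256 file on the Download site; here it is a placeholder.
PUBLISHED_SUM="$LOCAL_SUM"
if [ "$LOCAL_SUM" = "$PUBLISHED_SUM" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH - re-download the file" >&2
fi
```

Checking before transfer avoids copying a corrupt image to every node; the system checksum command on the nodes remains the authoritative check.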
Version check#

If you have customized data settings (data/Settings), record these or export as JSON. Customizations can be re-applied or the exported JSON instances can be merged following the upgrade. See Post-template upgrade.
Record current version information for upgrade troubleshooting:
Log in to the Admin Portal.
Go to About > Version.
Make a note of the system version information.
Maintenance window#
In the maintenance window, you will need to complete the following tasks:
Perform security and health checks
Validate system health
Perform pre-upgrade steps
Upgrade
Perform post-upgrade and health check steps
Perform database schema upgrade
Perform template upgrade
Perform post-template upgrade steps
Inspect the log files and check for errors
Security and health checks#
Note
From Automate 25.1 and later, the security check and security update commands
are no longer available, since security updates are included during the release upgrade
process.

Note
This step is not relevant when upgrading a single unified node topology.
Verify that the primary database node is the active primary node at the time of upgrade:
database config
Note
A unified node cluster topology will have the primary and database on the same node.

Note
This step is not relevant when upgrading a single unified node topology.
Ensure that the primary database node on which installation will be initiated has the stateStr parameter set to “PRIMARY” and has the highest priority number.
The highest priority number could vary depending on cluster layout.
Example output:
<ip address>:27020: priority: 70.0 stateStr: PRIMARY storageEngine: WiredTiger
<ip address>:27030: priority: 0.0 stateStr: ARBITER storageEngine: WiredTiger
<ip address>:27020: priority: 50.0 stateStr: SECONDARY storageEngine: WiredTiger
<ip address>:27030: priority: 0.0 stateStr: ARBITER storageEngine: WiredTiger
<ip address>:27020: priority: 30.0 stateStr: SECONDARY storageEngine: WiredTiger
Validate system health#

Note
From release 25.1, new packages are installed at the start of an upgrade or fresh install, so a separate security check is no longer needed.
Mount the upgrade ISO:
system mount
Install the new version of the cluster check command:
app install check_cluster
For details, see Cluster Check.
Run:
cluster check
Inspect the output for warnings and errors. You can also use:
cluster check verbose
to see more details, for example, to check that AVX is enabled.
Review and resolve any warnings or errors before proceeding with the upgrade. Contact VOSS Support for assistance, if required.
For troubleshooting and resolutions, also refer to the Health Checks for Cluster Installations Guide and the Platform Guide.
If there is any sign that the paths below are over 80% full, a clean-up is required, for example, to avoid the risk of full logs during upgrade. Recommended steps to resolve are indicated at each path:
Path            Resolution
/               Contact VOSS Support if over 80%
/var/log        Run log purge
/opt/platform   Remove any unnecessary files from the media/ directory
/tmp            Reboot
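The 80% condition above can be checked in one pass with standard Linux tools. This is an illustrative sketch assuming GNU df and a POSIX shell, not a VOSS platform command:

```shell
# Report usage of the paths from the table and flag any over the 80% threshold.
for p in / /var/log /opt/platform /tmp; do
  # Skip paths that do not exist on this machine.
  [ -d "$p" ] || continue
  # df --output=pcent prints a header plus e.g. " 45%"; strip to digits.
  pct=$(df --output=pcent "$p" | tail -1 | tr -dc '0-9')
  if [ "$pct" -gt 80 ]; then
    echo "$p is ${pct}% full - clean-up required before upgrade"
  else
    echo "$p is ${pct}% full - OK"
  fi
done
```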
Note
If you run cluster status after installing the new version of cluster check, any error message regarding a failed command can be ignored. This error message will not show after upgrade.
Adaptation checks: if the GS SME Adaptation is installed, check for duplicate instances of GS_SMETemplateData_DAT and delete any duplicates before upgrading to 24.2.
Pre-upgrade#

Obtain a suitable restore point as part of the rollback procedure (as per the guidelines for the infrastructure on which the VOSS Automate platform is deployed).
Important
All nodes must be powered off prior to creating the restore point, and must be powered back on again when the restore point is complete.
Optionally, if a backup is also required, use the following commands on the primary database node:
backup add <location-name>
backup create <location-name>
For details, see the Platform Guide.
Validate system health and check all services, nodes, and weights for the cluster:
Run:
cluster run application cluster list
and ensure that all application nodes show.
Run:
cluster check
then inspect the output of this command for warnings and errors. You can use the cluster check verbose command to see more details.
Note
When upgrading cloud deployments to release 25.1, the pre-upgrade cluster check command output will show an error, with messages containing "package in an undesired state". These messages can be safely ignored, as the newer check cluster installation will fix these errors.
Ensure that no services are stopped or broken:
app status
The following message is normal on fresh database nodes:
suspended waiting for mongo ...
Important! Check that the database weights are set before upgrading a cluster. Example output:
<ip address>: weight: 80
<ip address>: weight: 70
<ip address>: weight: 60
<ip address>: weight: 50
Verify the primary node in the primary site and ensure that no nodes are in the recovering state (stateStr is not "RECOVERING").
Upgrade#

It is recommended that the upgrade steps are run in a terminal opened with the screen command.
By default, the cluster upgrade is carried out in parallel on all nodes and without any backup in order to provide a fast upgrade.
Important
The VOSS platform maintenance mode starts automatically when running cluster upgrade.
This prevents any new occurrences of scheduled
transactions, including database syncs associated with insights sync. For details, see
Insights Analytics in the Platform Guide. Note however that after upgrade, the maintenance mode needs
to be ended manually using cluster maintenance-mode stop - refer to the Post-maintenance window topic below.
Verify that the ISO has been uploaded to the media/ directory on each node. This speeds up the upgrade time.
On the primary database node (modular node cluster) or primary unified node (unified node cluster and single unified node), run the following commands:
screen
cluster upgrade media/<upgrade_iso_file>
To remove a mount directory media/<iso_file basename> that may have remained on nodes after, for example, an upgrade, run:
cluster run all app cleanup
If the message:
*** Reboot Required - New Kernel Installed vmlinuz-x.xx.x-xxx-generic ***
displays at the bottom after the upgrade, reboot the cluster:
Topology                      Command
Cluster topologies            cluster run notme system reboot
                              When all other nodes have rebooted, run system reboot on the local node.
Single unified node topology  system reboot
If the following node messages display, these can be ignored:
<node name> failed with timeout
Since all services will be stopped, this takes some time.
Press Ctrl+d to close screen if no reboot was required.
Post-upgrade and health check#
Note
From Automate 25.1 and later, the security check and security update commands
are no longer available, since security updates are included during the release upgrade
process.

Run:
cluster check
and verify that no errors display.
Database schema upgrade#
It is recommended that the upgrade steps are run in a terminal opened with the screen command.

On the primary application node, run the following:
screen
voss upgrade_db
Check cluster status:
cluster check
Template upgrade#
It is recommended that the upgrade steps are run in a terminal opened with the screen command.

On the primary application node, run the following commands:
screen
app template media/<VOSS Automate.template>
View the message that displays:
Running the DB-query to find the current environment's existing solution deployment config …
View progress:
Python functions are deployed
System artifacts are imported
Note
To perform fewer upgrade steps, updates of instances of some models are skipped, where:
data/CallManager instance does not exist as instance in data/NetworkDeviceList
data/CallManager instance exists, but data/NetworkDeviceList is empty
Call Manager AXL Generic Driver and Call Manager Control Center Services match the data/CallManager IP
The template upgrade automatically detects the deployment mode, Enterprise or Provider. A system message displays for the selected deployment mode, for example:
On Enterprise deployment:
Importing EnterpriseOverlay.json
On Provider deployment:
Importing ProviderOverlay.json
The template install automatically restarts necessary applications. If a cluster is detected, the installation propagates changes throughout the cluster.
Review the output to verify that the upgrade message displays:
Deployment summary of PREVIOUS template solution (i.e. BEFORE upgrade):
-------------------------------------------------
Product: [PRODUCT]
Version: [PREVIOUS PRODUCT RELEASE]
Iteration-version: [PREVIOUS ITERATION]
Platform-version: [PREVIOUS PLATFORM VERSION]
This is followed by updated product and version details:
Deployment summary of UPDATED template solution (i.e. current values after installation):
-----------------------------------------------
Product: [PRODUCT]
Version: [UPDATED PRODUCT RELEASE]
Iteration-version: [UPDATED ITERATION]
Platform-version: [UPDATED PLATFORM VERSION]
If no errors are indicated, create a restore point.
As part of the rollback procedure, ensure that a suitable restore point is obtained prior to the start of the activity, as per the guidelines for the infrastructure on which the VOSS Automate platform is deployed.
For unsupported upgrade paths, the install script stops with the message:
Upgrade failed due to unsupported upgrade path. Please log in as sysadmin and see Transaction logs for more detail.
You can roll back as per the guidelines for the infrastructure on which the VOSS Automate platform is deployed.
If there are errors for another reason, the install script stops with a failure message listing the problem. Contact VOSS Support.
For post-upgrade migrations, run the following command on a single application node of a cluster:
voss post-upgrade-migrations
Data migrations that are not critical to system operation can have significant execution time at scale. These need to be performed after the primary upgrade, allowing the migration to proceed while the system is in use, thereby limiting upgrade windows.
View transaction progress. A transaction is queued on VOSS Automate and its progress displays as it executes.
On the primary database node, check cluster status and health:
cluster status
Post-template upgrade#

Verify the upgrade:
Log in on the Admin Portal, and check version details in About > Version.
If your web browser can’t open the user interface, clear your browser cache before trying to open the interface again.
Confirm that versions are upgraded (where XXX is the release version).
Release should display XXX
Platform version should display XXX
Check that themes on all roles are set correctly.
For configurations using Northbound Billing Integration (NBI), check the service status of NBI, and restart if necessary.
Log files and error checks#

Inspect the output of the command line interface for upgrade errors, for example, “File import failed!” or “Failed to execute command”.
If there are any errors referring to log files, for example:
For more information refer to the execution log file with ``log view platform/execute.log``
Then run the log view command on the primary application node to view any log files indicated in the error messages.
If required, send all the install log files in the install directory to an SFTP server:
log send sftp://x.x.x.x install
Log in on the Admin Portal as a system-level admin, then go to Administration Tools > Transaction, and inspect the transaction list for errors.
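Once the install logs have been copied off the node, a quick way to scan them for the error strings mentioned above is plain grep. This is an illustrative sketch, not a VOSS command; the log filename is a placeholder and a stand-in file is created so the sketch is self-contained:

```shell
# Placeholder log file with one of the error strings from this section.
LOG=./install-example.log
printf '%s\n' "step 1 ok" "File import failed!" "step 3 ok" > "$LOG"

# -n prints matching line numbers; -i makes the match case-insensitive.
grep -ni -e 'file import failed' -e 'failed to execute command' "$LOG"
# prints: 2:File import failed!
```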
Post-maintenance window#
In the post-maintenance part of the upgrade you will need to perform the following tasks:
End the maintenance window
Apply the license
Mount the Insights disk
End maintenance window and restore schedules#

On the CLI, run the following command to end the VOSS maintenance window:
cluster maintenance-mode stop
Scheduled data sync transactions can now resume, including insights sync operations added in 24.1. For details, see Maintenance Mode in the Platform Guide.
Licensing#

The Automate deployment requires a license. After installation, a 7-day grace period is available to license the product.
Since license processing is only scheduled every hour, if you wish to license immediately,
first run voss check-license from the primary application node CLI.
Obtain the required license token from VOSS.
Apply the license:
If applying a license via the GUI, follow the steps indicated in the Product License Management section of the Core Feature Guide.
If applying a license through the CLI, follow the steps indicated in Product Licensing in the Platform Guide.
Mount the Insights disk#

On each database/unified node, assign the insights-voss-sync:database mount point to the drive added for the Insights database prior to upgrade.
For example, if drives list shows the added disk as …
Unused disks: sde
Then run the following command on each database/unified node where the drive has been added:
drives add sde insights-voss-sync:database
Sample output:
$ drives add sde insights-voss-sync:database
Configuration setting "devices/scan_lvs" unknown.
Configuration setting "devices/allow_mixed_block_sizes" unknown.
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
71ad98e0-7622-49ad-9fg9-db04055e82bc
Application insights-voss-sync processes stopped.
Migrating data to new drive - this can take several minutes
Data migration complete - reassigning drive
Checking that /dev/sde1 is mounted
Checking that /dev/dm-0 is mounted
/opt/platform/apps/mongodb/dbroot
Checking that /dev/sdc1 is mounted
/backups
Application services:firewall processes stopped.
Reconfiguring applications...
Application insights-voss-sync processes started.