Upgrade Automate#

Overview#

This section provides the steps for upgrading Automate with ISO and template, for all topologies. At each step in this procedure we’ve added labels to indicate the relevant topologies:

  • Unified node cluster topology: Unified-Node-Cluster

  • Modular node cluster topology: Modular-Node-Cluster

  • Single unified node topology: Single-Unified-Node

You can find out more about the Automate deployment topologies in the Automate Architecture and Hardware Specification Guide.

Before you start#

Note

For deployments with a two-node unified node cluster topology that are upgrading from release 24.X to the current release, contact VOSS Support to carry out the commands below:

  • Root access: app install nrs

  • Run command: config.py --app=mongodb delete /servers_arb

Unified-Node-Cluster Modular-Node-Cluster Single-Unified-Node

Review section “Prepare for Upgrade” before proceeding.

Important

Before starting the upgrade, ensure that the application and hardware version of each of your virtual machines (VMs) is as indicated in the Compatibility Matrix.

Ensure your host CPU supports AVX (Advanced Vector Extensions). A cluster check command in the Automate pre-upgrade steps checks for AVX support.
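A quick way to confirm AVX support ahead of time is a standard Linux check (a sketch, assuming shell access to the host; this is not a platform command):

grep -m1 -o avx /proc/cpuinfo

If the command prints avx, the CPU exposes AVX to the VM; no output means it does not.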

Prior to maintenance window#

Prior to the maintenance window, you will need to complete the following tasks:

  1. Verify the primary database node and application node

  2. Download and check files

  3. Check the version

Verify the primary database node and application node#

Unified-Node-Cluster Modular-Node-Cluster

Note

This task is optional for a single unified node topology.

  1. To verify the primary application node, run the following command on the node:

    cluster primary role application

    Note

    • In a modular node cluster topology, the application and database are on separate nodes.

    • In a unified node cluster topology, ensure that the database and application status is “primary” on the same node (the node configured as “primary”). Run the command on each node until you find the “primary” node; typically, the database node with the highest “weight” is “primary”.

    The output should be true, for example:

    platform@UN2:~$ cluster primary role application
    is_primary: true
    
  2. To verify the primary database node, run the following command on the node:

    cluster primary role database

    Note

    • In a modular node cluster topology, the application and database are on separate nodes.

    • In a unified node cluster topology, ensure that the database and application status is “primary” on the same node (the node configured as “primary”). Run the command on each node until you find the “primary” node; typically, the database node with the highest “weight” is “primary”.

    The output should be true, for example:

    platform@UN1:~$ cluster primary role database
    is_primary: true
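    Rather than logging in to each node in turn, it may be possible to run the checks across all nodes at once with cluster run (a sketch; confirm that your release supports running these commands via cluster run):

    cluster run all cluster primary role application
    cluster run all cluster primary role database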
    

Download and check files#

Unified-Node-Cluster Modular-Node-Cluster Single-Unified-Node

Note

Ensure that the .iso file is available on all nodes.

  1. Go to the download location for VOSS files (where XXX is the major release in your upgrade path, for example, 24.2 if you’re upgrading to 24.2-PB1):

    https://voss.portalshape.com > Downloads > VOSS Automate > XXX > Upgrade

  2. Download .iso and .template files.

  3. Transfer the files to the media/ folder, using either SFTP or SCP:

    • Transfer the .iso file to the media/ folder of all nodes.

    • Transfer the .template file to the media/ folder of the primary application node.

    Transfer using SFTP:

    For all nodes

    sftp platform@<node_hostname>

    cd media

    put <upgrade_iso_file>

    For primary application node

    sftp platform@<application_node_hostname>

    cd media

    put <upgrade_template_file>

    Transfer using SCP:

    For all nodes

    scp <upgrade_iso_file> platform@<node_ip_address>:~/media

    For primary application node

    scp <upgrade_template_file> platform@<application_node_ip_address>:~/media
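    For example, a complete SFTP session for one node (hypothetical hostname and filename):

    $ sftp platform@un1.example.org
    sftp> cd media
    sftp> put VOSS-Automate-XXX.iso
    sftp> exit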

  4. Verify that the .iso image and .template file were copied: ls -l media/

  5. Verify that the checksums match the original .sha256 checksums on the download site:

    On any node, run:

    cluster run all system checksum media/<upgrade_iso_file>

    Note

    If you have multiple nodes, run this command on only one node.

    The output should be:

    Checksum: <SHA256>
    

    On the primary application node, run: system checksum media/<upgrade_template_file>

    The output should be:

    Checksum: <SHA256>
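    To compare against the published .sha256 value before transferring, you can also compute the checksum locally on a workstation (standard OS utilities, not platform commands; hypothetical filename):

    sha256sum VOSS-Automate-XXX.iso        # Linux
    shasum -a 256 VOSS-Automate-XXX.iso    # macOS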
    

Version check#

Unified-Node-Cluster Modular-Node-Cluster Single-Unified-Node

  1. If you have customized data settings (data/Settings), record these or export them as JSON. Customizations can be re-applied, or the exported JSON instances merged, following the upgrade. See Post-template upgrade.

  2. Record current version information for upgrade troubleshooting:

    1. Log in to the Admin Portal.

    2. Go to About > Version.

    3. Make a note of the system version information.

Maintenance window#

In the maintenance window, you will need to complete the following tasks:

  1. Perform security and health checks

  2. Validate system health

  3. Perform pre-upgrade steps

  4. Upgrade

  5. Perform post-upgrade and health check steps

  6. Perform database schema upgrade

  7. Perform template upgrade

  8. Perform post-template upgrade steps

  9. Inspect the log files and check for errors

Security and health checks#

Note

From Automate 25.1 and later, the security check and security update commands are no longer available, since security updates are included during the release upgrade process.

  1. Unified-Node-Cluster Modular-Node-Cluster

    Note

    This step is not relevant when upgrading a single unified node topology.

    Verify that the primary database node is the active primary node at the time of upgrade:

    database config

    Note

    In a unified node cluster topology, the primary database and primary application roles are on the same node.

  2. Unified-Node-Cluster Modular-Node-Cluster

    Note

    This step is not relevant when upgrading a single unified node topology.

    Ensure that the primary database node on which installation will be initiated has the stateStr parameter set to “PRIMARY” and has the highest priority number.

    The highest priority number could vary depending on cluster layout.

    The output for each node has the following format:

    <ip address>:27020:
        priority: <number>
        stateStr: PRIMARY
        storageEngine: WiredTiger

    Example output:

    <ip address>:27020:
        priority: 70.0
        stateStr: PRIMARY
        storageEngine: WiredTiger
    <ip address>:27030:
        priority: 0.0
        stateStr: ARBITER
        storageEngine: WiredTiger
    <ip address>:27020:
        priority: 50.0
        stateStr: SECONDARY
        storageEngine: WiredTiger
    <ip address>:27030:
        priority: 0.0
        stateStr: ARBITER
        storageEngine: WiredTiger
    <ip address>:27020:
        priority: 30.0
        stateStr: SECONDARY
        storageEngine: WiredTiger
    

Validate system health#

Unified-Node-Cluster Modular-Node-Cluster Single-Unified-Node

Note

From release 25.1, new packages are installed at the start of an upgrade or fresh install, so a separate security check is no longer needed.

  1. Mount upgrade ISO: system mount

  2. Install the new version of the cluster check command: app install check_cluster

    For details, see Cluster Check.

  3. Run cluster check.

    Inspect the output for warnings and errors. You can also use cluster check verbose to see more details, for example, to check that AVX is enabled.

    Review and resolve any warnings or errors before proceeding with the upgrade. Contact VOSS Support for assistance, if required.

    For troubleshooting and resolutions, also refer to the Health Checks for Cluster Installations Guide and the Platform Guide.

    If any of the paths below show as over 80% full, a clean-up is required before the upgrade, for example, to avoid the risk of logs filling up during the upgrade. The recommended resolution for each path is:

    Path            Resolution
    /               Contact VOSS Support if over 80%
    /var/log        Run log purge
    /opt/platform   Remove any unnecessary files from the media/ directory
    /tmp            Reboot
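To confirm the usage figures reported by cluster check, standard disk usage tools can help where shell access is available (a sketch, assuming df is available on the node):

df -h / /var/log /opt/platform /tmp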

Note

  • If you run cluster status after installing the new version of cluster check, any error message regarding a failed command can be ignored. This error message will not show after upgrade.

  • Adaptation checks: if the GS SME Adaptation is installed, check for duplicate instances of GS_SMETemplateData_DAT and delete any duplicates before upgrading to 24.2.

Pre-upgrade#

Unified-Node-Cluster Modular-Node-Cluster Single-Unified-Node

  1. Obtain a suitable restore point as part of the rollback procedure (as per the guidelines for the infrastructure on which the VOSS Automate platform is deployed).

    Important

    All nodes must be powered off prior to creating the restore point, and must be powered back on again when the restore point is complete.

    Optionally, if a backup is also required, use the following commands on the primary database node:

    backup add <location-name>

    backup create <location-name>

    For details, see the Platform Guide.
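    For example, with a hypothetical location name (see the Platform Guide for the supported location formats):

    backup add localbackup
    backup create localbackup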

  2. Validate system health and check all services, nodes, and weights for the cluster:

    1. Run cluster run application cluster list, and ensure that all application nodes show.

    2. Run cluster check, then inspect the output of this command for warnings and errors. You can use the cluster check verbose command to see more details.

      Note

      When upgrading cloud deployments to release 25.1, the pre-upgrade cluster check command output will show an error, with messages containing “package in an undesired state”. These messages can be safely ignored, as the newer cluster check installation fixes these errors.

    3. Ensure that no services are stopped or broken: app status

      The following message is normal on fresh database nodes:

      suspended waiting for mongo ...
      
    4. Important! Check that the database weights are set before upgrading a cluster.

      Run database weight list; if weights need to be set, see the sketch after this list.

      Example output:

      <ip address>:
          weight: 80
      <ip address>:
          weight: 70
      <ip address>:
          weight: 60
      <ip address>:
          weight: 50
      
    5. Verify the primary node in the primary site and ensure no nodes are in the recovering state (stateStr is not “RECOVERING”).
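    If the weight list from step 4 shows missing or incorrect weights, set them before proceeding; a sketch with hypothetical IP addresses, mirroring the example output above (confirm the command syntax for your release in the Platform Guide):

    database weight add 192.0.2.1 80
    database weight add 192.0.2.2 70
    database weight add 192.0.2.3 60
    database weight add 192.0.2.4 50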

Upgrade#

Unified-Node-Cluster Modular-Node-Cluster Single-Unified-Node

It is recommended that the upgrade steps are run in a terminal opened with the screen command.
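For reference, standard GNU screen session handling:

screen        # start a new session
              # Ctrl+a then d detaches, leaving the session running
screen -r     # reattach to the running session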

By default, the cluster upgrade is carried out in parallel on all nodes and without any backup in order to provide a fast upgrade.

Important

The VOSS platform maintenance mode starts automatically when running cluster upgrade. This prevents any new occurrences of scheduled transactions, including database syncs associated with insights sync. For details, see Insights Analytics in the Platform Guide. Note, however, that after the upgrade, maintenance mode needs to be ended manually using cluster maintenance-mode stop; refer to the Post-maintenance window topic below.

  1. Verify that the ISO has been uploaded to the media/ directory on each node. This speeds up the upgrade.

    On the primary database node (modular node cluster) or primary unified node (unified node cluster and single unified node), run the following commands:

    screen

    cluster upgrade media/<upgrade_iso_file>
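    For example, with a hypothetical ISO filename:

    screen
    cluster upgrade media/VOSS-Automate-XXX.iso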

  2. To remove a mount directory (media/<iso_file basename>) that may remain on nodes after, for example, an upgrade, run:

    cluster run all app cleanup

  3. If the message *** Reboot Required - New Kernel Installed vmlinuz-x.xx.x-xxx-generic *** displays at the bottom of the output after the upgrade, reboot the cluster:

    • Unified-Node-Cluster Modular-Node-Cluster:

      cluster run notme system reboot

      When all other nodes have rebooted, run system reboot on the local node.

    • Single-Unified-Node:

      system reboot

    If the following node messages display, these can be ignored:

    <node name> failed with timeout
    

    Since all services will be stopped, this takes some time.

  4. Press Ctrl+d to close screen if no reboot was required.

Post-upgrade and health check#

Note

From Automate 25.1 and later, the security check and security update commands are no longer available, since security updates are included during the release upgrade process.

Unified-Node-Cluster Modular-Node-Cluster Single-Unified-Node

  1. Run cluster check and verify no errors display.

Database schema upgrade#

Important

After the upgrade, terminal sessions use the tmux command rather than screen; commands requiring a session post-upgrade therefore require the use of tmux. For more details, see: Using the tmux command.

It is recommended that the upgrade steps are run in a terminal opened with the tmux command.
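For reference, standard tmux session handling:

tmux             # start a new session
                 # Ctrl+b then d detaches, leaving the session running
tmux attach      # reattach to the running session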

Unified-Node-Cluster Modular-Node-Cluster Single-Unified-Node

  1. On the primary application node, run the following:

    tmux

    voss upgrade_db

  2. Check cluster status: cluster check

Template upgrade#

It is recommended that the upgrade steps are run in a terminal opened with the tmux command.

Unified-Node-Cluster Modular-Node-Cluster Single-Unified-Node

  1. On the primary application node, run the following commands:

    tmux

    app template media/<VOSS Automate.template>

  2. View the message that displays:

    Running the DB-query to find the current environment's existing solution deployment config …
    
  3. View progress:

    • Python functions are deployed

    • System artifacts are imported

      Note

      To perform fewer upgrade steps, updates of instances of some models are skipped, where:

      • data/CallManager instance does not exist as instance in data/NetworkDeviceList

      • data/CallManager instance exists, but data/NetworkDeviceList is empty

      • Call Manager AXL Generic Driver and Call Manager Control Center Services match the data/CallManager IP

    • The template upgrade automatically detects the deployment mode, Enterprise or Provider. A system message displays for the selected deployment mode, for example:

      On Enterprise deployment:

      Importing EnterpriseOverlay.json
      

      On Provider deployment:

      Importing ProviderOverlay.json
      
    • The template install automatically restarts necessary applications. If a cluster is detected, the installation propagates changes throughout the cluster.

    Review the output to verify that the upgrade message displays:

    Deployment summary of PREVIOUS template solution
    (i.e. BEFORE upgrade):
    -------------------------------------------------
    
    Product: [PRODUCT]
    Version: [PREVIOUS PRODUCT RELEASE]
    Iteration-version: [PREVIOUS ITERATION]
    Platform-version: [PREVIOUS PLATFORM VERSION]
    

    This is followed by updated product and version details:

    Deployment summary of UPDATED template solution
    (i.e. current values after installation):
    -----------------------------------------------
    
    Product: [PRODUCT]
    Version: [UPDATED PRODUCT RELEASE]
    Iteration-version: [UPDATED ITERATION]
    Platform-version: [UPDATED PLATFORM VERSION]
    
  4. If no errors are indicated, create a restore point.

    As part of the rollback procedure, ensure that a suitable restore point is obtained prior to the start of the activity, as per the guidelines for the infrastructure on which the VOSS Automate platform is deployed.

    For unsupported upgrade paths, the install script stops with the message:

    Upgrade failed due to unsupported upgrade path.
    Please log in as sysadmin and see Transaction logs for more detail.
    

    You can roll back as per the guidelines for the infrastructure on which the VOSS Automate platform is deployed.

    If there are errors for another reason, the install script stops with a failure message listing the problem. Contact VOSS Support.

  5. For post-upgrade migrations, run the following command on a single application node of a cluster:

    voss post-upgrade-migrations

    Data migrations that are not critical to system operation can have significant execution time at scale. These are performed after the primary upgrade, allowing the migration to proceed while the system is in use, thereby limiting the length of the upgrade window.

  6. View transaction progress. A transaction is queued on VOSS Automate and its progress displays as it executes.

  7. On the primary database node, check cluster status and health: cluster status

Post-template upgrade#

Unified-Node-Cluster Modular-Node-Cluster Single-Unified-Node

  1. Verify the upgrade:

    1. Log in on the Admin Portal, and check version details in About > Version.

      If your web browser can’t open the user interface, clear your browser cache before trying to open the interface again.

    2. Confirm that versions are upgraded (where XXX is the release version).

      • Release should display XXX

      • Platform version should display XXX

  2. Check that themes on all roles are set correctly.

  3. For configurations using Northbound Billing Integration (NBI), check the service status of NBI, and restart if necessary.

Log files and error checks#

Unified-Node-Cluster Modular-Node-Cluster Single-Unified-Node

  1. Inspect the output of the command line interface for upgrade errors, for example, “File import failed!” or “Failed to execute command”.

  2. If there are any errors referring to log files, for example:

    For more information refer to the execution log file with ``log view platform/execute.log``
    

    Then run the log view command on the primary application node to view any log files indicated in the error messages.

    If required, send all the install log files in the install directory to an SFTP server:

    log send sftp://x.x.x.x install

  3. Log in on the Admin Portal as system level admin, then go to Administration Tools > Transaction, and inspect the transaction list for errors.

Post-maintenance window#

In the post-maintenance part of the upgrade you will need to perform the following tasks:

  1. End the maintenance window

  2. Apply the license

  3. Mount the Insights disk

End maintenance window and restore schedules#

Unified-Node-Cluster Modular-Node-Cluster Single-Unified-Node

On the CLI, run the following command to end the VOSS maintenance window:

cluster maintenance-mode stop

Scheduled data sync transactions can now resume, including insights sync operations added in 25.1. For details, see Maintenance Mode in the Platform Guide.

Licensing#

Unified-Node-Cluster Modular-Node-Cluster Single-Unified-Node

The Automate deployment requires a license. After installation, a 7-day grace period is available to license the product.

Since license processing is only scheduled once an hour, first run voss check-license from the primary application node CLI if you wish to license immediately.

  1. Obtain the required license token from VOSS.

  2. Apply the license:

    • If applying a license via the GUI, follow the steps indicated in the Product License Management section of the Core Feature Guide.

    • If applying a license through the CLI, follow the steps indicated in Product Licensing in the Platform Guide.

Mount the Insights disk#

Unified-Node-Cluster Modular-Node-Cluster Single-Unified-Node

  1. On each database/unified node, assign the insights-voss-sync:database mount point to the drive added for the Insights database prior to upgrade.

    For example, if drives list shows the added disk as …

    Unused disks:
    sde
    

    Then run the following command on each database/unified node where the drive has been added:

    drives add sde insights-voss-sync:database

    Sample output:

    $ drives add sde insights-voss-sync:database
    Configuration setting "devices/scan_lvs" unknown.
    Configuration setting "devices/allow_mixed_block_sizes" unknown.
    WARNING: Failed to connect to lvmetad. Falling back to device scanning.
    71ad98e0-7622-49ad-9fg9-db04055e82bc
    Application insights-voss-sync processes stopped.
    Migrating data to new drive - this can take several minutes
    Data migration complete - reassigning drive
    Checking that /dev/sde1 is mounted
    Checking that /dev/dm-0 is mounted
    /opt/platform/apps/mongodb/dbroot
    Checking that /dev/sdc1 is mounted
    /backups
    
    Application services:firewall processes stopped.
    Reconfiguring applications...
    Application insights-voss-sync processes started.