Upgrading from VOSS-4-UC 18.3.1 to 19.3.2¶
Note
While system upgrade takes approximately two hours at a single site, this may vary in accordance with your topology, number of devices and subscribers. Adjust your upgrade maintenance window to allow for your configuration.
You can follow the progress on the GUI transaction list.
The screen command can be used - see: Notes on the screen command
The upgrade process is in two stages: first an ISO upgrade from 18.3.1 to 19.2.1, then a delta bundle upgrade from 19.2.1 to 19.3.2.
Important
The process in this document applies only to the full upgrade from 18.3.1 to 19.3.2; it cannot be used for upgrades to intermediate versions. Refer to the upgrade documents accompanying individual releases for intermediate version upgrades.
Note:
- For each stage, the required upgrade files need to be downloaded.
- A database filesystem conversion is required during the second, delta bundle upgrade.
18.3.1 to 19.2.1 ISO Upgrade¶
Upgrade a Multinode Environment with the ISO and Template
Note
- When upgrading from CUCDM 11.5.3 Patch Bundle 2 or VOSS-4-UC 18.1 Patch Bundle 2 and earlier, re-import specified CUC models according to your current version. Refer to the final upgrade procedure step.
Download Files and Check¶
Description and Steps | Notes and Status |
---|---|
VOSS SFTP server: Download the upgrade files. Two transfer options are available: either SFTP or SCP. Verify that the SHA256 checksums of the downloaded files match the original values below. |
- ISO SHA256:
b2d1c5df04d0791d0de3d8d4aa2b3d1d9b62ea1d1df3bd61967a0dfa7836f985
- Template SHA256:
249fd0b1d797fdbebba03c136211ae1f7a79ea309f3049546b0152698586f9fa
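The published digests can be checked locally before proceeding. This is a minimal sketch assuming a Linux host with coreutils; `ISO_FILE` is a placeholder path, not the actual artifact name:

```shell
# Hedged sketch: compare a downloaded file's SHA256 digest against the published value.
# ISO_FILE is a placeholder (assumption) - point it at the ISO you actually downloaded.
ISO_FILE="${ISO_FILE:-/tmp/example.iso}"
EXPECTED="b2d1c5df04d0791d0de3d8d4aa2b3d1d9b62ea1d1df3bd61967a0dfa7836f985"
ACTUAL=$(sha256sum "$ISO_FILE" 2>/dev/null | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH - do not proceed with the upgrade"
fi
```

Repeat the same comparison for the template file against its digest above.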
Schedules, Transactions and Version Check¶
Description and Steps | Notes and Status |
---|---|
Turn off any scheduled imports to prevent syncs triggering part way through the upgrade. Two options are available: Individually for each job:
Mass modify:
|
|
Check for running imports. Either wait for them to complete or cancel them:
|
|
Record the current version information. This is required for upgrade troubleshooting.
Release 18.1.75
Platform 11.5.3-1521045619
Version 116.0
Build no 1558
|
Pre-Upgrade, Security and Health Steps¶
Description and Steps | Notes and Status |
---|---|
Verify that the primary node is the active primary node at the time of upgrade: run database config and ensure that the node on which the installation will be initiated shows stateStr: PRIMARY and has the highest priority. Example output: <ip address>:27020:
priority: <number>
stateStr: PRIMARY
storageEngine: WiredTiger
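If the database config output has been saved to a file, the PRIMARY check can be scripted. A sketch under the assumption that the output format matches the example above:

```shell
# Hedged sketch: confirm the node reports stateStr: PRIMARY before starting the upgrade.
# The sample output below is copied from the example in this section.
cat > /tmp/dbconfig.txt <<'EOF'
<ip address>:27020:
    priority: 2
    stateStr: PRIMARY
    storageEngine: WiredTiger
EOF
if grep -q 'stateStr: PRIMARY' /tmp/dbconfig.txt; then
  echo "active primary confirmed"
else
  echo "NOT primary - do not start the upgrade from this node"
fi
```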
Validate the system health. On the Primary Unified Node, verify cluster connectivity and health:
If any of the paths below are over 80% full, a clean-up is needed, for example to avoid the risk of full logs during the upgrade. Clean-up steps are indicated next to the paths: / (call support if over 80%)
/var/log (run: log purge)
/opt/platform (remove any unnecessary files from /media directory)
/tmp (reboot)
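The disk checks above can be scripted with standard tools. A sketch assuming a POSIX df; the 80% threshold and the paths are taken from the checklist above:

```shell
# Hedged sketch: warn if any monitored path is more than 80% full.
for p in / /var/log /opt/platform /tmp; do
  usage=$(df -P "$p" 2>/dev/null | awk 'NR==2 {gsub(/%/,""); print $5}')
  if [ -n "$usage" ] && [ "$usage" -gt 80 ]; then
    echo "WARNING: $p is ${usage}% full - clean up before upgrading"
  fi
done
```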
On the Primary Unified Node, verify there are no pending Security Updates on any of the nodes:
|
|
Shut down the servers, take VMware snapshots, and then start all servers. Consider the following:
Log in to VMware and take snapshots of all unified nodes and all web proxies. After the snapshots, restart the servers:
Optional: If a backup is required in addition to the snapshot, use the backup add <location-name> and backup create <location-name> commands. For details, refer to the Platform Guide. |
Description and Steps | Notes and Status |
---|---|
Before upgrading, check all services, nodes and weights for the cluster: Make sure no services are stopped/broken. The message ‘suspended waiting for mongo’ is normal on the fresh unified nodes.
Make sure all application nodes show 3 or 5 nodes.
Check that the database weights are set. It is critical to ensure the weights are set before upgrading a cluster.
Example output: 172.29.21.240:
weight: 80
172.29.21.241:
weight: 70
172.29.21.243:
weight: 60
172.29.21.244:
weight: 50
Verify the primary node in the primary site and ensure no nodes are in the ‘recovering’ state (stateStr is not RECOVERING).
|
Upgrade¶
Note
By default, the cluster upgrade is carried out in parallel on all nodes and without any backup, in order to provide a fast upgrade. For backwards compatibility, this default is equivalent to running, for example, cluster upgrade <upgrade_iso_file> backup none fast.
Use the cluster upgrade <upgrade_iso_file> serial command if the VMware host is under load.
Description and Steps | Notes and Status |
---|---|
From VOSS-4-UC 18.1 or CUCDM 11.5.3 onwards, it is recommended that the upgrade steps are run in a terminal opened with the screen command and to use a log file. See: Notes on the screen command. On the primary unified node:
|
Post-Upgrade, Security and Health Steps¶
Description and Steps | Notes and Status |
---|---|
On the primary unified node, verify the cluster status:
|
|
If the upgrade is successful, the screen session can be closed by typing exit in the screen terminal. If errors occurred, keep the screen terminal open for troubleshooting purposes and contact VOSS support. | |
Complete all the security updates.
Note: If the system reboots, do not carry out the next manual reboot step. When upgrading from pre-19.1.1, an automatic reboot should be expected. Manual reboot only if security updates needed to be applied:
If node messages:
Since all services will be stopped, this takes some time. |
Database Schema Upgrade¶
Description and Steps | Notes and Status |
---|---|
From VOSS-4-UC 18.1 or CUCDM 11.5.3 onwards, it is recommended that the upgrade steps are run in a terminal opened with the screen command. On the primary unified node:
Check cluster status
|
Template Upgrade¶
Description and Steps | Notes and Status |
---|---|
From VOSS-4-UC 18.1 or CUCDM 11.5.3 onwards, it is recommended that the upgrade steps are run in a terminal opened with the screen command. On the primary unified node:
|
The following message appears:
Running the DB-query to find the current environment's
existing solution deployment config...
- Python functions are deployed
- System artifacts are imported.
The template upgrade automatically detects the deployment mode: “Enterprise”, “Provider with HCM-F” or “Provider without HCM-F”. A message displays according to the selected deployment type. Check for one of the messages below:
Importing EnterpriseOverlay.json
Importing ProviderOverlay_Hcmf.json ...
Importing ProviderOverlay_Decoupled.json ...
The template install automatically restarts necessary applications. If a cluster is detected, the installation propagates changes throughout the cluster.
Description and Steps | Notes and Status |
---|---|
Review the output from the app template command and confirm that the upgrade message appears: |
Deployment summary of PREVIOUS template solution
(i.e. BEFORE upgrade):
-------------------------------------------------
Product: [PRODUCT]
Version: [PREVIOUS PRODUCT RELEASE]
Iteration-version: [PREVIOUS ITERATION]
Platform-version: [PREVIOUS PLATFORM VERSION]
This is followed by updated product and version details:
Deployment summary of UPDATED template solution
(i.e. current values after installation):
-----------------------------------------------
Product: [PRODUCT]
Version: [UPDATED PRODUCT RELEASE]
Iteration-version: [UPDATED ITERATION]
Platform-version: [UPDATED PLATFORM VERSION]
Description and Steps | Notes and Status |
---|---|
|
|
For an unsupported upgrade path, the install script stops with the message: Upgrade failed due to unsupported upgrade path.
Please log in as sysadmin
and see Transaction logs for more detail.
You can restore to the backup or revert to the VM snapshot made before the upgrade. |
|
If there are errors for another reason, the install script stops with a failure message listing the problem. Contact VOSS support. | |
Verify the
|
|
Post upgrade migrations: On a single node of a cluster, run:
Data migrations that are not critical to system operation can have significant execution time at scale. These need to be performed after the primary upgrade, allowing the migration to proceed whilst the system is in use - thereby limiting upgrade windows. A transaction is queued on VOSS-4-UC and its progress is displayed as it executes. |
Description and Steps | Notes and Status |
---|---|
Check cluster status and health
|
Post Template Upgrade Tasks¶
Description and Steps | Notes and Status |
---|---|
Verify the upgrade: Log in on the GUI and check the information contained in the About > Extended Version menu. Confirm that versions have upgraded.
|
|
|
Release Specific Updates¶
Description and Steps | Notes and Status |
---|---|
When upgrading from CUCDM 11.5.3 Patch Bundle 2 or VOSS-4-UC 18.1 Patch Bundle 2 and earlier, re-import the following from all CUCM devices, since this upgrade deleted obsolete CUC timezone codes from the VOSS-4-UC database:
Note: This is a once off data migration step. If this was performed previously when upgrading to 19.1.x, then it does not have to be repeated. |
|
After upgrading, obtain and install the following patch according to its accompanying MOP file:
Note: This is a once off data migration step. If this was performed previously when upgrading to 19.1.x, then it does not have to be repeated. |
|
Re-import the following from all CUCM devices:
For steps to create a custom data sync, refer to the chapter on Data Sync in the Core Feature Guide. Note: This is a once off data migration step. If this was performed previously when upgrading to 19.1.x, then it does not have to be repeated. |
|
On VOSS-4-UC 18.1, an enhancement populates the E164 Inventory field on Directory Numbers (DNs) when associating E164 numbers to DNs. To populate the E164 field for existing associations, run the workflow: log in, select the desired hierarchy, then find and Execute the workflow named
|
Log Files and Error Checks¶
Description and Steps | Notes and Status |
---|---|
Inspect the output of the command line interface for upgrade errors. Use the log view command to view any log files indicated in the error messages; for example, run the command if the following message appears: For more information refer to the execution log file with
'log view platform/execute.log'
If required, send all the install log files in the
|
|
Log in on the GUI as system level administrator, go to Administration Tools > Transaction and inspect the transactions list for errors. |
19.2.1 to 19.3.2 Delta Bundle Upgrade¶
Upgrade a Multinode Environment with the Delta Bundle
Download Files and Check¶
- Bundle SHA256:
01c06d4b6c3abf9f6fc45339aa4fbd59431bb38a1a37368cc1f2132d50b295c3
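The bundle digest can be verified locally before the upgrade. A minimal sketch; `BUNDLE_FILE` is a placeholder path (assumption), not the actual bundle filename:

```shell
# Hedged sketch: compare the downloaded bundle's SHA256 digest against the published value.
BUNDLE_FILE="${BUNDLE_FILE:-/tmp/example.bundle}"   # placeholder path (assumption)
EXPECTED="01c06d4b6c3abf9f6fc45339aa4fbd59431bb38a1a37368cc1f2132d50b295c3"
ACTUAL=$(sha256sum "$BUNDLE_FILE" 2>/dev/null | awk '{print $1}')
[ "$ACTUAL" = "$EXPECTED" ] && echo "bundle checksum OK" || echo "bundle checksum MISMATCH"
```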
Version Check¶
Description and Steps | Notes and Status |
---|---|
Record the current version information. This is required for upgrade troubleshooting.
|
Pre-Upgrade, Security and Health Steps¶
Description and Steps | Notes and Status |
---|---|
Verify that the primary node is the active primary node at the time of upgrade: run database config and ensure that the node on which the installation will be initiated shows stateStr: PRIMARY and has the highest priority. Example output: <ip address>:27020:
priority: <number>
stateStr: PRIMARY
storageEngine: WiredTiger
Validate the system health. On the Primary Unified Node, verify cluster connectivity:
On each node verify network connectivity, disk status and NTP.
If any of the paths below are over 80% full, a clean-up is needed, for example to avoid the risk of full logs during the upgrade. Clean-up steps are indicated next to the paths: / (call support if over 80%)
/var/log (run: log purge)
/opt/platform (remove any unnecessary files from /media directory)
/tmp (reboot)
On the Primary Unified Node, verify there are no pending Security Updates on any of the nodes:
|
|
Shut down the servers, take VMware snapshots, and then power on all servers, starting with the primary. Consider the following:
Log in to VMware and take snapshots of all unified nodes and all web proxies. After the snapshots, restart the servers:
Optional: If a backup is required in addition to the snapshot, use the backup add <location-name> and backup create <location-name> commands. For details, refer to the Platform Guide. |
Description and Steps | Notes and Status |
---|---|
Before upgrading, check all services, nodes and weights for the cluster: Make sure no services are stopped/broken. The message ‘suspended waiting for mongo’ is normal on the fresh unified nodes.
Make sure all application nodes show 3 or 5 nodes.
Check that the database weights are set. It is critical to ensure the weights are set before upgrading a cluster.
Example output: 172.29.21.240:
weight: 80
172.29.21.241:
weight: 70
172.29.21.243:
weight: 60
172.29.21.244:
weight: 50
Verify the primary node in the primary site and ensure no nodes are in the ‘recovering’ state (stateStr is not RECOVERING).
|
Upgrade¶
Description and Steps | Notes and Status |
---|---|
On the primary unified node, it is recommended to use the screen command (see: Notes on the screen command). Run (optionally with the command parameters below):
|
Post-Upgrade, Security and Health Steps¶
Description and Steps | Notes and Status |
---|---|
On the primary unified node, verify the cluster status:
|
|
If the upgrade is successful, the screen session can be closed by typing exit in the screen terminal. If errors occurred, keep the screen terminal open for troubleshooting purposes and contact VOSS support. | |
Check for needed security updates. On the primary node, run:
If one or more updates are required for any node, run on the primary Unified node:
Note: if the system reboots, do not carry out the next manual reboot step. Manual reboot only if security updates needed to be applied:
If node messages:
Since all services will be stopped, this takes some time. |
Database Filesystem Conversion¶
Important
This step is to be carried out only if you have not converted the file system before.
To check whether this step is still required:
- Run database config and ensure that the storage engine for all database nodes shows as storageEngine: WiredTiger.
- Run drives list and ensure that the LVM storage shows for all converted database nodes under Volume Groups.
If both checks pass, the filesystem has already been converted and this step can be skipped.
The database convert_drive command provides parameters that allow for a flexible upgrade schedule in order to limit system downtime.
When the database convert_drive command is run, the voss-deviceapi
service will be stopped first and started after completion. The command
should therefore be run during a maintenance window while there are no
running transactions.
The procedure and commands in this step depend on:
- your topology
- latency between data centers
- upgrade maintenance windows - Window 1 to Window 3 represent chosen maintenance windows.
For the Database Filesystem Conversion step below, first inspect the table below for guidance on the commands to run according to your configuration and preferences.
Run all commands on the primary unified node:
Ensure the states of the database nodes are not DOWN, otherwise the command will fail:
- database config (stateStr is not DOWN)
Ensure database weights are set and there is one maximum weight, otherwise the command will fail:
- database weight list (one weight value is the maximum)
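The "one maximum weight" prerequisite can be checked mechanically against a saved copy of the database weight list output. A sketch reusing the sample weight values from the pre-upgrade section:

```shell
# Hedged sketch: check that exactly one node holds the maximum database weight.
# Sample values below are copied from the example earlier in this document.
cat > /tmp/weights.txt <<'EOF'
172.29.21.240:
    weight: 80
172.29.21.241:
    weight: 70
172.29.21.243:
    weight: 60
EOF
max=$(awk '/weight:/ {print $2}' /tmp/weights.txt | sort -n | tail -1)
count=$(awk '/weight:/ {print $2}' /tmp/weights.txt | grep -cx "$max")
[ "$count" -eq 1 ] && echo "OK: one maximum weight ($max)" || echo "FAIL: $count nodes share weight $max"
```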
For 2 and 3 maintenance windows: after the upgrade (prior to Windows 2 and 3), only nodes with converted drives will generate valid backups.
For example, if the primary drive is converted, backups from the primary node can be used to restore the database. If there is a database failover to the highest weight secondary node that was not converted, it will not be possible for backups to be generated on that secondary node until the drive is converted.
Note
The database convert_drive command can also be run on a single node only by running the following command and parameter from the specific node: database convert_drive standalone. This option can for example be used for performance reasons in cases where a node is in a remote location.
Topology | Window 1 | Window 2 | Window 3 | Commands (DC = valid data center name) | Description |
---|---|---|---|---|---|
multinode | Y | | | database convert_drive secondary all then database convert_drive primary | Recommended for a system with latency < 10ms. |
multinode | Y | Y | | Window 1: database convert_drive primary. Window 2: database convert_drive secondary all | Can be used for a system with latency < or > 10ms. Allows for smaller maintenance windows. Cluster is not available during maintenance. |
multinode | Y | Y | Y | Window 1: database convert_drive primary. Window 2: database convert_drive secondary <first DC>. Window 3: database convert_drive secondary <second DC> | Can be used for a system with latency > 10ms. Allows for smaller maintenance windows. Cluster is not available during maintenance. |
Description and Steps | Notes and Status |
---|---|
Database Filesystem Conversion step: Shut down all the nodes. Since all services will be stopped, this takes some time.
Create a VMware snapshot of all the unified servers so that the system can easily be reverted in the case of a conversion error. Boot all the systems in VMware.
|
Post Template Upgrade Tasks¶
Description and Steps | Notes and Status |
---|---|
Verify the upgrade: Log in on the GUI and check the information contained in the About > Extended Version menu. Confirm that versions have upgraded:
If your web browser cannot open the user interface, clear your browser cache before trying to open the interface again. |
Restore Schedules¶
Description and Steps | Notes and Status |
---|---|
Re-enable scheduled imports if any were disabled prior to the upgrade. Two options are available: Individually for each job:
Mass modify:
|
Phone Based Registration Feature Installation (Optional)¶
Description and Steps | Notes and Status |
---|---|
If the phone based registration feature is required, download the latest Phone Based Registration install script to all unified nodes before continuing with the installation steps below, so that the latest version is installed. The required files can be located on the secure downloads server under:
Refer to the accompanying document MOP-PBR-19.3.2.pdf. Note that a full service restart is initiated on initial startup of the PBR web service on each VOSS-4-UC unified node. Verify the downloaded script:
with the accompanying SHA256 file:
The PBR web service is assigned the same web weights as the other web services. Example output: platform@VOSS-WP-1:~$ web weight list
Default service weights
upstreamservers:
phonebasedreg:
phoneservices:
192.168.100.10:443: 0
192.168.100.3:443: 1
192.168.100.4:443: 1
192.168.100.5:443: 1
192.168.100.6:443: 1
192.168.100.9:443: 0
voss-deviceapi:
selfservice:
192.168.100.10:443: 0
192.168.100.3:443: 1
192.168.100.4:443: 1
192.168.100.5:443: 1
192.168.100.6:443: 1
192.168.100.9:443: 0
voss-deviceapi:
192.168.100.10:443: 0
192.168.100.3:443: 1
192.168.100.4:443: 1
192.168.100.5:443: 1
192.168.100.6:443: 1
192.168.100.9:443: 0
|
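As a sanity check on a listing like the one above, a saved copy of the web weight list output can be scanned to confirm that at least one backend carries a nonzero weight. A sketch with sample addresses reused from above:

```shell
# Hedged sketch: confirm at least one backend has a nonzero weight for a service.
# The sample snippet below reuses addresses from the example output in this section.
cat > /tmp/webweights.txt <<'EOF'
192.168.100.10:443: 0
192.168.100.3:443: 1
192.168.100.4:443: 1
EOF
nonzero=$(awk -F': ' '$NF + 0 > 0' /tmp/webweights.txt | wc -l)
[ "$nonzero" -ge 1 ] && echo "at least one active backend ($nonzero)" || echo "FAIL: no active backends"
```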
Log Files and Error Checks¶
Description and Steps | Notes and Status |
---|---|
Inspect the output of the command line interface for upgrade errors. Use the log view command to view any log files indicated in the error messages, for example, run the command if the following message appears: For more information refer to the execution log file with
'log view platform/execute.log'
If required, send all the install log files in the
|
|
Log in on the GUI as system level administrator, go to Administration Tools > Transaction and inspect the transactions list for errors. |
Notes on the screen command¶
From VOSS-4-UC 18.1 or CUCDM 11.5.3 onwards, the standard screen command should be used where indicated, and the reconnect parameter is available if needed:
- screen - start a new session
- screen -ls - show sessions already available
- screen -r [screen PID] - reconnect to a disconnected session
We recommend using the screen command to avoid failures if the connection is interrupted whilst running the command. If the connection is interrupted whilst running the command in screen, the session can be retrieved by first listing the PIDs of the sessions currently running in screen with screen -ls, and then reconnecting to the session using screen -r [screen PID].
The version of screen used in VOSS-4-UC also supports the creation of a log file. If long-running commands will be run, the log file captures screen console output up to the session timeout. A message shows:
timed out waiting for input: auto-logout
To create a screen log file:
- Run screen and wait for screen to open.
- Press <Ctrl>-a then : (colon). This will enter screen command mode at the bottom of the console.
- Create your screen logfile in the media/ directory: in screen command mode, type logfile media/<screen-logfilename>.log and press <Enter>.
- Press <Ctrl>-a and then H to start writing to the log file
- Run your commands.
If the screen session times out, you can obtain console output from the log file, for example:
$ sftp platform@<host>:media/<screen-logfilename>.log