Place the system in maintenance mode and suspend any scheduled transactions. On an
application node of the system, run the following command:
cluster maintenance-mode start
In-progress scheduled transactions are allowed to complete; alternatively, cancel any
data sync transactions that are in progress from the GUI.
See System Maintenance Mode in the Platform Guide.
This step checks whether a CSR private key exists but no associated signed certificate is
available. This step is required ONLY if your own private certificate and generated SAN
certificates are required and the web cert gen_csr command was run. For details,
see Web Certificate Setup Options in the Platform Guide.
The steps below check whether a CSR private key exists without an associated signed
certificate.
Request VOSS support to run the following command (displayed for information only):
if [ -d /opt/platform/apps/nginx/config/csr/ ] ;
then
for LST in /opt/platform/apps/nginx/config/csr/*;
do
openssl x509 -in $LST -text -noout >/dev/null 2>&1 && SIGNED="$LST";
done;
echo $SIGNED;
else
echo "No further action required";
fi
If the echo $SIGNED command output is blank, back up the csr/ directory before
continuing.
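A minimal sketch of such a backup (assuming shell access, typically via VOSS Support,
and a hypothetical timestamped target directory under the home directory):
cp -rp /opt/platform/apps/nginx/config/csr/ ~/csr-backup-$(date +%Y%m%d)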
Before starting the upgrade, ensure that the hardware version of each of your virtual machines (VMs)
is at least version 11, compatible with ESXi 6.0 and up, and that your host CPU supports AVX (Advanced Vector Extensions).
The cluster check command in the Automate pre-upgrade steps checks for AVX support. To ensure that AVX
support is added to the VMs, you’ll need to upgrade the compatibility of the VM in vCenter.
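To confirm AVX support manually from a Linux shell on the VM (a generic check, not a
VOSS-specific command), inspect the CPU flags directly:
grep -o -w -m1 avx /proc/cpuinfo
The command prints avx when the virtual CPU exposes the instruction set; empty output
means AVX is not available and the VM compatibility must be upgraded first.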
Steps
Status
Mount the upgrade ISO: system mount
Install the new version of the cluster check command: app install check_cluster
Inspect the output of this command for warnings: cluster check
You can also use cluster check verbose for more detail, for example, to check
that AVX is enabled.
Review and resolve any warnings or errors before proceeding with the upgrade.
Contact VOSS Support for assistance, if required.
For troubleshooting and resolutions, also see the Health Checks for Cluster
Installations Guide and the Platform Guide.
If any of the paths below are over 80% full, a clean-up is required, for example
to avoid the risk of logs filling up during the upgrade (a quick way to check usage is
shown after this list):
/ - Contact VOSS Support if over 80%
/var/log - Run log purge
/opt/platform - Remove any unnecessary files from /media directory
/tmp - Reboot
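As a rough way to check these thresholds (assuming shell-level access, typically via
VOSS Support, since the standard df utility may not be exposed on the platform CLI):
df -h / /var/log /opt/platform /tmp
The Use% column shows how full each filesystem is; anything above 80% calls for a
clean-up before the upgrade.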
On the primary application node, verify that there are no pending security updates on
any of the nodes.
If you run cluster status after installing the new version of the cluster check command,
any error message regarding a failed command can be ignored. This error message will
not show after the upgrade.
Obtain a suitable restore point as part of the rollback procedure (as per the
guidelines for the infrastructure on which the VOSS Automate platform is deployed).
Optionally, if a backup is also required, use the following commands on the primary
database node:
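The backup commands themselves are not reproduced here; see the backup documentation in
the Platform Guide. As a hedged sketch, assuming the platform's backup commands follow
the backup create <location> form and that a location named localbackup has already been
defined:
backup create localbackup
backup list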
It is recommended that the upgrade steps are run in a terminal opened with the screen command.
By default, the cluster upgrade is carried out in parallel on all nodes and without any
backup in order to provide a fast upgrade.
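For example, a named screen session makes it easy to reattach if the SSH connection
drops mid-upgrade (standard screen usage, not VOSS-specific):
screen -S upgrade
screen -r upgrade
Start the session with the first command and run the upgrade steps inside it; the second
command reattaches to the session after a disconnect.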
Note
For systems upgrading to 24.2 from 21.4.0 - 21.4-PB5, the VOSS platform maintenance mode starts
automatically when running cluster upgrade. This prevents any new occurrences of scheduled
transactions, including the 24.2 database syncs associated with insights sync. For details, see
Insights Analytics in the Platform Guide.
For details on the VOSS platform maintenance mode, see Maintenance Mode in the Platform Guide.
Steps
Status
Verify that the ISO has been uploaded to the media/ directory on each node.
This speeds up the upgrade time.
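For example, assuming the nodes accept SFTP as the platform user, the ISO can be
uploaded to and verified in the media/ directory on each node (the node address and ISO
filename are placeholders):
sftp platform@<node_ip>
sftp> cd media
sftp> put <upgrade_iso_file>
sftp> ls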
On the primary database node, run the following commands:
screen
cluster upgrade media/<upgrade_iso_file>
Ctrl + a, then \ to close screen.
Log in on the primary database node, then run:
cluster run database app status
If the report shows insights-voss-sync:realtime stopped on some database nodes,
contact VOSS Support for assistance to perform the following on the primary
database node (displayed for information only):
Run /opt/platform/mags/insights-voss-sync-mag-script install database
The output should be: Configured Postgres secrets
Verify that the database nodes now all have the correct mongo info:
Check for required security updates. On the primary application node, run
cluster run all security check
If security updates are required on any nodes, run the following on the
primary application node: cluster run all security update
If upgrading a Cloud deployment (Microsoft Azure or AWS), run cluster check.
Note
If the grub-pc: package in an undesired state error displays at each node,
contact VOSS Support for assistance. Support runs the following command on
each node (displayed for informational purposes only):
dpkg --configure -a
Follow the prompts that display in the text window:
At GRUB install devices, do not select any device. Press <Tab>
to highlight <Ok>, and then press <Enter>.
At Continuing without installing GRUB?, press <Yes>.
Run cluster check again, and verify the error no longer displays.
If the system does not automatically reboot and you need to reboot manually:
Run cluster run notme system reboot. You can ignore the following node
messages: <nodename> failed with timeout.
Run system reboot. This takes some time because all services are stopped.
Verify cluster status. On the primary node, run cluster check.
If any errors display, run cluster run all diag health for details that may
help with troubleshooting.
To remove a mount directory (media/<iso_filebasename>) that may have remained
on nodes, for example after an upgrade, run the following on the primary
database node: cluster run all app cleanup
If the upgrade succeeds, type exit in the terminal to close the
screen session.
If there are errors, keep the screen terminal open for troubleshooting
and contact VOSS Support.
The template install automatically restarts necessary applications. If a
cluster is detected, the installation propagates changes throughout the
cluster.
Steps
Status
Review the output from the app template command and confirm that the
upgrade message displays:
This is followed by updated product and version details:
Deployment summary of UPDATED template solution
(i.e. current values after installation):
-----------------------------------------------
Product: [PRODUCT]
Version: [UPDATED PRODUCT RELEASE]
Iteration-version: [UPDATED ITERATION]
Platform-version: [UPDATED PLATFORM VERSION]
If no errors are indicated, create a restore point.
As part of the rollback procedure, ensure that a suitable restore point is
obtained prior to the start of the activity, as per the guidelines for the
infrastructure on which the VOSS Automate platform is deployed.
For unsupported upgrade paths, the install script stops with the message:
Upgrade failed due to unsupported upgrade path.
Please log in as sysadmin and see Transaction logs for more detail.
You can roll back as per the guidelines for the infrastructure on which the
VOSS Automate platform is deployed.
If there are errors for another reason, the install script stops with a failure
message listing the problem. Contact VOSS Support.
Steps
Status
On the primary application node, run the following command to verify that the
extra_functions have the same checksum across the cluster:
For post-upgrade migrations, run the following command on a single
application node of a cluster:
voss post-upgrade-migrations
Data migrations that are not critical to system operation can take significant
time to execute at scale. They are therefore performed after the primary upgrade,
allowing the migration to proceed while the system is in use and thereby limiting
the upgrade window.
View transaction progress. A transaction is queued on VOSS Automate and its
progress displays as it executes.
On the primary database node, check cluster status and health:
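For example, using the commands referenced elsewhere in this procedure:
cluster status
cluster check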
Inspect the output of the command line interface for upgrade errors, for
example, File import failed! or Failed to execute command.
On the primary application node, use the log view command to view any log
files indicated in the error messages. For example, run this command if the
following message displays:
For more information refer to the execution log file with
``log view platform/execute.log``
If required, send all the install log files in the install directory to an
SFTP server:
log send sftp://x.x.x.x install
Log in on the Admin Portal as system level admin, then go to Administration
Tools > Transaction, and inspect the transaction list for errors.
On the CLI, when upgrading to 24.2 from 21.4 or 21.4-PBx, run the following
command to end the VOSS maintenance mode:
cluster maintenance-mode stop
Scheduled data sync transactions can now resume, including insights sync
operations added in 24.1. For details on the VOSS platform maintenance mode, see
Maintenance Mode in the Platform Guide.
Restore schedules:
Schedules can easily be activated and deactivated from the Bulk Schedule
Activation / Deactivation menu available on the MVS-DataSync-Dashboard.
If you’re upgrading from [21.4, 21.4-PB1, 21.4-PB2, 21.4-PB3]:
Re-enable scheduled imports if any were disabled prior to the upgrade. There are
two ways to do this, either individually for each job, or mass modify:
Individually for each job:
Log in on the Admin Portal as a high level admin (above Provider level).
Select the Scheduling menu to view scheduled jobs.
Click each scheduled job, and on the Base tab, select the
Activate checkbox.
Mass modify:
Modify the exported sheet of schedules to activate scheduled syncs.
Import the sheet.
If you don’t want schedules that overlap the maintenance window to execute, but
only to run afterward, select Skip next execution.
For schedules enabled on the CLI, enable any disabled schedules that were
overlapping the maintenance window:
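A sketch only, assuming the platform CLI's schedule commands and a hypothetical
schedule name:
schedule list
schedule enable <schedule_name>
The first command shows the current schedules and their state; the second re-enables a
schedule that was disabled for the maintenance window.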
The Automate deployment requires a license. After installation, a 7-day grace period is available to
license the product.
Since license processing is only scheduled every hour, if you wish to license immediately,
first run voss check-license from the primary application node CLI.
Steps
Status
Obtain the required license token from VOSS.
Apply the license:
If applying a license via the GUI, follow the steps indicated in the Product
License Management section of the Core Feature Guide.
If applying a license through the CLI, follow the steps indicated in Product
Licensing in the Platform Guide.
On each database node, assign the insights-voss-sync:database mount point to
the drive added for the Insights database prior to upgrade.
For example, if drives list shows the added disk as …
Unused disks:
sde
Then run the following command on each unified node where the drive has been
added:
drives add sde insights-voss-sync:database
Sample output:
$ drives add sde insights-voss-sync:database
Configuration setting "devices/scan_lvs" unknown.
Configuration setting "devices/allow_mixed_block_sizes" unknown.
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
71ad98e0-7622-49ad-9fg9-db04055e82bc
Application insights-voss-sync processes stopped.
Migrating data to new drive - this can take several minutes
Data migration complete - reassigning drive
Checking that /dev/sde1 is mounted
Checking that /dev/dm-0 is mounted
/opt/platform/apps/mongodb/dbroot
Checking that /dev/sdc1 is mounted
/backups
Application services:firewall processes stopped.
Reconfiguring applications...
Application insights-voss-sync processes started.
Note
The following message can be ignored on release 24.1: Warning: Failed to
connect to lvmetad. Falling back to device scanning.
With release 24.2, the initial management of dashboards on the GUI and use of VOSS
Wingman is available after the first scheduled delta-sync of data - which is
scheduled to run every 30 minutes. No manual sync is therefore required after
upgrade.
For details, see the Insights Analytics section of the Platform Guide.