Installing Azure¶
Azure for VOSS Automate is available as a Docker image that is installed by means of terraform scripts.
During the run, terraform saves the state file (terraform.tfstate) to the folder location created in Step 3 of the VOSS Automate Azure Installation Procedure.
Back up the terraform.tfstate file; it is required if you expand the installation in the future.
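Losing the state file means terraform can no longer track the deployed resources, so a timestamped copy after each run is a cheap safeguard. A minimal sketch, assuming a POSIX shell; the backup_state helper and the example paths are illustrative, not part of the product:

```shell
# Sketch: keep a timestamped backup of the terraform state file.
backup_state() {
    # $1 = directory containing terraform.tfstate, $2 = backup directory
    mkdir -p "$2"
    # Timestamp suffix so earlier backups are never overwritten.
    cp "$1/terraform.tfstate" "$2/terraform.tfstate.$(date +%Y%m%d%H%M%S)"
}

# Example, using the folder created during installation:
# backup_state /home/azure/plan /home/azure/plan-backups
```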
Hardware Requirements¶
For details on Standard and Modular Topologies, refer to the VOSS Automate Architecture and Hardware Specification Guide and Platform Guide.
Unified or Database Nodes:
VM size: Standard E4ds_v4
CPU: 4
RAM: 32 GB
OS disk: 40 GB, Premium_LRS
Application disk: 50 GB, Premium_LRS
Backup disk: 55 GB, Standard_LRS
DB disk: 250 GB, Premium_LRS
Total disk size: 395 GB
Application Nodes:
VM size: Standard E4ds_v4
CPU: 4
RAM: 32 GB
OS disk: 40 GB, Premium_LRS
Application disk: 50 GB, Premium_LRS
Total disk size: 90 GB
Network:
Address space: 10.0.0.0/16
Subnet prefix: 10.0.0.0/24
Web Proxies:
Web proxies are replaced by the Azure Load Balancer.
Network Communications External to the Cluster¶
The following details are all based on the default settings. These can vary depending on the application setup and network design (such as NAT) of the solution, and may need to be adjusted accordingly. Where a value is noted as dependent, it is fully dependent on the configuration and has no default.
These communications are all related to communications with devices external to the cluster.
Outbound Communications to Devices from the Application/Unified nodes:
Communication                              | Protocol | Port
Cisco Unified Communications Manager (UCM) | HTTPS    | TCP 8443
Cisco Unity Connection (CUXN)              | HTTPS    | TCP 443
Webex                                      | HTTPS    | TCP 443
LDAP directory                             | LDAP     | TCP/UDP 389 and/or 636 (TLS/SSL)
Cisco HCM-F                                | HTTPS    | TCP 8443
Unified Node to Unified node
This is relevant to the communications between the unified nodes (application and database combined). If the application and database nodes are split, then see the relevant application and database node details below. Database arbiters run on port 27030.
Communication          | Protocol | Port
Database access        | database | TCP 27020 and 27030, bi-directional
Cluster communications | HTTPS    | TCP 8443
Ubuntu and Docker Installation Procedure¶
Deploy an LTS version of Ubuntu from the Azure Portal. The instructions that follow are a guide for Ubuntu 20.04 (Focal Fossa).
https://ubuntu.com/#download
Install Docker on the newly deployed Ubuntu 20.04 VM. Docker can be installed either online from the repository or offline with the required packages downloaded.
For the online procedure - follow the instructions from step 3.
For the offline procedure - follow the instructions from step 4.
Online Docker installation
3.1. Open a terminal window.
3.2. Check if the system is up-to-date:
sudo apt update
3.3. Install Docker:
sudo apt install docker.io
3.4. Verify that Docker Engine - Community is installed correctly by running the hello-world image:
sudo docker run hello-world
3.5. Manage Docker as a non-root user. Add your user to the docker group:
sudo usermod -aG docker $USER
3.6. Reboot the system so that your group membership is re-evaluated.
3.7. Verify that you can run docker commands without sudo:
docker run hello-world
Offline Docker installation using downloaded packages
4.1. Download the latest packages from the official Docker website:
https://download.docker.com/linux/ubuntu/dists/<dist codename>/pool/stable/amd64/
For Ubuntu 20.04 (Focal Fossa): https://download.docker.com/linux/ubuntu/dists/focal/pool/stable/amd64/
The files required are:
containerd.io_<latest_version>.deb
docker-ce-cli_<latest_version>.deb
docker-ce_<latest_version>.deb
4.2. Open a terminal window.
4.3. cd to the folder with the downloaded packages.
4.4. Install the downloaded packages of Docker Engine - Community and containerd:
sudo dpkg -i *.deb
4.5. Verify that Docker Engine - Community is installed correctly by running the hello-world image:
sudo docker run hello-world
4.6. Manage Docker as a non-root user. Add your user to the docker group:
sudo usermod -aG docker $USER
4.7. Reboot the system so that your group membership is re-evaluated.
4.8. Verify that you can run docker commands without sudo:
docker run hello-world
Azure Portal Configuration¶
Register an application
Search for and select Azure Active Directory. Under Manage, select App registrations > New registration.
Create a new application secret
From App registrations in Azure AD, select your application. Select Certificates & secrets, then Client secrets > New client secret. Provide a description of the secret and a duration, then select Add. After saving, the value of the client secret is displayed. Copy this value now; you won't be able to retrieve it later. Store it where your application can retrieve it.
Assign a role to the application
In the Azure portal, search for and select Subscriptions, or select Subscriptions on the Home page. Select the subscription to assign the application to. Select Access control (IAM), then Add role assignment. Select the Contributor role. Assign access to an Azure AD user, group, or service principal. To find your application, search for its name and select it, as in step 1.
VOSS Automate Azure Installation Procedure¶
Download the VOSS Automate Docker image along with its sha256 checksum from the provided location, and verify the checksum:
sha256sum platform-azure.tgz
cat platform-azure.tgz.sha256
The two digests must match.
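The comparison of the two digests can be scripted instead of eyeballed. A minimal sketch, assuming the .sha256 file carries the hex digest as its first field; the verify_checksum helper is illustrative, not part of the product:

```shell
# Sketch: compare a computed sha256 digest against a published one.
verify_checksum() {
    # $1 = archive file, $2 = file holding the expected sha256 digest
    expected=$(awk '{print $1}' "$2")
    actual=$(sha256sum "$1" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "checksum OK"
    else
        echo "checksum MISMATCH" >&2
        return 1
    fi
}

# Example:
# verify_checksum platform-azure.tgz platform-azure.tgz.sha256
```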
Run: gunzip -c platform-azure.tgz | docker load
Example output:
77cae8ab23bf: Loading layer [======>] 5.815MB/5.815MB
f7030b11e80f: Loading layer [======>] 25.69MB/25.69MB
ebf6924cf16a: Loading layer [======>] 50.64MB/50.64MB
3cf5f2bba2c7: Loading layer [======>] 1.536kB/1.536kB
fda51b70dfb1: Loading layer [======>] 14.85kB/14.85kB
75a157f47953: Loading layer [======>] 1.897GB/1.897GB
e1dad0f87419: Loading layer [======>] 115.9MB/115.9MB
Loaded image: voss-azure:19.3.1-1583240523
Create a directory to store the state file (terraform.tfstate). For example: mkdir -p /home/azure/plan
For installations over ssh, it is recommended to use a screen session:
screen - start a new session
screen -ls - show sessions already available
screen -r [screen PID] - reconnect to a disconnected session
Run: docker run -it -v <directory created>:/app/state voss-azure:<release_number>
For example: docker run -it -v /home/azure/plan/:/app/state/ voss-azure:19.3.1-1583240523
NB: Once the install is complete, please store this directory for future use.
Enter values at the prompt for the terraform script variables:
If installing a standard, unified node topology, select: var.voss_count_app=0, var.voss_count_unified=4.
If installing a modular topology, select: var.voss_count_app=2, var.voss_count_unified=0.
For details on Standard and Modular Topologies, refer to the VOSS Automate Architecture and Hardware Specification Guide and Platform Guide.
Important
The user input is in clear text.
Variable
Description
var.azure_client_id
The Azure user ID of deployer
Azure portal: From App registrations in Azure AD, select your application.
var.azure_client_secret
The Azure password (secret) of the deployer
Azure portal: From App registrations in Azure AD, select your application, then select Certificates & secrets. You stored this value when you created it in step 2 of the Azure Portal Configuration.
var.azure_location
The location/region where the VOSS Automate system is created. Changing this forces a new resource to be created.
List of available regions - centralus, eastasia, southeastasia, eastus, eastus2, westus, westus2, northcentralus, southcentralus, westcentralus, northeurope, westeurope, japaneast, japanwest, brazilsouth, australiasoutheast, australiaeast, westindia, southindia, centralindia, canadacentral, canadaeast, uksouth, ukwest, koreacentral, koreasouth, francecentral, southafricanorth, uaenorth, australiacentral, switzerlandnorth, germanywestcentral, norwayeast, jioindiawest, australiacentral2
var.azure_resource_group_name
The name of the resource group in which to create the VOSS Automate system.
var.azure_subscription_id
The Azure subscription ID of deployer
Azure portal: Under the Azure services heading, select Subscriptions. Your Subscription IDs are listed in the second column.
var.azure_tenant_id
The Azure tenant ID of the deployer
Azure portal: From App registrations in Azure AD, select your application.
var.fault_domain_count
Set the fault domain count. The number of fault domains varies depending on which Azure region you’re using. Refer to the official Azure documentation.
var.ntp_name
NTP server to be used by VOSS Automate (e.g. ntp.ubuntu.com). Please make sure this is a publicly accessible NTP server.
var.update_domain_count
Set the update domain count. This should match the fault_domain_count.
var.voss_env
Tag for the environment (for example, QA or Production)
var.voss_password
Password to be used for VOSS Automate ‘platform’ user
var.voss_count_app
Number of VOSS Automate application nodes to deploy
var.voss_count_unified
Number of VOSS Automate unified nodes to deploy
A terraform execution plan and resources are created (example resource):
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # azurerm_availability_set.avset will be created
  + resource "azurerm_availability_set" "avset" {
      + id                           = (known after apply)
      + location                     = "southafricanorth"
      + managed                      = true
      + name                         = "vossavset"
      + platform_fault_domain_count  = 3
      + platform_update_domain_count = 3
      + resource_group_name          = "VOSS_Cloud"
      + tags                         = {
          + "environment" = "dev"
          + "systemname"  = "voss"
        }
    }
[...]
Verify the resources in the plan:
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
When the execution plan is carried out successfully, the login details for the created nodes are shown, for example:
Unified topology
azurerm_virtual_machine.voss_un[0]: Creation complete after 25m9s [id=...]

Outputs:

VM-ssh-access = tolist([
  "ssh [email protected] -p 50001",
  "ssh [email protected] -p 50002",
  "ssh [email protected] -p 50003",
  "ssh [email protected] -p 50004",
])
VM-ssh-access-App_nodes = tolist([])
VM-ssh-access-DB_nodes = tolist([])
Modular topology
azurerm_virtual_machine.voss_app[1]: Creation complete after 24m23s [id=..]

Outputs:

VM-ssh-access = tolist([])
VM-ssh-access-App_nodes = tolist([
  "ssh [email protected] -p 50011",
  "ssh [email protected] -p 50012",
])
VM-ssh-access-DB_nodes = tolist([
  "ssh [email protected] -p 50021",
  "ssh [email protected] -p 50022",
  "ssh [email protected] -p 50023",
])
On each of the newly deployed nodes, log in as the platform user, using the password selected.
9.1. For example
ssh platform@10.13.168.197 -p 50001
9.2. Run: system reboot. Alternatively you can reset the Virtual Machine from the Azure Portal. This is to ensure all services are running before proceeding with the cluster configuration.
On each of the newly deployed nodes, log in as the platform user, using the password selected.
10.1. For example
ssh platform@10.13.168.197 -p 50001
10.2. Run: cluster prepnode
10.3. Obtain the IP address. Run: network interfaces
Example output:
$ network interfaces
interfaces:
  eth0:
    gateway: 10.0.0.1
    ip: 10.0.0.4
    netmask: 255.255.255.0
Log in to the first node above again, for example: ssh platform@10.13.168.197 -p 50001.
11.1. Add the IP addresses obtained in the previous step to the cluster:
cluster add <IP1>, cluster add <IP2>, …
11.2. Add database weights to your database nodes:
Log in to the database node. For a modular deployment, use port 50021 to log in to the database node, for example: ssh [email protected] -p 50021
Then run: database weight add <IP1> <priority>, database weight add <IP2> <priority>, …
Weights of 40, 30 are recommended for two Unified nodes
Weights of 40, 30, 20, and 10 are recommended for four Unified nodes
Weights of 60, 50, 40, 30, 20, and 10 are recommended for six Unified nodes
The higher the value, the higher the priority.
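For example, on a four-node unified cluster the weight commands would follow the recommended 40/30/20/10 pattern. The IP addresses below are placeholders; substitute the addresses of your own database nodes:

```
database weight add 10.0.0.4 40
database weight add 10.0.0.5 30
database weight add 10.0.0.6 20
database weight add 10.0.0.7 10
```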
11.3. Run cluster provision
11.4. Run voss cleardown
11.5. Upload the provided template install file via scp (secure copy) to the media folder on the primary node.
11.6. Install the templates.
app template media/<template file name>.template
Deploying Additional Azure Nodes¶
To deploy additional nodes, make sure the Docker image loaded matches the current VOSS Automate version of the deployment (for example, 19.3.1 below).
gunzip -c platform-azure.tgz | docker load
Loaded image: voss-azure:19.3.1-1583240523
Run the install command and specify the absolute folder path of the terraform.tfstate file. Example (/home/azure/plan):

/home/azure/plan$ ls -lA
-rw-r--r-- 1 root root 37232 Feb 27 17:53 terraform.tfstate

docker run -it -v /home/azure/plan/:/app/state voss-azure:<release_number>
Enter values at the prompt for the terraform script variables.
Total number of VOSS Automate nodes in deployment: enter the total number of nodes required (initial plus additional).
Example:
Unified topology (var.voss_count_unified):
Initial deployment - 4 nodes
Additional nodes required - 2 nodes
Total number of nodes to enter at the prompt - 6
Modular topology (var.voss_count_app):
Initial deployment - 2 app nodes
Additional node required - 1 app node
var.voss_count_app=3
The additional nodes are deployed and the new terraform plan state is saved. The initial plan state is renamed to terraform.tfstate.backup:

/home/azure/plan$ ls -lA
-rw-r--r-- 1 root root 37232 Mar 1 13:11 terraform.tfstate
-rw-r--r-- 1 root root 37232 Feb 27 17:53 terraform.tfstate.backup
On each of the newly deployed nodes, log in as the platform user, using the password selected.
6.1. For example
ssh platform@10.13.168.197 -p 50001
6.2. Run: system reboot. Alternatively you can reset the Virtual Machine from the Azure Portal. This is to ensure all services are running before proceeding with the cluster configuration.
On each of the newly deployed nodes, log in as the platform user, using the password selected.
7.1. For example
ssh platform@10.13.168.197 -p 50001
7.2. Run: cluster prepnode
7.3. Obtain the IP address. Run: network interfaces
Example output:
$ network interfaces
interfaces:
  eth0:
    gateway: 10.0.0.1
    ip: 10.0.0.4
    netmask: 255.255.255.0
Determine which node is the primary node.
Run the following command on an Application/Unified node to determine the PRIMARY NODE:
Command:
cluster run application cluster primary role application
Search for the node with is_primary: true
Log in to the primary node, for example: ssh platform@10.13.168.197 -p 50001.
8.1. Add the IP addresses of the newly deployed nodes, obtained in the previous step, to the cluster:
cluster add <IP1>, cluster add <IP2>, …
8.2. Add database weights to your database nodes:
For modular deployment, this step can be skipped.
database weight add <IP1> <priority>, database weight add <IP2> <priority>, …
Weights of 40, 30 are recommended for two Unified nodes
Weights of 40, 30, 20, and 10 are recommended for four Unified nodes
Weights of 60, 50, 40, 30, 20, and 10 are recommended for six Unified nodes
The higher the value, the higher the priority.
8.3. Run cluster provision