.. _architecture-offerings:

VOSS deployment topologies
-------------------------------

.. _21.1|VOSS-837:
.. _25.4|EKB-26981:


Overview 
..........


VOSS offers two main deployment topologies: 

* :ref:`unified-node-cluster-topology` 
* :ref:`modular-deployment-topology` 


Two additional deployment options are available: 

* :ref:`cloud-deployments`
* :ref:`voss-automate-cloudv1`



.. _nodes-stack:

Node types
...........

Deployment topologies consist of a configuration of the following node types, each
performing specific functions within the topology:

* Web proxy node 
* Unified/single node 
* Application node
* Database node 


Each node type consists of one or more of the following components (software subsystems):

.. tabularcolumns:: |p{5cm}|p{10cm}|

+------------------+------------------------------------------------------------------------+
| Component        | Description                                                            |
+==================+========================================================================+
| Operating system | Ubuntu, stripped down / hardened                                       |
+------------------+------------------------------------------------------------------------+
| Platform         | Docker, isolated components                                            |
+------------------+------------------------------------------------------------------------+
| Web server       | Nginx, receives and forwards HTTP requests                             |
|                  |                                                                        |
|                  | * Hosts static files: CSS, JS and images                               |
|                  | * Load balance between unified nodes (UNs): round robin, configurable, |
|                  |   for example, two data centres                                        |
|                  | * Detects inactive UN: removes from round robin                        |
|                  | * Has ``robots.txt`` that disallows crawling of the entire site.       |
+------------------+------------------------------------------------------------------------+
| Database         | MongoDB (scalable, distributed), PostgreSQL (scalable)                 |
+------------------+------------------------------------------------------------------------+
| Application      | JavaScript, Python, REST API, device drivers, workflow engine,         |
|                  | transactions/queue engine, RBAC, search, bulk loader, and more ...     |
+------------------+------------------------------------------------------------------------+
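
The crawl-blocking behavior noted in the table follows the standard robots exclusion format. A file that disallows crawling of the entire site looks like this (illustrative of the standard format; the exact file shipped may differ):

.. code-block:: text

   User-agent: *
   Disallow: /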




The following matrix shows the components that make up each node type:

.. tabularcolumns:: |\Yc{0.2}|\Yc{0.2}|\Yc{0.2}|\Yc{0.13}|\Yc{0.13}|\Yc{0.13}|

+---------------------+-------------------+-----------+------------+----------+-------------+
| Node type           |            Components                                               |
+---------------------+-------------------+-----------+------------+----------+-------------+
|                     | Operating system  | Platform  | Web server | Database | Application |
+=====================+===================+===========+============+==========+=============+
| Web proxy           |   X               |  X        |  X         |          |             |
+---------------------+-------------------+-----------+------------+----------+-------------+
| Unified/single node |   X               |  X        |  X         |  X       |  X          |
+---------------------+-------------------+-----------+------------+----------+-------------+
| Application         |   X               |  X        |            |          |  X          |
+---------------------+-------------------+-----------+------------+----------+-------------+
| Database            |   X               |  X        |            |  X       |             |
+---------------------+-------------------+-----------+------------+----------+-------------+



.. _unified-node-cluster-topology:

Unified node cluster topology 
...............................


Automate's **unified node cluster** topology provides the following options:

* :ref:`single-node-cluster-of-one-standalone`
* :ref:`single-node-cluster-of-one-standalone-with-vmware`
* :ref:`two-node-cluster-topology`
* :ref:`four-node-with-web-proxies`
* :ref:`six-node-with-web-proxies`



.. important::

   Choose between a unified node deployment and a modular architecture deployment.


In a *unified node cluster* deployment, VOSS is deployed as *one* of the following: 

* A single unified node cluster
* Two unified nodes
* A cluster of multiple nodes with High Availability (HA) and Disaster Recovery (DR) qualities

Each node can be assigned one or more of the following functional roles:

.. tabularcolumns:: |p{5cm}|p{10cm}|

+----------------------+------------------------------------------------------------------+
| Functional role      | Description                                                      |
+======================+==================================================================+
| Web proxy            | Load balances incoming HTTP requests across unified nodes.       |
+----------------------+------------------------------------------------------------------+
| Single unified node  | Combines the Application and Database roles for use in a         |
|                      | non-multi-clustered test environment.                            |
+----------------------+------------------------------------------------------------------+
| Unified              | Similar to the *Single unified node* role (combined Application  |
|                      | and Database roles), but clustered with other nodes to provide   |
|                      | HA and DR capabilities.                                          |
+----------------------+------------------------------------------------------------------+


The Nginx web server is installed on the web proxy, the *Single Unified Node*, and the nodes of a
*Unified Node Cluster*, but is configured differently for each role.

In a clustered environment containing multiple unified nodes, a load balancing function is
required to provide High Availability (HA): failover between redundant roles.

VOSS supports deployment of either the web proxy node or a DNS load balancer. Consider the
following when deciding between a web proxy node and a DNS load balancer:

* The web proxy node takes load off the unified nodes by delivering static content (HTML/JavaScript).
  When using DNS or a third-party load balancer, the unified nodes must serve this content themselves.
* DNS is unaware of the state of the unified nodes.
* The web proxy detects if a unified node is down or corrupt.
  In this case, the web proxy selects the next unified node in a round robin scheme.
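
The round robin and failed-node-removal behavior described above is standard Nginx upstream functionality. The following is an illustrative sketch only, not the actual VOSS web proxy configuration; hostnames and parameter values are hypothetical:

.. code-block:: nginx

   upstream unified_nodes {
       # Round robin is the default balancing method for an upstream block.
       server un1.dc1.example.com:443 max_fails=3 fail_timeout=30s;
       server un2.dc2.example.com:443 max_fails=3 fail_timeout=30s;
       # A server that fails max_fails connection attempts is removed from
       # the rotation for fail_timeout, then retried.
   }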

.. important:: 

   It is recommended that you run no more than two unified nodes and one web proxy node on a
   physical (VMware) server.

   Additionally, it is recommended that the disk sub-systems are unique for each unified node.


The table describes the defined deployment topologies for test and production: 

.. tabularcolumns:: |p{5cm}|p{10cm}|

+--------------------------------+-----------------------------------------------------------------------------+
| Deployment topology            | Description                                                                 |
+================================+=============================================================================+
| Test                           | A standalone, *single unified node*, with Application and Database roles    |
|                                | combined.                                                                   |
|                                |                                                                             |
|                                | No high availability or disaster recovery (HA/DR) is available.             |
|                                |                                                                             |
|                                | .. important::                                                              |
|                                |                                                                             |
|                                |    A test deployment must be used only for test purposes.                   |
+--------------------------------+-----------------------------------------------------------------------------+
| Production with unified nodes  | In a clustered system, comprising:                                          |
|                                |                                                                             |
|                                | * Two, three, four, or six unified nodes (each with combined Application    |
|                                |   and Database roles)                                                       |
|                                | * Zero to four (maximum two if two unified nodes) web proxy nodes offering  |
|                                |   load balancing.                                                           |
|                                |                                                                             |
|                                |   The web proxy nodes can be omitted if an external load balancer is        |
|                                |   available.                                                                |
+--------------------------------+-----------------------------------------------------------------------------+


.. _single-node-cluster-of-one-standalone:

Single-node cluster (cluster-of-one/standalone) (testing-only)
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''

.. note:: 

   A *Single-node cluster (cluster-of-one/standalone)* deployment should be used *only* for test purposes. 

.. image:: /src/images/standalone.png


The table describes the advantages and disadvantages of a *Single-node cluster (cluster-of-one/standalone)* 
deployment topology: 

.. tabularcolumns:: |p{7cm}|p{8cm}|

+------------------------------------+-----------------------------------------------+
| Advantages                         | Disadvantages                                 | 
+====================================+===============================================+
| * Smallest hardware footprint      | * No high availability or disaster recovery   |
|                                    | * Less throughput than clusters               |
+------------------------------------+-----------------------------------------------+



.. _single-node-cluster-of-one-standalone-with-vmware:

Single-node cluster (cluster-of-one/standalone) with VMware HA
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''

The table describes the advantages and disadvantages of a *Single-node cluster
(cluster-of-one/standalone) with VMware HA* deployment topology:

.. tabularcolumns:: |p{7cm}|p{8cm}|

+------------------------------------+-----------------------------------------------+
| Advantages                         | Disadvantages                                 | 
+====================================+===============================================+
| * Smallest hardware footprint      | * Less throughput than clusters               |
| * Disaster recovery available      |                                               |
+------------------------------------+-----------------------------------------------+


.. _multinode-cluster-with-unified-nodes:

Multi-node cluster with unified nodes
''''''''''''''''''''''''''''''''''''''

.. index:: voss;voss workers
.. index:: web;web service

To achieve geo-redundancy using the unified nodes, consider the following:

* Either four or six unified nodes (each node combining Application and Database roles) are
  clustered and split over two geographically disparate locations.
* Two web proxy nodes to provide high availability, ensuring that an Application role failure
  is gracefully handled. More may be added if web proxy nodes are required in a DMZ.

  .. important:: 

     It is strongly recommended *not* to allow customer end-users the same level of administrator
     access as the restricted Provider administrator and Customer administrator groups. For this
     reason, separate Self-service and Administrator web proxies should be used.

     Systems with Self-service-only web proxies are *only* recommended where the system is customer facing, 
     but where the customer does not administer the system themselves.

* Web proxy and unified nodes can be contained in separate, firewalled networks.
* Database synchronization takes place between all database roles, thus offering disaster recovery
  and high availability.
* For six unified nodes, all nodes in the cluster are active. For an eight node cluster (with latency
  between data centers greater than 10ms), the two nodes in the disaster recovery data center are
  passive; that is, the ``voss workers 0`` command has been run on the disaster recovery nodes.

.. note::

   Primary and fall-back secondary database servers can be configured manually. Refer to the
   *Platform Guide* for details.


.. rubric:: Example: Six node cluster

The diagram illustrates an example of a *six node cluster*: 

.. image:: /src/images/cluster-site.png


.. rubric:: Example: Eight node cluster 

The diagram illustrates an example of an *eight node cluster*: 

.. image:: /src/images/6-node-topology.png


.. rubric:: Example: Two web proxy nodes in a DMZ 

The diagram illustrates an example of *two web proxy nodes in a DMZ*: 

.. image:: /src/images/cluster-site-dmz67.png


.. rubric:: Example: Four web proxy nodes in a DMZ (two admin, two Self-service)

The diagram illustrates an example of *four web proxy nodes (2 admin, and 2 Self-service) in a DMZ*: 

.. image:: /src/images/cluster-site-dmz-admin-self-webprx.png


.. _two-node-cluster-topology:

Two node cluster with unified nodes
''''''''''''''''''''''''''''''''''''''''

.. _19.1.1|VOSS-475:

To achieve geo-redundancy using the unified nodes, consider the following:

* Two unified nodes (each node combining application and database roles) are clustered and 
  optionally split over two geographically disparate locations.
* (Optional) Two web proxy nodes can be used. They may be omitted if an external load balancer is available.
* Web proxy and unified nodes can be contained in separate firewalled networks.
* Database synchronization takes place from primary to secondary unified nodes, thereby offering disaster recovery 
  if the primary node fails.
* If the secondary unified node has *more than 10ms latency* to the primary unified node,
  it must be deployed in the *same* geographical location as the primary.


.. important::

   With only two unified nodes, with or without web proxies, there is no high availability.
   The database on the primary node is read/write, while the database on the secondary is read-only.

   Only redundancy is available; manual recovery is required in the following instances:

   * If the primary node fails, the primary node must be manually deleted on the
     secondary node, and the cluster must be provisioned again.
   * If the secondary node fails, it needs to be replaced.

   Refer to the topic on *Disaster recovery failover and recovery in a two node cluster* in the Platform Guide.


.. rubric:: Example: Two node cluster

The diagram illustrates a *two node cluster*:


.. image:: /src/images/2-node-cluster.png



.. _four-node-with-web-proxies:

Four node with web proxies
''''''''''''''''''''''''''''

The table describes the advantages and disadvantages of a *four node with web proxies* deployment topology: 

.. tabularcolumns:: |p{7cm}|p{8cm}|

+-----------------------------------------------+-----------------------------------------------+
| Advantages                                    | Disadvantages                                 | 
+===============================================+===============================================+
| * More disaster recovery scenarios supported  | * More hardware than 3 Node                   |
| * More throughput than 3 Node                 |                                               |
+-----------------------------------------------+-----------------------------------------------+


.. _six-node-with-web-proxies:

Six node with web proxies
''''''''''''''''''''''''''''

The following are characteristics of a *six node with web proxies* deployment topology: 

* Typically deployed for multi-data center deployments 
* Supports Active/Standby 


.. _modular-deployment-topology:

Modular node cluster deployment topology
..........................................


Overview 
''''''''''

A *modular node cluster* topology has separate Application and Database nodes: 

* Three Database nodes
* One to eight Application nodes
* Web proxies 

A *modular node cluster* topology has the following advantages:

* Increased processing capacity 
* Horizontal scaling by adding more Application nodes
* Improved database resilience with dedicated nodes and isolation from application
* Improved database performance by removing application load from the primary database 


.. important::

   Choose between a *Unified Node Cluster* deployment and a *Modular Node Cluster* deployment.

VOSS is deployed as a *Modular Node Cluster* of multiple nodes, with High Availability (HA) and 
Disaster Recovery (DR) qualities. 

Each node can be assigned one or more of the following functional roles:

.. tabularcolumns:: |p{7cm}|p{8cm}|

+-----------------------------+--------------------------------------------------------+
| Functional role             | Description                                            | 
+=============================+========================================================+
| Web proxy                   | Load balances incoming HTTP requests across nodes.     |
+-----------------------------+--------------------------------------------------------+
| Application role node       | Clustered with other nodes to provide HA and DR        |
|                             | capabilities.                                          |
+-----------------------------+--------------------------------------------------------+
| Database role node          | Clustered with other nodes to provide HA and DR        |
|                             | capabilities.                                          |
+-----------------------------+--------------------------------------------------------+

The Nginx web server is installed on the web proxy and the Application role node, but is configured
differently for each role.





A load balancing function is required to provide High Availability (HA): failover between redundant roles.

VOSS supports deployment of either the web proxy node or a DNS load balancer. When choosing
between a web proxy node and a DNS load balancer, consider the following:

* The web proxy takes load off the Application role node by delivering static content (HTML/JavaScript).
  When using DNS or a third-party load balancer, the Application role node must serve this content itself.
* DNS is unaware of the state of the Application role node.
* The web proxy detects if an Application role node is down or corrupt.
  In this case, the web proxy selects the next Application role node in a round robin scheme.

.. important:: 

   It is recommended that you run no more than one Application role node, one Database role node,
   and one web proxy node on a physical (VMware) server. When choosing disk infrastructure,
   consider the high volume data access by database role replica sets; different disk
   sub-systems may be required depending on the performance of the disk infrastructure.


The following *Modular Node Cluster* topology is recommended (minimum):

.. important::

   *Single Unified Node* topologies are not available for *Modular Node Cluster* deployments.

* Production with nodes (in a clustered system of two data centers):

  * DC1 = Data center 1, a primary data center containing the primary database node (highest database weight)
  * DC2 = Data center 2, a disaster recovery data center

  The system comprises the following nodes:

  * Three nodes with application roles (two in DC1; one in DC2)
  * Three nodes with database roles (two in DC1; one in DC2)
  * A maximum of two web proxy nodes (if two data centers), offering load balancing.
    The web proxy nodes can be omitted if an external load balancer is available.
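
The "highest database weight" concept above corresponds to replica set member priority in MongoDB, the database component listed earlier. The following is an illustrative sketch only, not the actual VOSS configuration; hostnames, port numbers, and priority values are hypothetical:

.. code-block:: javascript

   // DC1 members get the higher priorities ("database weights"), so a
   // DC1 node is preferred as primary; the DC2 member is the disaster
   // recovery fallback with the lowest priority.
   rs.initiate({
     _id: "vossdb",
     members: [
       { _id: 0, host: "db1.dc1.example.com:27017", priority: 3 },
       { _id: 1, host: "db2.dc1.example.com:27017", priority: 2 },
       { _id: 2, host: "db3.dc2.example.com:27017", priority: 1 }
     ]
   })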


.. rubric:: Related topics 

* 
  .. raw:: html

     <p><a href="../install/modular-multinode-installation.html">Modular Architecture Multi-node Installation</a></p>

  .. raw:: latex

     Modular Architecture Multi-node Installation in the Install Guide.

* 
  .. raw:: html

     <p><a href="../platform/tasks-deployment-migrate-to-modular.html">Migrate a Unified Node Cluster to a Modular Node Cluster</a></p>

  .. raw:: latex

     Migrate a Unified Node Cluster to a Modular Node Cluster in the Platform Guide.



.. _multinode-modular-cluster-with-app-database-nodes:

Multi-node modular node cluster with application and database nodes
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''


.. index:: voss;voss workers
.. index:: web;web service


To achieve geo-redundancy using Application and Database nodes, consider
the following:

* Six Application and Database nodes (three nodes with an application role and three nodes with a database role) 
  are clustered and split over two geographically disparate locations.
* Two web proxy nodes to provide High Availability so that an Application role failure
  is gracefully handled. More may be added if web proxy nodes are required in a DMZ.

  .. important:: 

     It is strongly recommended *not* to allow customer end-users the same level of administrator
     access as the restricted Provider administrator and Customer administrator groups. For this
     reason, separate Self-service and Administrator web proxies should be used.

     Systems with Self-service-only web proxies are *only* recommended where the system is customer facing, 
     but where the customer does not administer the system themselves.

* Web proxy, Application and Database nodes can be contained in separate firewalled networks.
* Database synchronization takes place between all database role nodes, thus offering disaster recovery
  and high availability.
* All nodes in the cluster are active.
 
.. note:: 
  
   Primary and fall-back secondary database servers can be configured manually. Refer to the 
   *Platform Guide* for details.


.. rubric:: Example: Six node cluster

The diagram illustrates an example of a *six node cluster*:

.. image:: /src/images/6-node-modular-cluster.png


.. rubric:: Example: Two web proxy nodes in a DMZ

The diagram illustrates an example of *two web proxy nodes in a DMZ*:

.. image:: /src/images/modular-cluster-site-dmz.png


.. rubric:: Example: Four web proxy nodes in a DMZ
   
The diagram illustrates an example of *four web proxy nodes in a DMZ* (two admin, two Self-service):

.. image:: /src/images/modular-cluster-site-dmz-admin-self-webprx.png


.. _cloud-deployments:

Customer environment cloud deployments 
......................................

VOSS supports the following customer environment cloud deployments: 

* Microsoft Azure 
* Amazon Web Services (AWS)

Although Google Cloud Platform (GCP) is not officially supported, contact us to discuss your requirements.

Advantages of a Cloud deployment topology:

* Leverage cloud tooling, such as proxies (which can be used instead of the VOSS web proxy)



.. _voss-automate-cloudv1:

VOSS Cloudv1 (SaaS)
..................................

VOSS Cloudv1 is a Software-as-a-Service (SaaS) offering hosted on a shared VOSS 
instance within Microsoft Azure. 

VOSS manages this instance, which seamlessly integrates with a customer's Unified Communications (UC) platform, 
Microsoft Exchange, Microsoft Active Directory, and third-party applications, such as ServiceNow and 
Identity Providers (IdPs) for Single Sign-On (SSO) authentication.

.. image:: /src/images/ZOOMvoss-cloudv1.png 

