Configurable Number of Queue Processes¶
Important
It is strongly advised that VOSS Support is consulted before making changes to the number of queue processes.
The number of queues cannot be set to a value larger than the number of cores in the VM. In this case, the following message is shown:
Validate: <num> is not a valid less_than_cores_number
Available commands:
voss queues <number> - Set the number of queue processes. This command restarts the voss-queue services.
voss queues - Get the number of queue processes.
When using these commands, a CLI warning is shown that refers to this documentation:
Warning, updating this setting, without proper consideration of
Best Practices or consultation with VOSS support, can lead to system
instability.
Do you wish to continue?
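For illustration only, a session might look like the following; the exact output format can differ between releases, and the core count implied here (fewer than 12) is an assumption:
voss queues
(reports the currently configured number of queue processes)
voss queues 12
Validate: 12 is not a valid less_than_cores_number
voss queues 4
Warning, updating this setting, without proper consideration of
Best Practices or consultation with VOSS support, can lead to system
instability.
Do you wish to continue?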
The number of queue processes is configurable in order to increase transaction throughput and improve workload distribution across the cluster, but changes should only be made after considering other configuration changes and performance areas. These include:
The maximum number of queues cannot exceed the number of cores in the VM
Node memory configuration
Impact on API and indirectly GUI responsiveness
Number of workers for queue processes on different unified nodes
Overall load on the primary node (node with primary database responsible for all database updates)
Node Memory¶
When increasing queue processes, too little memory headroom can lead to out of memory errors on the unified node, which can cause services to be restarted and, in rare situations, can also lead to database services being stopped.
The suggested memory headroom per queue process is 4GB.
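As a worked example (the node sizes are assumptions for illustration): increasing a unified node from 2 to 4 queue processes calls for roughly 2 x 4GB = 8GB of additional memory headroom on that node, over and above the memory already consumed by the existing services and the database.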
Impact on API and GUI responsiveness¶
A balance has to be struck between the number of queue processes and API/GUI responsiveness. Increasing the number of queue processes on all nodes increases the load on the primary node during high transaction load, and the increased load on the database can lead to degraded API and GUI responsiveness if the number of queue processes is set too high.
Number of workers per queue process¶
Note
This consideration applies to the standard topology with unified nodes.
In order to alleviate load on the primary node, it is recommended to set the number of workers to zero on that node (see the sketch after this list). This prevents any transactions from being processed on the primary node and allows it to better service:
the higher query load from secondary nodes due to higher transaction load
API requests requiring database interaction
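A minimal sketch of this change, assuming the platform exposes a worker-count command alongside voss queues (the command name voss workers below is an assumption; confirm it against the CLI reference for your release):
voss workers 0
(run on the designated primary node so that it no longer processes transactions)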
A special consideration exists when setting the workers to zero on the designated primary node. When the primary node fails over to a secondary node due to some event, the newly elected primary node will not have its number of queue workers set to zero, which could lead to increased load on the newly elected primary as it processes transactions, services API requests, and services database queries.
Manual intervention will be required to set the number of workers to zero on the newly elected primary, or to restore the configured primary node to the primary state.
It is recommended that monitoring be set up to automatically provide notifications in case of primary failover.