Architecture and Scalability
Our Server architecture is designed for organizations running very large deployments of Team Reviewer that require maximum application uptime. High availability is achieved by adding redundancy to every node in the system. Combined with the Horizontal Scalability feature, the Server architecture ensures rapid, reliable code-analysis reporting, even when your instance grows to global proportions, hosting thousands of users and projects.
NGINX
The NGINX web server delivers all static content, such as images, JavaScript, and CSS files. Because its roots lie in performance optimization at scale, NGINX often outperforms other popular web servers in benchmark tests, especially for static content and high numbers of concurrent requests.
uWSGI
uWSGI is the application server that runs the Dynamic Reviewer application, written in Python/Django, to serve all dynamic content.
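uWSGI serves the application through the WSGI interface that Django implements. As an illustration only (not Team Reviewer's actual code), a minimal WSGI callable looks like this:

```python
# Minimal WSGI application: illustrative sketch, not Team Reviewer's code.
# A WSGI server such as uWSGI calls this callable once per request.

def application(environ, start_response):
    # environ is a dict describing the request;
    # start_response sends the status line and headers.
    body = b"Hello from a WSGI app"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

In a Django project the equivalent callable is generated in the project's `wsgi.py`, and uWSGI is simply pointed at it.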
Message Broker
The application server sends tasks to the Message Broker for asynchronous execution. Our default choice is RabbitMQ, a messaging intermediary, but for on-premises installations Redis Queue can also be adopted. The broker gives our applications a common platform to send and receive messages, and gives those messages a safe place to live until they are received.
Celery Worker
Tasks such as Deduplication or JIRA synchronization are performed asynchronously in the background by the Celery Worker.
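The broker/worker pattern can be sketched with the standard library alone. This is a toy stand-in for RabbitMQ and Celery, for illustration only: the application enqueues a task on the broker, and a background worker picks it up and executes it.

```python
import queue
import threading

# Toy stand-in for the broker (RabbitMQ/Redis) and the Celery worker.
# Illustrative only: real Celery tasks run in separate processes or hosts.
broker = queue.Queue()          # plays the role of the message broker
results = []

def worker():
    # Worker loop: take a task off the broker and run it.
    while True:
        task, args = broker.get()
        if task is None:        # sentinel: shut the worker down
            break
        results.append(task(*args))
        broker.task_done()

def deduplicate(findings):
    # Placeholder for a background task such as Deduplication.
    return sorted(set(findings))

t = threading.Thread(target=worker, daemon=True)
t.start()
broker.put((deduplicate, (["XSS", "SQLi", "XSS"],)))  # enqueue asynchronously
broker.put((None, None))                               # stop the worker
t.join()
```

The application returns to the user immediately after the `put`; the expensive work happens on the worker's own schedule, which is exactly the property Celery provides at scale.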
Celery Beat
To identify and notify users about things like upcoming Engagements, Team Reviewer runs scheduled tasks. These tasks are scheduled and executed by Celery Beat. Only a single scheduler may run for a given schedule at a time; otherwise duplicate tasks would be produced. Using this centralized approach means the schedule does not have to be synchronized across nodes, and the service can operate without locks.
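The shape of such a schedule can be sketched as a plain dictionary. The entry below mirrors the structure of a Celery `beat_schedule`; the task name and path are hypothetical examples, not Team Reviewer's actual task names.

```python
from datetime import timedelta

# Sketch of a Celery Beat schedule. The schedule name and dotted task
# path are hypothetical placeholders, not Team Reviewer's real tasks.
beat_schedule = {
    "notify-upcoming-engagements": {
        "task": "tasks.notify_upcoming_engagements",  # dotted path to the task
        "schedule": timedelta(hours=1),               # run once per hour
    },
}
```

Celery Beat, as the single centralized scheduler, reads this schedule and enqueues each task on the broker when it is due; the workers then execute it like any other task.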
Initializer
The Initializer runs during startup of Team Reviewer to initialize the database, and runs database migrations after upgrades of Team Reviewer. It shuts itself down once all tasks are complete. Migrations are Django’s way of propagating changes made to our models (adding a field, deleting a model, etc.) into our database schema. They are designed to be mostly automatic; think of migrations as a version control system for the database schema. The Initializer-makemigrations task packages model changes into individual migration files, analogous to commits, and the Initializer-migrate task applies them to the database.
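The two Initializer tasks correspond to Django's standard management commands, shown here for illustration (the Initializer runs them for you automatically):

```shell
# Standard Django commands that the two Initializer tasks correspond to:
python manage.py makemigrations   # Initializer-makemigrations: package model changes into migration files
python manage.py migrate          # Initializer-migrate: apply pending migrations to the database
```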
Django
Team Reviewer provides scalability through NGINX/uWSGI, with Django as the native application framework.
Django follows the MVT (Model-View-Template) architectural pattern, which is a variation of the traditional MVC (Model-View-Controller) design pattern used in web development.
Migrations propagate the changes we make to our models (adding a field, deleting a model, etc.) into the database schema. They are mostly automatic; the migration files for each app live in a “migrations” directory inside that app and are designed to be committed to, and distributed as part of, its codebase.
Database
The Database stores all data of Team Reviewer. MySQL is currently used by default; PostgreSQL, Oracle RAC, and MariaDB are also supported. Results are also maintained in the filesystem as XML, to facilitate upgrades. For its core data classes, Team Reviewer uses the OWASP DefectDojo models. For large numbers of analyses per year, we recommend a dedicated database server rather than the preconfigured MySQL database; this improves performance.
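Pointing the application at a dedicated database server is a matter of Django database configuration. A sketch is shown below; the host, database name, and credentials are placeholders, and the exact settings file location is product-specific.

```python
# Sketch of a Django DATABASES setting for a dedicated PostgreSQL server.
# Host, database name and credentials are placeholders, not real values.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "teamreviewer",
        "USER": "reviewer",
        "PASSWORD": "change-me",
        "HOST": "db.example.internal",   # the dedicated database server
        "PORT": "5432",
        "CONN_MAX_AGE": 60,              # persistent connections cut reconnect cost
    }
}
```

Swapping `ENGINE` (and the port) is how Django targets MySQL, MariaDB, or Oracle instead.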
Scalability
Our Server architecture is designed to run in a clustered configuration to make it resilient to failures; this capability is provided by Team Reviewer Server. The default configuration for a robust Server architecture comprises five servers and a load balancer:
Two application nodes responsible for handling web/REST requests from users and processing analysis reports. You can add application nodes to increase REST API and reporting capacity.
Two Team Reviewer nodes that host the scanning processes. You can add nodes to increase scanning capabilities.
A reverse proxy / load balancer to load balance traffic between the two application nodes. The installing organization must supply this hardware or software component.
A PostgreSQL, Oracle, MariaDB, or MySQL database server. This software must be supplied by the installing organization.
Default Schema
Here is a sample diagram of the default topology:
All servers, including the database server, must be co-located (geographic redundancy is not supported) and must have static IP addresses (referencing them by hostname is not supported). Network traffic should not be restricted between nodes.
MySQL Master-Slave
Example of MySQL Master-Slave Replication Cluster deployed via ClusterControl:
To achieve high availability, however, deploying a cluster is not enough. Nodes may (and most probably will) go down, and your system has to adapt to those changes.
This adaptation can happen at different levels. You can implement logic within the application that checks the state of the cluster nodes and directs traffic to those that are reachable at a given moment, or you can build a proxy layer that implements high availability for your system. ClusterControl can be used to achieve this.
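The application-level variant can be sketched in a few lines. This is a toy illustration, not ClusterControl's implementation: probe each node and route traffic to the first one that answers. Node addresses are placeholders.

```python
import socket

# Toy application-level failover: pick the first reachable database node.
# Node addresses are placeholders; a production system would also need
# retries, backoff, and caching of health state.
def is_reachable(host, port, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_node(nodes, probe=is_reachable):
    # probe is injectable so the routing logic can be tested
    # without a live cluster.
    for host, port in nodes:
        if probe(host, port):
            return (host, port)
    raise RuntimeError("no database node reachable")
```

In practice this logic usually lives in a dedicated proxy layer (e.g. ProxySQL or HAProxy) rather than in the application itself, which is the approach ClusterControl automates.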
MariaDB Galera Cluster
Example of MariaDB Galera Cluster:
MariaDB Galera Cluster is a synchronous multi-primary (multi-master) database clustering solution for MariaDB, providing high availability, data consistency, and scalability. It allows for read and write operations on any node in the cluster, with virtually synchronous replication across all nodes.
PostgreSQL High Availability
Example of PostgreSQL High Availability with Multi-Master Deployments with Coordinator:
The primary coordinator is connected to PgBouncer connection poolers that are deployed alongside each worker node (PostgreSQL instance). The coordinator forwards application requests to the worker nodes using connections from those pools.
Each primary worker instance has its own standby (replica) instances, and changes are replicated from a primary to its associated standbys. For an RPO=0 scenario using synchronous replication, it is therefore crucial to have at least two standbys for each primary: this ensures that the primary can still commit changes even if one standby becomes unavailable.
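The "at least two standbys" requirement maps directly onto PostgreSQL's synchronous replication configuration. A sketch with placeholder standby names:

```ini
# postgresql.conf on the primary -- standby names are placeholders.
# ANY 1 (...) means a commit must be acknowledged by at least one of the
# listed standbys, so the primary keeps committing if one standby fails,
# while every committed transaction still exists on a standby (RPO=0).
synchronous_commit = on
synchronous_standby_names = 'ANY 1 (standby1, standby2)'
```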
Patroni agents are deployed alongside the primary and standby workers, as well as the primary and standby coordinator instances. Patroni monitors the state and health of the cluster and takes care of failover/failback procedures if there is an outage.
Patroni agents use an etcd cluster to reach consensus for various operations, including failover and failback.
Oracle RAC
Example of Oracle RAC configuration with Single Client Access Name (SCAN):
When a SCAN Listener receives a connection request, it checks for the least-loaded instance providing the requested service and redirects the request to the local listener on the node where that instance is running. The client is then given the address of the local listener, which finally creates the connection to the database instance.
COPYRIGHT (C) 2015-2026 SECURITY REVIEWER SRL. ALL RIGHTS RESERVED.