This document provides a comprehensive security assessment of the Axion system as configured for . It details the key control domains and the specific technologies and architectural patterns used to protect the confidentiality, integrity, and availability of 's data and services.
Objective: To protect 's public-facing web applications and APIs from common web-based attacks, ensure high availability, and enforce initial access controls.
The first line of defense for 's services is a hardened network edge. All incoming traffic is routed through an Apache reverse proxy, which acts as a gateway, terminating TLS connections and forwarding requests to the appropriate backend services like ServeWebsite.py and ServeWebHook.py. Integrated into Apache is the ModSecurity Web Application Firewall (WAF), configured with the OWASP Core Rule Set to provide robust protection against common threats such as SQL injection, Cross-Site Scripting (XSS), and remote code execution. At the network level, we use UFW (Uncomplicated Firewall) to enforce strict IP whitelisting, ensuring that only traffic from trusted sources, such as 's corporate network, can access sensitive endpoints. This layered approach ensures that malicious traffic is blocked before it can ever reach the application logic.
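As a sketch, the proxy-plus-WAF edge described above might be configured with an Apache virtual host fragment along these lines. All hostnames, paths, and ports here are illustrative assumptions, not the deployed configuration:

```apache
# Hypothetical vhost: TLS termination, ModSecurity with the OWASP CRS,
# and a reverse proxy to a backend service such as ServeWebsite.py.
<VirtualHost *:443>
    ServerName example.internal
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/server.crt
    SSLCertificateKeyFile /etc/ssl/private/server.key

    # Enable the ModSecurity WAF and load the OWASP Core Rule Set
    SecRuleEngine On
    IncludeOptional /etc/modsecurity/crs/crs-setup.conf
    IncludeOptional /etc/modsecurity/crs/rules/*.conf

    # Forward validated requests to the backend application
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```

The UFW whitelisting sits in front of this: a rule of the form `ufw allow from <trusted CIDR> to any port 443` admits trusted source networks while the default-deny policy drops everything else.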
Objective: To ensure that only authorized users and systems can access 's services, and to enforce strong, multi-factor authentication for all administrative access.
For , we have implemented a centralized authentication and identity management solution using Keycloak, an open-source Identity and Access Management (IAM) server. Keycloak provides Single Sign-On (SSO) capabilities, allowing 's users to authenticate once and access multiple services. For administrative access, we enforce Multi-Factor Authentication (MFA) using Time-based One-Time Passwords (TOTP) with apps like Google Authenticator. For integrations with third-party systems, we use the industry-standard OAuth2 protocol, ensuring secure, token-based authorization. All authentication events are logged and published to our central monitoring system via MQTT, providing a real-time audit trail of all access to 's services.
Objective: To secure the internal communication between services, validate all data entering the business logic layer, and ensure the resilience of our asynchronous workflows.
The core of our backend is a service-oriented architecture in which services communicate asynchronously via RabbitMQ, a robust message broker. This decouples our services, preventing a failure in one component from cascading to others. All data passed between services is validated against a strict schema, and the business logic in our ObjApi.py and ObjData.py modules provides a secure abstraction layer for all data operations. We use Pytest for extensive unit and integration testing to verify the correctness and security of our code. Furthermore, before deployment all Python code is scanned with Ruff, a fast linter and static-analysis tool whose flake8-bandit ("S") rules flag common security pitfalls, and is checked for adherence to our coding standards.
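Boundary validation of message payloads can be sketched as follows. The field names and the `validate_message` helper are hypothetical (the real schema is internal), and the RabbitMQ publish/consume calls are omitted since they require a live broker:

```python
import json

# Hypothetical schema for an inter-service message; field names are assumptions.
SCHEMA = {
    "event_id": str,
    "object_type": str,
    "payload": dict,
}

def validate_message(raw: bytes) -> dict:
    """Parse and validate a message body before it reaches business logic.

    Raises ValueError on any deviation from the schema, so malformed or
    unexpected messages are rejected at the service boundary.
    """
    try:
        message = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"not valid JSON: {exc}") from exc
    if not isinstance(message, dict):
        raise ValueError("message must be a JSON object")
    unknown = set(message) - set(SCHEMA)
    if unknown:
        raise ValueError(f"unexpected fields: {sorted(unknown)}")
    for field, expected in SCHEMA.items():
        if field not in message:
            raise ValueError(f"missing field: {field}")
        if not isinstance(message[field], expected):
            raise ValueError(f"{field} must be {expected.__name__}")
    return message
```

Rejecting anything outside the schema at the boundary means the business logic never has to reason about malformed input.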
Objective: To protect 's data at rest through strong encryption, enforce strict access controls, and ensure the integrity and availability of data through regular backups and tested recovery procedures.
We employ a polyglot persistence strategy for 's data, using MariaDB on AWS RDS for structured data and MongoDB for unstructured data. All data is encrypted at rest using the industry-standard AES-256 algorithm. Cryptographic keys are managed by AWS Secrets Manager, which provides secure storage and automated rotation. Each client, including , is assigned a unique set of keys, enforcing strict cryptographic isolation: one client's data can never be decrypted with another client's keys. We take regular, automated snapshots of our databases using AWS RDS snapshots, and these backups are restore-tested quarterly to confirm they can be reliably recovered.
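The per-client key isolation can be illustrated with a small key-derivation sketch. In the deployed system the keys themselves live in AWS Secrets Manager; the `derive_client_key` helper and its label string below are assumptions for illustration only:

```python
import hashlib
import hmac

def derive_client_key(master_key: bytes, client_id: str) -> bytes:
    """Derive a distinct 256-bit data key per client from one master secret.

    Illustrative HMAC-SHA256 derivation: because each client's key is bound
    to its identifier, compromising one derived key reveals nothing about
    any other client's key or data.
    """
    info = b"axion/data-key/v1:" + client_id.encode("utf-8")
    return hmac.new(master_key, info, hashlib.sha256).digest()
```

The derivation is deterministic per client (so a key can be recreated from the master secret) while remaining unique across clients.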
Objective: To provide deep visibility into the health, performance, and security of the systems supporting , and to enable the rapid detection of and response to anomalies and security incidents.
Our monitoring and telemetry system is built on a real-time data pipeline. We use Telegraf agents to collect metrics and logs from all of our systems, which are then published to an MQTT broker. From there, the data is ingested into InfluxDB, a time-series database, for long-term storage and analysis. This provides us with a rich, high-granularity dataset that we use to build custom dashboards for monitoring system health and to configure automated alerts for potential security incidents. This centralized system gives us a unified view of the entire infrastructure supporting , allowing us to correlate events across different components and quickly identify the root cause of any issues.
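Telegraf and InfluxDB exchange metrics in InfluxDB line protocol. As a sketch of what travels through the pipeline, the helper below renders one metric in that format (it covers the common escaping cases only and is not part of the deployed code):

```python
def to_line_protocol(measurement: str, tags: dict[str, str],
                     fields: dict[str, object], timestamp_ns: int) -> str:
    """Render one metric in InfluxDB line protocol, the format Telegraf emits."""
    def esc(value: str) -> str:
        # Commas, spaces, and '=' must be escaped in measurement/tag names
        return value.replace(",", r"\,").replace(" ", r"\ ").replace("=", r"\=")

    tag_part = "".join(f",{esc(k)}={esc(v)}" for k, v in sorted(tags.items()))

    def fmt(value: object) -> str:
        if isinstance(value, bool):          # check bool before int
            return "true" if value else "false"
        if isinstance(value, int):
            return f"{value}i"               # integer fields carry an 'i' suffix
        if isinstance(value, str):
            return '"' + value.replace('"', r'\"') + '"'
        return repr(value)                   # floats pass through unadorned

    field_part = ",".join(f"{esc(k)}={fmt(v)}" for k, v in sorted(fields.items()))
    return f"{esc(measurement)}{tag_part} {field_part} {timestamp_ns}"
```

Each line carries a measurement name, tags for grouping (e.g. host), field values, and a nanosecond timestamp, which is what makes high-granularity correlation across components possible in InfluxDB.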
Objective: To ensure the timely and reliable recovery of 's services in the event of a major outage or disaster.
Our disaster recovery (DR) strategy for is based on the principle of infrastructure-as-code. We use Terraform to define our entire infrastructure in code, which allows us to quickly and reliably recreate the entire environment in a different AWS region if necessary. Our DR plan includes the restoration of databases from AWS RDS Snapshots, the recovery of secrets from AWS Secrets Manager, and the re-establishment of network connectivity and access controls. We conduct regular DR drills and tabletop exercises to ensure that our procedures are effective and that our team is prepared to respond in a real disaster scenario.
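An infrastructure-as-code failover of this kind can be sketched in Terraform. The module path, names, variables, and region below are illustrative assumptions, not the actual configuration:

```hcl
# Hypothetical sketch: the same module tree deployed to a DR region.
provider "aws" {
  alias  = "dr"
  region = "eu-west-1"   # assumed standby region
}

module "axion_dr" {
  source    = "./modules/axion"       # illustrative module path
  providers = { aws = aws.dr }

  environment         = "dr"
  restore_from_latest = true          # rebuild RDS from the most recent snapshot
}
```

Because the entire environment is declared in code, standing up the DR region is a `terraform apply` against a different provider rather than a manual rebuild.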
Objective: To proactively identify and remediate security vulnerabilities in the Axion system for through a combination of continuous automated scanning and periodic, in-depth penetration testing conducted by independent third parties.
Objective: To ensure that our systems and processes comply with relevant regulations, such as GDPR and POPIA, and to provide with the necessary tools and documentation to meet their own compliance obligations.
Compliance for is not an afterthought; it is built into our system's design. Our data lineage and provenance tracking capabilities provide a clear audit trail of how data is collected, used, and shared, which is a key requirement of regulations like GDPR. We have implemented workflows to support Data Subject Rights (DSR) requests, allowing to respond to requests for access, rectification, or erasure in a timely manner. Our retention and deletion policies are enforced by automated scripts that ensure data is securely disposed of at the end of its lifecycle. This proactive approach to compliance helps to reduce risk and build trust with and their customers.