The Axion build pipeline compiles, packages, scans, and deploys
the platform as per-service Docker images to a Kubernetes (K3s) cluster.
Every step is tracked in MySQL, and a branded email report is sent on
completion.
config.yaml ──→ ObjBuild.pipeline()
│
├── 1. config Validate config.yaml
├── 2. pem_keys Check encryption keys
├── 3. compile Cython-compile all factory modules
├── 4. image Build 13 per-service Docker images
├── 5. scan Harbor auto-scan / Trivy fallback
├── 6. audit Full security audit (deps, config, secrets, image)
├── 7. push Push all images to registry
├── 8. topology Generate service mesh from config
├── 9. scaffold Generate Helm charts + Caddyfile + registries.yaml
├── 10. deploy Helm upgrade on K3s cluster
├── 11. verify Poll status endpoint for health
└── 12. wiki Build + sync docs to Wiki.js
│
▼
Email report + DB tracking
# Full pipeline (compile + build + push + scaffold)
python factory.deploy/ObjBuild.py pipeline homechoice --skip-deploy
# Full pipeline with email report (no deploy)
python factory.deploy/ObjBuild.py pipeline homechoice --notify user@example.com --skip-deploy
# Full pipeline including K3s deploy + health check
python factory.deploy/ObjBuild.py pipeline homechoice --notify user@example.com
# Skip compile (use cached Docker layers)
python factory.deploy/ObjBuild.py pipeline homechoice --skip-compile --skip-deploy
# ARM64 cross-build via buildx
python factory.deploy/ObjBuild.py pipeline homechoice --platform linux/arm64
# Multi-arch manifest (amd64 + arm64)
python factory.deploy/ObjBuild.py pipeline homechoice --platform linux/amd64,linux/arm64
ObjConfig.validate_config(package) checks that config.yaml exists and is
valid YAML. The pem_keys step verifies that data.config/{package}_private.pem
and data.config/{package}_public.pem exist. These are provisioned once
per package and must never be regenerated or baked into images.
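The key check can be sketched as follows (a minimal illustration; check_pem_keys is a hypothetical helper, not the actual ObjConfig API):

```python
from pathlib import Path

def check_pem_keys(package: str, base: str = "data.config") -> list[str]:
    """Return the paths of any missing PEM files for a package."""
    missing = []
    for suffix in ("private", "public"):
        path = Path(base) / f"{package}_{suffix}.pem"
        if not path.is_file():
            missing.append(str(path))
    return missing
```

An empty result means both keys are present and the step passes.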
ObjCompile stages source into resource.build/{package}/ and
Cython-compiles all .py files to .so binaries.
Compile optimisations:
- Incremental rebuilds — a compile cache (cythonize.compile_cache.json) plus
  git diff HEAD~1 identifies changed files
- CC="ccache gcc" — caches C compiler output, so second runs are fast
- -j cpu_count — parallel gcc matches available cores
- -a flag — annotation HTML generation disabled (not used)
- ruff check — catches syntax errors before compiling

Performance:
| Scenario | Time |
|---|---|
| Full compile (cold, no cache) | ~180s |
| Code change (few files) | ~30s |
| No-change rebuild | ~5s |
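The incremental behaviour can be approximated with a content-hash cache (a sketch only — the real pipeline keys off git diff HEAD~1 and cythonize's own cache file, and modules_to_recompile is a hypothetical name):

```python
import hashlib
import json
from pathlib import Path

def modules_to_recompile(sources: list[str], cache_file: str) -> list[str]:
    """Return sources whose content changed since the last run, updating the cache."""
    cache_path = Path(cache_file)
    cache = json.loads(cache_path.read_text()) if cache_path.exists() else {}
    changed = []
    for src in sources:
        digest = hashlib.sha256(Path(src).read_bytes()).hexdigest()
        if cache.get(src) != digest:
            changed.append(src)
            cache[src] = digest
    cache_path.write_text(json.dumps(cache, indent=2))
    return changed
```

On an unchanged tree this returns an empty list, which is what makes the ~5s no-change rebuild possible.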
Compile output and logging:
- .py source is stripped from the compiled output
- Only package.core + the active package are kept
- Results are logged to log_build_compile (per-module) and
  log_build_compile_set (per-factory-set with timing)
- --offline flag — no DB writes during Docker builds
- Cython type hints — hot-path methods in ObjProcessText,
  Objects.patch_param, ObjDataSql.sql_execute/sql_get_list have
  typed locals (str, int, bool, list) so Cython generates
  C-level operations instead of Python object calls.
- Compiled-mode optimisation — a _COMPILED flag (detected via the
  __file__ extension .so) skips all debug() calls and frame
  inspection, and sets get_deployment() to LIVE. Zero overhead
  in production .so modules.
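The compiled-mode switch can be illustrated like this (a sketch; the real flag and debug() live inside the factory modules):

```python
def is_compiled(module_file: str) -> bool:
    """True when the module was loaded from a Cython-built shared object."""
    return module_file.endswith(".so")

_COMPILED = is_compiled(__file__)

def debug(msg: str) -> None:
    # In compiled .so modules this returns immediately: zero overhead on hot paths
    if _COMPILED:
        return
    print(f"DEBUG: {msg}")
```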
build_all_targets() builds 13 per-service Docker images:
| Target | Factories Kept | CMD |
|---|---|---|
| runtime | all (base dispatcher) | entrypoint.sh |
| report | core, export, field, formlayout, text, report, pages, web, service | ServeReport.py |
| webhook | core, export, field, formlayout, text, webhook, pages, web, service | ServeWebHook.py |
| web | core, export, field, formlayout, text, pages, web, report, service | ServeWebsite.py |
| workflow | core, export, field, formlayout, text, workflow, web, service | ServeWorkflow.py |
| go | core, field, formlayout, text, pages, web | ServeGo.py |
| scheduler | core, export, field, formlayout, text, workflow, service | ServeScheduler.py |
| mqtt | core, export, field, formlayout, text, service | ServeMqtt.py |
| import | core, export, field, formlayout, text, import, service | ServeImport.py |
| conversation | core, export, field, formlayout, text, conversation, web, service | ServeConversation.py |
| ai | core, text, ai, web, learn, service | ServeAI.py |
| monitor | core, export, field, formlayout, text, service | ServeMonitor.py |
| sms | core, export, field, formlayout, text, sms, service | ServeSms.py |
Each image is pushed to the registry immediately after it is built.
Image sizes are captured and stored in log_build_image.
Factory stripping: Each per-service target runs
strip_factories.sh which removes factory.* directories not in
the KEEP_FACTORIES list. This is defined in SERVICE_FACTORY_MAP
in ObjBuildRunnerBase.py.
WARNING: The factory lists are initial estimates based on static
import analysis. They need detailed review against runtime behaviour.
Dynamic imports, workflow step execution, and webhook dispatch may
pull factories not visible in static analysis. Test each trimmed
image thoroughly before production use.
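A Python rendering of the stripping logic (the real implementation is strip_factories.sh; this sketch only mirrors its keep-list behaviour):

```python
import shutil
from pathlib import Path

def strip_factories(root: str, keep: set[str]) -> list[str]:
    """Remove factory.* directories whose suffix is not in the keep list."""
    removed = []
    for entry in sorted(Path(root).glob("factory.*")):
        suffix = entry.name.removeprefix("factory.")
        if entry.is_dir() and suffix not in keep:
            shutil.rmtree(entry)
            removed.append(entry.name)
    return removed
```

For the report target, keep would be the factory list from SERVICE_FACTORY_MAP and everything else is deleted from the image layer.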
When the registry is Harbor (deployment.registry.type: harbor),
_run_scan() fetches scan results from Harbor's built-in Trivy
scanner. Images are auto-scanned on push (auto_scan: true).
The pipeline polls Harbor for up to 60 seconds waiting for the
scan to complete.
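Detecting completion amounts to checking the artifact's scan_overview in Harbor's v2 API response (field names follow Harbor's documented payload shape, but treat the exact keys as assumptions here):

```python
def scan_complete(artifact: dict) -> bool:
    """True once any scan report in the artifact's scan_overview reports Success."""
    overview = artifact.get("scan_overview") or {}
    # scan_overview is keyed by the report MIME type; each value carries scan_status
    return any(report.get("scan_status") == "Success" for report in overview.values())
```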
When using registry:2 or when Harbor is unavailable, falls back
to a local trivy image invocation.
Scan results are stored in log_build_scan.

ObjAudit.audit_all() runs four Trivy scan modes:
| Mode | Command | What it finds |
|---|---|---|
| deps | trivy fs --scanners vuln . | CVEs in Python dependencies |
| config | trivy config resource.docker/ | Dockerfile/Helm misconfigurations |
| secrets | trivy fs --scanners secret . | Leaked credentials in source |
| image | trivy image <tag> | Container image CVEs |
Results stored in log_audit. Standalone CLI:
python factory.deploy/ObjAudit.py deps
python factory.deploy/ObjAudit.py config
python factory.deploy/ObjAudit.py secrets
python factory.deploy/ObjAudit.py image registry.technocore.co.za/meridian-report-homechoice:latest
python factory.deploy/ObjAudit.py all --image-tag <tag>
Handled by build_all_targets() — each image is pushed immediately
after build. When --platform is set, buildx uses --push directly.
ObjBuildRunnerK3s.topology() reads base.services and
{package}.services from config.yaml to build a service mesh:
ports, replicas, and routing.
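The merge can be sketched as a shallow override of base.services by {package}.services (build_topology is an illustrative name, not the actual method):

```python
def build_topology(config: dict, package: str) -> dict:
    """Merge base service definitions with package-level overrides."""
    base = config.get("base", {}).get("services", {})
    overrides = config.get(package, {}).get("services", {})
    return {name: {**svc, **overrides.get(name, {})} for name, svc in base.items()}
```

Package entries win key-by-key, so a package can bump replicas without restating ports.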
Generates all deployment files from config.yaml:
| File | Purpose |
|---|---|
| helm/{package}/values.yaml | Image, services, infra, env_overrides |
| helm/{package}/Chart.yaml | Helm chart metadata |
| helm/{package}/templates/deployment.yaml | Per-service deployments |
| helm/{package}/templates/service.yaml | ClusterIP/NodePort services |
| helm/{package}/templates/infra-*.yaml | MongoDB, Redis, RabbitMQ, etc. |
| caddyfile/{package}.generated.Caddyfile | Reverse proxy routing |
| config.template/registries.yaml | K3s registry mirror config |
ObjBuildRunnerK3s.deploy():
- Creates the axion-config ConfigMap from the local config.yaml
- Creates the axion-pem Secret from data.config/*.pem
- Runs helm upgrade --install with the kubeconfig from the remote K3s cluster
- _verify_deploy() polls the status endpoint (5 retries, 10s delay)
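The health check can be sketched as a simple retry loop (stdlib only; the real _verify_deploy() may differ in endpoint and error handling):

```python
import time
import urllib.error
import urllib.request

def verify_deploy(url: str, retries: int = 5, delay: float = 10.0) -> bool:
    """Poll the status endpoint until it answers 200 or retries are exhausted."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass
        if attempt < retries - 1:
            time.sleep(delay)
    return False
```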
Images are named by version (from git branch) + service + package:
{registry}/{version}-{service}-{package}[-arm]:tag
| Parent Branch | Version | Example |
|---|---|---|
| develop | meridian | meridian-report-homechoice:latest |
| main | axion | axion-report-homechoice:latest |
| helix | helix | helix-report-homechoice:latest |
| other | prism | prism-report-homechoice:latest |
| ARM-only build | +suffix | meridian-report-homechoice-arm:latest |
Version is determined by ObjData.get_version_name() which reads
ObjData.VERSION_MAP and checks git merge-base against known branches.
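The naming scheme reduces to plain string assembly (image_name is a hypothetical helper mirroring ObjDocker's naming, shown here for clarity):

```python
def image_name(registry: str, version: str, service: str, package: str,
               arm: bool = False, tag: str = "latest") -> str:
    """Compose {registry}/{version}-{service}-{package}[-arm]:{tag}."""
    suffix = "-arm" if arm else ""
    return f"{registry}/{version}-{service}-{package}{suffix}:{tag}"
```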
Every pipeline run is fully instrumented:
| Table | Rows per build | Content |
|---|---|---|
| log_build_run | 1 | GUID, package, status, timing, host arch/OS, platform |
| log_build_step | 11 | Per-step status, timing, errors, result detail |
| log_build_compile | ~838 | Per-module compile success/failure |
| log_build_compile_set | ~19 | Per-factory-set timing summary |
| log_build_scan | 1 | Trivy CVE counts by severity |
| log_build_image | 13 | Image size per target |
| log_audit | 4 | Audit results per mode |
All tables defined in factory.deploy/ObjBuild.yaml and
factory.deploy/ObjAudit.yaml.
The --notify flag sends a branded HTML email on completion
(pass or fail).
Report class: factory.report/package.core/ObjReportBuild.py
Templates: factory.report/package.core/ObjReportBuild.yaml
All deployment configuration flows from config.yaml:
config.yaml
├── deployment.registry → image tags, registry auth
├── deployment.k3s → remote cluster target
├── deployment.buildx → remote builder for ARM
├── deployment.registry.mirrors → K3s registries.yaml
├── base.services → service ports, replicas
├── terraform.* → infra credentials (MongoDB, RabbitMQ, etc.)
└── {package}.services → package-specific overrides
│
▼
scaffold (ObjBuildRunnerK3s)
│
▼
values.yaml
├── image.registry, image.version, image.tag
├── services.{svc}.port, replicas, nodePort
├── infra.{name}.credentials, env, image
└── env_overrides.AXION_* (app pod env vars)
│
▼
Helm templates
│
▼
K3s pods
To change a credential or host: edit config.yaml, run scaffold,
deploy. No template editing needed.
# Pipeline
python factory.deploy/ObjBuild.py pipeline <package> [options]
--skip-compile Skip Cython compilation
--skip-push Skip registry push
--skip-deploy Skip K3s deployment
--notify <email> Send build report email
--platform <arch> Target platform(s) for buildx
# Individual steps
python factory.deploy/ObjBuild.py compile <package>
python factory.deploy/ObjBuild.py image <package>
python factory.deploy/ObjBuild.py scaffold <package>
python factory.deploy/ObjBuild.py deploy <package>
python factory.deploy/ObjBuild.py targets <package>
python factory.deploy/ObjBuild.py status <package>
# Build comparison
python factory.deploy/ObjBuild.py build-diff <guid-a> <guid-b>
# Registry maintenance
python factory.deploy/ObjBuild.py registry-prune --keep 3 [--dry-run]
# Security audit
python factory.deploy/ObjAudit.py deps
python factory.deploy/ObjAudit.py config
python factory.deploy/ObjAudit.py secrets
python factory.deploy/ObjAudit.py image <tag>
python factory.deploy/ObjAudit.py all --image-tag <tag>
factory.webhook/ObjHookBuild.py accepts POST requests to trigger
builds from Bitbucket or other CI systems:
POST /webhook/Build
{
"package": "homechoice",
"notify": "user@example.com",
"platform": "linux/arm64",
"skip_compile": false,
"skip_deploy": true
}
The build runs asynchronously and returns {"status": "accepted"}.
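Server-side, the payload can be normalised with defaults like this (field names are taken from the example above; the parser itself is a sketch, not ObjHookBuild's actual code):

```python
import json

def parse_build_request(body: bytes) -> dict:
    """Decode a build-trigger payload, applying safe defaults for optional fields."""
    data = json.loads(body)
    return {
        "package": data["package"],  # required; KeyError signals a bad request
        "notify": data.get("notify"),
        "platform": data.get("platform"),
        "skip_compile": bool(data.get("skip_compile", False)),
        "skip_deploy": bool(data.get("skip_deploy", False)),
    }
```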
| Component | Location | Purpose |
|---|---|---|
| Registry | registry.technocore.co.za (LXC 136, 10.0.10.5) | Harbor container registry |
| Builder | LXC 136 (24 cores, 64GB) via SSH | Remote buildx for ARM |
| K3s cluster | swarm1-4 (10.0.10.41-44) | Deployment target |
| Pipeline | LXC 109 (10.0.10.68) | CI/CD webhook runner |
| Caddy | anchor.technocore.co.za | TLS proxy for registry + K3s |
| Proxmox | hypervisor (10.0.10.18) | LXC host |
The registry runs Harbor v2.12 on LXC 136 (10.0.10.5:5000).
TLS is terminated at the anchor reverse proxy.
All images are pushed under the axion/ project namespace:
registry.technocore.co.za/axion/{version}-{service}-{package}:latest

config.yaml registry section:
deployment:
registry:
type: "harbor"
host: "registry.technocore.co.za"
project: "axion"
username: "admin"
password: "..."
auto_scan: true
CLI commands:
python factory.deploy/ObjDocker.py harbor-repos # list repos
python factory.deploy/ObjDocker.py harbor-scan <repo> # scan results
python factory.deploy/ObjDocker.py harbor-gc # garbage collect
python factory.deploy/ObjBuild.py registry-prune # prune old tags
┌─────────────────────────────────────────────┐
│ Stage 1: builder (python:3.12-bookworm) │
│ uv venv + pip install requirements.txt │
│ Strip ARM-incompatible packages on arm64 │
└────────────────┬────────────────────────────┘
│
┌────────────────▼────────────────────────────┐
│ Stage 2: compiler │
│ COPY source → /axion │
│ Trim config.yaml to package │
│ ObjCompile.py all --offline │
│ Replace factory.* with compiled .so │
└────────────────┬────────────────────────────┘
│
┌────────────────▼────────────────────────────┐
│ Stage 3: runtime (python:3.12-slim) │
│ COPY venv from builder │
│ Playwright Chromium for screenshots │
│ Volume mounts (docs, PEM, config) │
└────────────────┬────────────────────────────┘
│
┌────────────┼────────────┐
│ │ │
┌───▼──┐ ┌─────▼────┐ ┌────▼───┐
│report│ │ webhook │ │ sms │ ... 13 targets
│ │ │ │ │ │
│strip │ │ strip │ │ strip │ ← strip_factories.sh
│unused│ │ unused │ │ unused │ removes unneeded
│factory│ │ factory │ │factory │ factory.* dirs
└──────┘ └──────────┘ └────────┘
Each per-service image only includes the factory modules it needs,
defined in SERVICE_FACTORY_MAP (ObjBuildRunnerBase.py):
| Dependency | Required by |
|---|---|
| factory.core | ALL services |
| factory.field + factory.formlayout | field always needs formlayout |
| factory.text | most services (template rendering) |
| factory.export | services that produce data exports |
| factory.service | services that run calculations/API calls |
| factory.report | report service + web (report rendering) |
| factory.pages | web, go, webhook (page templates) |
| factory.web | services with HTTP endpoints |
| factory.workflow | workflow + scheduler |
WARNING: These mappings are based on static import analysis and
need thorough runtime testing. Dynamic module loading (workflow steps,
webhook dispatch, lazy imports) may require additional factories.
Test each trimmed service image by exercising all code paths before
production deployment.
This is the most important section. Getting this wrong causes bugs
that silently reappear after every scaffold run.
NEVER edit generated Helm template files directly. The scaffold
step overwrites them every time it runs. Fix the Python _*_tpl()
methods in ObjBuildRunnerK3s.py instead — they are the source of
truth for template structure.
config.yaml ObjBuildRunnerBase.yaml
│ │
▼ ▼
ObjBuildRunnerK3s.py INFRA_SERVICES
generate_deploy_config() SERVICE_FACTORY_MAP
_deployment_tpl() SERVICE_REQUIREMENTS_MAP
_service_tpl() SERVICE_ROUTE_MAP
_infra_statefulset_tpl() SERVICE_SCRIPT_MAP
│ │
▼ ▼
resource.docker/helm/{package}/ (loaded at import time)
values.yaml ← GENERATED, do not edit
templates/*.yaml ← GENERATED, do not edit
Chart.yaml ← GENERATED, do not edit
│
▼
helm upgrade → K3s pods
| What you want to change | Where to change it | NOT here |
|---|---|---|
| Service port, replicas | config.yaml base.services | values.yaml |
| NodePort exposure | config.yaml base.services.{svc}.node_port | service.yaml |
| Infra image (mongo:4.4) | ObjBuildRunnerBase.yaml infra_services | values.yaml |
| Infra credentials | config.yaml terraform.{service} | values.yaml |
| Infra env vars (MONGO_INITDB) | ObjBuildRunnerBase.yaml infra_services.{svc}.env_map | infra-statefulset.yaml |
| App env vars (AXION_MQTT_HOST) | ObjBuildRunnerK3s.py generate_deploy_config() | deployment.yaml |
| Factory stripping list | ObjBuildRunnerBase.yaml service_factories | Dockerfile |
| Requirements tier | ObjBuildRunnerBase.yaml service_requirements | Dockerfile |
| Deployment template structure | ObjBuildRunnerK3s.py _deployment_tpl() | deployment.yaml |
| Service template structure | ObjBuildRunnerK3s.py _service_tpl() | service.yaml |
| Infra template structure | ObjBuildRunnerK3s.py _infra_statefulset_tpl() | infra-statefulset.yaml |
| Volume mounts | config.yaml deployment.volumes | deployment.yaml |
| Registry config | config.yaml deployment.registry | values.yaml |
| Timezone | config.yaml system.timezone | deployment.yaml |
| Caddy routing | ObjBuildRunnerK3s.py generate_caddy() | Caddyfile |
| Registries mirrors | config.yaml deployment.registry.mirrors | registries.yaml |
Editing values.yaml directly — It will be overwritten by the
next scaffold run. Change config.yaml or
ObjBuildRunnerBase.yaml instead, then run scaffold.
Editing deployment.yaml directly — It will be overwritten.
Change _deployment_tpl() in ObjBuildRunnerK3s.py.
MongoDB wrong image — Change ObjBuildRunnerBase.yaml
infra_services.mongodb.image, not values.yaml.
MongoDB credentials mismatch — Credentials flow from
config.yaml terraform.mongo → ObjBuildRunnerBase.yaml env_map
→ values.yaml infra.mongodb.env → StatefulSet env vars.
If the StatefulSet doesn't have env vars, check
_infra_statefulset_tpl() has the {{- if $cfg.env }} block.
NodePort disappears after deploy — Add node_port: 30400
to config.yaml base.services.report, not to the service template.
Config sections leaking to pods — The _refresh_k8s_config()
method trims config.yaml before pushing the ConfigMap. But pods
must be restarted to pick up the new ConfigMap
(kubectl rollout restart deployment).
Volume names with underscores — K8s rejects underscores in
volume names. Template uses | replace "_" "-" filter.
Use hyphens in config.yaml volume keys if possible.
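The rename applied by the template filter is equivalent to (Kubernetes names must be DNS-1123 compliant — lowercase alphanumerics and hyphens):

```python
def k8s_volume_name(key: str) -> str:
    """Map a config.yaml volume key to a DNS-1123-safe Kubernetes volume name."""
    return key.lower().replace("_", "-")
```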
# 1. Edit config.yaml or ObjBuildRunnerBase.yaml or _*_tpl() methods
# 2. Regenerate all Helm files from sources
python factory.deploy/ObjBuild.py scaffold homechoice
# 3. Verify generated files look correct
cat resource.docker/helm/homechoice/values.yaml
# 4. Deploy (updates ConfigMap + Secret + Helm release)
python factory.deploy/ObjBuild.py deploy homechoice
# 5. If infra changed (new credentials, new image), delete + recreate
kubectl delete statefulset mongodb -n homechoice-dev
kubectl delete pvc data-mongodb-0 -n homechoice-dev
helm upgrade homechoice resource.docker/helm/homechoice ...
# 6. Restart pods to pick up ConfigMap changes
kubectl rollout restart deployment -n homechoice-dev
┌─────────────────────────────────────────────┐
│ Stage 1: builder (python:3.12-bookworm) │
│ Full venv for compilation │
│ uv pip install requirements.txt │
│ NOT copied to runtime │
└────────────┬────────────────────────────────┘
│
┌────────┼────────────────────────────────┐
│ Per-tier venv stages (built in builder) │
│ venv-minimal (40MB) │
│ venv-web (80MB) │
│ venv-report (200MB) │
│ venv-io (240MB) │
│ venv-ai (500MB) │
└────────┬────────────────────────────────┘
│
┌────────────▼────────────────────────────────┐
│ Stage 2: compiler (from builder) │
│ Cython-compile all .py → .so │
│ Uses full venv (needs all imports) │
│ Strips source, keeps only .so + .yaml │
└────────────┬────────────────────────────────┘
│
┌────────────▼────────────────────────────────┐
│ Stage 3: runtime (python:3.12-slim, NO venv)│
│ Minimal system deps only │
│ COPY compiled .so from compiler │
│ No Python packages installed │
└────────────┬────────────────────────────────┘
│
┌────────┼────────────┬────────────┐
│ │ │ │
┌───▼──┐ ┌──▼───┐ ┌─────▼────┐ ┌───▼───┐
│report│ │ mqtt │ │ webhook │ │ ai │
│ │ │ │ │ │ │ │
│COPY │ │COPY │ │COPY │ │COPY │
│venv- │ │venv- │ │venv-web │ │venv- │
│report│ │min │ │ │ │ai │
│+PW │ │ │ │strip │ │+mesa │
│+IM │ │strip │ │factories │ │strip │
│strip │ │ │ │ │ │ │
└──────┘ └──────┘ └──────────┘ └───────┘
200MB 40MB 80MB 500MB
| File | Purpose |
|---|---|
| factory.deploy/ObjBuild.py | Pipeline orchestrator + CLI |
| factory.deploy/ObjBuild.yaml | DB schemas + SQL queries |
| factory.deploy/ObjAudit.py | Security audit module |
| factory.deploy/extend.build/ObjCompile.py | Cython compiler |
| factory.deploy/extend.build/ObjBuildDocker.py | Docker build/push |
| factory.deploy/extend.build/ObjBuildHelm.py | Helm install/upgrade |
| factory.deploy/extend.build/ObjConfig.py | Config validation/trim |
| factory.deploy/extend.runner/ObjBuildRunnerBase.py | Loads maps from YAML |
| factory.deploy/extend.runner/ObjBuildRunnerBase.yaml | Source of truth: infra, factories, tiers, routes |
| factory.deploy/extend.runner/ObjBuildRunnerK3s.py | K3s scaffold/deploy + template generators |
| factory.deploy/extend.substrate/ObjDocker.py | Docker daemon + image naming |
| factory.report/package.core/ObjReportBuild.py | Email report renderer |
| factory.webhook/ObjHookBuild.py | Webhook build trigger |
| resource.docker/dockerfile/axion.dockerfile | Generic multi-stage Dockerfile (all packages) |
| resource.docker/strip_factories.sh | Factory stripping script |
| resource.docker/helm/{package}/ | GENERATED by scaffold — do not edit |
| resource.config/requirements-*.txt | Per-tier pip requirements |
| resource.config/requirements-build.txt | Build-only packages (Cython, docker, pygit2) |
| resource.config/requirements-apt.txt | System package dependencies |
| resource.notes/futures/ | Product roadmap (25 plans) |
| File | Tier | Services | ~Size |
|---|---|---|---|
| requirements-minimal.txt | minimal | mqtt, sms, monitor, scheduler, conversation | 40 MB |
| requirements-web.txt | web | webhook, go | 80 MB |
| requirements-report.txt | report | report, web (website) | 200 MB |
| requirements-io.txt | io | import | 240 MB |
| requirements-ai-service.txt | ai | ai | 500 MB |
| requirements.txt | full | workflow, runtime | 1.7 GB |
| requirements-build.txt | build-only | compiler stage (Cython, docker, pygit2) | — |
| requirements-ai.txt | GPU | torch, transformers (optional) | 4.5 GB |
Each tier inherits from minimal: report → web → minimal.
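The inheritance chain can be modelled as a parent lookup (only the report → web → minimal chain is stated above; anything beyond that in the map would be an assumption, so the sketch stops there):

```python
# Assumed parent map; only report -> web -> minimal is confirmed by the tier table
TIER_PARENT = {"report": "web", "web": "minimal"}

def tier_chain(tier: str) -> list[str]:
    """Resolve a tier to its full inheritance chain, most specific first."""
    chain = [tier]
    while chain[-1] in TIER_PARENT:
        chain.append(TIER_PARENT[chain[-1]])
    return chain
```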
# Install all system deps
xargs -a resource.config/requirements-apt.txt sudo apt-get install -y
Key packages: build-essential, ccache, libmariadb-dev, trivy,
nfs-common, imagemagick, tini.
Services can be disabled per package in config.yaml:
technocore:
features:
ai: false # skip ai service + image
sms: false # skip sms service
monitor: true # keep monitoring
build_all_targets reads {package}.features and skips
targets with active: false. This also affects the workflow
tier — without AI, workflow can use a lighter venv.
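Target filtering can be sketched as follows (it accepts both the boolean form shown above and an active: key, since the text mentions active: false; is_active/active_targets are hypothetical names):

```python
def is_active(flag) -> bool:
    """Treat a missing flag as enabled; accept bools or {"active": bool} mappings."""
    if flag is None:
        return True
    if isinstance(flag, dict):
        return bool(flag.get("active", True))
    return bool(flag)

def active_targets(targets: list[str], features: dict) -> list[str]:
    """Filter build targets by the package's feature flags."""
    return [t for t in targets if is_active(features.get(t))]
```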
# Send to all active RACI recipients
python ObjBuild.py pipeline homechoice --notify raci
# Send to specific email
python ObjBuild.py pipeline homechoice --notify user@example.com
RACI config supports per-role activation:
deployment:
raci:
active: true
responsible:
email: "devops@co.za"
active: true
accountable:
- email: "lead@co.za"
active: false # inactive during dev
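Recipient resolution for --notify raci can be sketched like this (role entries may be a single mapping or a list, as in the example above; raci_recipients is a hypothetical helper, not the shipped implementation):

```python
def raci_recipients(raci: dict) -> list[str]:
    """Collect emails from active entries across all RACI roles."""
    if not raci.get("active", False):
        return []
    emails = []
    for role, entry in raci.items():
        if role == "active":
            continue
        entries = entry if isinstance(entry, list) else [entry]
        for item in entries:
            if isinstance(item, dict) and item.get("active") and item.get("email"):
                emails.append(item["email"])
    return emails
```

With the config above, only devops@co.za receives the report while accountable is inactive.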