This guide covers testing practices, optimizations, and best practices for the Axion project.
```bash
# Run all tests
pytest

# Run tests in a specific file
pytest resource.test/pytests/factory.core/test_ObjData.py

# Run a specific test
pytest resource.test/pytests/factory.core/test_ObjData.py::test_sql_retrieval

# Run with verbose output
pytest -v

# Run with warnings suppressed
pytest --disable-warnings

# Run only fast tests (skip slow model training tests)
pytest -m "not slow" -v

# Skip both slow and integration tests
pytest -m "not slow and not integration" -v

# Run only the last failed tests
pytest --lf

# Run tests and stop at first failure
pytest -x --maxfail=1
```
For significant speed improvements, run tests in parallel using pytest-xdist:
```bash
# Use all CPU cores
pytest -n auto

# Use specific number of workers
pytest -n 4

# Fast tests in parallel (recommended for development)
pytest -n auto -m "not slow" -v
```
Expected speedup: 3-5x faster with parallel execution.
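If parallel, fast-only runs should be the team default, the flags above can be baked into `pytest.ini` through `addopts`. This is an illustrative fragment, not this project's actual config; merge it with the existing `[pytest]` section and drop `-m "not slow"` when the full suite is needed:

```ini
[pytest]
# Hypothetical defaults: parallel workers, skip slow tests, verbose output
addopts = -n auto -m "not slow" -v
```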
Tests are organized in resource.test/pytests/ mirroring the factory structure:
```
resource.test/pytests/
├── factory.core/
│   ├── test_ObjData.py
│   ├── test_ObjML.py
│   ├── test_objml_persistence.py
│   ├── test_objmldatasets_cli.py
│   ├── test_objlearning.py
│   ├── test_ObjAI.py
│   ├── test_ObjFeatureStore.py
│   └── test_ObjFeatureStoreEdit.py
├── factory.service/
└── factory.text/
```
The test suite has been optimized to run significantly faster through multiple improvements:
| Test Suite | Before | After | Improvement |
|---|---|---|---|
| test_objmldatasets_cli.py | 13m 49s | 4m 58s | 64-70% |
| test_Objects.py | 12m 35s | 1m 32s | 8.18x faster |
| test_ObjData.py | 2m 52s | 9.45s | 18.3x faster |
| ML Suite | 51 min | ~10m | 80% |
Dataset sampling in `ObjMLDatasets.train_cost_sensitive_model()`:

```python
# Sample dataset for faster testing (keep max 200 rows)
if len(df) > 200:
    df = df.sample(n=200, random_state=42)
```
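The same capping logic can be wrapped in a small helper so every training test applies it consistently. This is a sketch, assuming pandas; `cap_rows` is a hypothetical name, not a function from the codebase:

```python
import pandas as pd

def cap_rows(df: pd.DataFrame, max_rows: int = 200) -> pd.DataFrame:
    """Return df unchanged if small enough, else a deterministic sample.

    random_state=42 keeps the sample identical across test runs.
    """
    if len(df) > max_rows:
        return df.sample(n=max_rows, random_state=42)
    return df

# A 1000-row frame is capped to 200 rows, reproducibly.
big = pd.DataFrame({"x": range(1000)})
assert len(cap_rows(big)) == 200
assert cap_rows(big).equals(cap_rows(big))
```

Because the sample is seeded, test assertions about model behavior stay stable run to run.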
Model parameters are optimized for faster training while maintaining test validity:

```python
# RandomForestClassifier
RandomForestClassifier(
    n_estimators=10,  # Reduced from 100 (90% fewer trees)
    max_depth=5,      # Limited depth for faster training
)

# GradientBoostingClassifier
GradientBoostingClassifier(
    n_estimators=50,  # Reduced from 100 (50% fewer iterations)
    max_depth=3,      # Shallower trees
)

# MLPClassifier
MLPClassifier(
    hidden_layer_sizes=(50,),  # Reduced from 100 (50% smaller)
    max_iter=100,              # Reduced from 500 (80% fewer iterations)
)
```
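One way to keep these reduced settings in a single place is a small factory helper, so tests do not drift out of sync as parameters are tuned. A sketch, assuming scikit-learn is available; `fast_models` is a hypothetical helper, not part of the project:

```python
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.neural_network import MLPClassifier

def fast_models():
    """Return the reduced, test-speed variants of the three classifiers."""
    return {
        "rf": RandomForestClassifier(n_estimators=10, max_depth=5, random_state=42),
        "gb": GradientBoostingClassifier(n_estimators=50, max_depth=3, random_state=42),
        "mlp": MLPClassifier(hidden_layer_sizes=(50,), max_iter=100, random_state=42),
    }

models = fast_models()
assert models["rf"].n_estimators == 10
```

Passing `random_state=42` everywhere also matches the reproducibility guidance later in this guide.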
Additionally, `test_size` was increased from 0.3 to 0.5, reducing the training split from 70% to 50% of the dataset.

Core Foundation tests were experiencing extreme slowdowns (30+ seconds per test) due to Infisical secret lookups attempting network calls during test initialization.
Solution: Mock Infisical client initialization to prevent network calls:
```python
import pytest
from unittest.mock import patch

import ConfigIni

@pytest.fixture(scope="module", autouse=True)
def disable_infisical():
    """Disable Infisical to prevent slow secret lookups in tests."""
    with patch.object(ConfigIni.ConfigIni, '_init_infisical', return_value=None):
        with patch.object(ConfigIni.ConfigIni, '_has_infisical_credentials', return_value=False):
            yield
```
Impact: test setup drops from 30+ seconds per test to near-instant, because no network calls are attempted.

When to Use: test modules that initialize `ConfigIni` but do not exercise Infisical itself.

When NOT to Use: tests in `test_ConfigIni_auto_creation.py` that test Infisical functionality.

Applying to Other Test Files:
If you notice tests taking 30+ seconds during setup or see "Secret 'XXX' not found" messages in output, add the fixture:
1. Import the required modules:

```python
import pytest
from unittest.mock import patch

import ConfigIni
```

2. Add the fixture at module level (after imports, before tests):

```python
@pytest.fixture(scope="module", autouse=True)
def disable_infisical():
    """Disable Infisical to prevent slow secret lookups in tests."""
    with patch.object(ConfigIni.ConfigIni, '_init_infisical', return_value=None):
        with patch.object(ConfigIni.ConfigIni, '_has_infisical_credentials', return_value=False):
            yield
```

3. Run the tests to verify the speedup.
Files Already Optimized:
- `resource.test/pytests/factory.core/test_Objects.py`
- `resource.test/pytests/factory.core/test_ObjData.py`

Markers are defined in `pytest.ini`:
```ini
[pytest]
markers =
    integration: tests requiring external systems (DB, SFTP, APIs)
    slow: tests that take a long time to run (model training, large datasets)
```
```bash
# Run only integration tests
pytest -m integration

# Skip integration tests
pytest -m "not integration"

# Skip slow tests (recommended for development)
pytest -m "not slow"

# Skip both slow and integration tests
pytest -m "not slow and not integration"

# Run only slow tests
pytest -m slow
```
Add markers to tests using decorators:
```python
import pytest

@pytest.mark.slow
def test_train_large_model():
    """This test trains a large model and takes several minutes."""
    pass

@pytest.mark.integration
def test_database_connection():
    """This test requires a real database connection."""
    pass
```
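Markers can also be applied to every test in a file at once via pytest's module-level `pytestmark` variable, which saves decorating each function. A sketch of the same idea; the test name is hypothetical:

```python
import pytest

# Every test in this module gets the 'slow' marker.
pytestmark = pytest.mark.slow

@pytest.mark.integration
def test_full_model_sync():
    """Hypothetical test: 'slow' via the module, 'integration' via the decorator."""
    pass

# The decorator records its mark on the function object itself.
assert test_full_model_sync.pytestmark[0].name == "integration"
```

With this in place, `pytest -m "not slow"` skips the whole module without touching individual tests.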
Feature Store tests require database tables to be created. By default, schema checks are disabled in Axion (Objects.DO_SCHEMA_CHECKS = False).
Fix: Tests now enable schema checks automatically using a session-scoped fixture:
```python
@pytest.fixture(scope="session", autouse=True)
def enable_schema_checks():
    """Enable schema checks for all Feature Store tests."""
    original_value = Objects.DO_SCHEMA_CHECKS
    Objects.DO_SCHEMA_CHECKS = True
    yield
    Objects.DO_SCHEMA_CHECKS = original_value
```
This ensures all required tables (def_feature, def_features, def_feature_expectation, etc.) are created when tests run.
```bash
# Run all Feature Store tests
pytest resource.test/pytests/factory.core/test_ObjFeatureStore.py -v

# Run specific test class
pytest resource.test/pytests/factory.core/test_ObjFeatureStore.py::TestObjFeatureStore -v
```
The ML test suite (`test_objml*.py`) mixes fast unit tests with slow model-training tests. Recommended workflows:
During Development:

```bash
# Run only fast tests
pytest -m "not slow" resource.test/pytests/factory.core/test_objml*.py
```

Before Committing:

```bash
# Run all tests in parallel
pytest -n auto resource.test/pytests/factory.core/test_objml*.py
```

Full Validation:

```bash
# Run everything including slow tests
pytest resource.test/pytests/factory.core/test_objml*.py -v
```
Use pytest's `--durations` flag to identify slow tests:

```bash
# Show 10 slowest tests
pytest --durations=10

# Show all test durations
pytest --durations=0
```
Symptom: Tests don't complete or take excessive time.

Common Causes:
- Infisical secret lookups making network calls during test setup
- Slow model-training tests running when they aren't needed

Solutions:
- Add the `disable_infisical` fixture (see Infisical Mocking)
- Use the `-m "not slow"` marker to skip model training tests
- Bound the run: `timeout 300 pytest ...`
- Use `pytest --durations=10` to identify which tests are slow

Symptom: `Table 'xxx.def_feature' doesn't exist`
Solution: Ensure Objects.DO_SCHEMA_CHECKS = True for tests that need table creation. See Feature Store Tests section.
Symptom: `ModuleNotFoundError: No module named 'ObjData'`

Solution: Tests should include proper path setup:

```python
import os
import sys

base_path = os.getcwd()
paths = ["", "/factory.core", "/factory.service"]
for relative_path in paths:
    if (base_path + relative_path) not in sys.path:
        sys.path.append(base_path + relative_path)
```
Symptom: Tests pass/fail randomly.

Possible Causes:
- Missing `random_state` parameter in ML models

Solution: Ensure all ML operations use `random_state=42` for reproducibility.
Enable debug output for troubleshooting:
```bash
# Show all debug output
pytest -v -s

# Show stdout/stderr
pytest --capture=no

# Very verbose (includes pytest internal output)
pytest -vv
```
Consider setting up pre-commit hooks to run fast tests automatically:
```yaml
# .pre-commit-config.yaml
repos:
  - repo: local
    hooks:
      - id: pytest-fast
        name: pytest-fast
        entry: pytest -m "not slow" --maxfail=1
        language: system
        pass_filenames: false
        always_run: true
```
Generate test coverage reports:
```bash
# Run tests with coverage
pytest --cov=factory.core --cov=factory.learn --cov-report=html

# View coverage report
open htmlcov/index.html      # macOS
xdg-open htmlcov/index.html  # Linux
```
Key practices:
- Use `random_state=42` for reproducible ML tests
- Use `pytest -n auto` for faster CI/CD pipelines
- Use `pytest -m "not slow"` during development

See also:
- `/readme.md`
- `/CLAUDE.md` (Claude Code instructions)