Scan and run pytest files individually with parallel execution,
retries, TUI dashboard, and previous-run comparison.
```bash
# Run all tests (8 parallel workers, excludes integration tests)
python factory.deploy/ObjTest.py run

# Run specific factory
python factory.deploy/ObjTest.py run factory.core

# Run with TUI dashboard
python factory.deploy/ObjTest.py run --tui

# Sequential execution
python factory.deploy/ObjTest.py run -w 1

# Include integration tests
python factory.deploy/ObjTest.py run --marker ""

# Custom timeout and retries
python factory.deploy/ObjTest.py run --timeout 60 --test-timeout 30 --retries 2

# List discovered test files
python factory.deploy/ObjTest.py run --list

# Generate markdown report
python factory.deploy/ObjTest.py report

# Compare with previous run
python factory.deploy/ObjTest.py compare
```
### run

Discover and execute test files. Each file runs in an isolated subprocess (`python -m pytest <file>`); see the sketch after the options table below.
| Option | Default | Description |
|---|---|---|
| `factory` | `""` | Filter by factory (e.g. `factory.core`) |
| `-v` | off | Verbose pytest output |
| `-s` | off | Stop on first failure (forces sequential) |
| `-l` | off | List files without running |
| `-t` | 300 | Subprocess timeout per file (seconds) |
| `--test-timeout` | 0 | Per-test timeout passed to pytest |
| `-m` | `"not integration"` | Pytest marker expression |
| `-r` | 0 | Retry failed files N times |
| `-w` | 8 | Parallel workers |
| `-x` | `""` | Comma-separated files to exclude |
| `--tui` | off | Rich TUI dashboard |
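To picture the per-file isolation, here is a minimal sketch of running one file in its own subprocess, roughly what `run_file()` is described as doing (see the architecture sketch below). The `run_one` helper and the summary regex are hypothetical illustrations, not the actual `run_file()` / `_parse_pytest_summary()` code.

```python
import re
import subprocess
import sys

# Hypothetical sketch: run one test file in an isolated subprocess.
def run_one(path: str, marker: str = "not integration", timeout: int = 300) -> dict:
    cmd = [sys.executable, "-m", "pytest", path, "-m", marker]
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return {"file": path, "status": "TIMEOUT", "passed": 0, "failed": 0}

    # Pytest prints a summary line such as "3 passed, 1 failed in 0.12s";
    # pull the counts out with a simple regex (assumed output format).
    counts = {name: int(n)
              for n, name in re.findall(r"(\d+) (passed|failed|error)", proc.stdout)}
    return {
        "file": path,
        "status": "PASS" if proc.returncode == 0 else "FAIL",
        "passed": counts.get("passed", 0),
        "failed": counts.get("failed", 0),
    }
```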
### report

Run all tests and save a markdown report to `local.documents/tests/test_report_<timestamp>.md`.
### compare

Show changes since the last run using `local.documents/tests/last_run.json`.
### history

Show test run history with pass/fail trends.
```bash
# Show last 20 runs
python factory.deploy/ObjTest.py history

# Show last 50 runs
python factory.deploy/ObjTest.py history -n 50
```
Each run is saved as `local.documents/tests/run_<timestamp>.json` for audit evidence.
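The trend view can be reconstructed from those archived snapshots. The sketch below assumes a hypothetical snapshot schema (file path mapped to `"PASS"`/`"FAIL"`, with zero-padded timestamps so lexical sorting is chronological); the real JSON layout written by ObjTest may differ.

```python
import glob
import json

# Hypothetical sketch: summarize pass/fail trends from archived run files.
# ASSUMPTION: each run_<timestamp>.json maps file paths to "PASS"/"FAIL".
def show_trends(directory: str = "local.documents/tests", last: int = 20) -> None:
    paths = sorted(glob.glob(f"{directory}/run_*.json"))[-last:]
    for path in paths:
        with open(path) as fh:
            results = json.load(fh)
        passed = sum(1 for status in results.values() if status == "PASS")
        print(f"{path}: {passed}/{len(results)} files passed")
```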
```text
ObjTest.discover()          Find test_*.py files
  |
ObjTest.run_all()           Orchestrate execution
  |
  +-- _run_parallel()       ThreadPoolExecutor (default 8 workers)
  |     or
  +-- _run_sequential()     One-by-one (with --stop or -w 1)
  |
  +-- run_file()            subprocess.run() per file
  |     |
  |     +-- _parse_pytest_summary()   Extract counts
  |     +-- _feed_tui()               Update TUI
  |
  +-- _retry_failures()     Re-run failed files in-place
  |
  +-- _save_results()       Write last_run.json
  +-- _print_summary()      Console summary with diff
```
Each test file runs in a completely isolated subprocess. No state
leaks between files. The orchestration uses threads (not processes)
since the work is subprocess I/O bound, not CPU bound.
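To make that thread-based fan-out concrete, here is a minimal, self-contained sketch. `run_one` and `run_parallel` are hypothetical names; the real `_run_parallel()` also handles retries, TUI updates, and result bookkeeping, all omitted here.

```python
import os
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor, as_completed

# Threads (not processes) are fine here: each worker just blocks
# on subprocess I/O while pytest does the CPU work in its own process.
def run_one(path: str) -> dict:
    # Stand-in for the per-file subprocess call sketched earlier.
    proc = subprocess.run([sys.executable, "-m", "pytest", path],
                          capture_output=True, text=True)
    return {"file": path, "ok": proc.returncode == 0}

def run_parallel(files: list[str]) -> list[dict]:
    workers = min((os.cpu_count() or 1) * 2, 8)  # the documented default
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(run_one, f) for f in files]
        return [f.result() for f in as_completed(futures)]
```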
Defaults:

- Workers: `min(cpu_count * 2, 8)`, enough parallelism for subprocess-bound work
- Marker: `"not integration"`, so integration tests are excluded

The TUI dashboard is enabled with `--tui` and uses Rich Live for a full-screen terminal display.
The TUI runs a background thread (`_update_loop`) that rebuilds panels under a lock every 0.5s. After completion, a summary table of failures is printed to the console.
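The update-loop pattern looks roughly like this. It is a minimal sketch using Rich's `Live` display with hypothetical panel content; the real `ObjTestTUI` builds richer panels.

```python
import threading
import time
from rich.live import Live
from rich.panel import Panel

# Hypothetical sketch of the periodic-rebuild pattern: a background
# thread re-renders the dashboard under a lock every 0.5s.
class DashboardSketch:
    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._stop = threading.Event()
        self._status: dict[str, str] = {}

    def update(self, file: str, status: str) -> None:
        with self._lock:
            self._status[file] = status

    def _build(self) -> Panel:
        with self._lock:
            lines = [f"{f}  {s}" for f, s in sorted(self._status.items())]
        return Panel("\n".join(lines) or "waiting...", title="ObjTest")

    def _update_loop(self, live: Live) -> None:
        while not self._stop.is_set():
            live.update(self._build())
            time.sleep(0.5)

    def run(self) -> None:
        with Live(self._build(), screen=True) as live:
            # Daemon thread, matching the rule documented below.
            thread = threading.Thread(target=self._update_loop,
                                      args=(live,), daemon=True)
            thread.start()
            # ... feed update() from the test workers, then:
            self._stop.set()
            thread.join(timeout=2.0)
```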
Failure detail extraction captures assertion errors, collection errors, timeouts, fixture errors, import errors, tracebacks, and syntax errors. The output buffer is capped at 5000 chars per failure.
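A simple way to picture that extraction: scan the captured output for known failure markers and keep a bounded excerpt. This is a hypothetical sketch, not the actual extraction code.

```python
# Hypothetical sketch of failure-detail extraction: scan captured pytest
# output for known markers and keep a bounded excerpt per failure.
FAILURE_MARKERS = (
    "AssertionError",
    "ERROR collecting",
    "Timeout",
    "fixture",
    "ImportError",
    "Traceback",
    "SyntaxError",
)

def extract_failure_detail(output: str, limit: int = 5000) -> str:
    lines = output.splitlines()
    hits = [i for i, line in enumerate(lines)
            if any(marker in line for marker in FAILURE_MARKERS)]
    if not hits:
        return output[-limit:]  # no marker found: keep the tail
    # Keep everything from the first marker onward, capped at the buffer size.
    return "\n".join(lines[hits[0]:])[:limit]
```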
Module: `factory.deploy/extend.tui/ObjTestTUI.py`
Results are saved to `local.documents/tests/last_run.json` after each run, and the console summary shows changes relative to that previous snapshot. Comparison is at the file level (pass/fail), not the individual test level.
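File-level comparison reduces to a dictionary diff. The sketch below assumes a hypothetical snapshot format mapping file paths to `"PASS"`/`"FAIL"`; the real `last_run.json` layout may differ.

```python
import json

# Hypothetical sketch of a file-level diff between two run snapshots.
# ASSUMPTION: each snapshot maps file path -> "PASS" or "FAIL".
def compare_runs(previous_path: str, current_path: str) -> None:
    with open(previous_path) as fh:
        previous = json.load(fh)
    with open(current_path) as fh:
        current = json.load(fh)
    for file, status in sorted(current.items()):
        before = previous.get(file)
        if before is None:
            print(f"NEW      {file} ({status})")
        elif before != status:
            print(f"CHANGED  {file} ({before} -> {status})")
    for file in sorted(set(previous) - set(current)):
        print(f"REMOVED  {file}")
```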
All `threading.Thread` instances in tests must use `daemon=True`. Non-daemon threads prevent the subprocess from exiting, causing ObjTest to report TIMEOUT.
```python
import threading

# Daemon thread: cannot block subprocess exit if it hangs.
t = threading.Thread(target=my_func, daemon=True)
t.start()
t.join(timeout=2.0)  # bounded join so the test itself cannot hang
```
ObjTest excludes `@pytest.mark.integration` tests by default. If a test fails in the TUI because it needs a live database, MQTT broker, or external service, mark it as integration to skip it during parallel runs:
Single test class:

```python
import pytest

@pytest.mark.integration
class TestMyIntegration:
    """These tests need a live database."""
    ...
```
Entire file:

```python
import pytest

pytestmark = pytest.mark.integration
```
Multiple markers on a file:

```python
pytestmark = [pytest.mark.slow, pytest.mark.integration]
```
When to mark as integration:

- Instantiates `ObjData()` / `ObjTicket()` / `Report()` etc.
- Calls `sql_execute()`, `sql_get_list()`, `has_table()`

Running integration tests explicitly:
```bash
# Include integration tests (sequential recommended)
python factory.deploy/ObjTest.py run --marker "" -w 1

# Run ONLY integration tests
python factory.deploy/ObjTest.py run --marker "integration" -w 1
```
Why sequential for integration? Each parallel worker opens its own DB connections; 8 workers running integration tests exhaust the connection pool, causing timeouts and flaky failures.
The conftest GC cleanup closes DB connections after each test.
Do not rely on a connection from a previous test still being
alive. Create a fresh connection per test function:
```python
import pytest

# Bad: shared fixture, connection dies after first test
@pytest.fixture(scope="class")
def helper():
    return ObjData()

# Good: fresh connection per test
@pytest.fixture
def helper():
    return ObjData()
```
Always wrap `DB.close()` in try/except in `teardown_module`:

```python
def teardown_module(module):
    if db_connection:
        try:
            db_connection.DB.close()
        except Exception:
            pass
```
Integration tests creating tables should use PID-suffixed names to avoid lock conflicts with stuck queries from previous runs:

```python
import os

_SUFFIX = f"_{os.getpid()}"
TABLE = f"test_data{_SUFFIX}"
```
| File | Purpose |
|---|---|
| `factory.deploy/ObjTest.py` | Test runner and CLI |
| `factory.deploy/extend.tui/ObjTestTUI.py` | Rich TUI dashboard |
| `local.documents/tests/last_run.json` | Latest run snapshot |
| `local.documents/tests/run_*.json` | Historical run archive |
| `local.documents/tests/test_report_*.md` | Generated reports |