
NOTICE: All information contained herein is, and remains, the property of TechnoCore. The intellectual and technical concepts contained herein are proprietary to TechnoCore. Dissemination of this information or reproduction of this material is strictly forbidden unless prior written permission is obtained from TechnoCore.
ObjFeatureStore is a comprehensive feature engineering and management system within the Axion framework. It provides a database-driven approach to creating, computing, versioning, and validating feature sets for machine learning and analytics applications.
Inheriting from ObjData.ObjData, ObjFeatureStore is the runtime engine for feature computation. It loads feature definitions from the database, creates feature tables, and computes transformations.
Editing, validation, and statistics operations live in the child class ObjFeatureStoreEdit in extend.edit. Visual/lineage diagrams live in ObjFeatureStoreVisual in extend.visual.
The feature store uses seven core tables defined in the database.schema section of ObjFeatureStore.yaml.
SQL queries are split across two YAML files:

- `ObjFeatureStore.yaml` — Runtime queries: compute templates, table management, statistics, validation
- `ObjFeatureStoreEdit.yaml` — Editing queries: CRUD operations, versioning, lineage, documentation

The core tables:

- `def_feature` — Stores feature set metadata (source query, primary key, target table, last computed timestamp).
- `def_features` — Stores individual feature definitions with type, definition, data type, notes, and computation order.
- `def_feature_dependency` — Tracks dependencies between features to enable topological sorting.
- `def_feature_expectation` — Stores Great Expectations validation rules with severity levels (ERROR, WARNING, INFO).
- `def_feature_version` — Snapshots feature configurations for versioning and A/B testing.
- `def_feature_statistics` — Historical statistics per feature (null counts, distributions, aggregates).
- `def_feature_lineage` — Records source table/column dependencies for impact analysis.
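The dependency table enables topological ordering of features before computation. A minimal sketch of such an ordering (Kahn's algorithm; the `(feature, depends_on)` pair shape is illustrative, not the actual internal representation):

```python
from collections import defaultdict, deque

def topological_order(features, dependencies):
    """Order features so each appears after everything it depends on.

    `dependencies` is a list of (feature, depends_on) pairs, an
    illustrative stand-in for rows of def_feature_dependency.
    """
    depends_on = defaultdict(set)   # feature -> remaining prerequisites
    dependents = defaultdict(set)   # prerequisite -> features that need it
    for feature, prereq in dependencies:
        depends_on[feature].add(prereq)
        dependents[prereq].add(feature)

    # Start with features that have no prerequisites.
    ready = deque(f for f in features if not depends_on[f])
    ordered = []
    while ready:
        current = ready.popleft()
        ordered.append(current)
        for dep in dependents[current]:
            depends_on[dep].discard(current)
            if not depends_on[dep]:
                ready.append(dep)

    if len(ordered) != len(features):
        raise ValueError("Circular dependency detected among features")
    return ordered

order = topological_order(
    ["adjusted_score", "base_score", "risk_band"],
    [("adjusted_score", "base_score"), ("risk_band", "adjusted_score")],
)
```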
`compute_features(feature_code, package, use_batch=False, use_transaction=False, respect_dependencies=True)`

Computes all features for a feature set. Creates the target table if it doesn't exist, then executes feature transformations.
Parameters:
- `use_batch`: Use a single UPDATE statement for all features (faster for large datasets)
- `use_transaction`: Wrap computation in a transaction with rollback on error
- `respect_dependencies`: Sort features by dependency order before computing

Example:
```python
# Compute all features with batch mode and transactions
fs.compute_features(
    "customer_features",
    "prod",
    use_batch=True,
    use_transaction=True
)
```
`compute_features_batch(table_name, features_to_compute)`

Computes all compatible features in a single SQL UPDATE statement for optimal performance. Features of type AGGREGATE, WINDOW, and JSON_EXTRACT are computed separately.
Performance: Can reduce computation time by 50-80% for feature sets with many DIRECT_MAP and SYMBOLIC features.
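Conceptually, batch mode folds many per-feature updates into one statement. A sketch of how such a statement could be assembled (illustrative helper, not the real query builder):

```python
def build_batch_update(table_name, features):
    """Assemble one UPDATE statement covering several features.

    `features` maps feature column names to already-qualified SQL
    expressions -- an illustrative shape, not the internal one.
    """
    assignments = ", ".join(
        f"ft.{column} = {expression}" for column, expression in features.items()
    )
    return f"UPDATE {table_name} ft SET {assignments}"

sql = build_batch_update(
    "customer_features_prod",
    {
        "customer_name": "ft.name",
        "is_premium": "IF(ft.total_spent > 10000, 1, 0)",
    },
)
```

One round trip to the database replaces one UPDATE per feature, which is where the quoted 50-80% saving comes from.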
`compute_features_incremental(feature_code, package, filter_condition, use_batch=False)`

Computes features only for rows matching a filter condition. Enables efficient real-time updates.
Example:
```python
# Update only customers modified in the last day
fs.compute_features_incremental(
    "customer_features",
    "prod",
    filter_condition="updated_at > DATE_SUB(NOW(), INTERVAL 1 DAY)",
    use_batch=True
)
```
`compute_feature_statistics(feature_code, package)`

Computes comprehensive statistics for all features and stores them in def_feature_statistics.
Statistics Collected: null counts, distributions, and aggregates per feature.
Example:
```python
stats = fs.compute_feature_statistics("customer_features", "prod")
for feature, feature_stats in stats.items():
    print(f"{feature}: {feature_stats['null_pct']:.2f}% null")
```
`validate_features(feature_code, package, feature="*")`

Runs Great Expectations validation against computed features. Returns a validation results dictionary.
Example:
```python
result = fs.validate_features("customer_features", "prod")
if result['success']:
    print("All validations passed!")
else:
    print("Validation failures detected")
```
ObjFeatureStore.py provides CLI commands via typer for runtime operations.
Usage: python factory.core/ObjFeatureStore.py [COMMAND]
For editing commands (add-entry, add-feature, add-expectation, add-dependency, etc.), see ObjFeatureStoreEdit.py.
create-tables --feature-code CODE --package PKG
Creates the feature store tables and the feature table for the specified feature code and package.
compute-all-features --feature-code CODE --package PKG [--batch]
Computes all features for a feature set.
validate-features --feature-code CODE --package PKG [--feature NAME]
Runs Great Expectations validation.
compute-stats --feature-code CODE --package PKG
Computes and stores statistics for all features in a feature set.
```shell
# 1. Set up feature definitions (using ObjFeatureStoreEdit CLI)
python factory.core/ObjFeatureStoreEdit.py add-entry \
    --feature-code customer_features \
    --package prod \
    --module analytics \
    --source-query customers \
    --source-query-pk customer_id

python factory.core/ObjFeatureStoreEdit.py add-feature \
    --feature-code customer_features \
    --package prod \
    --feature age_group \
    --feature-type SQL_CASE \
    --feature-definition "CASE WHEN age < 30 THEN 'young' WHEN age < 50 THEN 'middle' ELSE 'senior' END" \
    --feature-data-type VARCHAR

# 2. Create tables
python factory.core/ObjFeatureStore.py create-tables \
    --feature-code customer_features \
    --package prod

# 3. Compute features
python factory.core/ObjFeatureStore.py compute-all-features \
    --feature-code customer_features \
    --package prod \
    --batch

# 4. Validate features
python factory.core/ObjFeatureStore.py validate-features \
    --feature-code customer_features \
    --package prod

# 5. Compute statistics
python factory.core/ObjFeatureStore.py compute-stats \
    --feature-code customer_features \
    --package prod
```
Feature definitions are added via ObjFeatureStoreEdit.add_feature_definition(). The examples below use:
```python
from ObjFeatureStoreEdit import ObjFeatureStoreEdit

editor = ObjFeatureStoreEdit()
```
### DIRECT_MAP

Direct column mapping from source table to feature table.

Definition Format: `source_column_name`

Example:

```python
editor.add_feature_definition(
    "customer_features", "prod",
    "customer_name", FeatureTypeEnum.DIRECT_MAP,
    "name", FeatureDataTypeEnum.VARCHAR
)
```
### SQL_CASE

SQL CASE or IF expressions for conditional logic.

Definition Format: Valid SQL expression

Example:

```python
editor.add_feature_definition(
    "customer_features", "prod",
    "is_premium", FeatureTypeEnum.SQL_CASE,
    "IF(total_spent > 10000, 1, 0)", FeatureDataTypeEnum.INT
)
```
### SYMBOLIC

Mathematical expressions with column references from the source table.

Definition Format: Expression with column names (automatically qualified)

Example:

```python
editor.add_feature_definition(
    "customer_features", "prod",
    "total_value", FeatureTypeEnum.SYMBOLIC,
    "unit_price * quantity * (1 - discount_pct/100)", FeatureDataTypeEnum.FLOAT
)
```
### AGGREGATE

Aggregations from related tables via joins.

Definition Format: `AGG_FUNC(table.column)|related_table|join_key`

Example:

```python
editor.add_feature_definition(
    "customer_features", "prod",
    "total_order_amount", FeatureTypeEnum.AGGREGATE,
    "SUM(orders.amount)|orders|customer_id", FeatureDataTypeEnum.FLOAT
)
```
### WINDOW

Window function features for ranking, running totals, etc.

Definition Format: `WINDOW_FUNC(column) OVER (PARTITION BY col1 ORDER BY col2)`

Example:

```python
editor.add_feature_definition(
    "customer_features", "prod",
    "order_recency_rank", FeatureTypeEnum.WINDOW,
    "ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC)", FeatureDataTypeEnum.INT
)
```
### JSON_EXTRACT

Extract values from JSON columns.

Definition Format: `json_column|$.json.path`

Example:

```python
editor.add_feature_definition(
    "customer_features", "prod",
    "preferred_language", FeatureTypeEnum.JSON_EXTRACT,
    "preferences|$.language", FeatureDataTypeEnum.VARCHAR
)
```
The Input Table feature enables runtime joining of additional configuration tables to enrich feature computations with external data. This allows features to reference columns from secondary tables without requiring those columns in the source data.
Input tables solve the common problem of needing configuration data, lookup values, or enrichment columns during feature computation:
Features reference input-table columns using `$column_name$` syntax. Four new columns in `def_features` enable input table functionality:
| Column | Type | Description |
|---|---|---|
| `InputTable` | VARCHAR(255) | Name of the table or query to join |
| `InputTablePk` | VARCHAR(255) | Primary key column in input table (falls back to source PK if NULL) |
| `InputTableNullStrategy` | VARCHAR(50) | How to handle NULL values: KEEP_NULL, USE_ZERO, SKIP_COMPUTE |
| `InputTableConstants` | TEXT | JSON map of constant placeholders to input table columns |
Add configuration data to features via runtime join:
```python
from ObjFeatureStoreEdit import ObjFeatureStoreEdit, FeatureTypeEnum, FeatureDataTypeEnum

editor = ObjFeatureStoreEdit()

# Feature using input table columns
editor.add_feature_definition(
    feature_code="customer_features",
    package="prod",
    feature="credit_limit",
    feature_type=FeatureTypeEnum.DIRECT_MAP,
    feature_definition="$max_credit$",  # Placeholder for input table column
    feature_data_type=FeatureDataTypeEnum.FLOAT,
    input_table="customer_config",      # Table to join
    input_table_pk="customer_id"        # Join key
)
```
Generated SQL:

```sql
UPDATE customer_features_prod ft
LEFT JOIN customer_config it ON ft.customer_id = it.customer_id
SET ft.credit_limit = it.max_credit
```
Placeholders work in any feature type:
```python
# SQL_CASE with placeholders
editor.add_feature_definition(
    feature_code="customer_features",
    package="prod",
    feature="exceeds_limit",
    feature_type=FeatureTypeEnum.SQL_CASE,
    feature_definition="CASE WHEN st.balance > $max_credit$ THEN 1 ELSE 0 END",
    feature_data_type=FeatureDataTypeEnum.INT,
    input_table="customer_config",
    input_table_pk="customer_id"
)

# SYMBOLIC with placeholders
editor.add_feature_definition(
    feature_code="customer_features",
    package="prod",
    feature="adjusted_score",
    feature_type=FeatureTypeEnum.SYMBOLIC,
    feature_definition="st.base_score * $score_multiplier$ + $score_offset$",
    feature_data_type=FeatureDataTypeEnum.FLOAT,
    input_table="scoring_config",
    input_table_pk="customer_id"
)
```
Constants are static values loaded once at startup and substituted as literal values. This eliminates repeated joins for configuration that doesn't vary by row.
Use Constants when the value is the same for every row: it is loaded once and substituted as a literal, so no join is needed.

Use Placeholders when the value varies by row: each row must be matched against the input table, so a runtime join is required.
```python
import json

# Define constants mapping
constants = {
    "tax_rate": "TaxRate",           # Map $tax_rate$ to config.TaxRate column
    "min_threshold": "MinThreshold", # Map $min_threshold$ to config.MinThreshold
    "max_limit": "MaxLimit"          # Map $max_limit$ to config.MaxLimit
}

# Add feature using constants
editor.add_feature_definition(
    feature_code="order_features",
    package="prod",
    feature="total_with_tax",
    feature_type=FeatureTypeEnum.SYMBOLIC,
    feature_definition="st.subtotal * (1 + $tax_rate$)",
    feature_data_type=FeatureDataTypeEnum.FLOAT,
    input_table="global_config",
    input_table_constants=json.dumps(constants)  # Pass as JSON string
)
```
What Happens:

1. At startup, the system queries `global_config` once, e.g. `SELECT TaxRate FROM global_config LIMIT 1` → `0.15` (example value).
2. The definition `st.subtotal * (1 + $tax_rate$)` becomes `st.subtotal * (1 + 0.15)` before execution.

Performance Impact: because the value is substituted as a literal, no per-row join against `global_config` is needed.
Control how NULL values from input tables are handled:
```python
# KEEP_NULL: Preserve NULL values (default)
editor.add_feature_definition(
    ...,
    input_table="config",
    input_table_null_strategy="KEEP_NULL"
)

# USE_ZERO: Convert NULL to 0 (useful for numeric calculations)
editor.add_feature_definition(
    ...,
    input_table="config",
    input_table_null_strategy="USE_ZERO"
)

# SKIP_COMPUTE: Skip feature computation when input value is NULL
editor.add_feature_definition(
    ...,
    input_table="config",
    input_table_null_strategy="SKIP_COMPUTE"
)
```
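One way these strategies could map onto generated SQL, as a sketch (the helper name and return shape are assumptions): `USE_ZERO` wraps the column in `COALESCE`, while `SKIP_COMPUTE` contributes a row guard.

```python
def apply_null_strategy(expression, input_column, strategy):
    """Adjust an expression and WHERE guard according to the NULL strategy.

    Illustrative only; the real generated statements may differ.
    Returns (expression, optional_where_guard).
    """
    if strategy == "USE_ZERO":
        # Replace NULLs from the joined table with 0.
        return expression.replace(input_column, f"COALESCE({input_column}, 0)"), None
    if strategy == "SKIP_COMPUTE":
        # Leave the expression alone but guard out rows with NULL input.
        return expression, f"{input_column} IS NOT NULL"
    # KEEP_NULL (default): propagate NULLs unchanged.
    return expression, None

expr, guard = apply_null_strategy("ft.score * it.multiplier", "it.multiplier", "USE_ZERO")
```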
Input table column matching is case-insensitive for resilience:
# These all match the same column, regardless of database case:
feature_definition="$PersonNo$" # Matches: PersonNo, personno, PERSONNO
feature_definition="$personno$" # Matches: PersonNo, personno, PERSONNO
feature_definition="$PERSONNO$" # Matches: PersonNo, personno, PERSONNO
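Such case-insensitive matching can be sketched with a lowercase lookup table (illustrative helper, not the actual resolver):

```python
def resolve_column(placeholder, available_columns):
    """Resolve a $placeholder$ name against input-table columns, ignoring case.

    Returns the column's actual database spelling, or None if absent.
    """
    lookup = {name.lower(): name for name in available_columns}
    return lookup.get(placeholder.strip("$").lower())

columns = ["PersonNo", "MaxCredit"]
column = resolve_column("$PERSONNO$", columns)
# column is the database's own spelling, "PersonNo"
```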
Features are processed in three stages:
1. Constants: `$constant_name$` → literal values from `InputTableConstants`
2. Placeholders: `$column_name$` → `it.`-qualified columns from the input table
3. Symbolic: bare column names → `st.`-qualified columns from the source table

Example:
# Original definition
"st.price * $tax_rate$ * $quantity_adjustment$"
# After constants (tax_rate is constant)
"st.price * 0.15 * $quantity_adjustment$"
# After placeholders (quantity_adjustment needs row-level join)
"st.price * 0.15 * it.quantity_adjustment"
# After symbolic (price from source)
"st.price * 0.15 * it.quantity_adjustment"
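The staged substitution above can be sketched in code (illustrative only; the real engine also qualifies bare source columns with `st.`):

```python
import re

def resolve_definition(definition, constants, input_columns):
    """Resolve a definition through the first two stages described above.

    Constants become literals; remaining placeholders become
    it.-qualified input-table columns. Illustrative sketch only.
    """
    # Stage 1: constants -> literal values.
    for name, value in constants.items():
        definition = definition.replace(f"${name}$", str(value))

    # Stage 2: remaining placeholders -> input-table columns.
    def to_input_column(match):
        column = match.group(1)
        if column not in input_columns:
            raise ValueError(f"Unknown input column: {column}")
        return f"it.{column}"

    return re.sub(r"\$(\w+)\$", to_input_column, definition)

resolved = resolve_definition(
    "st.price * $tax_rate$ * $quantity_adjustment$",
    constants={"tax_rate": 0.15},
    input_columns={"quantity_adjustment"},
)
# resolved == "st.price * 0.15 * it.quantity_adjustment"
```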
The input table feature includes comprehensive performance metrics:
# View performance metrics
python factory.core/ObjFeatureStore.py input_table_metrics \
--feature-code customer_features \
--package prod
Output:
================================================================================
Input Table Performance Metrics
================================================================================
📊 Loading Performance:
Constants Loaded: 45 (in 0.127s)
Tables Cached: 3 (in 0.089s)
🔄 Join Optimization:
Joins Created (with placeholders): 12
Joins Skipped (no placeholders): 8
Optimization Rate: 40.0% joins avoided
🔧 Substitutions:
Constant Substitutions: 67
Placeholder Substitutions: 34
✅ No validation warnings
Metrics Tracked:
# Show which features use input tables
python factory.core/ObjFeatureStore.py input_table_stats \
--feature-code customer_features \
--package prod
Output:
Input Table Usage Statistics:
- 45 features use input tables
- 3 unique input tables referenced
- 67 total placeholder substitutions
# Detailed performance metrics (see above)
python factory.core/ObjFeatureStore.py input_table_metrics \
--feature-code customer_features \
--package prod
This example demonstrates input tables, constants, and placeholders working together:
```python
from ObjFeatureStoreEdit import ObjFeatureStoreEdit, FeatureTypeEnum, FeatureDataTypeEnum
import json

editor = ObjFeatureStoreEdit()

# Constants for global thresholds (loaded once)
risk_constants = {
    "high_risk_threshold": "HighRiskThreshold",
    "default_score": "DefaultScore"
}

# 1. Risk score using constants and source data
editor.add_feature_definition(
    feature_code="customer_features",
    package="prod",
    feature="base_risk_score",
    feature_type=FeatureTypeEnum.SYMBOLIC,
    feature_definition="st.delinquency_rate * 100 + $default_score$",
    feature_data_type=FeatureDataTypeEnum.FLOAT,
    input_table="risk_config",
    input_table_constants=json.dumps(risk_constants)
)

# 2. Adjusted score using customer-specific multiplier (row-level join)
editor.add_feature_definition(
    feature_code="customer_features",
    package="prod",
    feature="adjusted_risk_score",
    feature_type=FeatureTypeEnum.SYMBOLIC,
    feature_definition="st.base_risk_score * $risk_multiplier$",
    feature_data_type=FeatureDataTypeEnum.FLOAT,
    input_table="customer_risk_config",
    input_table_pk="customer_id",
    input_table_null_strategy="USE_ZERO"  # Convert NULL multipliers to 0
)

# 3. Risk category using constants for thresholds
editor.add_feature_definition(
    feature_code="customer_features",
    package="prod",
    feature="risk_category",
    feature_type=FeatureTypeEnum.SQL_CASE,
    feature_definition="""
        CASE
            WHEN st.adjusted_risk_score > $high_risk_threshold$ THEN 'HIGH'
            WHEN st.adjusted_risk_score > $medium_risk_threshold$ THEN 'MEDIUM'
            ELSE 'LOW'
        END
    """,
    feature_data_type=FeatureDataTypeEnum.VARCHAR,
    input_table="risk_config",
    input_table_constants=json.dumps({
        "high_risk_threshold": "HighRiskThreshold",
        "medium_risk_threshold": "MediumRiskThreshold"
    })
)
```
The system validates constants usage at load time:
Valid:
# All constants defined
constants = {"threshold": "Threshold"}
definition = "$threshold$ * 100"
# ✅ No warnings
Invalid:
# Undefined constant
constants = {"threshold": "Threshold"}
definition = "$threshold$ * $undefined_value$"
# ⚠️ Warning: Feature uses undefined constants: $undefined_value$
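The undefined-constant check amounts to comparing the placeholders used in a definition against the declared constants. A sketch (hypothetical helper, not the actual validator):

```python
import re

def undefined_constants(definition, constants):
    """Return placeholder names used in a definition but not declared.

    A sketch of the load-time check; the real warning logic may also
    consider input-table columns before warning.
    """
    used = set(re.findall(r"\$(\w+)\$", definition))
    return sorted(used - set(constants))

missing = undefined_constants(
    "$threshold$ * $undefined_value$",
    {"threshold": "Threshold"},
)
# missing == ["undefined_value"]
```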
Run `input_table_metrics` to ensure the join optimization is working.

Problem: Feature shows a validation warning about undefined constants
Solution: Check constant mapping matches definition:
# Definition uses $max_limit$
definition = "$max_limit$ * 100"
# Constants must define max_limit
constants = {"max_limit": "MaxLimit"} # Column name in input table
Problem: Input table not being joined
Solution: Ensure placeholders are present in definition:
# This will skip join (no placeholders)
definition = "st.column1 * 100"
# This will create join (has placeholder)
definition = "st.column1 * $multiplier$"
Problem: Getting NULL values unexpectedly
Solution: Set appropriate NULL strategy:
input_table_null_strategy="USE_ZERO" # or SKIP_COMPUTE
Features can reference different input tables:
```python
# Feature 1: Uses pricing config
editor.add_feature_definition(
    feature="base_price",
    feature_definition="$unit_price$ * st.quantity",
    input_table="pricing_config",
    ...
)

# Feature 2: Uses discount config (different table)
editor.add_feature_definition(
    feature="discount_amount",
    feature_definition="$discount_rate$ * st.base_price",
    input_table="discount_config",
    ...
)
```
Each feature maintains its own input table context. The system automatically manages multiple joins when computing features in batch mode.
Input table columns are tracked in lineage with type INPUT_TABLE:
python factory.core/ObjFeatureStore.py show-feature-lineage \
--feature-code customer_features \
--package prod \
--feature adjusted_score
Output:
Feature Lineage: adjusted_score
================================================================================
Dependencies:
customers.base_score (COMPUTED)
scoring_config.multiplier (INPUT_TABLE)
scoring_config.offset (INPUT_TABLE)
Use use_batch=True when computing features with many DIRECT_MAP and SYMBOLIC features. Can reduce computation time by 50-80% by executing a single UPDATE statement instead of one per feature.
Use compute_features_incremental() with appropriate filter conditions for real-time feature updates. Only processes changed rows instead of the entire table.
Use use_transaction=True when computing features in production to ensure atomic updates with automatic rollback on errors.
Set respect_dependencies=True to automatically compute features in the correct order based on def_feature_dependency relationships.
Version Before Major Changes: Create a version snapshot before modifying feature definitions in production.
Track Lineage: Use track_feature_lineage() to document source dependencies for impact analysis.
Add Expectations Early: Define validation rules during feature development, not after deployment.
Compute Statistics Regularly: Run compute_feature_statistics() after feature computation to monitor data quality.
Use Incremental Updates: For real-time features, use incremental computation with appropriate filters instead of recomputing the entire table.
Document Features: Use generate_feature_documentation() to create up-to-date documentation for data consumers.
Test with Transactions: Use use_transaction=True during development to enable easy rollback of failed computations.
The feature store automatically tracks lineage relationships between features and their source columns during computation. This enables impact analysis, debugging, and documentation of data dependencies.
Lineage is automatically tracked when features are computed. The system analyzes each feature definition to identify source table columns and records dependencies in the def_feature_lineage table.
Lineage Types:
Features WITH lineage (depend on source data):
# DIRECT_MAP: Maps source column directly
"customer_name" → DIRECT_MAP → "name"
# Lineage: customer_name ← customers.name (DIRECT)
# SYMBOLIC: Expression using source columns
"total_value" → SYMBOLIC → "unit_price * quantity"
# Lineage: total_value ← products.unit_price (COMPUTED)
# total_value ← products.quantity (COMPUTED)
# SQL_CASE: Conditional logic on source columns
"age_group" → SQL_CASE → "CASE WHEN age < 30 THEN 'young' ELSE 'senior' END"
# Lineage: age_group ← customers.age (COMPUTED)
Features WITHOUT lineage (constants/functions):
# Pure constants
"tax_rate" → SYMBOLIC → "0.15"
# No lineage tracked
# SQL functions without source columns
"compute_date" → SYMBOLIC → "NOW()"
# No lineage tracked
# Date arithmetic without source columns
"billing_date" → SYMBOLIC → "DATE(NOW()) - INTERVAL 9 DAY"
# No lineage tracked
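Deciding whether a definition carries lineage comes down to whether any source-column identifiers survive after stripping SQL keywords, literals, and function names. A rough sketch (the real analyzer is likely more precise):

```python
import re

# Words that appear in definitions but are not source columns (partial list).
SQL_KEYWORDS = {
    "CASE", "WHEN", "THEN", "ELSE", "END", "AND", "OR", "NOT",
    "NOW", "DATE", "INTERVAL", "DAY", "IF", "NULL",
}

def extract_source_columns(definition):
    """Guess which identifiers in a definition are source columns.

    Drops quoted strings, then filters out known SQL keywords and
    function names. Illustrative only.
    """
    without_strings = re.sub(r"'[^']*'", "", definition)
    identifiers = re.findall(r"[A-Za-z_]\w*", without_strings)
    return sorted({i for i in identifiers if i.upper() not in SQL_KEYWORDS})

cols = extract_source_columns("CASE WHEN age < 30 THEN 'young' ELSE 'senior' END")
# cols == ["age"]; "NOW()" or "DATE(NOW()) - INTERVAL 9 DAY" yields no columns
```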
The CLI provides commands to analyze feature dependencies:
Find all features that depend on a specific source column:
python factory.core/ObjFeatureStore.py show-column-impact \
--feature-code customer_features \
--package prod \
--column age
Output:
Column Impact Analysis: customers.age
================================================================================
Features depending on this column (3):
age_group (COMPUTED)
is_senior (COMPUTED)
age_category (COMPUTED)
WARNING: Modifying or removing this column will affect 3 features.
See all source columns a specific feature depends on:
python factory.core/ObjFeatureStore.py show-feature-lineage \
--feature-code customer_features \
--package prod \
--feature risk_score
Output:
Feature Lineage: risk_score
================================================================================
Dependencies:
customers.balance (COMPUTED)
customers.payment_history (COMPUTED)
customers.account_age (COMPUTED)
This feature depends on 3 source columns.
Export complete lineage as JSON or text:
# Export as JSON
python factory.core/ObjFeatureStore.py export-lineage \
--feature-code customer_features \
--package prod \
--format json \
--output lineage.json
# Export as text report
python factory.core/ObjFeatureStore.py export-lineage \
--feature-code customer_features \
--package prod \
--format text
JSON Output:
```json
{
  "feature_code": "customer_features",
  "package": "prod",
  "lineage": [
    {
      "feature": "age_group",
      "source_table": "customers",
      "source_column": "age",
      "lineage_type": "COMPUTED"
    }
  ]
}
```
Generate interactive Mermaid diagrams to visualize feature dependencies:
# Generate diagram for all features
python factory.core/ObjFeatureStore.py visualize-lineage \
--feature-code customer_features \
--package prod \
--output lineage.mmd
# Generate diagram for specific feature
python factory.core/ObjFeatureStore.py visualize-lineage \
--feature-code customer_features \
--package prod \
--feature risk_score \
--output risk_score_lineage.mmd
Mermaid Output Example:

```mermaid
graph LR
    source["source.column"] -->|COMPUTED| feature["feature_name"]
```

Rendering Diagrams: install `mmdc` from mermaid-cli, then render:

```shell
npm install -g @mermaid-js/mermaid-cli
mmdc -i lineage.mmd -o lineage.svg
```
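A diagram in that shape could be generated from exported lineage records with a small helper (illustrative; field names follow the JSON export above):

```python
def lineage_to_mermaid(records):
    """Render lineage records as a Mermaid graph.

    `records` is a list of dicts using the JSON-export field names;
    this helper is a sketch, not the built-in visualizer.
    """
    lines = ["graph LR"]
    for r in records:
        source = f"{r['source_table']}.{r['source_column']}"
        # Node id from the column name, label showing table.column.
        lines.append('    {0}["{1}"] -->|{2}| {3}["{3}"]'.format(
            r["source_column"], source, r["lineage_type"], r["feature"]))
    return "\n".join(lines)

diagram = lineage_to_mermaid([
    {"feature": "age_group", "source_table": "customers",
     "source_column": "age", "lineage_type": "COMPUTED"},
])
```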
Automatic Updates: Lineage diagrams are automatically updated in the LineageDiagram field of def_features every time features are computed. Access them programmatically:
```python
from ObjFeatureStore import ObjFeatureStore

fs = ObjFeatureStore()

# Get stored diagram for a feature
sql = """
    SELECT LineageDiagram
    FROM def_features
    WHERE FeatureCode = 'customer_features'
      AND Package = 'prod'
      AND Feature = 'risk_score'
"""
diagram = fs.sql_get_value(sql)
print(diagram)
```
Color Coding:
Before computing features, the system validates that all referenced source columns still exist:
# Automatic validation during computation
fs.compute_features("customer_features", "prod")
Validation checks:
Error messages:
ValidationError: Column 'old_column' referenced by feature 'customer_age'
does not exist in source table 'customers'.
Available columns: id, name, birth_date, status
Suggestion: Update feature 'customer_age' definition or add missing column.
Scenario: Database team wants to rename column balance to account_balance.
Step 1: Check impact
python factory.core/ObjFeatureStore.py show-column-impact \
--feature-code customer_features \
--package prod \
--column balance
Step 2: Review affected features
Features depending on 'balance':
- credit_score
- balance_category
- is_high_balance
- risk_score
Step 3: Update feature definitions
```python
# Update each affected feature definition to use the new column name
editor.update_feature_definition(
    "customer_features", "prod",
    "credit_score",
    new_definition="account_balance * 0.7 + payment_history * 0.3"
)
```
Step 4: Validate changes
# Recompute features (lineage auto-updates)
python factory.core/ObjFeatureStore.py compute-all-features \
--feature-code customer_features \
--package prod
# Verify new lineage
python factory.core/ObjFeatureStore.py show-column-impact \
--column account_balance
The def_feature_lineage table stores lineage records:
```sql
CREATE TABLE def_feature_lineage (
    FeatureCode VARCHAR(64),
    Package VARCHAR(64),
    Feature VARCHAR(128),
    SourceTable VARCHAR(128),
    SourceColumn VARCHAR(128),
    LineageType VARCHAR(32),
    PRIMARY KEY (FeatureCode, Package, Feature, SourceTable, SourceColumn)
);
```
Columns:
- `FeatureCode`: Feature set identifier
- `Package`: Package name
- `Feature`: Feature name
- `SourceTable`: Source table or query name
- `SourceColumn`: Source column name
- `LineageType`: DIRECT, COMPUTED, or AGGREGATE

Query lineage directly from Python:
```python
from ObjFeatureStore import ObjFeatureStore

fs = ObjFeatureStore()

# Get all dependencies for a feature
lineage = fs.get_feature_lineage("customer_features", "prod", "risk_score")
for record in lineage:
    print(f"{record['source_table']}.{record['source_column']}")

# Get all features using a column
impact = fs.get_column_impact("customer_features", "prod", "age")
print(f"Column 'age' affects {len(impact)} features")
```
The feature store integrates with Great Expectations for data quality validation. Expectations are stored in def_feature_expectation and executed via validate_features().
Supported Expectation Types:

- `expect_column_values_to_not_be_null`
- `expect_column_values_to_be_in_set`
- `expect_column_values_to_be_between`
- `expect_column_values_to_match_regex`

ObjFeatureStore inherits thread-safe database connection management from ObjData. All operations use connection pooling and thread-local storage for safe concurrent usage.
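What each of the four supported expectation types asserts can be approximated in plain Python (a sketch for intuition only; the feature store delegates the actual checks to Great Expectations):

```python
import re

def run_expectation(values, expectation, **kwargs):
    """Approximate the four supported expectation types.

    Illustrative only -- shows what each rule checks, not how
    Great Expectations implements it.
    """
    if expectation == "expect_column_values_to_not_be_null":
        failed = [v for v in values if v is None]
    elif expectation == "expect_column_values_to_be_in_set":
        failed = [v for v in values if v not in kwargs["value_set"]]
    elif expectation == "expect_column_values_to_be_between":
        failed = [v for v in values
                  if not (kwargs["min_value"] <= v <= kwargs["max_value"])]
    elif expectation == "expect_column_values_to_match_regex":
        failed = [v for v in values if not re.match(kwargs["regex"], v)]
    else:
        raise ValueError(f"Unsupported expectation: {expectation}")
    return {"success": not failed, "unexpected_values": failed}

result = run_expectation([25, 40, 70], "expect_column_values_to_be_between",
                         min_value=0, max_value=120)
# result["success"] is True
```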
Use `use_transaction=True` when multiple processes may compute features concurrently.

ObjFeatureStore supports three placeholder patterns for resolving feature values in templates:
| Pattern | Example | Where it works |
|---|---|---|
| `{feature:code:name:pk_col:pk_val}` | `{feature:CREDIT_SCORE:risk_band:PersonNo:100}` | Any process_text template |
| `$feature.code.name$` | `$feature.CREDIT_SCORE.risk_band$` | `ObjFeatureStore.patch_param` |
| `$feature_name$` | `$risk_band$` | After compute (feature_code from context) |
Feature placeholders are resolved before standard patch_param
to prevent the parent from stripping unrecognised tokens.
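A minimal sketch of how the first pattern could be resolved ahead of `patch_param` (the `lookup` dict stands in for the database read and is hypothetical):

```python
import re

FEATURE_PATTERN = re.compile(r"\{feature:([^:}]+):([^:}]+):([^:}]+):([^:}]+)\}")

def resolve_feature_placeholders(text, lookup):
    """Replace {feature:code:name:pk_col:pk_val} tokens in a template.

    Unknown tokens are left intact rather than stripped, mirroring the
    resolve-before-patch_param behaviour described above. Illustrative.
    """
    def substitute(match):
        key = tuple(match.groups())
        return str(lookup.get(key, match.group(0)))  # keep unknown tokens
    return FEATURE_PATTERN.sub(substitute, text)

rendered = resolve_feature_placeholders(
    "Risk band: {feature:CREDIT_SCORE:risk_band:PersonNo:100}",
    {("CREDIT_SCORE", "risk_band", "PersonNo", "100"): "HIGH"},
)
# rendered == "Risk band: HIGH"
```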
| Node | Type | Description |
|---|---|---|
| FEATURESTORE | Compute | Compute, validate, stats, create tables |
| FEATURERENDER | Render | Resolve templates using computed features |
`workflow_feature(text, guid, feature_code)` — computes incrementally for one guid and resolves all `$feature_name$` placeholders.

`workflow_feature_bulk(...)` — computes the full table and renders all rows via SQL JOIN + REPLACE.
```
START → FEATURESTORE → FEATURERENDER → AI → END
         (compute)       (render)      (summarise)
```
- `ObjFeatureRender`: Template rendering engine
- `ObjWorkflowFeatureStore`: FEATURESTORE workflow node
- `ObjWorkflowFeatureRender`: FEATURERENDER workflow node
- `ObjTextFeature`: process_text factory for `{feature:...}`
- `ObjFeatureStoreEdit`: Feature editing, versioning, lineage, YAML import/export
- `ObjData`: Parent class providing database operations
- `ObjEnum`: Defines FeatureTypeEnum and FeatureDataTypeEnum enumerations