
NOTICE: All information contained herein is, and remains the property of TechnoCore.
The intellectual and technical concepts contained herein are proprietary to TechnoCore and dissemination of this information or reproduction of this material is strictly forbidden unless prior written permission is obtained from TechnoCore.
# ObjDataKey

Multi-backend key-value store manager for Axion. Supports Redis, RocksDB, and DragonflyDB as configurable backends for caching and key-value storage.
ObjDataKey provides a unified interface for caching and key-value storage across three backends:

- **Redis** - network key-value server
- **DragonflyDB** - Redis-protocol-compatible server
- **RocksDB** - embedded on-disk database

Backend selection is configured via `config.yaml` under the `keystore.type` parameter.
ObjDataKey inherits from ObjData.ObjData, providing full access to database connectivity, configuration management, and logging capabilities. Key responsibilities include:
- Cache operations (`set_cache`, `get_cache`)
- Keystore health checks (`has_keystore`)
- Backward compatibility with the legacy `ObjDataRedis` interface

## Configuration

Set the backend type in `config.yaml`:
```yaml
keystore:
  type: redis  # Options: redis, dragonfly, rocksdb
```
### Redis Backend

```yaml
keystore:
  type: redis
  redis:
    ip: localhost   # Empty string defaults to localhost or Docker IP
    port: 6379      # Redis server port
    user: ""        # Optional authentication
    password: ""    # Optional authentication
    ssl: false      # Enable SSL/TLS for remote connections
```
Docker environment detection: If `ip` is empty and the `DOCKER_AXION` environment variable is set, the host defaults to `172.17.0.1`.
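The documented fallback can be sketched as a small helper (the function name is illustrative, not part of the module's API; the behavior follows the rule above):

```python
import os

REDIS_DOCKER_IP = "172.17.0.1"  # Default Docker bridge IP

def resolve_redis_ip(configured_ip: str) -> str:
    """Sketch of the documented ip fallback for the Redis backend."""
    if configured_ip:                   # an explicit value always wins
        return configured_ip
    if os.environ.get("DOCKER_AXION"):  # running inside a Docker container
        return REDIS_DOCKER_IP
    return "localhost"
```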
### DragonflyDB Backend

```yaml
keystore:
  type: dragonfly
  dragonfly:
    ip: localhost
    port: 6379   # DragonflyDB default port
    user: ""
    password: ""
    ssl: false
```
Note: DragonflyDB is Redis-protocol compatible, so the same redis Python library is used.
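Because both backends speak the Redis protocol, switching between them only changes which config block supplies the connection parameters. A minimal sketch (the helper name is illustrative, not the module's API):

```python
def connection_kwargs(keystore_cfg: dict) -> dict:
    """Build redis-py style connection arguments from a keystore config block,
    for either the redis or dragonfly backend (illustrative helper)."""
    backend = keystore_cfg.get("type") or "redis"
    section = keystore_cfg.get(backend, {})
    return {
        "host": section.get("ip") or "localhost",
        "port": int(section.get("port") or 6379),
        "ssl": bool(section.get("ssl", False)),
    }
```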
### RocksDB Backend

```yaml
keystore:
  type: rocksdb
  rocksdb:
    path: local.documents/rocksdb   # Relative or absolute path
    create_if_missing: true         # Create database if it doesn't exist
    ttl_cleanup_interval: 300       # Seconds between expired key cleanup
```

Note: RocksDB is an embedded database (no server required). Requires the `python-rocksdb` package.
### Default TTL

```yaml
system:
  cachettl: 300  # Default TTL in seconds (5 minutes)
```
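The default comes into play when `set_cache` is called with `ttl=-1`, which selects `system.cachettl` with the 5-second minimum documented below. A hedged sketch of that rule for the Redis/Dragonfly backends (helper name illustrative; the RocksDB backend instead treats `-1` as "no expiration"):

```python
def effective_ttl(ttl: int, cachettl: int = 300) -> int:
    """Resolve a requested TTL: -1 means use the configured system.cachettl.
    A minimum of 5 seconds is enforced (illustrative helper)."""
    chosen = cachettl if ttl == -1 else ttl
    return max(chosen, 5)
```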
### Legacy Configuration (Deprecated)

Old `redis.*` configuration keys are still supported but deprecated:

```yaml
redis:
  activate: yes
  redisip: localhost   # Deprecated: use keystore.redis.ip
  redisport: 6379      # Deprecated: use keystore.redis.port
```
## Class Hierarchy

```
ObjData.ObjData (parent)
└── ObjDataKey (multi-backend keystore)
```
Inheritance Benefits:

- Database connectivity from the `ObjData` parent
- Configuration access via `get_ini_value()`
- Logging via `self.debug()`

## Initialization Flow

```
1. __init__() called
2. Read keystore.type from config
3. Default to "redis" if empty
4. Call backend initializer:
   ├── _init_redis()     → Connect to Redis server
   ├── _init_dragonfly() → Connect to DragonflyDB server
   └── _init_rocksdb()   → Open embedded RocksDB
5. Set compatibility aliases (redis_conn, redisConn)
6. Ready for cache operations
```
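The dispatch in steps 2-4 can be pictured as a simple lookup (the initializer names come from this module; the dictionary plumbing is an illustration, not the module's code):

```python
def pick_initializer(keystore_type: str) -> str:
    """Map the configured keystore.type to its backend initializer name."""
    backend = (keystore_type or "redis").lower()  # step 3: empty type defaults to redis
    initializers = {
        "redis": "_init_redis",
        "dragonfly": "_init_dragonfly",
        "rocksdb": "_init_rocksdb",
    }
    return initializers[backend]
```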
## Module Constants

- `DEF_VERSION = "8.0"` - Module version
- `DO_DEBUG = False` - Enable debug logging
- `DO_KEYSTORE = True` - Global keystore enable/disable flag
- `REDIS_DOCKER_IP = "172.17.0.1"` - Default Docker bridge IP
- `REDIS_HOST = "localhost"` - Default Redis host
- `REDIS_PORT = "6379"` - Default Redis port
- `HAS_ROCKSDB` - Boolean flag indicating whether `python-rocksdb` is available

## Public Methods

### `__init__()`

Initializes the key-value store based on the `keystore.type` configuration.
Backend Selection Logic:

1. Read `keystore.type` from `config.yaml`
2. Call the matching initializer (`_init_redis()`, `_init_dragonfly()`, or `_init_rocksdb()`)
3. Set backward-compatibility aliases (`redis_conn`, `redisConn`)

Attributes Set:

- `self.KeystoreType` (str) - Active backend type
- `self.keystore_conn` - Connection object, or `0` if the connection failed
- `self.redis_conn` - Backward compatibility alias
- `self.redisConn` - Backward compatibility alias for ObjData

Example:
```python
from ObjDataKey import ObjDataKey

cache = ObjDataKey()       # Auto-connects based on config
print(cache.KeystoreType)  # "redis", "dragonfly", or "rocksdb"
```
### `connect_keystore(**kwargs)`

Universal connection method for the Redis and DragonflyDB backends.
Parameters:

- `backend` (str): Backend type override (`"redis"`, `"dragonfly"`, `"rocksdb"`)
- `ip` (str): Server IP address (Redis/Dragonfly only)
- `port` (str): Server port
- `user` (str): Username for authentication
- `password` (str): Password for authentication

Returns: Connection object, or `0` on failure

Note: RocksDB is initialized in `_init_rocksdb()` instead, as it is embedded.
### `set_cache(name, store_data, ttl=-1)`

Stores data in the key-value store with an optional TTL.
Parameters:

- `name` (str): Cache key
- `store_data` (str|dict|list): Data to store (auto-serialized to JSON)
- `ttl` (int): Time-to-live in seconds (`-1` = use default from `system.cachettl`)

Behavior:

- A TTL of `-1` falls back to the `system.cachettl` config value (minimum 5 seconds)
- Redis/Dragonfly write the value with the `SETEX` command

Example:
```python
cache.set_cache("user:123", {"name": "Alice", "email": "alice@example.com"}, ttl=600)
```
### `get_cache(name)`

Retrieves data from the key-value store.
Parameters:

- `name` (str): Cache key

Returns: Deserialized data (original type), or `None` if the key is missing or expired

Behavior:

- Stored JSON is deserialized back to its original type
- Returns `None` for missing/expired keys

Example:
```python
user_data = cache.get_cache("user:123")
print(user_data)  # {'name': 'Alice', 'email': 'alice@example.com'}
```
### `has_keystore()`

Health check for the key-value store connection.

Returns: `bool` - `True` if connected and healthy

Backend Checks:

- Redis/Dragonfly: issues a `PING` command

Example:
```python
if cache.has_keystore():
    print("Cache is healthy")
else:
    print("Cache connection failed")
```
### `has_redis()` (Deprecated)

Backward compatibility alias for `has_keystore()`.

### `connect_redis()` (Deprecated)

Backward compatibility alias for `connect_keystore()` with `backend="redis"`.
## Internal Methods

### `_init_redis()`

- Reads `keystore.redis.*` or legacy `redis.*` keys
- Checks the `DOCKER_AXION` environment variable
- Calls `connect_keystore()` with Redis parameters
- Sets the `self.RedisIp` and `self.RedisPort` attributes

### `_init_dragonfly()`
- Reads `keystore.dragonfly.*` configuration
- Sets the `self.DragonflyIp` and `self.DragonflyPort` attributes
- Calls `connect_keystore()` with DragonflyDB parameters

### `_set_cache_redis(name, data, ttl)`
- Issues a `SET` command with the `EX` (TTL) parameter
- Reports failures via `self.debug()`

### `_get_cache_redis(name)`
- Issues a `GET` command

### `_set_cache_dragonfly()` / `_get_cache_dragonfly()`
### `_init_rocksdb()`
- Checks `python-rocksdb` availability via the `HAS_ROCKSDB` flag
- Reads `keystore.rocksdb.path`
- Builds `rocksdb.Options()` with configurable settings
- Opens `rocksdb.DB(path, options)`
- Sets the `self.RocksDbPath` attribute

### `_set_cache_rocksdb(name, data, ttl)`
- Wraps values as `{"data": <json>, "expires_at": <timestamp>}`
- Computes the expiry as `current_time + ttl`
- Permanent entries store `expires_at: -1`

### `_get_cache_rocksdb(name)`
- Checks `expires_at` against the current time

### `_start_rocksdb_cleanup_thread(interval)`
- Background thread that sleeps with `time.sleep(interval)`
- Runs `_rocksdb_cleanup_expired()` periodically

### `_rocksdb_cleanup_expired()`
- Iterates over all keys via `keystore_conn.iteritems()`
- Compares each entry's `expires_at` with the current time

## Backend Details

### Redis

Connection:

- Uses the `redis-py` library

TTL: Native support via the `SETEX` command
Persistence: Optional (configured at Redis server level)
### DragonflyDB

Connection:

- Uses the `redis-py` library (protocol-compatible)
- Configured via `keystore.dragonfly.*`

TTL: Native support (Redis-compatible)
Persistence: Built-in (always persisted to disk)
### RocksDB

Connection:

- Embedded database opened directly by the process (no server)

TTL Emulation:

- Values are wrapped as `{"data": <json>, "expires_at": <unix_timestamp>}`

Storage Format:
```json
{
  "data": "{\"user\": \"alice\"}",
  "expires_at": 1703001234
}
```
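The TTL emulation can be sketched end-to-end with a plain dict standing in for the RocksDB handle. The function names mirror the module's, but this is an illustration of the documented record format, not the module's code:

```python
import json
import time

store = {}  # stands in for the RocksDB handle

def set_cache(name, data, ttl):
    # -1 marks a permanent entry; otherwise record an absolute expiry time
    expires_at = int(time.time()) + ttl if ttl > 0 else -1
    store[name] = json.dumps({"data": json.dumps(data), "expires_at": expires_at})

def get_cache(name):
    raw = store.get(name)
    if raw is None:
        return None
    record = json.loads(raw)
    if record["expires_at"] != -1 and record["expires_at"] < time.time():
        del store[name]  # lazy removal of an expired key on read
        return None
    return json.loads(record["data"])

def cleanup_expired():
    # Periodic sweep; the module's cleanup thread does the analogous scan
    # over keystore_conn.iteritems()
    now = time.time()
    expired = [k for k, raw in store.items()
               if (exp := json.loads(raw)["expires_at"]) != -1 and exp < now]
    for key in expired:
        del store[key]
```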
Cleanup Thread:

- Runs every `ttl_cleanup_interval` seconds (default: 300)

## CLI Commands

### `check`

Verify keystore connectivity and display configuration.
Usage:

```shell
python factory.core/ObjDataKey.py check
```

Example Output:

```
Reading folder /home/vheyn/projects/axion for config YAML (Objects)
Current keystore type: redis
Keystore connection is active.
```
### `info`

Display keystore configuration details.
Usage:

```shell
python factory.core/ObjDataKey.py info
```

Example Output:

```
Keystore Type: redis
Redis IP: localhost
Redis Port: 6379
```
## Migration from ObjDataRedis

Old:

```python
from ObjDataRedis import ObjDataRedis

redis_mgr = ObjDataRedis()
redis_mgr.connect_redis(ip="localhost", port="6379")
redis_mgr.set_cache("key", data, ttl=300)
value = redis_mgr.get_cache("key")
```

New:

```python
from ObjDataKey import ObjDataKey

key_mgr = ObjDataKey()  # Connection happens automatically in __init__
key_mgr.set_cache("key", data, ttl=300)
value = key_mgr.get_cache("key")
```
Old (`config.yaml`):

```yaml
redis:
  activate: yes
  redisip: localhost
  redisport: 6379
```

New (`config.yaml`):

```yaml
keystore:
  type: redis
  redis:
    ip: localhost
    port: 6379
```
Note: Old configuration keys still work but show deprecation warnings.
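One way to picture the backward compatibility is a normalization step that maps the deprecated keys onto the new layout. This is a hedged sketch: the helper name and exact precedence are illustrative, not the module's code.

```python
def normalize_keystore_config(cfg: dict) -> dict:
    """Map legacy redis.* keys onto the keystore.* layout (illustrative)."""
    # New-style config wins when present
    if "keystore" in cfg:
        return cfg["keystore"]
    legacy = cfg.get("redis", {})
    return {
        "type": "redis",
        "redis": {
            "ip": legacy.get("redisip", "localhost"),
            "port": legacy.get("redisport", 6379),
        },
    }
```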
Replace all imports:

```python
# Old
from ObjDataRedis import ObjDataRedis

# New
from ObjDataKey import ObjDataKey
```

Update class references:

```python
# Old
obj = ObjDataRedis()

# New
obj = ObjDataKey()
```
## Backend Comparison

| Feature | Redis | DragonflyDB | RocksDB |
|---|---|---|---|
| Deployment | Network server | Network server | Embedded library |
| Protocol | Redis | Redis-compatible | Native |
| TTL Support | Native | Native | Emulated (metadata) |
| Clustering | Yes | Yes | No |
| Persistence | Optional | Yes (always) | Yes (always) |
| Performance | Excellent (in-memory) | Excellent+ (in-memory) | Very good (disk-based) |
| Memory Usage | High (all in RAM) | Medium (smart caching) | Low (mostly disk) |
| Best For | Distributed cache, high-speed reads | Modern Redis replacement | Single-node persistent cache |
| Dependencies | Redis server | DragonflyDB server | python-rocksdb library |
## Usage Examples

### Basic Caching

```python
from ObjDataKey import ObjDataKey

# Initialize (auto-connects based on config)
cache = ObjDataKey()

# Store data
cache.set_cache("session:abc123", {"user_id": 42, "role": "admin"}, ttl=3600)

# Retrieve data
session = cache.get_cache("session:abc123")
print(session)  # {'user_id': 42, 'role': 'admin'}

# Check connection
if cache.has_keystore():
    print("Cache is operational")
```
### Embedded RocksDB

Config (`config.yaml`):

```yaml
keystore:
  type: rocksdb
  rocksdb:
    path: local.documents/rocksdb
    create_if_missing: true
```

Code:

```python
from ObjDataKey import ObjDataKey

cache = ObjDataKey()

# RocksDB is embedded - no server needed
cache.set_cache("dev_data", {"status": "testing"}, ttl=60)
```
### Caching Nested Structures

```python
cache = ObjDataKey()

# Store nested structures
report_data = {
    "title": "Q4 Sales Report",
    "metrics": {
        "revenue": 125000,
        "growth": 15.3
    },
    "regions": ["North", "South", "East"]
}
cache.set_cache("reports:q4_2024", report_data, ttl=86400)  # 1 day

# Retrieval preserves the structure
report = cache.get_cache("reports:q4_2024")
print(report["metrics"]["revenue"])  # 125000
```
### DragonflyDB at Scale

Config:

```yaml
keystore:
  type: dragonfly
  dragonfly:
    ip: dragonfly.internal.network
    port: 6379
```

Code:

```python
cache = ObjDataKey()

# Uses the DragonflyDB backend (faster than Redis for many workloads)
for i in range(10000):
    cache.set_cache(f"item:{i}", {"index": i}, ttl=300)
```
## Troubleshooting

### Connection Failures

Redis/DragonflyDB:

```
KeystoreError: Redis connection failed: [Errno 111] Connection refused
```

Solutions:

- Verify the server is running: `redis-cli ping` or `docker ps`
- Check the host/port settings in `config.yaml`

RocksDB:

```
RocksDBError: IO error: lock /path/to/rocksdb/LOCK: Resource temporarily unavailable
```

Solutions:

- Ensure no other process has the database open: `ps aux | grep python`

### Keys Not Expiring

Symptoms: Cached data persists longer than expected
RocksDB-specific:

- Lower `ttl_cleanup_interval` in the config for more frequent cleanup

Redis/Dragonfly:

- Inspect the remaining TTL with `redis-cli TTL <key>`

### Missing RocksDB Dependency

```
RocksDB backend requested but python-rocksdb not installed
```

Solution:

```shell
source dev-env/bin/activate
uv pip install python-rocksdb
```
### Performance Tuning

Redis/Dragonfly:

RocksDB:

- Increase `ttl_cleanup_interval` if cleanup impacts performance
- Compact the database with `db.compact_range()` when needed

See also the migration notes above for moving from `ObjDataRedis` to `ObjDataKey`.

## Environment Variables

### `DOCKER_AXION`

When set (any value), indicates the application is running in a Docker container.

Effect: If `keystore.redis.ip` is empty, the host defaults to `172.17.0.1` (Docker bridge IP) instead of `localhost`.
Example:

```shell
export DOCKER_AXION=1
python factory.core/ObjDataKey.py check
```
### `AXION_PACKAGE`

Specifies the active package/environment (e.g., "homechoice", "test", "production").
Effect: Used by configuration system to load package-specific settings.
## Advanced Usage

### Disabling the Keystore

Set `DO_KEYSTORE = False` in the module to disable all keystore operations:

```python
import factory.core.ObjDataKey as ObjDataKeyModule

ObjDataKeyModule.DO_KEYSTORE = False

# All cache operations will now no-op
cache = ObjDataKey()
cache.has_keystore()  # Returns False
```
### Multiple Backends

You can create multiple instances with different backends (not recommended for RocksDB due to file locking):

```python
# Redis cache for sessions
redis_cache = ObjDataKey()  # Uses the config default

# Manually override a specific instance (advanced)
custom_cache = ObjDataKey()
custom_cache.KeystoreType = "rocksdb"
custom_cache._init_rocksdb()
```

Warning: Overriding `KeystoreType` after initialization may cause inconsistent state.
### Keystore Access via Inheritance

Classes inheriting from ObjData automatically get keystore access:

```python
from ObjData import ObjData

class MyService(ObjData):
    def __init__(self):
        super().__init__()
        # keystore_conn is available via the parent

    def cache_report(self, report_id, data):
        # Use the inherited cache methods
        self.set_cache(f"report:{report_id}", data, ttl=3600)

    def get_cached_report(self, report_id):
        return self.get_cache(f"report:{report_id}")
```

Note: The ObjData parent class uses the `redisConn` attribute, which is aliased to `keystore_conn` for compatibility.
## Best Practices

### Key Namespacing

Use consistent key prefixes to organize cached data:

```python
cache = ObjDataKey()

# Namespace patterns
cache.set_cache("user:session:abc123", session_data)
cache.set_cache("api:response:endpoint_name", api_response)
cache.set_cache("report:daily:2024-01-15", report_data)
cache.set_cache("temp:processing:job_42", temp_data)

# Easy to delete all keys in a namespace (Redis/Dragonfly):
#   redis-cli KEYS "user:session:*" | xargs redis-cli DEL
```
### Graceful Degradation

Cache only when the keystore is available:

```python
cache = ObjDataKey()

def get_expensive_data():
    # Try the cache first
    if cache.has_keystore():
        cached = cache.get_cache("expensive_computation")
        if cached:
            return cached

    # Compute if not cached
    result = expensive_computation()

    # Cache the result if the keystore is available
    if cache.has_keystore():
        cache.set_cache("expensive_computation", result, ttl=3600)

    return result
```
### TTL Selection

Short TTL for frequently changing data:

```python
cache.set_cache("stock_price:AAPL", price, ttl=60)  # 1 minute
```

Long TTL for static data:

```python
cache.set_cache("user_profile:123", profile, ttl=86400)  # 24 hours
```

Session-based TTL:

```python
# Match the session timeout
session_ttl = 3600  # 1 hour
cache.set_cache(f"session:{session_id}", session_data, ttl=session_ttl)
```

Permanent cache (RocksDB only, no expiration):

```python
# Only works with the RocksDB backend;
# Redis/Dragonfly enforce a 5-second minimum
cache.set_cache("permanent_config", config, ttl=-1)
```
## Docker Deployment

When deploying in Docker containers:

Redis/DragonflyDB: use service names as hostnames

```yaml
keystore:
  type: redis
  redis:
    ip: redis-service  # Docker service name
    port: 6379
```

RocksDB: mount a persistent volume

```yaml
volumes:
  - ./data/rocksdb:/app/local.documents/rocksdb
```

Set the environment variable in the Dockerfile:

```dockerfile
ENV DOCKER_AXION=1
```
## High Availability

Redis: Use Redis Sentinel or Redis Cluster for HA

```yaml
# Configure multiple Redis nodes (requires custom connection logic)
keystore:
  type: redis
  redis:
    nodes:
      - host: redis-1.internal
        port: 6379
      - host: redis-2.internal
        port: 6379
```

DragonflyDB: Use replication (primary-replica setup)

RocksDB: Not suitable for HA (single-process only). Use Redis/Dragonfly for distributed deployments.
## Monitoring

Key metrics to monitor:

- Cache hit rate
- GET/SET latency
- Connection health (`has_keystore()`)

Example monitoring:

```python
import time

class MonitoredCache(ObjDataKey):
    def __init__(self):
        super().__init__()
        self.hits = 0
        self.misses = 0

    def get_cache(self, name):
        start = time.time()
        result = super().get_cache(name)
        duration = time.time() - start
        if result:
            self.hits += 1
        else:
            self.misses += 1
        # Log metrics
        self.debug(f"Cache GET {name}: {duration:.4f}s, hit_rate={self.hit_rate():.2%}")
        return result

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total > 0 else 0
```
## Best Practices Summary

1. Always check keystore health before critical operations:

   ```python
   if not cache.has_keystore():
       # Fallback logic
       ...
   ```

2. Use appropriate TTLs - don't cache forever, don't expire too soon
3. Namespace your keys - use prefixes like `user:`, `session:`, `api:`
4. Handle cache misses gracefully - always have fallback logic
5. Monitor cache performance - track hit rates and response times
6. Choose the backend appropriately:
   - Redis/Dragonfly for distributed caching
   - RocksDB for a single-node persistent cache
7. Serialize complex objects - use JSON, pickle, or msgpack
8. Avoid storing secrets - don't cache passwords, tokens, or API keys
9. Set reasonable TTLs - balance freshness vs. performance
10. Use connection pooling (Redis/Dragonfly) for high-traffic applications
## Testing

The ObjDataKey module has test coverage for configuration and initialization.
| Test Suite | Tests | Type | Purpose |
|---|---|---|---|
| `test_ObjDataKey.py` | 5 | Unit | Keystore type configuration, `has_keystore` checks |
| **Total** | **5** | | |
```shell
# Run all ObjDataKey tests
pytest resource.test/pytests/factory.core/test_ObjDataKey.py -v
```
## Related Documentation

- `ObjData.md` - Parent class documentation
- `resource.notes/howto/caching.md` - Caching strategies
- `factory.core/Objects.py` - Global configuration object