Trigger System
Lock-free, sequence-based execution system for deterministic event processing.
Overview
All operations (WebSocket messages, cron tasks, data updates) flow through a single priority queue: triggers execute in parallel but commit in strict sequential order, with optimistic conflict detection catching stale snapshots.
Key Features
- Lock-free reads: Snapshots are deep copies, no blocking
- Sequential commits: Total ordering via sequence numbers
- Optimistic concurrency: Conflicts detected, retry with same seq
- Priority preservation: High-priority work never blocked by low-priority
- Long-running agents: Execute in parallel, commit sequentially
- Deterministic replay: Can reproduce exact system state at any seq
Architecture
```
┌─────────────┐
│  WebSocket  │───┐
│  Messages   │   │
└─────────────┘   │
                  ├──→ ┌───────────────────┐
┌─────────────┐   │    │   TriggerQueue    │
│    Cron     │───┤    │ (Priority Queue)  │
│  Scheduled  │   │    └─────────┬─────────┘
└─────────────┘   │              │ Assign seq
                  │              ↓
┌─────────────┐   │    ┌───────────────────┐
│ DataSource  │───┘    │  Execute Trigger  │
│  Updates    │        │   (Parallel OK)   │
└─────────────┘        └─────────┬─────────┘
                                 │ CommitIntents
                                 ↓
                       ┌───────────────────┐
                       │ CommitCoordinator │
                       │   (Sequential)    │
                       └─────────┬─────────┘
                                 │ Commit in seq order
                                 ↓
                       ┌───────────────────┐
                       │  VersionedStores  │
                       │   (w/ Backends)   │
                       └───────────────────┘
```
Core Components
1. ExecutionContext (context.py)
Tracks execution seq and store snapshots via contextvars (auto-propagates through async calls).
```python
from trigger import get_execution_context

ctx = get_execution_context()
print(f"Running at seq {ctx.seq}")
```
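Under the hood this pattern relies on `contextvars.ContextVar`, which asyncio propagates through awaits and task creation automatically. A minimal sketch, assuming a hypothetical stand-in `ExecutionContext` (not the real class from context.py):

```python
import asyncio
import contextvars
from dataclasses import dataclass

# Hypothetical minimal stand-in for the real ExecutionContext in context.py.
@dataclass
class ExecutionContext:
    seq: int

_current_ctx = contextvars.ContextVar("execution_context", default=None)

def get_execution_context():
    return _current_ctx.get()

async def inner() -> int:
    # The context set in run_trigger() is visible here without being passed explicitly.
    return get_execution_context().seq

async def run_trigger(seq: int) -> int:
    token = _current_ctx.set(ExecutionContext(seq=seq))
    try:
        return await inner()  # contextvars flow through awaits automatically
    finally:
        _current_ctx.reset(token)

print(asyncio.run(run_trigger(100)))  # 100
```

Because each task gets its own copy of the context, concurrent triggers never see each other's seq.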
2. Trigger Types (types.py)
```python
from trigger import Trigger, Priority, CommitIntent

class MyTrigger(Trigger):
    async def execute(self) -> list[CommitIntent]:
        # Read snapshot
        seq, data = some_store.read_snapshot()
        # Modify
        new_data = modify(data)
        # Prepare commit
        intent = some_store.prepare_commit(seq, new_data)
        return [intent]
```
3. VersionedStore (store.py)
Stores with pluggable backends and optimistic concurrency:
```python
from trigger import VersionedStore, PydanticStoreBackend

# Wrap existing Pydantic model
backend = PydanticStoreBackend(order_store)
versioned_store = VersionedStore("OrderStore", backend)

# Lock-free snapshot read
seq, snapshot = versioned_store.read_snapshot()

# Prepare commit (does not modify yet)
intent = versioned_store.prepare_commit(seq, modified_snapshot)
```
Pluggable Backends:
- PydanticStoreBackend: For existing Pydantic models (OrderStore, ChartStore, etc.)
- FileStoreBackend: Future - version files (Python scripts, configs)
- DatabaseStoreBackend: Future - version database rows
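The snapshot-read / optimistic-commit cycle can be illustrated with a toy store (a sketch for illustration only; the `try_commit` helper and internals are assumptions, and the real VersionedStore delegates storage to its backend):

```python
import copy

class ToyVersionedStore:
    """Toy model of lock-free snapshot reads plus optimistic conflict checks."""

    def __init__(self, data):
        self._data = data
        self._committed_seq = 0

    def read_snapshot(self):
        # Deep copy: readers never block writers and never see partial updates.
        return self._committed_seq, copy.deepcopy(self._data)

    def try_commit(self, expected_seq, new_data, commit_seq):
        # Conflict if someone committed after our snapshot was taken.
        if expected_seq != self._committed_seq:
            return False
        self._data = new_data
        self._committed_seq = commit_seq
        return True

store = ToyVersionedStore({"orders": []})
seq, snap = store.read_snapshot()
snap["orders"].append("buy BTC")
assert store.try_commit(seq, snap, commit_seq=1)       # succeeds
assert not store.try_commit(seq, snap, commit_seq=2)   # stale seq -> conflict
```

The second commit fails because its snapshot predates the first commit, which is exactly the expected_seq vs committed_seq check the coordinator performs.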
4. CommitCoordinator (coordinator.py)
Manages sequential commits with conflict detection:
- Waits for seq N to commit before N+1
- Detects conflicts (expected_seq vs committed_seq)
- Re-executes (not re-enqueues) on conflict with same seq
- Tracks execution state for debugging
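The wait-for-seq-N rule in the first bullet can be sketched with an `asyncio.Condition` (a toy model; the real coordinator also handles conflicts, retries, and store registration):

```python
import asyncio

class ToyCommitCoordinator:
    """Toy model of in-order commits: seq N+1 waits until seq N has committed."""

    def __init__(self):
        self._next_seq = 1
        self._cond = asyncio.Condition()
        self.committed = []

    async def commit(self, seq, payload):
        async with self._cond:
            # Buffer until it is this seq's turn to commit.
            await self._cond.wait_for(lambda: seq == self._next_seq)
            self.committed.append((seq, payload))
            self._next_seq += 1
            self._cond.notify_all()

async def main():
    coord = ToyCommitCoordinator()
    # seq 2 finishes executing before seq 1, but still commits after it.
    await asyncio.gather(coord.commit(2, "b"), coord.commit(1, "a"))
    return coord.committed

print(asyncio.run(main()))  # [(1, 'a'), (2, 'b')]
```

Whichever execution finishes first, the commit log always comes out in seq order.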
5. TriggerQueue (queue.py)
Priority queue with seq assignment:
```python
from trigger import TriggerQueue

queue = TriggerQueue(coordinator)
await queue.start()

# Enqueue trigger
await queue.enqueue(my_trigger, Priority.HIGH)
```
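A toy model of the priority-then-seq behavior, assuming lower numbers mean higher priority and that seq is assigned at dequeue time (all names here are illustrative, not the real TriggerQueue API):

```python
import heapq
import itertools

HIGH, NORMAL, LOW = 0, 1, 2  # assumption: lower number = higher priority

class ToyTriggerQueue:
    """Toy model: dequeue by priority (FIFO within a priority), assign seq on dequeue."""

    def __init__(self):
        self._heap = []
        self._insert_order = itertools.count()  # tie-breaker keeps FIFO within a priority
        self._next_seq = itertools.count(100)

    def enqueue(self, name, priority=NORMAL):
        heapq.heappush(self._heap, (priority, next(self._insert_order), name))

    def dequeue(self):
        priority, _, name = heapq.heappop(self._heap)
        return next(self._next_seq), name  # seq reflects dequeue order, not arrival order

q = ToyTriggerQueue()
q.enqueue("indicator_update", LOW)
q.enqueue("order_fill", HIGH)
q.enqueue("chart_refresh", NORMAL)
print([q.dequeue() for _ in range(3)])
# [(100, 'order_fill'), (101, 'chart_refresh'), (102, 'indicator_update')]
```

Assigning seq at dequeue time is what lets high-priority work jump ahead without creating gaps in the commit order.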
6. TriggerScheduler (scheduler.py)
APScheduler integration for cron triggers:
```python
from trigger.scheduler import TriggerScheduler

scheduler = TriggerScheduler(queue)
scheduler.start()

# Every 5 minutes
scheduler.schedule_interval(
    IndicatorUpdateTrigger("rsi_14"),
    minutes=5,
)

# Daily at 9 AM
scheduler.schedule_cron(
    SyncExchangeStateTrigger(),
    hour="9",
    minute="0",
)
```
Integration Example
Basic Setup in main.py
```python
from trigger import (
    CommitCoordinator,
    TriggerQueue,
    VersionedStore,
    PydanticStoreBackend,
)
from trigger.scheduler import TriggerScheduler

# Create coordinator
coordinator = CommitCoordinator()

# Wrap existing stores
order_store_versioned = VersionedStore(
    "OrderStore",
    PydanticStoreBackend(order_store),
)
coordinator.register_store(order_store_versioned)

chart_store_versioned = VersionedStore(
    "ChartStore",
    PydanticStoreBackend(chart_store),
)
coordinator.register_store(chart_store_versioned)

# Create queue and scheduler
trigger_queue = TriggerQueue(coordinator)
await trigger_queue.start()

scheduler = TriggerScheduler(trigger_queue)
scheduler.start()
```
WebSocket Message Handler
```python
from trigger.handlers import AgentTriggerHandler

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    while True:
        data = await websocket.receive_json()
        if data["type"] == "agent_user_message":
            # Enqueue agent trigger instead of direct Gateway call
            trigger = AgentTriggerHandler(
                session_id=data["session_id"],
                message_content=data["content"],
                gateway_handler=gateway.route_user_message,
                coordinator=coordinator,
            )
            await trigger_queue.enqueue(trigger)
```
DataSource Updates
```python
import asyncio

from trigger.handlers import DataUpdateTrigger

# In subscription_manager._on_source_update()
def _on_source_update(self, source_key: tuple, bar: dict):
    # Enqueue data update trigger
    trigger = DataUpdateTrigger(
        source_name=source_key[0],
        symbol=source_key[1],
        resolution=source_key[2],
        bar_data=bar,
        coordinator=coordinator,
    )
    asyncio.create_task(trigger_queue.enqueue(trigger))
```
Custom Trigger
```python
from trigger import Trigger, CommitIntent, Priority

class RecalculatePortfolioTrigger(Trigger):
    def __init__(self, coordinator):
        super().__init__("recalc_portfolio", Priority.NORMAL)
        self.coordinator = coordinator

    async def execute(self) -> list[CommitIntent]:
        # Read snapshots from multiple stores
        order_seq, orders = self.coordinator.get_store("OrderStore").read_snapshot()
        chart_seq, chart = self.coordinator.get_store("ChartStore").read_snapshot()

        # Calculate portfolio value
        portfolio_value = calculate_portfolio(orders, chart)

        # Update chart state with portfolio value
        chart.portfolio_value = portfolio_value

        # Prepare commit
        intent = self.coordinator.get_store("ChartStore").prepare_commit(
            chart_seq,
            chart,
        )
        return [intent]

# Schedule it
scheduler.schedule_interval(
    RecalculatePortfolioTrigger(coordinator),
    minutes=1,
)
```
Execution Flow
Normal Flow (No Conflicts)
seq=100: WebSocket message arrives → enqueue → dequeue → assign seq=100 → execute
seq=101: Cron trigger fires → enqueue → dequeue → assign seq=101 → execute
seq=101 finishes first → waits in commit queue
seq=100 finishes → commits immediately (next in order)
seq=101 commits next
Conflict Flow
seq=100: reads OrderStore at seq=99 → executes for 30 seconds
seq=101: reads OrderStore at seq=99 → executes for 5 seconds
seq=101 finishes first → tries to commit based on seq=99
seq=100 finishes → commits OrderStore at seq=100
Coordinator detects conflict:
expected_seq=99, committed_seq=100
seq=101 evicted → RE-EXECUTES with same seq=101 (not re-enqueued)
reads OrderStore at seq=100 → executes again
finishes → commits successfully at seq=101
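The retry-with-same-seq loop above can be sketched as a self-contained toy (store internals, the `try_commit` helper, and the trigger function are all illustrative assumptions):

```python
import copy

class ToyStore:
    """Toy stand-in with an optimistic expected-seq check."""

    def __init__(self, data):
        self._data, self._committed_seq = data, 99

    def read_snapshot(self):
        return self._committed_seq, copy.deepcopy(self._data)

    def try_commit(self, expected_seq, new_data, commit_seq):
        if expected_seq != self._committed_seq:
            return False  # someone committed after our snapshot: conflict
        self._data, self._committed_seq = new_data, commit_seq
        return True

def run_with_retry(store, my_seq, execute):
    """Re-execute with the SAME seq (not re-enqueued) until the commit lands."""
    attempts = 0
    while True:
        attempts += 1
        snapshot_seq, data = store.read_snapshot()
        if store.try_commit(snapshot_seq, execute(data), commit_seq=my_seq):
            return attempts

store = ToyStore({"fills": 0})

def fast_trigger(data):
    data["fills"] += 1
    return data

# Replay the flow above: seq=100 commits first, invalidating seq=101's snapshot.
base_seq, stale = store.read_snapshot()               # seq=101 reads at 99
store.try_commit(99, {"fills": 10}, commit_seq=100)   # seq=100 commits first
stale["fills"] += 1
assert not store.try_commit(base_seq, stale, commit_seq=101)  # conflict detected
assert run_with_retry(store, 101, fast_trigger) == 1  # fresh read at 100, then lands
```

Because the seq is preserved across retries, the retried trigger keeps its place in the total order instead of going to the back of the queue.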
Benefits
For Agent System
- Long-running agents work naturally: Agent starts at seq=100, runs for 60 seconds while market data updates at seq=101-110, commits only if no conflicts
- No deadlocks: No locks = no deadlock possibility
- Deterministic: Can replay from any seq for debugging
For Strategy Execution
- High-frequency data doesn't block strategies: Data updates enqueued, executed in parallel, commit sequentially
- Priority preservation: Critical order execution never blocked by indicator calculations
- Conflict detection: If market moved during strategy calculation, automatically retry with fresh data
For Scaling
- Single-node first: Runs on single asyncio event loop, no complex distributed coordination
- Future-proof: Can swap queue for Redis/PostgreSQL-backed distributed queue later
- Event sourcing ready: All commits have seq numbers, can build event log
Debugging
Check Current State
```python
# Coordinator stats
stats = coordinator.get_stats()
print(f"Current seq: {stats['current_seq']}")
print(f"Pending commits: {stats['pending_commits']}")
print(f"Executions by state: {stats['state_counts']}")

# Store state
store = coordinator.get_store("OrderStore")
print(f"Store: {store}")  # Shows committed_seq and version

# Execution record
record = coordinator.get_execution_record(100)
print(f"Seq 100: {record}")  # Shows state, retry_count, error
```
Common Issues
Symptom: High conflict rate
- Cause: Multiple triggers modifying same store frequently
- Solution: Batch updates, use debouncing, or redesign to reduce contention
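One hypothetical debouncing approach, coalescing a burst of updates into a single flush before anything is enqueued (none of these names come from the trigger package):

```python
import asyncio

class Debouncer:
    """Coalesce rapid updates: only the latest value within `delay` is flushed."""

    def __init__(self, delay, flush):
        self._delay, self._flush = delay, flush
        self._pending = None
        self._task = None

    def submit(self, value):
        self._pending = value  # a newer value replaces the not-yet-flushed one
        if self._task is None or self._task.done():
            self._task = asyncio.ensure_future(self._flush_later())

    async def _flush_later(self):
        await asyncio.sleep(self._delay)
        await self._flush(self._pending)

async def main():
    flushed = []

    async def flush(value):
        flushed.append(value)  # in practice: enqueue one trigger here

    d = Debouncer(0.05, flush)
    for price in (100, 101, 102):   # burst of updates
        d.submit(price)
    await asyncio.sleep(0.1)        # let the single coalesced flush happen
    return flushed

print(asyncio.run(main()))  # [102]
```

Three rapid updates produce one trigger carrying only the latest value, which directly cuts contention on the store.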
Symptom: Commits stuck (next_commit_seq not advancing)
- Cause: Execution at that seq failed or is taking too long
- Solution: Check execution_records for that seq, look for errors in logs
Symptom: Queue depth growing
- Cause: Executions slower than enqueue rate
- Solution: Profile trigger execution, optimize slow paths, add rate limiting
Testing
Unit Test: Conflict Detection
```python
import pytest
from trigger import VersionedStore, PydanticStoreBackend, CommitCoordinator

@pytest.mark.asyncio
async def test_conflict_detection():
    coordinator = CommitCoordinator()
    store = VersionedStore("TestStore", PydanticStoreBackend(TestModel()))
    coordinator.register_store(store)

    # Seq 1: read at 0, modify, commit
    seq1, data1 = store.read_snapshot()
    data1.value = "seq1"
    intent1 = store.prepare_commit(seq1, data1)

    # Seq 2: read at 0 (same snapshot), modify
    seq2, data2 = store.read_snapshot()
    data2.value = "seq2"
    intent2 = store.prepare_commit(seq2, data2)

    # Commit seq 1 (should succeed)
    # ... coordinator logic ...

    # Commit seq 2 (should conflict and retry)
    # ... verify conflict detected ...
```
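Since the coordinator calls are elided above, a fully runnable variant of the same scenario can be written against a toy store that performs the expected_seq check itself (all names here are illustrative, not the real store API):

```python
import copy

class ToyModel:
    def __init__(self):
        self.value = "initial"

class ToyStore:
    """Toy stand-in: prepare_commit records the seq a snapshot was read at."""

    def __init__(self, model):
        self._model, self._committed_seq = model, 0

    def read_snapshot(self):
        return self._committed_seq, copy.deepcopy(self._model)

    def prepare_commit(self, expected_seq, model):
        return {"expected_seq": expected_seq, "model": model}

    def apply(self, intent, commit_seq):
        if intent["expected_seq"] != self._committed_seq:
            return False  # conflict: expected_seq vs committed_seq mismatch
        self._model, self._committed_seq = intent["model"], commit_seq
        return True

store = ToyStore(ToyModel())

# Two executions read the same snapshot (seq 0).
seq1, data1 = store.read_snapshot()
data1.value = "seq1"
intent1 = store.prepare_commit(seq1, data1)

seq2, data2 = store.read_snapshot()
data2.value = "seq2"
intent2 = store.prepare_commit(seq2, data2)

assert store.apply(intent1, commit_seq=1)       # first commit lands
assert not store.apply(intent2, commit_seq=2)   # second detects the conflict

# Retry: re-read at the new seq, then the commit succeeds.
seq2b, data2b = store.read_snapshot()
data2b.value = "seq2"
assert store.apply(store.prepare_commit(seq2b, data2b), commit_seq=2)
```

The same assertions can be ported onto the real coordinator once its commit API is filled in above.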
Future Enhancements
- Distributed queue: Redis-backed queue for multi-worker deployment
- Event log persistence: Store all commits for event sourcing/audit
- Metrics dashboard: Real-time view of queue depth, conflict rate, latency
- Transaction snapshots: Full system state at any seq for replay/debugging
- Automatic batching: Coalesce rapid updates to same store