redesign fully scaffolded and web login works

This commit is contained in:
2026-03-17 20:10:47 -04:00
parent b9cc397e05
commit f6bd22a8ef
143 changed files with 17317 additions and 693 deletions

View File

@@ -0,0 +1,351 @@
# Agent Harness
Comprehensive agent orchestration system for Dexorder AI platform, built on LangChain.js and LangGraph.js.
## Architecture Overview
```
gateway/src/harness/
├── memory/ # Storage layer (Redis + Iceberg + Qdrant)
├── skills/ # Individual capabilities (markdown + TypeScript)
├── subagents/ # Specialized agents with multi-file memory
├── workflows/ # LangGraph state machines
├── tools/ # Platform tools (non-MCP)
├── config/ # Configuration files
└── index.ts # Main exports
```
## Core Components
### 1. Memory Layer (`memory/`)
Tiered storage architecture as per [architecture discussion](/chat/harness-rag.txt):
- **Redis**: Hot state (active sessions, checkpoints)
- **Iceberg**: Cold storage (durable conversations, analytics)
- **Qdrant**: Vector search (RAG, semantic memory)

**Key Files:**
- `checkpoint-saver.ts`: LangGraph checkpoint persistence
- `conversation-store.ts`: Message history management
- `rag-retriever.ts`: Vector similarity search
- `embedding-service.ts`: Text→vector conversion
- `session-context.ts`: User context with channel metadata
### 2. Skills (`skills/`)
Self-contained capabilities with markdown definitions:
- `*.skill.md`: Human-readable documentation
- `*.ts`: Implementation extending `BaseSkill`
- Input validation and error handling
- Can use LLM, MCP tools, or platform tools

**Example:**
```typescript
import { MarketAnalysisSkill } from './skills';

const skill = new MarketAnalysisSkill(logger, model);
const result = await skill.execute({
  context: userContext,
  parameters: { ticker: 'BTC/USDT', period: '4h' }
});
```
See [skills/README.md](skills/README.md) for authoring guide.
### 3. Subagents (`subagents/`)
Specialized agents with multi-file memory:
```
subagents/
  code-reviewer/
    config.yaml          # Model, memory files, capabilities
    system-prompt.md     # System instructions
    memory/              # Multi-file knowledge base
      review-guidelines.md
      common-patterns.md
      best-practices.md
    index.ts             # Implementation
```
**Features:**
- Dedicated system prompts
- Split memory into logical files (better organization)
- Model overrides
- Capability tagging

**Example:**
```typescript
const codeReviewer = await createCodeReviewerSubagent(model, logger, basePath);
const review = await codeReviewer.execute({ userContext }, strategyCode);
```
### 4. Workflows (`workflows/`)
LangGraph state machines with:
- Validation loops (retry with fixes)
- Human-in-the-loop (approval gates)
- Multi-step orchestration
- Error recovery

**Example Workflows:**
- `strategy-validation/`: Code review → backtest → risk → approval
- `trading-request/`: Analysis → risk → approval → execute
See individual workflow READMEs for details.
### 5. Configuration (`config/`)
YAML-based configuration:
- `models.yaml`: LLM providers, routing, rate limits
- `subagent-routing.yaml`: When to use which subagent
## User Context
Enhanced session context with channel awareness for multi-channel support:
```typescript
interface UserContext {
  userId: string;
  sessionId: string;
  license: UserLicense;
  activeChannel: {
    type: 'websocket' | 'telegram' | 'slack' | 'discord';
    channelUserId: string;
    capabilities: {
      supportsMarkdown: boolean;
      supportsImages: boolean;
      supportsButtons: boolean;
      maxMessageLength: number;
    };
  };
  conversationHistory: BaseMessage[];
  relevantMemories: MemoryChunk[];
  workspaceState: WorkspaceContext;
}
```
This allows workflows to:
- Route responses to correct channel
- Format output for channel capabilities
- Handle channel-specific interactions (buttons, voice, etc.)
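For instance, a workflow can degrade rich output to whatever the active channel supports. A minimal sketch (`ChannelCapabilities` mirrors the `capabilities` shape above; `formatForChannel` is a hypothetical helper, and the markdown stripping is deliberately naive):

```typescript
interface ChannelCapabilities {
  supportsMarkdown: boolean;
  supportsImages: boolean;
  supportsButtons: boolean;
  maxMessageLength: number;
}

// Hypothetical formatter: degrade rich output to what the channel supports.
function formatForChannel(text: string, caps: ChannelCapabilities): string {
  // Strip markdown emphasis for plain-text channels (naive regex sketch).
  let out = caps.supportsMarkdown ? text : text.replace(/[*_`#]/g, '');
  // Truncate to the channel's message limit.
  if (out.length > caps.maxMessageLength) {
    out = out.slice(0, caps.maxMessageLength - 1) + '…';
  }
  return out;
}
```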
## Storage Architecture
Based on [harness-rag.txt discussion](../../chat/harness-rag.txt):
### Hot Path (Redis)
- Active checkpoints (TTL: 1 hour)
- Recent messages (last 50)
- Session metadata
- Fast reads for active conversations
### Cold Path (Iceberg)
- Full conversation history (partitioned by user_id, session_id)
- Checkpoint snapshots
- Time-travel queries
- GDPR-compliant deletion with compaction
### Vector Search (Qdrant)
- Conversation embeddings
- Long-term memory
- RAG retrieval
- Payload-indexed by user_id for fast GDPR deletion
- **Global knowledge base** (user_id="0") loaded from markdown files
### GDPR Compliance
```typescript
// Delete user data across all stores
await conversationStore.deleteUserData(userId);
await ragRetriever.deleteUserData(userId);
await checkpointSaver.delete(userId);
await containerManager.deleteContainer(userId);
// Iceberg physical delete
await icebergTable.expire_snapshots();
await icebergTable.rewrite_data_files();
```
## Standard Patterns
### Validation Loop (Retry with Fixes)
```typescript
graph.addConditionalEdges('validate', (state) => {
  if (state.errors.length > 0 && state.retryCount < 3) {
    return 'fix_errors'; // Loop back
  }
  return state.errors.length === 0 ? 'approve' : 'reject';
});
```
### Human-in-the-Loop (Approval Gates)
```typescript
const approvalNode = async (state) => {
  // Send approval request to the user's active channel
  await sendToChannel(state.userContext.activeChannel, {
    type: 'approval_request',
    data: { /* details */ }
  });
  // LangGraph pauses here via interrupt
  // Resume with user input: graph.invoke(state, { ...resumeConfig })
  return { approvalRequested: true };
};
```
## Getting Started
### 1. Install Dependencies
Already in `gateway/package.json`:
```json
{
  "@langchain/core": "^0.3.24",
  "@langchain/langgraph": "^0.2.26",
  "@langchain/anthropic": "^0.3.8",
  "ioredis": "^5.4.2"
}
```
### 2. Initialize Memory Layer
```typescript
import Redis from 'ioredis';
import {
  TieredCheckpointSaver,
  ConversationStore,
  EmbeddingService,
  RAGRetriever
} from './harness/memory';

const redis = new Redis(process.env.REDIS_URL);
const checkpointSaver = new TieredCheckpointSaver(redis, logger);
const conversationStore = new ConversationStore(redis, logger);
const embeddings = new EmbeddingService({ provider: 'openai', apiKey }, logger);
const ragRetriever = new RAGRetriever({ url: QDRANT_URL }, logger);
await ragRetriever.initialize();
```
### 3. Create Subagents
```typescript
import { createCodeReviewerSubagent } from './harness/subagents';
import { ModelRouter } from './llm/router';

const model = await modelRouter.route(query, license);
const codeReviewer = await createCodeReviewerSubagent(
  model,
  logger,
  'gateway/src/harness/subagents/code-reviewer'
);
```
### 4. Build Workflows
```typescript
import { createStrategyValidationWorkflow } from './harness/workflows';

const workflow = await createStrategyValidationWorkflow(
  model,
  codeReviewer,
  mcpBacktestFn,
  logger,
  'gateway/src/harness/workflows/strategy-validation/config.yaml'
);

const result = await workflow.execute({
  userContext,
  strategyCode: '...',
  ticker: 'BTC/USDT',
  timeframe: '4h'
});
```
### 5. Use Skills
```typescript
import { MarketAnalysisSkill } from './harness/skills';

const skill = new MarketAnalysisSkill(logger, model);
const analysis = await skill.execute({
  context: userContext,
  parameters: { ticker: 'BTC/USDT', period: '1h' }
});
```
## Global Knowledge System
The harness includes a document loader that automatically loads markdown files from `gateway/knowledge/` into Qdrant as global knowledge (user_id="0").
### Directory Structure
```
gateway/knowledge/
├── platform/ # Platform capabilities and architecture
├── trading/ # Trading concepts and fundamentals
├── indicators/ # Indicator development guides
└── strategies/ # Strategy patterns and examples
```
### How It Works
1. **Startup**: Documents are loaded automatically when gateway starts
2. **Chunking**: Intelligent splitting by markdown headers (~1000 tokens/chunk)
3. **Embedding**: Chunks are embedded using configured embedding service
4. **Storage**: Stored in Qdrant with user_id="0" (global namespace)
5. **Updates**: Content hashing detects changes for incremental updates
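The change detection in step 5 can be sketched as a plain SHA-256 hash compare (names here are illustrative; the real tracking lives in `DocumentLoader`):

```typescript
import { createHash } from 'crypto';

// Hash a document's content; if the hash matches the stored one, skip re-embedding.
function hashContent(content: string): string {
  return createHash('sha256').update(content, 'utf-8').digest('hex');
}

const stored = new Map<string, string>(); // file path -> last seen hash

function needsReembedding(path: string, content: string): boolean {
  const h = hashContent(content);
  if (stored.get(path) === h) return false; // unchanged, skip
  stored.set(path, h);
  return true; // new or updated document
}
```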
### RAG Query Flow
When a user sends a message:
1. Query is embedded using same embedding service
2. Qdrant searches vectors with filter: `user_id = current_user OR user_id = "0"`
3. Results include both user-specific and global knowledge
4. Relevant chunks are added to LLM context
5. LLM generates response with platform knowledge
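The filter in step 2 maps onto Qdrant's `should` clause (a sketch of the filter payload; the collection name and search call in the trailing comment are assumptions):

```typescript
// Build a Qdrant filter matching the current user's memories OR global knowledge.
function buildUserOrGlobalFilter(userId: string) {
  return {
    should: [
      { key: 'user_id', match: { value: userId } },
      { key: 'user_id', match: { value: '0' } }, // global namespace
    ],
  };
}

// Passed as `filter` in a Qdrant search request, e.g.:
// client.search('memories', { vector, limit: 5, filter: buildUserOrGlobalFilter(userId) });
```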
### Managing Knowledge
**Add new documents**:
```bash
# Create markdown file in appropriate directory
echo "# New Topic" > gateway/knowledge/platform/new-topic.md
# Reload knowledge (development)
curl -X POST http://localhost:3000/admin/reload-knowledge
```
**Check stats**:
```bash
curl http://localhost:3000/admin/knowledge-stats
```
**In production**: deploy the updated markdown files; they are loaded automatically on startup.
See [gateway/knowledge/README.md](../../knowledge/README.md) for detailed documentation.
## Next Steps
1. **Implement Iceberg Integration**: Complete TODOs in checkpoint-saver.ts and conversation-store.ts
2. **Add More Subagents**: Risk analyzer, market analyst, etc.
3. **Implement Interrupts**: Full human-in-the-loop with LangGraph interrupts
4. **Add Platform Tools**: Market data queries, chart rendering, etc.
5. **Expand Knowledge Base**: Add more platform documentation to knowledge/
## References
- Architecture discussion: [chat/harness-rag.txt](../../chat/harness-rag.txt)
- LangGraph docs: https://langchain-ai.github.io/langgraphjs/
- Qdrant docs: https://qdrant.tech/documentation/
- Apache Iceberg: https://iceberg.apache.org/docs/latest/

View File

@@ -1,4 +1,4 @@
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import type { BaseMessage } from '@langchain/core/messages';
import { HumanMessage, AIMessage, SystemMessage } from '@langchain/core/messages';
import type { FastifyBaseLogger } from 'fastify';
@@ -286,15 +286,7 @@ Available features: ${JSON.stringify(this.config.license.features, null, 2)}`;
return prompt;
}
/**
* Get platform tools (non-user-specific tools)
*/
private getPlatformTools(): Array<{ name: string; description?: string }> {
// Platform tools that don't need user's MCP
return [
// TODO: Add platform tools like market data queries, chart rendering, etc.
];
}
/**
* Cleanup resources

View File

@@ -0,0 +1,110 @@
# Default LLM Model Configuration

# Default model for general agent tasks
default:
  provider: anthropic
  model: claude-3-5-sonnet-20241022
  temperature: 0.7
  maxTokens: 4096

# Model overrides for specific use cases
models:
  # Fast model for simple tasks (routing, classification)
  fast:
    provider: anthropic
    model: claude-3-haiku-20240307
    temperature: 0.3
    maxTokens: 1024

  # Reasoning model for complex analysis
  reasoning:
    provider: anthropic
    model: claude-3-5-sonnet-20241022
    temperature: 0.5
    maxTokens: 8192

  # Precise model for code generation/review
  code:
    provider: anthropic
    model: claude-3-5-sonnet-20241022
    temperature: 0.2
    maxTokens: 8192

  # Creative model for strategy brainstorming
  creative:
    provider: anthropic
    model: claude-3-5-sonnet-20241022
    temperature: 0.9
    maxTokens: 4096

# Embedding model configuration
embeddings:
  provider: openai
  model: text-embedding-3-small
  dimensions: 1536

# Model routing rules (complexity-based)
routing:
  # Simple queries → fast model
  simple:
    keywords:
      - "what is"
      - "define"
      - "list"
      - "show me"
    maxTokens: 100
    model: fast

  # Code-related → code model
  code:
    keywords:
      - "code"
      - "function"
      - "implement"
      - "debug"
      - "review"
    model: code

  # Analysis tasks → reasoning model
  analysis:
    keywords:
      - "analyze"
      - "compare"
      - "evaluate"
      - "assess"
    model: reasoning

  # Everything else → default
  default:
    model: default

# Cost optimization settings
costControl:
  # Cache system prompts (Anthropic prompt caching)
  cacheSystemPrompts: true

  # Token limits per license type
  tokenLimits:
    free:
      maxTokensPerMessage: 2048
      maxTokensPerDay: 50000
    pro:
      maxTokensPerMessage: 8192
      maxTokensPerDay: 500000
    enterprise:
      maxTokensPerMessage: 16384
      maxTokensPerDay: -1 # unlimited

# Rate limiting
rateLimits:
  # Requests per minute by license
  requestsPerMinute:
    free: 10
    pro: 60
    enterprise: 120

  # Concurrent requests
  concurrentRequests:
    free: 1
    pro: 3
    enterprise: 10
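The keyword-based `routing` rules above could be applied roughly as follows (the rule shape mirrors the YAML; the first-match ordering and fallback behavior are assumptions):

```typescript
interface RoutingRule { keywords?: string[]; maxTokens?: number; model: string; }

// Pick a model key by scanning rules in declaration order; fall back to 'default'.
function routeModel(query: string, rules: Record<string, RoutingRule>): string {
  const q = query.toLowerCase();
  for (const [name, rule] of Object.entries(rules)) {
    if (name === 'default') continue;
    if (rule.keywords?.some((k) => q.includes(k))) return rule.model;
  }
  return rules.default?.model ?? 'default';
}

// Rules transcribed from models.yaml above.
const rules: Record<string, RoutingRule> = {
  simple: { keywords: ['what is', 'define', 'list', 'show me'], model: 'fast' },
  code: { keywords: ['code', 'function', 'implement', 'debug', 'review'], model: 'code' },
  analysis: { keywords: ['analyze', 'compare', 'evaluate', 'assess'], model: 'reasoning' },
  default: { model: 'default' },
};
```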

View File

@@ -0,0 +1,98 @@
# Subagent Routing Configuration
# When to use which subagent based on task type

subagents:
  # Code Reviewer Subagent
  code-reviewer:
    enabled: true
    path: src/harness/subagents/code-reviewer
    triggers:
      keywords:
        - "review code"
        - "check code"
        - "code review"
        - "analyze code"
        - "audit code"
      patterns:
        - "review.*code"
        - "check.*strategy"
        - "analyze.*function"
    priority: high
    timeout: 60000 # 1 minute

  # Risk Analyzer Subagent (TODO: implement)
  risk-analyzer:
    enabled: false
    path: src/harness/subagents/risk-analyzer
    triggers:
      keywords:
        - "risk"
        - "exposure"
        - "drawdown"
        - "volatility"
      patterns:
        - "assess.*risk"
        - "calculate.*risk"
        - "risk.*analysis"
    priority: high
    timeout: 30000

  # Market Analyst Subagent (TODO: implement)
  market-analyst:
    enabled: false
    path: src/harness/subagents/market-analyst
    triggers:
      keywords:
        - "market"
        - "trend"
        - "technical analysis"
        - "price action"
      patterns:
        - "analyze.*market"
        - "market.*conditions"
    priority: medium
    timeout: 45000

# Routing strategy
routing:
  # Check triggers in priority order
  strategy: priority
  # Fallback to main agent if no subagent matches
  fallback: main_agent
  # Allow chaining (one subagent can invoke another)
  allowChaining: true
  maxChainDepth: 3

# Subagent memory settings
memory:
  # Reload memory files on every request (dev mode)
  hotReload: false
  # Cache memory files in production
  cacheMemory: true
  cacheTTL: 3600000 # 1 hour

# Parallel execution
parallel:
  # Allow multiple subagents to run in parallel
  enabled: true
  # Max concurrent subagents
  maxConcurrent: 3
  # Combine results strategy
  combineStrategy: merge # merge | first | best

# Monitoring
monitoring:
  # Log subagent performance
  logPerformance: true
  # Track usage by subagent
  trackUsage: true
  # Alert on slow subagents
  alertThreshold: 30000 # 30 seconds
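Trigger evaluation against this config might look like the following sketch (keyword substring match plus regex `patterns`; function names are illustrative and the case-insensitive matching is an assumption):

```typescript
interface SubagentTriggers { keywords: string[]; patterns: string[]; }

// Return true when a message matches any keyword or any regex pattern.
function matchesTriggers(message: string, triggers: SubagentTriggers): boolean {
  const m = message.toLowerCase();
  if (triggers.keywords.some((k) => m.includes(k))) return true;
  return triggers.patterns.some((p) => new RegExp(p, 'i').test(message));
}

// Triggers transcribed from the code-reviewer entry above.
const codeReviewerTriggers: SubagentTriggers = {
  keywords: ['review code', 'check code', 'code review', 'analyze code', 'audit code'],
  patterns: ['review.*code', 'check.*strategy', 'analyze.*function'],
};
```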

View File

@@ -0,0 +1,17 @@
// Main harness exports
// Memory
export * from './memory/index.js';
// Skills
export * from './skills/index.js';
// Subagents
export * from './subagents/index.js';
// Workflows
export * from './workflows/index.js';
// Re-export agent harness (for backward compatibility)
export { AgentHarness, type AgentHarnessConfig } from './agent-harness.js';
export { MCPClientConnector } from './mcp-client.js';

View File

@@ -1,5 +1,5 @@
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import type { FastifyBaseLogger } from 'fastify';
export interface MCPClientConfig {
@@ -44,10 +44,9 @@ export class MCPClientConnector {
},
{
capabilities: {
tools: {},
resources: {},
sampling: {},
},
}
} as any
);
// TODO: Replace with HTTP transport when user containers are ready

View File

@@ -0,0 +1,236 @@
import { BaseCheckpointSaver } from '@langchain/langgraph';
import type { Checkpoint, CheckpointMetadata, CheckpointTuple } from '@langchain/langgraph';
import type { RunnableConfig } from '@langchain/core/runnables';
import type Redis from 'ioredis';
import type { FastifyBaseLogger } from 'fastify';

/**
 * Tiered checkpoint saver: Redis (hot) + Iceberg (cold)
 *
 * Hot path: Active checkpoints stored in Redis with TTL
 * Cold path: Durable storage in Iceberg for long-term retention
 *
 * Based on architecture discussion: Redis for active sessions,
 * Iceberg for durable storage with time-travel capabilities.
 */
export class TieredCheckpointSaver extends BaseCheckpointSaver<number> {
  private readonly HOT_TTL_SECONDS = 3600; // 1 hour
  private readonly KEY_PREFIX = 'ckpt:';

  constructor(
    private redis: Redis,
    private logger: FastifyBaseLogger
    // Note: Iceberg writes are handled via Kafka + Flink for consistency.
    // Reads can be implemented when needed using IcebergClient.
    // private iceberg?: IcebergClient
  ) {
    super();
  }

  /**
   * Get checkpoint from Redis (hot) or Iceberg (cold)
   */
  async getTuple(config: RunnableConfig): Promise<CheckpointTuple | undefined> {
    const threadId = config.configurable?.thread_id as string;
    if (!threadId) {
      throw new Error('thread_id required in config.configurable');
    }
    const checkpointId = config.configurable?.checkpoint_id as string | undefined;
    this.logger.debug({ threadId, checkpointId }, 'Getting checkpoint');

    // Hot path: try Redis first
    const key = this.getRedisKey(threadId, checkpointId);
    const cached = await this.redis.get(key);
    if (cached) {
      this.logger.debug({ threadId, checkpointId }, 'Checkpoint found in Redis (hot)');
      return this.deserialize(cached);
    }

    // Cold path: load from Iceberg (if needed).
    // Note: implement when an Iceberg query is required. Can use
    // IcebergClient to query the gateway.checkpoints table, or set up
    // a Kafka topic for checkpoint persistence.
    this.logger.debug({ threadId, checkpointId }, 'Checkpoint not in Redis, Iceberg cold storage not yet implemented');
    return undefined;
  }

  /**
   * Save checkpoint to Redis (hot) and async flush to Iceberg (cold)
   */
  async put(
    config: RunnableConfig,
    checkpoint: Checkpoint,
    metadata: CheckpointMetadata
  ): Promise<RunnableConfig> {
    const threadId = config.configurable?.thread_id as string;
    if (!threadId) {
      throw new Error('thread_id required in config.configurable');
    }
    this.logger.debug({ threadId, checkpointId: checkpoint.id }, 'Saving checkpoint');

    const serialized = this.serialize(checkpoint, metadata);

    // Hot: Redis with TTL
    const key = this.getRedisKey(threadId, checkpoint.id);
    await this.redis.set(key, serialized, 'EX', this.HOT_TTL_SECONDS);

    // Also store a pointer to the latest checkpoint
    const latestKey = this.getRedisKey(threadId);
    await this.redis.set(latestKey, serialized, 'EX', this.HOT_TTL_SECONDS);

    // Cold: async flush to Iceberg (fire and forget)
    this.flushToIceberg(threadId, checkpoint, metadata).catch((error) => {
      this.logger.error({ error, threadId }, 'Failed to flush checkpoint to Iceberg');
    });

    return {
      configurable: {
        ...config.configurable,
        thread_id: threadId,
        checkpoint_id: checkpoint.id,
      },
    };
  }

  /**
   * List all checkpoints for a thread
   */
  async *list(config: RunnableConfig): AsyncGenerator<CheckpointTuple> {
    const threadId = config.configurable?.thread_id as string;
    if (!threadId) {
      throw new Error('thread_id required in config.configurable');
    }

    // Try Redis first. Note: KEYS is O(N) and blocks Redis; prefer SCAN in
    // production. The pattern also matches the ':latest' pointer, so the
    // newest checkpoint is yielded twice.
    const pattern = `${this.KEY_PREFIX}${threadId}:*`;
    const keys = await this.redis.keys(pattern);
    for (const key of keys) {
      const data = await this.redis.get(key);
      if (data) {
        const tuple = this.deserialize(data);
        if (tuple) {
          yield tuple;
        }
      }
    }
    // TODO: Also scan Iceberg for historical checkpoints
  }

  /**
   * Delete thread (for GDPR compliance)
   */
  async deleteThread(threadId: string): Promise<void> {
    this.logger.info({ threadId }, 'Deleting thread');
    const pattern = `${this.KEY_PREFIX}${threadId}*`;
    const keys = await this.redis.keys(pattern);
    if (keys.length > 0) {
      await this.redis.del(...keys);
    }
    // TODO: Also delete from Iceberg
    // await this.deleteFromIceberg(threadId);
  }

  /**
   * Put writes (required by BaseCheckpointSaver)
   */
  async putWrites(
    config: RunnableConfig,
    writes: [string, unknown][],
    taskId: string
  ): Promise<void> {
    // For this simple implementation, we just log writes.
    // In a full implementation, pending writes would be stored separately.
    const threadId = config.configurable?.thread_id;
    this.logger.debug({ threadId, taskId, writes }, 'Put writes called');
  }

  /**
   * Generate Redis key for checkpoint
   */
  private getRedisKey(threadId: string, checkpointId?: string): string {
    if (checkpointId) {
      return `${this.KEY_PREFIX}${threadId}:${checkpointId}`;
    }
    return `${this.KEY_PREFIX}${threadId}:latest`;
  }

  /**
   * Serialize checkpoint to JSON string
   */
  private serialize(checkpoint: Checkpoint, metadata: CheckpointMetadata): string {
    return JSON.stringify({
      checkpoint,
      metadata,
      savedAt: new Date().toISOString(),
    });
  }

  /**
   * Deserialize checkpoint from JSON string
   */
  private deserialize(data: string): CheckpointTuple | undefined {
    try {
      const parsed = JSON.parse(data);
      return {
        config: {
          configurable: {
            // The thread id is not persisted in the payload, so the checkpoint
            // id stands in here; callers should take thread_id from their own config.
            thread_id: parsed.checkpoint.id,
            checkpoint_id: parsed.checkpoint.id,
          },
        },
        checkpoint: parsed.checkpoint,
        metadata: parsed.metadata,
        parentConfig: undefined,
      };
    } catch (error) {
      this.logger.error({ error }, 'Failed to deserialize checkpoint');
      return undefined;
    }
  }

  /**
   * Async flush checkpoint to Iceberg for durable storage
   *
   * Note: For production, send to a Kafka topic that Flink consumes:
   * - Topic: gateway_checkpoints
   * - Flink job writes to the gateway.checkpoints Iceberg table
   * - Ensures a consistent write pattern with the rest of the system
   */
  private async flushToIceberg(
    _threadId: string,
    checkpoint: Checkpoint,
    _metadata: CheckpointMetadata
  ): Promise<void> {
    // TODO: Send to Kafka topic for Flink processing
    // const kafkaMessage = {
    //   user_id: metadata.userId || '0',
    //   session_id: threadId,
    //   checkpoint_id: checkpoint.id,
    //   checkpoint_data: JSON.stringify(checkpoint),
    //   metadata: JSON.stringify(metadata),
    //   timestamp: Date.now() * 1000, // microseconds
    // };
    // await this.kafkaProducer.send({
    //   topic: 'gateway_checkpoints',
    //   messages: [{ value: JSON.stringify(kafkaMessage) }]
    // });
    this.logger.debug(
      { threadId: _threadId, checkpointId: checkpoint.id },
      'Checkpoint flush to Iceberg (via Kafka) not yet implemented'
    );
  }
}

View File

@@ -0,0 +1,252 @@
import type Redis from 'ioredis';
import type { FastifyBaseLogger } from 'fastify';
import type { BaseMessage } from '@langchain/core/messages';
import { HumanMessage, AIMessage, SystemMessage } from '@langchain/core/messages';

/**
 * Message record for storage
 */
export interface StoredMessage {
  id: string;
  userId: string;
  sessionId: string;
  role: 'user' | 'assistant' | 'system';
  content: string;
  timestamp: number; // microseconds (Iceberg convention)
  metadata?: Record<string, unknown>;
}

/**
 * Conversation store: Redis (hot) + Iceberg (cold)
 *
 * Hot path: Recent messages in Redis for fast access
 * Cold path: Full history in Iceberg for durability and analytics
 *
 * Architecture:
 * - Redis stores the last N messages per session with TTL
 * - Iceberg stores all messages partitioned by user_id, session_id
 * - Supports time-travel queries for debugging and analysis
 */
export class ConversationStore {
  private readonly HOT_MESSAGE_LIMIT = 50; // Keep last 50 messages in Redis
  private readonly HOT_TTL_SECONDS = 3600; // 1 hour

  constructor(
    private redis: Redis,
    private logger: FastifyBaseLogger
    // TODO: Add Iceberg catalog
    // private iceberg: IcebergCatalog
  ) {}

  /**
   * Save a message to both Redis and Iceberg
   */
  async saveMessage(
    userId: string,
    sessionId: string,
    role: 'user' | 'assistant' | 'system',
    content: string,
    metadata?: Record<string, unknown>
  ): Promise<void> {
    const message: StoredMessage = {
      id: `${userId}:${sessionId}:${Date.now()}`,
      userId,
      sessionId,
      role,
      content,
      timestamp: Date.now() * 1000, // Convert to microseconds
      metadata,
    };

    this.logger.debug({ userId, sessionId, role }, 'Saving message');

    // Hot: add to Redis list (LPUSH for newest first)
    const key = this.getRedisKey(userId, sessionId);
    await this.redis.lpush(key, JSON.stringify(message));
    // Trim to keep only recent messages
    await this.redis.ltrim(key, 0, this.HOT_MESSAGE_LIMIT - 1);
    // Set TTL
    await this.redis.expire(key, this.HOT_TTL_SECONDS);

    // Cold: async append to Iceberg
    this.appendToIceberg(message).catch((error) => {
      this.logger.error({ error, userId, sessionId }, 'Failed to append message to Iceberg');
    });
  }

  /**
   * Get recent messages from Redis (hot path)
   */
  async getRecentMessages(
    userId: string,
    sessionId: string,
    limit: number = 20
  ): Promise<StoredMessage[]> {
    const key = this.getRedisKey(userId, sessionId);
    const messages = await this.redis.lrange(key, 0, limit - 1);
    return messages
      .map((msg) => {
        try {
          return JSON.parse(msg) as StoredMessage;
        } catch (error) {
          this.logger.error({ error, message: msg }, 'Failed to parse message');
          return null;
        }
      })
      .filter((msg): msg is StoredMessage => msg !== null)
      .reverse(); // Oldest first
  }

  /**
   * Get full conversation history from Iceberg (cold path)
   */
  async getFullHistory(
    userId: string,
    sessionId: string,
    timeRange?: { start: number; end: number }
  ): Promise<StoredMessage[]> {
    this.logger.debug({ userId, sessionId, timeRange }, 'Loading full history from Iceberg');
    // TODO: Implement Iceberg query
    // const table = this.iceberg.loadTable('gateway.conversations');
    // const filters = [
    //   EqualTo('user_id', userId),
    //   EqualTo('session_id', sessionId),
    // ];
    //
    // if (timeRange) {
    //   filters.push(GreaterThanOrEqual('timestamp', timeRange.start));
    //   filters.push(LessThanOrEqual('timestamp', timeRange.end));
    // }
    //
    // const df = await table.scan({
    //   row_filter: And(...filters)
    // }).to_pandas();
    //
    // if (!df.empty) {
    //   return df.sort_values('timestamp').to_dict('records');
    // }

    // Fallback to Redis if Iceberg is not available
    return await this.getRecentMessages(userId, sessionId, 1000);
  }

  /**
   * Convert stored messages to LangChain message format
   */
  toLangChainMessages(messages: StoredMessage[]): BaseMessage[] {
    return messages.map((msg) => {
      switch (msg.role) {
        case 'user':
          return new HumanMessage(msg.content);
        case 'assistant':
          return new AIMessage(msg.content);
        case 'system':
          return new SystemMessage(msg.content);
        default:
          throw new Error(`Unknown role: ${msg.role}`);
      }
    });
  }

  /**
   * Delete all messages for a session (Redis only, Iceberg handled separately)
   */
  async deleteSession(userId: string, sessionId: string): Promise<void> {
    this.logger.info({ userId, sessionId }, 'Deleting session from Redis');
    const key = this.getRedisKey(userId, sessionId);
    await this.redis.del(key);
  }

  /**
   * Delete all messages for a user (GDPR compliance)
   */
  async deleteUserData(userId: string): Promise<void> {
    this.logger.info({ userId }, 'Deleting all user messages for GDPR compliance');

    // Delete from Redis. Note: KEYS blocks Redis on large keyspaces;
    // prefer SCAN in production.
    const pattern = `conv:${userId}:*`;
    const keys = await this.redis.keys(pattern);
    if (keys.length > 0) {
      await this.redis.del(...keys);
    }

    // Delete from Iceberg
    // Note: For GDPR compliance, we need to:
    // 1. Send a delete command via Kafka, OR
    // 2. Use the Iceberg REST API to delete rows (if supported), OR
    // 3. Coordinate with a Flink job to handle deletes
    //
    // Iceberg delete flow:
    // - Mark rows for deletion (equality delete files)
    // - Run compaction to physically remove them
    // - Expire old snapshots
    this.logger.info({ userId }, 'User messages deleted from Redis - Iceberg GDPR delete not yet implemented');
  }

  /**
   * Get Redis key for conversation
   */
  private getRedisKey(userId: string, sessionId: string): string {
    return `conv:${userId}:${sessionId}`;
  }

  /**
   * Append message to Iceberg for durable storage
   *
   * Note: For production, send to a Kafka topic that Flink consumes:
   * - Topic: gateway_conversations
   * - Flink job writes to the gateway.conversations Iceberg table
   * - Ensures a consistent write pattern with the rest of the system
   */
  private async appendToIceberg(message: StoredMessage): Promise<void> {
    // TODO: Send to Kafka topic for Flink processing
    // const kafkaMessage = {
    //   id: message.id,
    //   user_id: message.userId,
    //   session_id: message.sessionId,
    //   role: message.role,
    //   content: message.content,
    //   metadata: JSON.stringify(message.metadata || {}),
    //   timestamp: message.timestamp,
    // };
    // await this.kafkaProducer.send({
    //   topic: 'gateway_conversations',
    //   messages: [{ value: JSON.stringify(kafkaMessage) }]
    // });
    this.logger.debug(
      { messageId: message.id, userId: message.userId, sessionId: message.sessionId },
      'Message append to Iceberg (via Kafka) not yet implemented'
    );
  }

  /**
   * Get conversation statistics
   */
  async getStats(userId: string, sessionId: string): Promise<{
    messageCount: number;
    firstMessage?: Date;
    lastMessage?: Date;
  }> {
    const key = this.getRedisKey(userId, sessionId);
    const count = await this.redis.llen(key);
    if (count === 0) {
      return { messageCount: 0 };
    }
    const messages = await this.getRecentMessages(userId, sessionId, count);
    const timestamps = messages.map((m) => m.timestamp / 1000); // Convert to milliseconds
    return {
      messageCount: count,
      firstMessage: new Date(Math.min(...timestamps)),
      lastMessage: new Date(Math.max(...timestamps)),
    };
  }
}

View File

@@ -0,0 +1,356 @@
import { readdir, readFile } from 'fs/promises';
import { join, relative } from 'path';
import { createHash } from 'crypto';
import type { FastifyBaseLogger } from 'fastify';
import { RAGRetriever } from './rag-retriever.js';
import { EmbeddingService } from './embedding-service.js';

/**
 * Document metadata stored with each chunk
 */
export interface DocumentMetadata {
  document_id: string;
  chunk_index: number;
  content_hash: string;
  last_updated: number;
  tags: string[];
  heading?: string;
  file_path: string;
}

/**
 * Document chunk with content and metadata
 */
export interface DocumentChunk {
  content: string;
  metadata: DocumentMetadata;
}

/**
 * Document loader configuration
 */
export interface DocumentLoaderConfig {
  knowledgeDir: string;
  maxChunkSize?: number; // in tokens (approximated by characters)
  chunkOverlap?: number; // overlap between chunks
}

/**
 * Global knowledge document loader
 *
 * Loads markdown documents from a directory structure and stores them
 * as global knowledge (user_id="0") in Qdrant for RAG retrieval.
 *
 * Features:
 * - Intelligent chunking by markdown headers
 * - Content hashing for change detection
 * - Metadata extraction (tags, headings)
 * - Automatic embedding generation
 * - Incremental updates (only changed docs)
 *
 * Directory structure:
 *   gateway/knowledge/
 *     platform/
 *     trading/
 *     indicators/
 *     strategies/
 */
export class DocumentLoader {
  private config: DocumentLoaderConfig;
  private logger: FastifyBaseLogger;
  private embeddings: EmbeddingService;
  private rag: RAGRetriever;
  private loadedDocs: Map<string, string> = new Map(); // path -> hash

  constructor(
    config: DocumentLoaderConfig,
    embeddings: EmbeddingService,
    rag: RAGRetriever,
    logger: FastifyBaseLogger
  ) {
    this.config = {
      maxChunkSize: 4000, // ~1000 tokens
      chunkOverlap: 200,
      ...config,
    };
    this.embeddings = embeddings;
    this.rag = rag;
    this.logger = logger;
  }

  /**
   * Load all documents from the knowledge directory
   */
  async loadAll(): Promise<{ loaded: number; updated: number; skipped: number }> {
    this.logger.info({ dir: this.config.knowledgeDir }, 'Loading knowledge documents');
    const stats = { loaded: 0, updated: 0, skipped: 0 };
    try {
      const files = await this.findMarkdownFiles(this.config.knowledgeDir);
      for (const filePath of files) {
        const result = await this.loadDocument(filePath);
        if (result === 'loaded') stats.loaded++;
        else if (result === 'updated') stats.updated++;
        else stats.skipped++;
      }
      this.logger.info(stats, 'Knowledge documents loaded');
      return stats;
    } catch (error) {
      this.logger.error({ error }, 'Failed to load knowledge documents');
      throw error;
    }
  }

  /**
   * Load a single document
   */
  async loadDocument(filePath: string): Promise<'loaded' | 'updated' | 'skipped'> {
    try {
      // Read file content
      const content = await readFile(filePath, 'utf-8');
      const contentHash = this.hashContent(content);

      // Check if the document has changed
      const relativePath = relative(this.config.knowledgeDir, filePath);
      const existingHash = this.loadedDocs.get(relativePath);
      if (existingHash === contentHash) {
        this.logger.debug({ file: relativePath }, 'Document unchanged, skipping');
        return 'skipped';
      }
      const isUpdate = !!existingHash;

      // Parse and chunk the document
      const chunks = this.chunkDocument(content, relativePath);
      this.logger.info(
        { file: relativePath, chunks: chunks.length, update: isUpdate },
        'Processing document'
      );

      // Generate embeddings and store chunks
      for (const chunk of chunks) {
        const embedding = await this.embeddings.embed(chunk.content);
        // Create a unique ID for this chunk
        const chunkId = `global:${chunk.metadata.document_id}:${chunk.metadata.chunk_index}`;
        // Store in Qdrant as global knowledge
        await this.rag.storeGlobalKnowledge(chunkId, chunk.content, embedding, {
          ...chunk.metadata,
          type: 'knowledge_doc',
        });
      }

      // Update loaded-docs tracking
      this.loadedDocs.set(relativePath, contentHash);
      return isUpdate ? 'updated' : 'loaded';
    } catch (error) {
      this.logger.error({ error, file: filePath }, 'Failed to load document');
      throw error;
    }
  }

  /**
   * Reload a specific document (for updates)
   */
  async reloadDocument(filePath: string): Promise<void> {
    this.logger.info({ file: filePath }, 'Reloading document');
    await this.loadDocument(filePath);
  }

  /**
   * Chunk document by markdown headers with smart splitting
   */
  private chunkDocument(content: string, documentId: string): DocumentChunk[] {
    const chunks: DocumentChunk[] = [];
    const tags = this.extractTags(content);
    const lastModified = Date.now();

    // Split by headers (## or ###)
    const sections = this.splitByHeaders(content);
    let chunkIndex = 0;
    for (const section of sections) {
      // If the section is too large, split it further
      const subChunks = this.splitLargeSection(section.content);
      for (const subContent of subChunks) {
        if (subContent.trim().length === 0) continue;
        chunks.push({
          content: subContent,
          metadata: {
            document_id: documentId,
            chunk_index: chunkIndex++,
            content_hash: this.hashContent(content),
            last_updated: lastModified,
            tags,
            heading: section.heading,
            file_path: documentId,
          },
        });
      }
    }
    return chunks;
  }

  /**
   * Split document by markdown headers
   */
  private splitByHeaders(content: string): Array<{ heading?: string; content: string }> {
    const lines = content.split('\n');
    const sections: Array<{ heading?: string; content: string }> = [];
    let currentSection: string[] = [];
    let currentHeading: string | undefined;

    for (const line of lines) {
      // Check for a markdown header (##, ###, ####)
      const headerMatch = line.match(/^(#{2,4})\s+(.+)$/);
      if (headerMatch) {
        // Save the previous section
        if (currentSection.length > 0) {
          sections.push({
            heading: currentHeading,
            content: currentSection.join('\n'),
          });
        }
        // Start a new section
        currentHeading = headerMatch[2].trim();
        currentSection = [line];
      } else {
        currentSection.push(line);
      }
    }

    // Add the final section
    if (currentSection.length > 0) {
      sections.push({
        heading: currentHeading,
        content: currentSection.join('\n'),
      });
    }
    return sections;
  }

  /**
   * Split large sections into smaller chunks
   */
  private splitLargeSection(content: string): string[] {
    const maxSize = this.config.maxChunkSize!;
const overlap = this.config.chunkOverlap!;
if (content.length <= maxSize) {
return [content];
}
const chunks: string[] = [];
let start = 0;
while (start < content.length) {
const end = Math.min(start + maxSize, content.length);
let chunkEnd = end;
// Try to break at sentence boundary
if (end < content.length) {
const sentenceEnd = content.lastIndexOf('.', end);
const paragraphEnd = content.lastIndexOf('\n\n', end);
if (paragraphEnd > start + maxSize / 2) {
chunkEnd = paragraphEnd;
} else if (sentenceEnd > start + maxSize / 2) {
chunkEnd = sentenceEnd + 1;
}
}
chunks.push(content.substring(start, chunkEnd));
      // Stop at the end of the content; otherwise the overlap would rewind
      // `start` on the final chunk and loop forever
      if (chunkEnd >= content.length) break;
      start = Math.max(chunkEnd - overlap, start + 1);
}
return chunks;
}
/**
* Extract tags from document (frontmatter or first heading)
*/
private extractTags(content: string): string[] {
const tags: string[] = [];
// Try to extract from YAML frontmatter
const frontmatterMatch = content.match(/^---\n([\s\S]*?)\n---/);
if (frontmatterMatch) {
const frontmatter = frontmatterMatch[1];
const tagsMatch = frontmatter.match(/tags:\s*\[([^\]]+)\]/);
if (tagsMatch) {
tags.push(...tagsMatch[1].split(',').map((t) => t.trim()));
}
}
// Extract from first heading
const headingMatch = content.match(/^#\s+(.+)$/m);
if (headingMatch) {
tags.push(headingMatch[1].toLowerCase().replace(/\s+/g, '-'));
}
return tags;
}
/**
* Hash content for change detection
*/
private hashContent(content: string): string {
return createHash('md5').update(content).digest('hex');
}
/**
* Recursively find all markdown files
*/
private async findMarkdownFiles(dir: string): Promise<string[]> {
const files: string[] = [];
try {
const entries = await readdir(dir, { withFileTypes: true });
for (const entry of entries) {
const fullPath = join(dir, entry.name);
if (entry.isDirectory()) {
const subFiles = await this.findMarkdownFiles(fullPath);
files.push(...subFiles);
} else if (entry.isFile() && entry.name.endsWith('.md')) {
files.push(fullPath);
}
}
} catch (error) {
this.logger.warn({ error, dir }, 'Failed to read directory');
}
return files;
}
/**
* Get loaded document stats
*/
getStats(): { totalDocs: number; totalSize: number } {
return {
totalDocs: this.loadedDocs.size,
      // Approximate: only content hashes are retained, so this sums hash
      // string lengths rather than original document sizes
      totalSize: Array.from(this.loadedDocs.values()).reduce((sum, hash) => sum + hash.length, 0),
};
}
}


@@ -0,0 +1,270 @@
import type { FastifyBaseLogger } from 'fastify';
import { Ollama } from 'ollama';
/**
* Embedding provider configuration
*/
export interface EmbeddingConfig {
provider: 'ollama' | 'openai' | 'anthropic' | 'local' | 'voyage' | 'cohere' | 'none';
model?: string;
apiKey?: string;
dimensions?: number;
ollamaUrl?: string;
}
/**
* Embedding service for generating vectors from text
*
* Supports multiple providers:
* - Ollama (all-minilm, nomic-embed-text, mxbai-embed-large) - RECOMMENDED
* - OpenAI (text-embedding-3-small/large)
* - Voyage AI (voyage-2)
* - Cohere (embed-english-v3.0)
* - Local models (via transformers.js or Python sidecar)
* - None (for development without embeddings)
*
* Used by RAGRetriever to generate embeddings for storage and search.
*
* For production, use Ollama with all-minilm (90MB model, runs on CPU, ~100MB RAM).
* Ollama can run in-container or as a separate pod/sidecar.
*/
export class EmbeddingService {
private readonly model: string;
private readonly dimensions: number;
private ollama?: Ollama;
constructor(
private config: EmbeddingConfig,
private logger: FastifyBaseLogger
) {
// Set defaults based on provider
switch (config.provider) {
case 'ollama':
this.model = config.model || 'all-minilm';
this.dimensions = config.dimensions || 384;
this.ollama = new Ollama({
host: config.ollamaUrl || 'http://localhost:11434',
});
break;
case 'openai':
this.model = config.model || 'text-embedding-3-small';
this.dimensions = config.dimensions || 1536;
break;
      case 'anthropic': // Anthropic has no first-party embedding API; routed to Voyage (partner)
      case 'voyage':
this.model = config.model || 'voyage-2';
this.dimensions = config.dimensions || 1024;
break;
case 'cohere':
this.model = config.model || 'embed-english-v3.0';
this.dimensions = config.dimensions || 1024;
break;
case 'local':
this.model = config.model || 'all-MiniLM-L6-v2';
this.dimensions = config.dimensions || 384;
break;
case 'none':
// No embeddings configured - will return zero vectors
this.model = 'none';
this.dimensions = config.dimensions || 1536;
this.logger.warn('Embedding service initialized with provider=none - RAG will not function properly');
break;
default:
throw new Error(`Unknown embedding provider: ${config.provider}`);
}
if (config.provider !== 'none') {
this.logger.info(
{ provider: config.provider, model: this.model, dimensions: this.dimensions },
'Initialized embedding service'
);
}
}
/**
* Generate embedding for a single text
*/
async embed(text: string): Promise<number[]> {
if (this.config.provider === 'none') {
// Return zero vector when no embeddings configured
return new Array(this.dimensions).fill(0);
}
this.logger.debug({ textLength: text.length, provider: this.config.provider }, 'Generating embedding');
try {
switch (this.config.provider) {
case 'ollama':
return await this.embedOllama(text);
case 'openai':
return await this.embedOpenAI(text);
case 'anthropic':
case 'voyage':
return await this.embedVoyage(text);
case 'cohere':
return await this.embedCohere(text);
case 'local':
return await this.embedLocal(text);
default:
throw new Error(`Unknown provider: ${this.config.provider}`);
}
} catch (error) {
this.logger.error({ error, provider: this.config.provider }, 'Failed to generate embedding');
// Return zero vector as fallback to prevent crashes
return new Array(this.dimensions).fill(0);
}
}
/**
* Generate embeddings for multiple texts (batch)
*/
async embedBatch(texts: string[]): Promise<number[][]> {
this.logger.debug({ count: texts.length, provider: this.config.provider }, 'Generating batch embeddings');
// Ollama supports native batch operations
if (this.config.provider === 'ollama' && this.ollama) {
try {
const response = await this.ollama.embed({
model: this.model,
input: texts,
});
return response.embeddings;
} catch (error) {
        this.logger.error({ error }, 'Ollama batch embedding failed, falling back to per-text embedding');
        // Fall through to per-text processing below
}
}
    // Fallback: embed each text individually (calls run concurrently via Promise.all)
    const embeddings = await Promise.all(texts.map((text) => this.embed(text)));
return embeddings;
}
/**
* Get embedding dimensions
*/
getDimensions(): number {
return this.dimensions;
}
/**
* Get model name
*/
getModel(): string {
return this.model;
}
/**
* Generate embedding using Ollama
*/
private async embedOllama(text: string): Promise<number[]> {
if (!this.ollama) {
this.logger.error('Ollama client not initialized');
return new Array(this.dimensions).fill(0);
}
try {
const response = await this.ollama.embed({
model: this.model,
input: text,
});
// Ollama returns single embedding for single input
return response.embeddings[0];
} catch (error) {
this.logger.error({ error }, 'Ollama embedding failed, returning zero vector');
return new Array(this.dimensions).fill(0);
}
}
/**
* Generate embedding using OpenAI API
*/
private async embedOpenAI(text: string): Promise<number[]> {
if (!this.config.apiKey) {
this.logger.warn('OpenAI API key not configured, returning zero vector');
return new Array(this.dimensions).fill(0);
}
try {
const response = await fetch('https://api.openai.com/v1/embeddings', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${this.config.apiKey}`,
},
body: JSON.stringify({
model: this.model,
input: text,
}),
});
if (!response.ok) {
const errorText = await response.text();
throw new Error(`OpenAI API error: ${response.status} ${errorText}`);
}
const data = await response.json() as { data: Array<{ embedding: number[] }> };
return data.data[0].embedding;
} catch (error) {
this.logger.error({ error }, 'OpenAI embedding failed, returning zero vector');
return new Array(this.dimensions).fill(0);
}
}
/**
* Generate embedding using Voyage AI API (Anthropic partnership)
*/
private async embedVoyage(_text: string): Promise<number[]> {
// TODO: Implement Voyage AI embedding when API key available
// API endpoint: https://api.voyageai.com/v1/embeddings
this.logger.warn('Voyage AI embedding not yet implemented, returning zero vector');
return new Array(this.dimensions).fill(0);
}
/**
* Generate embedding using Cohere API
*/
private async embedCohere(_text: string): Promise<number[]> {
// TODO: Implement Cohere embedding when API key available
// API endpoint: https://api.cohere.ai/v1/embed
this.logger.warn('Cohere embedding not yet implemented, returning zero vector');
return new Array(this.dimensions).fill(0);
}
/**
* Generate embedding using local model
*/
private async embedLocal(_text: string): Promise<number[]> {
// TODO: Implement local embedding (via transformers.js or Python sidecar)
// Options:
// 1. transformers.js (pure JS/WebAssembly) - slower but self-contained
// 2. Python sidecar service running sentence-transformers - faster
// 3. ONNX runtime with pre-exported models - good balance
this.logger.warn('Local embedding not implemented, returning zero vector');
return new Array(this.dimensions).fill(0);
}
/**
* Calculate cosine similarity between two embeddings
*/
static cosineSimilarity(a: number[], b: number[]): number {
if (a.length !== b.length) {
throw new Error('Embeddings must have same dimensions');
}
let dotProduct = 0;
let normA = 0;
let normB = 0;
for (let i = 0; i < a.length; i++) {
dotProduct += a[i] * b[i];
normA += a[i] * a[i];
normB += b[i] * b[i];
}
    // Guard against zero vectors (e.g. the zero-vector fallbacks returned on errors)
    if (normA === 0 || normB === 0) {
      return 0;
    }
    return dotProduct / (Math.sqrt(normA) * Math.sqrt(normB));
}
}


@@ -0,0 +1,20 @@
// Memory layer exports
export { TieredCheckpointSaver } from './checkpoint-saver.js';
export { ConversationStore } from './conversation-store.js';
export { EmbeddingService } from './embedding-service.js';
export { RAGRetriever } from './rag-retriever.js';
export { DocumentLoader } from './document-loader.js';
export {
createUserContext,
touchContext,
isContextExpired,
serializeContext,
deserializeContext,
getDefaultCapabilities,
type UserContext,
type ActiveChannel,
type ChannelCapabilities,
type WorkspaceContext,
type MemoryChunk,
} from './session-context.js';


@@ -0,0 +1,210 @@
import type { FastifyBaseLogger } from 'fastify';
import { QdrantClient } from '../../clients/qdrant-client.js';
/**
* Vector point with metadata for Qdrant
*/
export interface VectorPoint {
id: string;
vector: number[];
payload: {
user_id: string;
session_id: string;
content: string;
role: 'user' | 'assistant' | 'system';
timestamp: number;
[key: string]: unknown;
};
}
/**
* Search result from Qdrant
*/
export interface SearchResult {
id: string;
score: number;
payload: VectorPoint['payload'];
}
/**
* Qdrant client configuration
*/
export interface QdrantConfig {
url: string;
apiKey?: string;
collectionName?: string;
}
/**
* RAG retriever using Qdrant for vector similarity search
*
* Features:
* - **Global namespace** (user_id="0") for platform knowledge
* - **User-specific namespaces** for personal memories
* - **Queries join both** global and user memories
* - Semantic search across conversation history
* - Context retrieval for agent prompts
* - User preference and pattern learning
*
* Architecture: Gateway-side vector store, user_id indexed for GDPR compliance
*/
export class RAGRetriever {
private qdrant: QdrantClient;
constructor(
config: QdrantConfig,
private logger: FastifyBaseLogger,
vectorDimension: number = 1536
) {
this.qdrant = new QdrantClient(config, logger, vectorDimension);
}
/**
* Initialize Qdrant collection with proper schema
*/
async initialize(): Promise<void> {
await this.qdrant.initialize();
}
/**
* Store conversation message as vector
*/
async storeMessage(
userId: string,
sessionId: string,
role: 'user' | 'assistant' | 'system',
content: string,
embedding: number[],
metadata?: Record<string, unknown>
): Promise<void> {
const id = `${userId}:${sessionId}:${Date.now()}`;
const payload = {
user_id: userId,
session_id: sessionId,
content,
role,
timestamp: Date.now(),
...metadata,
};
this.logger.debug(
{ userId, sessionId, role, contentLength: content.length },
'Storing message vector'
);
await this.qdrant.upsertPoint(id, embedding, payload);
}
/**
* Store global platform knowledge (user_id = "0")
*/
async storeGlobalKnowledge(
id: string,
content: string,
embedding: number[],
metadata?: Record<string, unknown>
): Promise<void> {
this.logger.debug({ id, contentLength: content.length }, 'Storing global knowledge');
await this.qdrant.storeGlobalKnowledge(id, embedding, {
session_id: 'global',
content,
role: 'system',
timestamp: Date.now(),
...metadata,
});
}
/**
* Search for relevant memories using vector similarity
* Queries BOTH global (user_id="0") and user-specific memories
*/
async search(
userId: string,
queryEmbedding: number[],
options?: {
limit?: number;
sessionId?: string;
minScore?: number;
timeRange?: { start: number; end: number };
}
): Promise<SearchResult[]> {
const limit = options?.limit || 5;
const minScore = options?.minScore || 0.7;
this.logger.debug(
{ userId, limit, sessionId: options?.sessionId },
'Searching for relevant memories (global + user)'
);
// Qdrant client handles the "should" logic: user_id = userId OR user_id = "0"
const results = await this.qdrant.search(userId, queryEmbedding, {
limit,
scoreThreshold: minScore,
sessionId: options?.sessionId,
timeRange: options?.timeRange,
});
return results.map(r => ({
id: r.id,
score: r.score,
payload: r.payload as VectorPoint['payload'],
}));
}
/**
* Get recent conversation history for context
*/
async getRecentHistory(
userId: string,
sessionId: string,
limit: number = 10
): Promise<SearchResult[]> {
this.logger.debug({ userId, sessionId, limit }, 'Getting recent conversation history');
const result = await this.qdrant.scroll(userId, {
sessionId,
limit,
});
return result.points.map(p => ({
id: p.id,
score: 1.0, // Not a search result, so score is 1.0
payload: p.payload as VectorPoint['payload'],
}));
}
/**
* Delete all vectors for a user (GDPR compliance)
*/
async deleteUserData(userId: string): Promise<void> {
this.logger.info({ userId }, 'Deleting all user vectors for GDPR compliance');
await this.qdrant.deleteUserData(userId);
}
/**
* Delete all vectors for a session
*/
async deleteSession(userId: string, sessionId: string): Promise<void> {
this.logger.info({ userId, sessionId }, 'Deleting session vectors');
await this.qdrant.deleteSession(userId, sessionId);
}
/**
* Get collection statistics
*/
async getStats(): Promise<{
vectorCount: number;
indexedCount: number;
collectionSize: number;
}> {
const info = await this.qdrant.getCollectionInfo();
return {
vectorCount: info.vectorsCount,
indexedCount: info.indexedVectorsCount,
collectionSize: info.pointsCount,
};
}
}


@@ -0,0 +1,226 @@
import type { UserLicense, ChannelType } from '../../types/user.js';
import type { BaseMessage } from '@langchain/core/messages';
/**
* Channel capabilities (what the channel supports)
*/
export interface ChannelCapabilities {
supportsMarkdown: boolean;
supportsImages: boolean;
supportsButtons: boolean;
supportsVoice: boolean;
supportsFiles: boolean;
maxMessageLength: number;
}
/**
* Active channel information for multi-channel routing
*/
export interface ActiveChannel {
type: ChannelType;
channelUserId: string; // Platform-specific ID (telegram_id, discord_id, etc)
capabilities: ChannelCapabilities;
metadata?: Record<string, unknown>;
}
/**
* Workspace state (current user context)
*/
export interface WorkspaceContext {
activeIndicators: string[];
activeStrategies: string[];
watchlist: string[];
recentQueries: string[];
preferences: Record<string, unknown>;
}
/**
* Memory chunk from RAG retrieval
*/
export interface MemoryChunk {
id: string;
content: string;
role: 'user' | 'assistant' | 'system';
timestamp: number;
relevanceScore: number;
metadata?: Record<string, unknown>;
}
/**
* Enhanced user context for agent harness
*
* Contains all necessary context for an agent session:
* - User identity and license
* - Active channel info (for multi-channel support)
* - Conversation state and history
* - RAG-retrieved relevant memories
* - Workspace state
*
* This object is passed to all agent nodes and tools.
*/
export interface UserContext {
// Identity
userId: string;
sessionId: string;
license: UserLicense;
// Channel context (for multi-channel routing)
activeChannel: ActiveChannel;
// Conversation state
conversationHistory: BaseMessage[];
currentMessage?: string;
// RAG context
relevantMemories: MemoryChunk[];
// Workspace state
workspaceState: WorkspaceContext;
// Metadata
createdAt: Date;
lastActivity: Date;
}
/**
* Get default channel capabilities based on type
*/
export function getDefaultCapabilities(channelType: ChannelType): ChannelCapabilities {
switch (channelType) {
case 'websocket':
return {
supportsMarkdown: true,
supportsImages: true,
supportsButtons: true,
supportsVoice: false,
supportsFiles: true,
maxMessageLength: 100000,
};
case 'telegram':
return {
supportsMarkdown: true,
supportsImages: true,
supportsButtons: true,
supportsVoice: true,
supportsFiles: true,
maxMessageLength: 4096,
};
case 'slack':
return {
supportsMarkdown: true,
supportsImages: true,
supportsButtons: true,
supportsVoice: false,
supportsFiles: true,
maxMessageLength: 40000,
};
case 'discord':
return {
supportsMarkdown: true,
supportsImages: true,
supportsButtons: true,
supportsVoice: true,
supportsFiles: true,
maxMessageLength: 2000,
};
default:
// Default fallback
return {
supportsMarkdown: false,
supportsImages: false,
supportsButtons: false,
supportsVoice: false,
supportsFiles: false,
maxMessageLength: 1000,
};
}
}
/**
* Create a new user context
*/
export function createUserContext(params: {
userId: string;
sessionId: string;
license: UserLicense;
channelType: ChannelType;
channelUserId: string;
channelCapabilities?: Partial<ChannelCapabilities>;
}): UserContext {
const defaultCapabilities = getDefaultCapabilities(params.channelType);
const capabilities: ChannelCapabilities = {
...defaultCapabilities,
...params.channelCapabilities,
};
return {
userId: params.userId,
sessionId: params.sessionId,
license: params.license,
activeChannel: {
type: params.channelType,
channelUserId: params.channelUserId,
capabilities,
},
conversationHistory: [],
relevantMemories: [],
workspaceState: {
activeIndicators: [],
activeStrategies: [],
watchlist: [],
recentQueries: [],
preferences: {},
},
createdAt: new Date(),
lastActivity: new Date(),
};
}
/**
* Update last activity timestamp
*/
export function touchContext(context: UserContext): UserContext {
return {
...context,
lastActivity: new Date(),
};
}
/**
* Check if context has expired (for TTL management)
*/
export function isContextExpired(context: UserContext, ttlSeconds: number): boolean {
const now = Date.now();
const lastActivity = context.lastActivity.getTime();
return (now - lastActivity) / 1000 > ttlSeconds;
}
/**
* Serialize context for Redis storage
*/
export function serializeContext(context: UserContext): string {
return JSON.stringify({
...context,
createdAt: context.createdAt.toISOString(),
lastActivity: context.lastActivity.toISOString(),
// Don't serialize conversation history (too large, use checkpoint instead)
conversationHistory: undefined,
});
}
/**
* Deserialize context from Redis storage
*/
export function deserializeContext(data: string): Partial<UserContext> {
const parsed = JSON.parse(data);
return {
...parsed,
createdAt: new Date(parsed.createdAt),
lastActivity: new Date(parsed.lastActivity),
conversationHistory: [], // Will be loaded from checkpoint
};
}


@@ -0,0 +1,146 @@
# Skills
Skills are individual capabilities that the agent can use to accomplish tasks. Each skill is a self-contained unit with:
- A markdown definition file (`*.skill.md`)
- A TypeScript implementation extending `BaseSkill`
- Clear input/output contracts
- Parameter validation
- Error handling
## Skill Structure
```
skills/
├── base-skill.ts # Base class
├── {skill-name}.skill.md # Definition
├── {skill-name}.ts # Implementation
└── README.md # This file
```
## Creating a New Skill
### 1. Create the Definition File
Create `{skill-name}.skill.md`:
```markdown
# My Skill
**Version:** 1.0.0
**Author:** Your Name
**Tags:** category1, category2
## Description
What does this skill do?
## Inputs
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| param1 | string | Yes | What it does |
## Outputs
What does it return?
## Example Usage
Show code example
```
### 2. Create the Implementation
Create `{skill-name}.ts`:
```typescript
import { BaseSkill, SkillInput, SkillResult, SkillMetadata } from './base-skill.js';
export class MySkill extends BaseSkill {
getMetadata(): SkillMetadata {
return {
name: 'my-skill',
description: 'What it does',
version: '1.0.0',
};
}
getParametersSchema(): Record<string, unknown> {
return {
type: 'object',
required: ['param1'],
properties: {
param1: { type: 'string' },
},
};
}
validateInput(parameters: Record<string, unknown>): boolean {
return typeof parameters.param1 === 'string';
}
async execute(input: SkillInput): Promise<SkillResult> {
this.logStart(input);
try {
// Your implementation here
const result = this.success({ data: 'result' });
this.logEnd(result);
return result;
} catch (error) {
return this.error(error as Error);
}
}
}
```
### 3. Register the Skill
Add to `index.ts`:
```typescript
export { MySkill } from './my-skill.js';
```
## Using Skills in Workflows
Skills can be used in LangGraph workflows:
```typescript
import { MarketAnalysisSkill } from '../skills/market-analysis.js';
const analyzeNode = async (state) => {
const skill = new MarketAnalysisSkill(logger, model);
const result = await skill.execute({
context: state.userContext,
parameters: {
ticker: state.ticker,
period: '4h',
},
});
return {
analysis: result.data,
};
};
```
## Best Practices
1. **Single Responsibility**: Each skill should do one thing well
2. **Validation**: Always validate inputs thoroughly
3. **Error Handling**: Use try/catch and return meaningful errors
4. **Logging**: Use `logStart()` and `logEnd()` helpers
5. **Documentation**: Keep the `.skill.md` file up to date
6. **Testing**: Write unit tests for skill logic
7. **Idempotency**: Skills should be safe to retry
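Points 2 and 6 pair well: because `validateInput` takes plain parameters and returns a boolean, it can be unit-tested without a logger or model. A minimal sketch (the `validateTickerInput` function is a hypothetical stand-in for a skill's `validateInput`):

```typescript
// Hypothetical stand-in for a skill's validateInput(): requires two string params.
function validateTickerInput(parameters: Record<string, unknown>): boolean {
  return typeof parameters.ticker === 'string' && typeof parameters.period === 'string';
}

// Assertion-style checks (swap in your test runner's expect() in real tests)
console.assert(validateTickerInput({ ticker: 'BTC/USDT', period: '4h' }) === true);
console.assert(validateTickerInput({ ticker: 'BTC/USDT' }) === false);
console.assert(validateTickerInput({ ticker: 42, period: '4h' }) === false);
```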
## Available Skills
- **market-analysis**: Analyze market conditions and trends
- *(Add more as you build them)*
## Skill Categories
- **Market Data**: Query and analyze market information
- **Trading**: Execute trades, manage positions
- **Analysis**: Technical and fundamental analysis
- **Risk**: Risk assessment and management
- **Utilities**: Helper functions and utilities


@@ -0,0 +1,128 @@
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import type { FastifyBaseLogger } from 'fastify';
import type { UserContext } from '../memory/session-context.js';
/**
* Skill metadata
*/
export interface SkillMetadata {
name: string;
description: string;
version: string;
author?: string;
tags?: string[];
}
/**
* Skill input parameters
*/
export interface SkillInput {
context: UserContext;
parameters: Record<string, unknown>;
}
/**
* Skill execution result
*/
export interface SkillResult {
success: boolean;
data?: unknown;
error?: string;
metadata?: Record<string, unknown>;
}
/**
* Base skill interface
*
* Skills are individual capabilities that the agent can use.
* Each skill is defined by:
* - A markdown file (*.skill.md) describing purpose, inputs, outputs
* - A TypeScript implementation extending BaseSkill
*
* Skills can use:
* - LLM calls for reasoning
* - User's MCP server tools
* - Platform tools (market data, charts, etc.)
*/
export abstract class BaseSkill {
protected logger: FastifyBaseLogger;
protected model?: BaseChatModel;
constructor(logger: FastifyBaseLogger, model?: BaseChatModel) {
this.logger = logger;
this.model = model;
}
/**
* Get skill metadata
*/
abstract getMetadata(): SkillMetadata;
/**
* Validate input parameters
*/
abstract validateInput(parameters: Record<string, unknown>): boolean;
/**
* Execute the skill
*/
abstract execute(input: SkillInput): Promise<SkillResult>;
/**
* Get required parameters schema (JSON Schema format)
*/
abstract getParametersSchema(): Record<string, unknown>;
/**
* Helper: Log skill execution start
*/
protected logStart(input: SkillInput): void {
const metadata = this.getMetadata();
this.logger.info(
{
skill: metadata.name,
userId: input.context.userId,
sessionId: input.context.sessionId,
parameters: input.parameters,
},
'Starting skill execution'
);
}
/**
* Helper: Log skill execution end
*/
protected logEnd(result: SkillResult): void {
const metadata = this.getMetadata();
this.logger.info(
{
skill: metadata.name,
success: result.success,
error: result.error,
},
'Skill execution completed'
);
}
/**
* Helper: Create success result
*/
protected success(data: unknown, metadata?: Record<string, unknown>): SkillResult {
return {
success: true,
data,
metadata,
};
}
/**
* Helper: Create error result
*/
protected error(error: string | Error, metadata?: Record<string, unknown>): SkillResult {
return {
success: false,
error: error instanceof Error ? error.message : error,
metadata,
};
}
}


@@ -0,0 +1,10 @@
// Skills exports
export {
BaseSkill,
type SkillMetadata,
type SkillInput,
type SkillResult,
} from './base-skill.js';
export { MarketAnalysisSkill } from './market-analysis.js';


@@ -0,0 +1,78 @@
# Market Analysis Skill
**Version:** 1.0.0
**Author:** Dexorder AI Platform
**Tags:** market-data, analysis, trading
## Description
Analyzes market conditions for a given ticker and timeframe. Provides insights on:
- Price trends and patterns
- Volume analysis
- Support and resistance levels
- Market sentiment indicators
## Inputs
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `ticker` | string | Yes | Market identifier (e.g., "BINANCE:BTC/USDT") |
| `period` | string | Yes | Analysis period ("1h", "4h", "1d", "1w") |
| `startTime` | number | No | Start timestamp (microseconds), defaults to 7 days ago |
| `endTime` | number | No | End timestamp (microseconds), defaults to now |
| `indicators` | string[] | No | Additional indicators to include (e.g., ["RSI", "MACD"]) |
## Outputs
```typescript
{
success: true,
data: {
ticker: string,
period: string,
timeRange: { start: number, end: number },
trend: "bullish" | "bearish" | "neutral",
priceChange: number,
volumeProfile: {
average: number,
recent: number,
trend: "increasing" | "decreasing" | "stable"
},
supportLevels: number[],
resistanceLevels: number[],
indicators: Record<string, unknown>,
analysis: string // LLM-generated natural language analysis
}
}
```
## Example Usage
```typescript
const skill = new MarketAnalysisSkill(logger, model);
const result = await skill.execute({
context: userContext,
parameters: {
ticker: "BINANCE:BTC/USDT",
period: "4h",
indicators: ["RSI", "MACD"]
}
});
console.log(result.data.analysis);
// "Bitcoin is showing bullish momentum with RSI at 65 and MACD crossing above signal line..."
```
## Implementation Notes
- Queries OHLC data from Iceberg warehouse
- Uses LLM for natural language analysis
- Caches results for 5 minutes to reduce computation
- Falls back to reduced analysis if Iceberg unavailable
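The 5-minute result cache noted above can be as simple as an in-memory TTL map; a sketch, with illustrative names rather than the actual implementation:

```typescript
// Illustrative 5-minute TTL cache for analysis results, keyed by e.g. `${ticker}:${period}`.
const CACHE_TTL_MS = 5 * 60 * 1000;

interface CacheEntry<T> {
  value: T;
  expiresAt: number;
}

class ResultCache<T> {
  private entries = new Map<string, CacheEntry<T>>();

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry || entry.expiresAt <= Date.now()) {
      this.entries.delete(key); // evict stale entries lazily on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expiresAt: Date.now() + CACHE_TTL_MS });
  }
}
```

On a cache hit the skill can skip both the Iceberg query and the LLM call; misses and expired entries fall through to a fresh analysis.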
## Dependencies
- Iceberg client (market data)
- LLM model (analysis generation)
- User's MCP server (optional custom indicators)


@@ -0,0 +1,198 @@
import { BaseSkill, type SkillInput, type SkillResult, type SkillMetadata } from './base-skill.js';
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import type { FastifyBaseLogger } from 'fastify';
import { HumanMessage, SystemMessage } from '@langchain/core/messages';
/**
* Market analysis skill implementation
*
* See market-analysis.skill.md for full documentation
*/
export class MarketAnalysisSkill extends BaseSkill {
constructor(logger: FastifyBaseLogger, model?: BaseChatModel) {
super(logger, model);
}
getMetadata(): SkillMetadata {
return {
name: 'market-analysis',
description: 'Analyze market conditions for a given ticker and timeframe',
version: '1.0.0',
author: 'Dexorder AI Platform',
tags: ['market-data', 'analysis', 'trading'],
};
}
getParametersSchema(): Record<string, unknown> {
return {
type: 'object',
required: ['ticker', 'period'],
properties: {
ticker: {
type: 'string',
description: 'Market identifier (e.g., "BINANCE:BTC/USDT")',
},
period: {
type: 'string',
enum: ['1h', '4h', '1d', '1w'],
description: 'Analysis period',
},
startTime: {
type: 'number',
description: 'Start timestamp in microseconds',
},
endTime: {
type: 'number',
description: 'End timestamp in microseconds',
},
indicators: {
type: 'array',
items: { type: 'string' },
description: 'Additional indicators to include',
},
},
};
}
validateInput(parameters: Record<string, unknown>): boolean {
if (!parameters.ticker || typeof parameters.ticker !== 'string') {
return false;
}
if (!parameters.period || typeof parameters.period !== 'string') {
return false;
}
return true;
}
async execute(input: SkillInput): Promise<SkillResult> {
this.logStart(input);
if (!this.validateInput(input.parameters)) {
return this.error('Invalid parameters: ticker and period are required');
}
try {
const ticker = input.parameters.ticker as string;
const period = input.parameters.period as string;
const indicators = (input.parameters.indicators as string[]) || [];
// 1. Fetch OHLC data from Iceberg
// TODO: Implement Iceberg query
// const ohlcData = await this.fetchOHLCData(ticker, period, startTime, endTime);
const ohlcData = this.getMockOHLCData(); // Placeholder
// 2. Calculate technical indicators
const analysis = this.calculateAnalysis(ohlcData, indicators);
// 3. Generate natural language analysis using LLM
let narrativeAnalysis = '';
if (this.model) {
narrativeAnalysis = await this.generateNarrativeAnalysis(
ticker,
period,
analysis
);
}
const result = this.success({
ticker,
period,
timeRange: {
start: ohlcData.startTime,
end: ohlcData.endTime,
},
trend: analysis.trend,
priceChange: analysis.priceChange,
volumeProfile: analysis.volumeProfile,
supportLevels: analysis.supportLevels,
resistanceLevels: analysis.resistanceLevels,
indicators: analysis.indicators,
analysis: narrativeAnalysis,
});
this.logEnd(result);
return result;
} catch (error) {
const result = this.error(error as Error);
this.logEnd(result);
return result;
}
}
/**
* Calculate technical analysis from OHLC data
*/
private calculateAnalysis(
ohlcData: any,
_requestedIndicators: string[]
): any {
// TODO: Implement proper technical analysis
// This is a simplified placeholder
const priceChange = ((ohlcData.close - ohlcData.open) / ohlcData.open) * 100;
const trend = priceChange > 1 ? 'bullish' : priceChange < -1 ? 'bearish' : 'neutral';
return {
trend,
priceChange,
volumeProfile: {
average: ohlcData.avgVolume,
recent: ohlcData.currentVolume,
trend: ohlcData.currentVolume > ohlcData.avgVolume ? 'increasing' : 'decreasing',
},
supportLevels: [ohlcData.low * 0.98, ohlcData.low * 0.95],
resistanceLevels: [ohlcData.high * 1.02, ohlcData.high * 1.05],
indicators: {},
};
}
/**
* Generate natural language analysis using LLM
*/
private async generateNarrativeAnalysis(
ticker: string,
period: string,
analysis: any
): Promise<string> {
if (!this.model) {
return 'LLM not available for narrative analysis';
}
const systemPrompt = `You are a professional market analyst.
Provide concise, actionable market analysis based on technical data.
Focus on key insights and avoid jargon.`;
const userPrompt = `Analyze the following market data for ${ticker} (${period}):
Trend: ${analysis.trend}
Price Change: ${analysis.priceChange.toFixed(2)}%
Volume: ${analysis.volumeProfile.trend}
Support Levels: ${analysis.supportLevels.join(', ')}
Resistance Levels: ${analysis.resistanceLevels.join(', ')}
Provide a 2-3 sentence analysis suitable for a trading decision.`;
const response = await this.model.invoke([
new SystemMessage(systemPrompt),
new HumanMessage(userPrompt),
]);
return response.content as string;
}
/**
* Mock OHLC data (placeholder until Iceberg integration)
*/
private getMockOHLCData(): any {
return {
      startTime: Date.now() - 7 * 24 * 60 * 60 * 1000, // NOTE: milliseconds; the parameter schema above describes microseconds
endTime: Date.now(),
open: 50000,
high: 52000,
low: 49000,
close: 51500,
avgVolume: 1000000,
currentVolume: 1200000,
};
}
}


@@ -0,0 +1,273 @@
# Subagents
Specialized agents with dedicated knowledge bases and system prompts.
## What are Subagents?
Subagents are focused AI agents designed for specific tasks. Unlike general-purpose agents, each subagent has:
- **Specialized knowledge**: Multi-file memory directory with domain-specific info
- **Custom system prompt**: Tailored instructions for the task
- **Model override**: Can use different models than the main agent
- **Capability tags**: Declare what they can do
## Directory Structure
```
subagents/
├── base-subagent.ts # Base class
├── {subagent-name}/
│ ├── config.yaml # Configuration
│ ├── system-prompt.md # System instructions
│ ├── memory/ # Knowledge base (multi-file)
│ │ ├── file1.md
│ │ ├── file2.md
│ │ └── file3.md
│ └── index.ts # Implementation
└── README.md # This file
```
## Creating a New Subagent
### 1. Create Directory Structure
```bash
mkdir -p subagents/my-subagent/memory
```
### 2. Create config.yaml
```yaml
name: my-subagent
description: What it does
# Model override (optional)
model: claude-3-5-sonnet-20241022
temperature: 0.3
maxTokens: 4096
# Memory files to load
memoryFiles:
- guidelines.md
- examples.md
- best-practices.md
# System prompt file
systemPromptFile: system-prompt.md
# Capabilities
capabilities:
- capability1
- capability2
```
### 3. Write system-prompt.md
```markdown
# My Subagent System Prompt
You are an expert in [domain].
## Your Role
[What the subagent does]
## Approach
1. [Step 1]
2. [Step 2]
## Output Format
[How to structure responses]
```
### 4. Create Memory Files
Split knowledge into logical files:
```markdown
<!-- memory/guidelines.md -->
# Guidelines
## What to Check
- Thing 1
- Thing 2
## What to Avoid
- Anti-pattern 1
- Anti-pattern 2
```
### 5. Implement Subagent
```typescript
// index.ts
import { BaseSubagent, SubagentConfig, SubagentContext } from '../base-subagent.js';
import { BaseChatModel } from '@langchain/core/language_models/chat_models';
import { FastifyBaseLogger } from 'fastify';
export class MySubagent extends BaseSubagent {
constructor(config: SubagentConfig, model: BaseChatModel, logger: FastifyBaseLogger) {
super(config, model, logger);
}
async execute(context: SubagentContext, input: string): Promise<string> {
this.logger.info({ subagent: this.getName() }, 'Executing subagent');
const messages = this.buildMessages(context, input);
const response = await this.model.invoke(messages);
return response.content as string;
}
}
// Factory function
export async function createMySubagent(
model: BaseChatModel,
logger: FastifyBaseLogger,
basePath: string
): Promise<MySubagent> {
const { readFile } = await import('fs/promises');
const { join } = await import('path');
const yaml = await import('js-yaml');
const configPath = join(basePath, 'config.yaml');
const configContent = await readFile(configPath, 'utf-8');
const config = yaml.load(configContent) as SubagentConfig;
const subagent = new MySubagent(config, model, logger);
await subagent.initialize(basePath);
return subagent;
}
```
### 6. Export from index.ts
```typescript
// subagents/index.ts
export { MySubagent, createMySubagent } from './my-subagent/index.js';
```
## Using Subagents
### Direct Usage
```typescript
import { createMySubagent } from './harness/subagents';
const subagent = await createMySubagent(model, logger, basePath);
const result = await subagent.execute({ userContext }, 'input text');
```
### In Workflows
```typescript
const analyzeNode = async (state) => {
const result = await mySubagent.execute(
{ userContext: state.userContext },
state.input
);
return { analysis: result };
};
```
### With Routing
Add to `config/subagent-routing.yaml`:
```yaml
subagents:
my-subagent:
enabled: true
path: src/harness/subagents/my-subagent
triggers:
keywords:
- "keyword1"
- "keyword2"
patterns:
- "pattern.*regex"
priority: medium
timeout: 30000
```
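The routing file is declarative, so a dispatcher has to turn it into an actual match at runtime. A minimal sketch of how keyword/pattern triggers and priority might be evaluated (the `RouteEntry` shape and `pickSubagent` helper are illustrative, not part of the harness):

```typescript
interface RouteEntry {
  name: string;
  enabled: boolean;
  keywords: string[];
  patterns: string[]; // regex sources, as in subagent-routing.yaml
  priority: 'low' | 'medium' | 'high';
}

const PRIORITY_RANK = { low: 0, medium: 1, high: 2 } as const;

// Return the highest-priority enabled subagent whose triggers match the input.
function pickSubagent(input: string, routes: RouteEntry[]): string | null {
  const text = input.toLowerCase();
  const matches = routes.filter(
    (r) =>
      r.enabled &&
      (r.keywords.some((k) => text.includes(k.toLowerCase())) ||
        r.patterns.some((p) => new RegExp(p, 'i').test(input)))
  );
  matches.sort((a, b) => PRIORITY_RANK[b.priority] - PRIORITY_RANK[a.priority]);
  return matches[0]?.name ?? null;
}
```

A production router would likely cache compiled regexes and enforce the `timeout` field; this sketch only shows the matching order.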
## Multi-File Memory Benefits
### Why Split Memory?
1. **Organization**: Easier to maintain separate concerns
2. **Versioning**: Update specific files without touching others
3. **Collaboration**: Multiple people can work on different files
4. **Context Management**: LLM sees structured knowledge
### Example Split
For a code reviewer:
- `review-guidelines.md`: What to check
- `common-patterns.md`: Good/bad examples
- `best-practices.md`: Industry standards
All files are loaded and concatenated at initialization.
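For reference, the loader in `BaseSubagent.initialize()` prefixes each memory file with a heading and joins the pieces with horizontal rules. The assembly can be sketched as:

```typescript
// Mirrors how BaseSubagent builds its knowledge-base context:
// each memory file becomes "# <filename>\n\n<content>", joined by "---" rules
// and appended to the system prompt under a "# Knowledge Base" heading.
function buildKnowledgeBase(files: Array<{ name: string; content: string }>): string {
  const sections = files.map((f) => `# ${f.name}\n\n${f.content}`);
  return '# Knowledge Base\n\n' + sections.join('\n\n---\n\n');
}
```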
## Best Practices
### Memory Files
- **Be Specific**: Include concrete examples, not just theory
- **Use Markdown**: Tables, lists, code blocks for clarity
- **Keep Focused**: Each file should have a clear purpose
- **Update Regularly**: Improve based on real usage
### System Prompts
- **Define Role Clearly**: "You are a [specific role]"
- **Specify Output Format**: Show examples of expected output
- **Set Constraints**: What to do, what not to do
- **Give Context**: Why this subagent exists
### Configuration
- **Model Selection**: Use faster models for simple tasks
- **Temperature**: Lower (0.2-0.3) for precise work, higher (0.7-0.9) for creative
- **Capabilities**: Tag accurately for routing
## Available Subagents
### code-reviewer
Reviews trading strategy code for bugs, performance, and best practices.
**Capabilities:**
- `static_analysis`
- `performance_review`
- `security_audit`
- `code_quality`
**Memory:**
- Review guidelines
- Common patterns
- Best practices
### risk-analyzer (TODO)
Analyzes trading risk and exposure.
### market-analyst (TODO)
Provides market analysis and insights.
## Troubleshooting
### Memory Files Not Loading
- Check file paths in config.yaml
- Ensure files exist in memory/ directory
- Check file permissions
### Subagent Not Being Routed
- Verify triggers in subagent-routing.yaml
- Check priority (higher priority matches first)
- Ensure enabled: true
### Model Errors
- Verify API keys in environment
- Check model override is valid
- Ensure token limits not exceeded


@@ -0,0 +1,179 @@
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import type { BaseMessage } from '@langchain/core/messages';
import { SystemMessage, HumanMessage } from '@langchain/core/messages';
import type { FastifyBaseLogger } from 'fastify';
import type { UserContext } from '../memory/session-context.js';
import { readFile } from 'fs/promises';
import { join } from 'path';
/**
* Subagent configuration (loaded from config.yaml)
*/
export interface SubagentConfig {
name: string;
model?: string; // Override default model
temperature?: number;
maxTokens?: number;
memoryFiles: string[]; // Memory files to load from memory/ directory
capabilities: string[];
systemPromptFile?: string; // Path to system-prompt.md
}
/**
* Subagent execution context
*/
export interface SubagentContext {
userContext: UserContext;
conversationHistory?: BaseMessage[];
}
/**
* Base subagent class
*
* Subagents are specialized agents with:
* - Dedicated system prompts
* - Multi-file memory (guidelines, patterns, best practices)
* - Optional model override
* - Specific capabilities
*
* Structure:
* subagents/
* code-reviewer/
* config.yaml
* system-prompt.md
* memory/
* review-guidelines.md
* common-patterns.md
* index.ts
*/
export abstract class BaseSubagent {
protected logger: FastifyBaseLogger;
protected model: BaseChatModel;
protected config: SubagentConfig;
protected systemPrompt?: string;
protected memoryContext: string[] = [];
constructor(
config: SubagentConfig,
model: BaseChatModel,
logger: FastifyBaseLogger
) {
this.config = config;
this.model = model;
this.logger = logger;
}
/**
* Initialize subagent: load system prompt and memory files
*/
async initialize(basePath: string): Promise<void> {
this.logger.info({ subagent: this.config.name }, 'Initializing subagent');
// Load system prompt
if (this.config.systemPromptFile) {
const promptPath = join(basePath, this.config.systemPromptFile);
this.systemPrompt = await this.loadFile(promptPath);
}
// Load memory files
for (const memoryFile of this.config.memoryFiles) {
const memoryPath = join(basePath, 'memory', memoryFile);
const content = await this.loadFile(memoryPath);
if (content) {
this.memoryContext.push(`# ${memoryFile}\n\n${content}`);
}
}
this.logger.info(
{
subagent: this.config.name,
memoryFiles: this.config.memoryFiles.length,
systemPromptLoaded: !!this.systemPrompt,
},
'Subagent initialized'
);
}
/**
* Execute subagent with given input
*/
abstract execute(
context: SubagentContext,
input: string
): Promise<string>;
/**
* Stream execution (optional, default to non-streaming)
*/
async *stream(
context: SubagentContext,
input: string
): AsyncGenerator<string> {
const result = await this.execute(context, input);
yield result;
}
/**
* Build messages with system prompt and memory context
*/
protected buildMessages(
context: SubagentContext,
currentInput: string
): BaseMessage[] {
const messages: BaseMessage[] = [];
// System prompt with memory context
let systemContent = this.systemPrompt || `You are ${this.config.name}.`;
if (this.memoryContext.length > 0) {
systemContent += '\n\n# Knowledge Base\n\n';
systemContent += this.memoryContext.join('\n\n---\n\n');
}
messages.push(new SystemMessage(systemContent));
// Add conversation history if provided
if (context.conversationHistory && context.conversationHistory.length > 0) {
messages.push(...context.conversationHistory);
}
// Add current input
messages.push(new HumanMessage(currentInput));
return messages;
}
/**
* Load file content
*/
private async loadFile(path: string): Promise<string | undefined> {
try {
const content = await readFile(path, 'utf-8');
return content;
} catch (error) {
this.logger.warn({ error, path }, 'Failed to load file');
return undefined;
}
}
/**
* Get subagent name
*/
getName(): string {
return this.config.name;
}
/**
* Get subagent capabilities
*/
getCapabilities(): string[] {
return this.config.capabilities;
}
/**
* Check if subagent has a specific capability
*/
hasCapability(capability: string): boolean {
return this.config.capabilities.includes(capability);
}
}


@@ -0,0 +1,26 @@
# Code Reviewer Subagent Configuration
name: code-reviewer
description: Reviews trading strategy code for bugs, performance issues, and best practices
# Model configuration (optional override)
model: claude-3-5-sonnet-20241022
temperature: 0.3
maxTokens: 4096
# Memory files to load from memory/ directory
memoryFiles:
- review-guidelines.md
- common-patterns.md
- best-practices.md
# System prompt file
systemPromptFile: system-prompt.md
# Capabilities this subagent provides
capabilities:
- static_analysis
- performance_review
- security_audit
- code_quality
- best_practices


@@ -0,0 +1,91 @@
import { BaseSubagent, type SubagentConfig, type SubagentContext } from '../base-subagent.js';
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import type { FastifyBaseLogger } from 'fastify';
/**
* Code Reviewer Subagent
*
* Specialized agent for reviewing trading strategy code.
* Reviews for:
* - Logic errors and bugs
* - Performance issues
* - Security vulnerabilities
* - Trading best practices
* - Code quality
*
* Loads knowledge from multi-file memory:
* - review-guidelines.md: What to check for
* - common-patterns.md: Good and bad examples
* - best-practices.md: Industry standards
*/
export class CodeReviewerSubagent extends BaseSubagent {
constructor(config: SubagentConfig, model: BaseChatModel, logger: FastifyBaseLogger) {
super(config, model, logger);
}
/**
* Review code and provide structured feedback
*/
async execute(context: SubagentContext, code: string): Promise<string> {
this.logger.info(
{
subagent: this.getName(),
userId: context.userContext.userId,
codeLength: code.length,
},
'Reviewing code'
);
const messages = this.buildMessages(context, `Review the following trading strategy code:\n\n\`\`\`typescript\n${code}\n\`\`\``);
const response = await this.model.invoke(messages);
return response.content as string;
}
/**
* Stream code review
*/
async *stream(context: SubagentContext, code: string): AsyncGenerator<string> {
this.logger.info(
{
subagent: this.getName(),
userId: context.userContext.userId,
codeLength: code.length,
},
'Streaming code review'
);
const messages = this.buildMessages(context, `Review the following trading strategy code:\n\n\`\`\`typescript\n${code}\n\`\`\``);
const stream = await this.model.stream(messages);
for await (const chunk of stream) {
yield chunk.content as string;
}
}
}
/**
* Factory function to create and initialize CodeReviewerSubagent
*/
export async function createCodeReviewerSubagent(
model: BaseChatModel,
logger: FastifyBaseLogger,
basePath: string
): Promise<CodeReviewerSubagent> {
const { readFile } = await import('fs/promises');
const { join } = await import('path');
const yaml = await import('js-yaml');
// Load config
const configPath = join(basePath, 'config.yaml');
const configContent = await readFile(configPath, 'utf-8');
const config = yaml.load(configContent) as SubagentConfig;
// Create and initialize subagent
const subagent = new CodeReviewerSubagent(config, model, logger);
await subagent.initialize(basePath);
return subagent;
}


@@ -0,0 +1,227 @@
# Trading Strategy Best Practices
## Code Organization
### Separation of Concerns
```typescript
// Good: Clear separation
class Strategy {
async analyze(data: MarketData): Promise<Signal> { }
}
class RiskManager {
validateSignal(signal: Signal): boolean { }
}
class ExecutionEngine {
async execute(signal: Signal): Promise<Order> { }
}
// Bad: Everything in one function
async function trade() {
// Analysis, risk, execution all mixed
}
```
### Configuration Management
```typescript
// Good: External configuration
interface StrategyConfig {
stopLossPercent: number;
takeProfitPercent: number;
maxPositionSize: number;
riskPerTrade: number;
}
const config = loadConfig('strategy.yaml');
// Bad: Hardcoded values scattered throughout
const stopLoss = price * 0.95; // What if you want to change this?
```
## Testing Considerations
### Testable Code
```typescript
// Good: Pure functions, easy to test
function calculateRSI(prices: number[], period: number = 14): number {
  // ... pure RSI calculation over `prices`, no side effects
  return rsi; // result of the elided calculation above
}
// Bad: Hard to test
async function strategy() {
const data = await fetchLiveData(); // Can't control in tests
const signal = analyze(data);
await executeTrade(signal); // Side effects
}
```
### Mock-Friendly Design
```typescript
// Good: Dependency injection
class Strategy {
constructor(
private dataProvider: DataProvider,
private executor: OrderExecutor
) {}
async run() {
const data = await this.dataProvider.getData();
// ...
}
}
// In tests: inject mocks
const strategy = new Strategy(mockDataProvider, mockExecutor);
```
## Performance Optimization
### Avoid Recalculation
```typescript
// Good: Cache indicator results
class IndicatorCache {
private cache = new Map<string, { value: number, timestamp: number }>();
get(key: string, ttl: number, calculator: () => number): number {
const cached = this.cache.get(key);
if (cached && Date.now() - cached.timestamp < ttl) {
return cached.value;
}
const value = calculator();
this.cache.set(key, { value, timestamp: Date.now() });
return value;
}
}
// Bad: Recalculate every time
for (const ticker of tickers) {
const rsi = calculateRSI(await getData(ticker)); // Slow
}
```
### Batch Operations
```typescript
// Good: Batch API calls
const results = await Promise.all(
tickers.map(ticker => dataProvider.getOHLC(ticker))
);
// Bad: Sequential API calls
const results = [];
for (const ticker of tickers) {
results.push(await dataProvider.getOHLC(ticker)); // Slow
}
```
## Error Handling
### Graceful Degradation
```typescript
// Good: Fallback behavior
async function getMarketData(ticker: string): Promise<OHLC[]> {
try {
return await primarySource.fetch(ticker);
} catch (error) {
logger.warn('Primary source failed, trying backup');
try {
return await backupSource.fetch(ticker);
} catch (backupError) {
logger.error('All sources failed');
return getCachedData(ticker); // Last resort
}
}
}
// Bad: Let it crash
async function getMarketData(ticker: string) {
return await api.fetch(ticker); // Uncaught errors
}
```
### Detailed Logging
```typescript
// Good: Structured logging with context
logger.info({
action: 'order_placed',
ticker: 'BTC/USDT',
side: 'buy',
size: 0.1,
price: 50000,
orderId: 'abc123',
strategy: 'mean-reversion'
});
// Bad: String concatenation
console.log('Placed order'); // No context
```
## Documentation
### Self-Documenting Code
```typescript
// Good: Clear naming and JSDoc
/**
* Calculate position size using Kelly Criterion
* @param winRate Probability of winning (0-1)
* @param avgWin Average win amount
* @param avgLoss Average loss amount
* @param capital Total available capital
* @returns Optimal position size in base currency
*/
function calculateKellyPosition(
winRate: number,
avgWin: number,
avgLoss: number,
capital: number
): number {
const kellyPercent = (winRate * avgWin - (1 - winRate) * avgLoss) / avgWin;
return Math.max(0, Math.min(kellyPercent * capital, capital * 0.25)); // Cap at 25%
}
// Bad: Cryptic names
function calc(w: number, a: number, b: number, c: number) {
return (w * a - (1 - w) * b) / a * c;
}
```
## Security
### Input Validation
```typescript
// Good: Validate all external inputs
function validateTicker(ticker: string): boolean {
return /^[A-Z]+:[A-Z]+\/[A-Z]+$/.test(ticker);
}
function validatePeriod(period: string): boolean {
return ['1m', '5m', '15m', '1h', '4h', '1d', '1w'].includes(period);
}
// Bad: Trust user input
function getOHLC(ticker: string, period: string) {
return db.query(`SELECT * FROM ohlc WHERE ticker='${ticker}'`); // SQL injection!
}
```
### Rate Limiting
```typescript
// Good: Prevent API abuse
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));
class RateLimiter {
  private calls: number[] = [];
  async throttle(maxCallsPerMinute: number): Promise<void> {
    const now = Date.now();
    this.calls = this.calls.filter(t => now - t < 60000);
    if (this.calls.length >= maxCallsPerMinute) {
      const wait = 60000 - (now - this.calls[0]);
      await sleep(wait);
    }
    this.calls.push(now);
  }
}
```


@@ -0,0 +1,124 @@
# Common Trading Strategy Patterns
## Pattern: Trend Following
```typescript
// Good: Clear trend detection with multiple confirmations
function detectTrend(prices: number[], period: number = 20): 'bull' | 'bear' | 'neutral' {
const sma = calculateSMA(prices, period);
const currentPrice = prices[prices.length - 1];
const priceVsSMA = (currentPrice - sma) / sma;
// Use threshold to avoid noise
if (priceVsSMA > 0.02) return 'bull';
if (priceVsSMA < -0.02) return 'bear';
return 'neutral';
}
// Bad: Single indicator, no confirmation
function detectTrend(prices: number[]): string {
return prices[prices.length - 1] > prices[prices.length - 2] ? 'bull' : 'bear';
}
```
## Pattern: Mean Reversion
```typescript
// Good: Proper boundary checks and position sizing
async function checkMeanReversion(ticker: string): Promise<TradeSignal | null> {
const data = await getOHLC(ticker, 100);
const mean = calculateMean(data.close);
const stdDev = calculateStdDev(data.close);
const current = data.close[data.close.length - 1];
const zScore = (current - mean) / stdDev;
// Only trade at extreme deviations
if (zScore < -2) {
return {
side: 'buy',
size: calculatePositionSize(Math.abs(zScore)), // Scale with confidence
stopLoss: current * 0.95,
};
}
return null;
}
// Bad: No risk management, arbitrary thresholds
function checkMeanReversion(price: number, avg: number): boolean {
return price < avg; // Too simplistic
}
```
## Pattern: Breakout Detection
```typescript
// Good: Volume confirmation and false breakout protection
function detectBreakout(ohlc: OHLC[], resistance: number): boolean {
const current = ohlc[ohlc.length - 1];
const previous = ohlc[ohlc.length - 2];
// Price breaks resistance
const priceBreak = current.close > resistance && previous.close <= resistance;
  // Volume confirmation: at least 1.5x the average of the prior 19 candles
  const avgVolume = ohlc.slice(-20, -1).reduce((sum, c) => sum + c.volume, 0) / 19;
  const volumeConfirm = current.volume > avgVolume * 1.5;
  // Wait for candle close to avoid false breaks
  const candleClosed = true; // TODO: replace with a real candle-completion check
return priceBreak && volumeConfirm && candleClosed;
}
// Bad: No confirmation, premature signal
function detectBreakout(price: number, resistance: number): boolean {
return price > resistance; // False positives
}
```
## Pattern: Risk Management
```typescript
// Good: Comprehensive risk checks
class PositionManager {
private readonly MAX_POSITION_PERCENT = 0.05; // 5% of portfolio
private readonly MAX_DAILY_LOSS = 0.02; // 2% daily drawdown limit
async openPosition(signal: TradeSignal, accountBalance: number): Promise<boolean> {
// Check daily loss limit
if (this.getDailyPnL() / accountBalance < -this.MAX_DAILY_LOSS) {
logger.warn('Daily loss limit reached');
return false;
}
// Position size check
const maxSize = accountBalance * this.MAX_POSITION_PERCENT;
const actualSize = Math.min(signal.size, maxSize);
// Risk/reward check
const risk = Math.abs(signal.price - signal.stopLoss);
const reward = Math.abs(signal.takeProfit - signal.price);
if (reward / risk < 2) {
logger.info('Risk/reward ratio too low');
return false;
}
return await this.executeOrder(signal, actualSize);
}
}
// Bad: No risk checks
async function openPosition(signal: any) {
return await exchange.buy(signal.ticker, signal.size); // Dangerous
}
```
## Anti-Patterns to Avoid
1. **Magic Numbers**: Use named constants
2. **Global State**: Pass state explicitly
3. **Synchronous Blocking**: Use async for I/O
4. **No Error Handling**: Always wrap in try/catch
5. **Ignoring Slippage**: Factor in execution costs


@@ -0,0 +1,67 @@
# Code Review Guidelines
## Trading Strategy Specific Checks
### Position Sizing
- ✅ Check for dynamic position sizing based on account balance
- ✅ Verify max position size limits
- ❌ Flag hardcoded position sizes
- ❌ Flag missing position size validation
### Order Handling
- ✅ Verify order type is appropriate (market vs limit)
- ✅ Check for order timeout handling
- ❌ Flag missing order confirmation checks
- ❌ Flag potential duplicate orders
### Risk Management
- ✅ Verify stop-loss is always set
- ✅ Check take-profit levels are realistic
- ❌ Flag missing drawdown protection
- ❌ Flag strategies without maximum daily loss limits
### Data Handling
- ✅ Check for proper OHLC data validation
- ✅ Verify timestamp handling (timezone, microseconds)
- ❌ Flag missing null/undefined checks
- ❌ Flag potential look-ahead bias
### Performance
- ✅ Verify indicators are calculated efficiently
- ✅ Check for unnecessary re-calculations
- ❌ Flag O(n²) or worse algorithms in hot paths
- ❌ Flag large memory allocations in loops
## Severity Levels
### Critical (🔴)
- Will cause financial loss or system crash
- Security vulnerabilities
- Data integrity issues
- Must be fixed before deployment
### High (🟠)
- Significant bugs or edge cases
- Performance issues that affect execution
- Risk management gaps
- Should be fixed before deployment
### Medium (🟡)
- Code quality issues
- Minor performance improvements
- Best practice violations
- Fix when convenient
### Low (🟢)
- Style preferences
- Documentation improvements
- Nice-to-have refactorings
- Optional improvements
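These levels can also travel as structured data so tooling can gate deployments without parsing prose. The `ReviewFinding` shape below is illustrative, not part of the harness:

```typescript
type Severity = 'critical' | 'high' | 'medium' | 'low';

interface ReviewFinding {
  severity: Severity;
  line?: number;       // source line, when known
  message: string;
  suggestion?: string; // concrete fix, when available
}

// Deployment gate implied by the guidelines above:
// critical and high findings must be fixed before deployment.
function blocksDeployment(findings: ReviewFinding[]): boolean {
  return findings.some((f) => f.severity === 'critical' || f.severity === 'high');
}
```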
## Common Pitfalls
1. **Look-Ahead Bias**: Using future data in backtests
2. **Overfitting**: Too many parameters, not enough data
3. **Slippage Ignorance**: Not accounting for execution costs
4. **Survivorship Bias**: Testing only on assets that survived
5. **Data Snooping**: Testing multiple strategies, reporting only the best
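The first pitfall is easy to commit silently in a backtest loop. One guard is to compute each signal only from bars strictly before the decision index; the helper name here is illustrative:

```typescript
// Simple moving average over the `period` bars strictly BEFORE index i.
// Including closes[i] itself (or anything later) would be look-ahead bias.
function smaBefore(closes: number[], i: number, period: number): number | null {
  if (i < period) return null; // not enough history yet
  const window = closes.slice(i - period, i);
  return window.reduce((sum, c) => sum + c, 0) / period;
}
```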


@@ -0,0 +1,51 @@
# Code Reviewer System Prompt
You are an expert code reviewer specializing in trading strategies and financial algorithms.
## Your Role
Review trading strategy code with a focus on:
- **Correctness**: Logic errors, edge cases, off-by-one errors
- **Performance**: Inefficient loops, unnecessary calculations
- **Security**: Input validation, overflow risks, race conditions
- **Trading Best Practices**: Position sizing, risk management, order handling
- **Code Quality**: Readability, maintainability, documentation
## Review Approach
1. **Read the entire code** before providing feedback
2. **Identify critical issues first** (bugs, security, data loss)
3. **Suggest improvements** with specific code examples
4. **Explain the "why"** behind each recommendation
5. **Be constructive** - focus on helping, not criticizing
## Output Format
Structure your review as:
```
## Summary
Brief overview of code quality (1-2 sentences)
## Critical Issues
- Issue 1: Description with line number
- Issue 2: Description with line number
## Improvements
- Suggestion 1: Description with example
- Suggestion 2: Description with example
## Best Practices
- Practice 1: Why it matters
- Practice 2: Why it matters
## Overall Assessment
Pass / Needs Revision / Reject
```
## Important Notes
- Be specific with line numbers and code references
- Provide actionable feedback
- Consider the trading context (not just general coding)
- Flag any risk management issues immediately


@@ -0,0 +1,12 @@
// Subagents exports
export {
BaseSubagent,
type SubagentConfig,
type SubagentContext,
} from './base-subagent.js';
export {
CodeReviewerSubagent,
createCodeReviewerSubagent,
} from './code-reviewer/index.js';


@@ -0,0 +1,461 @@
# Workflows
LangGraph-based workflows for multi-step agent orchestration.
## What are Workflows?
Workflows are state machines that orchestrate complex multi-step tasks with:
- **State Management**: Typed state with annotations
- **Conditional Routing**: Different paths based on state
- **Validation Loops**: Retry with fixes
- **Human-in-the-Loop**: Approval gates and interrupts
- **Error Recovery**: Graceful handling of failures
Built on [LangGraph.js](https://langchain-ai.github.io/langgraphjs/).
## Directory Structure
```
workflows/
├── base-workflow.ts # Base class and utilities
├── {workflow-name}/
│ ├── config.yaml # Workflow configuration
│ ├── state.ts # State schema (Annotations)
│ ├── nodes.ts # Node implementations
│ └── graph.ts # StateGraph definition
└── README.md # This file
```
## Workflow Components
### State (state.ts)
Defines what data flows through the workflow:
```typescript
import { Annotation } from '@langchain/langgraph';
import { BaseWorkflowState } from '../base-workflow.js';
export const MyWorkflowState = Annotation.Root({
...BaseWorkflowState.spec, // Inherit base fields
// Your custom fields
input: Annotation<string>(),
result: Annotation<string | null>({ default: () => null }),
errorCount: Annotation<number>({ default: () => 0 }),
});
export type MyWorkflowStateType = typeof MyWorkflowState.State;
```
### Nodes (nodes.ts)
Functions that transform state:
```typescript
export function createMyNode(deps: Dependencies) {
return async (state: MyWorkflowStateType): Promise<Partial<MyWorkflowStateType>> => {
// Do work
const result = await doSomething(state.input);
// Return partial state update
return { result };
};
}
```
### Graph (graph.ts)
Connects nodes with edges:
```typescript
import { StateGraph } from '@langchain/langgraph';
import { BaseWorkflow } from '../base-workflow.js';
export class MyWorkflow extends BaseWorkflow<MyWorkflowStateType> {
buildGraph(): StateGraph<MyWorkflowStateType> {
const graph = new StateGraph(MyWorkflowState);
// Add nodes
graph
.addNode('step1', createStep1Node())
.addNode('step2', createStep2Node());
// Add edges
graph
.addEdge('__start__', 'step1')
.addEdge('step1', 'step2')
.addEdge('step2', '__end__');
return graph;
}
}
```
### Config (config.yaml)
Workflow settings:
```yaml
name: my-workflow
description: What it does
timeout: 300000 # 5 minutes
maxRetries: 3
requiresApproval: true
approvalNodes:
- human_approval
# Custom settings
myCustomSetting: value
```
## Common Patterns
### 1. Validation Loop (Retry with Fixes)
```typescript
graph
.addNode('validate', validateNode)
.addNode('fix', fixNode)
.addConditionalEdges('validate', (state) => {
if (state.isValid) return 'next_step';
if (state.retryCount >= 3) return '__end__'; // Give up
return 'fix'; // Try to fix
})
.addEdge('fix', 'validate'); // Loop back
```
### 2. Human-in-the-Loop (Approval)
```typescript
const approvalNode = async (state) => {
// Send approval request to user's channel
await sendToChannel(state.userContext.activeChannel, {
type: 'approval_request',
data: {
action: 'execute_trade',
details: state.tradeDetails,
}
});
// Mark as waiting for approval
return { approvalRequested: true, userApproved: false };
};
graph.addConditionalEdges('approval', (state) => {
return state.userApproved ? 'execute' : '__end__';
});
// To resume after user input:
// const updated = await workflow.execute({ ...state, userApproved: true });
```
### 3. Parallel Execution
```typescript
graph
  .addNode('parallel_start', startNode)
  .addNode('task_a', taskANode)
  .addNode('task_b', taskBNode)
  .addNode('merge', mergeNode);
// Fan out: multiple edges from the same node run their targets in parallel
graph
  .addEdge('parallel_start', 'task_a')
  .addEdge('parallel_start', 'task_b');
// Merge: a node with edges from both tasks waits for both to complete
graph
  .addEdge('task_a', 'merge')
  .addEdge('task_b', 'merge');
```
### 4. Error Recovery
```typescript
const resilientNode = async (state) => {
try {
const result = await riskyOperation();
return { result, error: null };
} catch (error) {
logger.error({ error }, 'Operation failed');
return {
error: error.message,
fallbackUsed: true,
result: await fallbackOperation()
};
}
};
```
### 5. Conditional Routing
```typescript
graph
  .addNode('high_confidence', autoApproveNode)
  .addNode('medium_confidence', humanReviewNode)
  .addNode('low_confidence', rejectNode);
// Nodes must be registered before edges reference them
graph.addConditionalEdges('decision', (state) => {
  if (state.score > 0.8) return 'high_confidence';
  if (state.score > 0.5) return 'medium_confidence';
  return 'low_confidence';
});
```
## Available Workflows
### strategy-validation
Validates trading strategies with multiple steps and a validation loop.
**Flow:**
1. Code Review (using CodeReviewerSubagent)
2. If issues → Fix Code → loop back
3. Backtest (via MCP)
4. If failed → Fix Code → loop back
5. Risk Assessment
6. Human Approval
7. Final Recommendation
**Features:**
- Max 3 retry attempts
- Multi-file memory from subagent
- Risk-based auto-approval
- Comprehensive state tracking
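The retry/give-up routing that drives the validation loop can be sketched as a pure router function. The field names (`needsFixing`, `validationRetryCount`) mirror the workflow's state; this is an illustrative sketch of the routing logic, not the production node:

```typescript
// Illustrative router for the validation loop: fix while retries remain,
// give up into a rejection recommendation once they are exhausted.
interface LoopState {
  needsFixing: boolean;
  validationRetryCount: number;
}

const MAX_VALIDATION_RETRIES = 3;

// Passed to addConditionalEdges('code_review', ...): returns the next node name.
function routeAfterReview(state: LoopState): string {
  if (state.needsFixing && state.validationRetryCount < MAX_VALIDATION_RETRIES) {
    return 'fix_code'; // Attempt another fix
  }
  if (state.needsFixing) {
    return 'recommendation'; // Retries exhausted: generate a rejection
  }
  return 'backtest'; // Clean review: continue the pipeline
}
```

The same shape routes the post-backtest decision, with `backtest` swapped for `risk_assessment`.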
### trading-request
Human-in-the-loop workflow for trade execution.
**Flow:**
1. Analyze market conditions
2. Calculate risk and position size
3. Request human approval (PAUSE)
4. If approved → Execute trade
5. Generate summary
**Features:**
- Interrupt at approval node
- Channel-aware approval UI
- Risk validation
- Execution confirmation
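The risk validation step enforces the trading limits from the workflow's `config.yaml` (`maxPositionPercent`, `minRiskRewardRatio`). A minimal sketch of that check, with function and field names that are illustrative assumptions rather than the production implementation:

```typescript
// Sketch of the risk gate implied by the trading limits in config.yaml.
interface RiskInput {
  accountBalance: number;
  amount: number; // requested units
  currentPrice: number;
  riskRewardRatio: number;
}

function checkTradeLimits(
  input: RiskInput,
  maxPositionPercent = 0.05, // 5% of portfolio max
  minRiskRewardRatio = 2.0 // minimum 2:1 risk/reward
): { positionSize: number; withinLimits: boolean } {
  const maxPosition = input.accountBalance * maxPositionPercent;
  const positionValue = input.amount * input.currentPrice;
  return {
    // Cap the position at the portfolio limit
    positionSize: Math.min(positionValue, maxPosition),
    withinLimits:
      positionValue <= maxPosition &&
      input.riskRewardRatio >= minRiskRewardRatio,
  };
}
```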
## Creating a New Workflow
### 1. Create Directory
```bash
mkdir -p workflows/my-workflow
```
### 2. Define State
```typescript
// state.ts
import { Annotation } from '@langchain/langgraph';
import { BaseWorkflowState } from '../base-workflow.js';

export const MyWorkflowState = Annotation.Root({
  ...BaseWorkflowState.spec,
  // Add your fields
  input: Annotation<string>(),
  step1Result: Annotation<string | null>({ default: () => null }),
  step2Result: Annotation<string | null>({ default: () => null }),
});

export type MyWorkflowStateType = typeof MyWorkflowState.State;
```
### 3. Create Nodes
```typescript
// nodes.ts
import { MyWorkflowStateType } from './state.js';

export function createStep1Node(deps: any) {
  return async (state: MyWorkflowStateType) => {
    const result = await doStep1(state.input);
    return { step1Result: result };
  };
}

export function createStep2Node(deps: any) {
  return async (state: MyWorkflowStateType) => {
    const result = await doStep2(state.step1Result);
    return { step2Result: result, output: result };
  };
}
```
### 4. Build Graph
```typescript
// graph.ts
import { StateGraph } from '@langchain/langgraph';
import type { FastifyBaseLogger } from 'fastify';
import { BaseWorkflow, WorkflowConfig } from '../base-workflow.js';
import { MyWorkflowState, MyWorkflowStateType } from './state.js';
import { createStep1Node, createStep2Node } from './nodes.js';

export class MyWorkflow extends BaseWorkflow<MyWorkflowStateType> {
  constructor(config: WorkflowConfig, private deps: any, logger: FastifyBaseLogger) {
    super(config, logger);
  }

  buildGraph(): StateGraph<MyWorkflowStateType> {
    const graph = new StateGraph(MyWorkflowState);
    const step1 = createStep1Node(this.deps);
    const step2 = createStep2Node(this.deps);
    graph
      .addNode('step1', step1)
      .addNode('step2', step2)
      .addEdge('__start__', 'step1')
      .addEdge('step1', 'step2')
      .addEdge('step2', '__end__');
    return graph;
  }
}
```
### 5. Create Config
```yaml
# config.yaml
name: my-workflow
description: My workflow description
timeout: 60000
maxRetries: 3
requiresApproval: false
model: claude-3-5-sonnet-20241022
```
### 6. Add Factory Function
```typescript
// graph.ts (continued)
import { readFile } from 'fs/promises';
import yaml from 'js-yaml';

export async function createMyWorkflow(
  deps: any,
  logger: FastifyBaseLogger,
  configPath: string
): Promise<MyWorkflow> {
  const config = yaml.load(await readFile(configPath, 'utf-8')) as WorkflowConfig;
  const workflow = new MyWorkflow(config, deps, logger);
  workflow.compile();
  return workflow;
}
```
## Usage
### Execute Workflow
```typescript
import { createMyWorkflow } from './harness/workflows';

const workflow = await createMyWorkflow(deps, logger, configPath);
const result = await workflow.execute({
  userContext,
  input: 'my input',
});
console.log(result.output);
```
### Stream Workflow
```typescript
for await (const state of workflow.stream({ userContext, input })) {
  console.log('Current state:', state);
}
```
### With Interrupts (Human-in-the-Loop)
```typescript
// Initial execution (pauses at interrupt)
const pausedState = await workflow.execute(initialState);

// User provides input
const userInput = await getUserApproval();

// Resume from paused state
const finalState = await workflow.execute({
  ...pausedState,
  userApproved: userInput.approved,
});
```
## Best Practices
### State Design
- **Immutable Updates**: Return partial state, don't mutate
- **Type Safety**: Use TypeScript annotations
- **Defaults**: Provide sensible defaults
- **Nullable Fields**: Use `| null` with `default: () => null`
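The nullable-field convention pairs a `null` default with a last-write-wins reducer. In isolation, the `value` function used throughout this harness's state annotations (see `base-workflow.ts`) is just:

```typescript
// Minimal sketch of the last-write-wins reducer used as the `value`
// function in this harness's state annotations: keep the previous value
// when a node's partial update omits the field (right is undefined).
function keepLatest<T>(left: T, right: T | undefined): T {
  return right ?? left;
}
```

A node that returns `{}` therefore leaves every field untouched, while returning `{ output: 'done' }` overwrites only `output`.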
### Node Implementation
- **Pure Functions**: Avoid side effects in state logic
- **Error Handling**: Catch errors, return error state
- **Logging**: Log entry/exit of nodes
- **Partial Updates**: Only return fields that changed
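A node that follows all four practices looks roughly like this; the state shape and logger interface are illustrative assumptions, not platform types:

```typescript
// Sketch of a node with entry/exit logging, errors caught into state, and
// a partial update that returns only the fields that changed.
interface ExampleState {
  input: string;
  result: string | null;
  error: string | null;
}

function createExampleNode(logger: { info: (msg: string) => void }) {
  return async (state: ExampleState): Promise<Partial<ExampleState>> => {
    logger.info('example_node: start');
    try {
      const result = state.input.trim().toUpperCase(); // stand-in for real work
      logger.info('example_node: done');
      return { result }; // partial update: untouched fields omitted
    } catch (error) {
      return { error: (error as Error).message };
    }
  };
}
```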
### Graph Design
- **Single Responsibility**: Each node does one thing
- **Clear Flow**: Easy to visualize the graph
- **Error Paths**: Handle failures gracefully
- **Idempotency**: Safe to retry nodes
### Configuration
- **Timeouts**: Set reasonable limits
- **Retries**: Don't retry forever
- **Approvals**: Mark approval nodes explicitly
- **Documentation**: Explain complex config values
## Debugging
### View Graph
```typescript
// BaseWorkflow.compile() stores the compiled graph internally and returns void.
// To inspect the structure, compile the raw StateGraph directly:
const compiled = workflow.buildGraph().compile();
console.log(compiled.getGraph());
```
### Log State
```typescript
const debugNode = async (state) => {
  logger.debug({ state }, 'Current state');
  return {}; // No changes
};

graph.addNode('debug', debugNode);
```
### Test Nodes in Isolation
```typescript
const step1 = createStep1Node(deps);
const result = await step1({ input: 'test', /* ... */ });
expect(result.step1Result).toBe('expected');
```
## References
- [LangGraph.js Docs](https://langchain-ai.github.io/langgraphjs/)
- [LangChain.js Docs](https://js.langchain.com/)
- [Example: strategy-validation](./strategy-validation/graph.ts)
- [Example: trading-request](./trading-request/graph.ts)


@@ -0,0 +1,200 @@
import { Annotation } from '@langchain/langgraph';
import type { FastifyBaseLogger } from 'fastify';
import type { UserContext } from '../memory/session-context.js';
/**
* Workflow configuration (loaded from config.yaml)
*/
export interface WorkflowConfig {
name: string;
description: string;
timeout?: number; // Milliseconds
maxRetries?: number;
requiresApproval?: boolean;
approvalNodes?: string[]; // Nodes that require human approval
}
/**
* Base workflow state (all workflows extend this)
*/
export const BaseWorkflowState = Annotation.Root({
userContext: Annotation<UserContext>(),
input: Annotation<string>(),
output: Annotation<string | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
error: Annotation<string | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
metadata: Annotation<Record<string, unknown>>({
value: (left, right) => ({ ...left, ...right }),
default: () => ({}),
}),
});
export type BaseWorkflowStateType = typeof BaseWorkflowState.State;
/**
* Workflow node function type
*/
export type WorkflowNode<TState> = (state: TState) => Promise<Partial<TState>>;
/**
* Workflow edge condition function type
*/
export type WorkflowEdgeCondition<TState> = (state: TState) => string;
/**
* Base workflow class
*
* Workflows are LangGraph state machines with:
* - Config-driven setup (timeout, retries, approval gates)
* - Standardized state structure
* - Support for human-in-the-loop
* - Validation loops
* - Error handling
*
* Structure:
* workflows/
* strategy-validation/
* config.yaml
* graph.ts
* nodes.ts
* state.ts
*/
export abstract class BaseWorkflow<TState extends BaseWorkflowStateType> {
protected logger: FastifyBaseLogger;
protected config: WorkflowConfig;
protected graph?: any;
constructor(config: WorkflowConfig, logger: FastifyBaseLogger) {
this.config = config;
this.logger = logger;
}
/**
* Build the workflow graph (implemented by subclasses)
*/
abstract buildGraph(): any;
/**
* Compile the workflow graph
*/
compile(): void {
this.logger.info({ workflow: this.config.name }, 'Compiling workflow graph');
const stateGraph = this.buildGraph();
this.graph = stateGraph.compile();
}
/**
* Execute the workflow
*/
async execute(initialState: Partial<TState>): Promise<TState> {
if (!this.graph) {
throw new Error('Workflow not compiled. Call compile() first.');
}
this.logger.info(
{ workflow: this.config.name, userId: initialState.userContext?.userId },
'Executing workflow'
);
const startTime = Date.now();
try {
// Execute with timeout if configured
const result = this.config.timeout
? await this.executeWithTimeout(initialState)
: await this.graph.invoke(initialState);
const duration = Date.now() - startTime;
this.logger.info(
{
workflow: this.config.name,
duration,
success: !result.error,
},
'Workflow execution completed'
);
return result;
} catch (error) {
this.logger.error(
{ error, workflow: this.config.name },
'Workflow execution failed'
);
throw error;
}
}
/**
* Stream workflow execution
*/
async *stream(initialState: Partial<TState>): AsyncGenerator<TState> {
if (!this.graph) {
throw new Error('Workflow not compiled. Call compile() first.');
}
this.logger.info(
{ workflow: this.config.name, userId: initialState.userContext?.userId },
'Streaming workflow execution'
);
try {
const stream = await this.graph.stream(initialState);
for await (const state of stream) {
yield state;
}
} catch (error) {
this.logger.error(
{ error, workflow: this.config.name },
'Workflow streaming failed'
);
throw error;
}
}
/**
* Execute with timeout
*/
private async executeWithTimeout(initialState: Partial<TState>): Promise<TState> {
if (!this.config.timeout || !this.graph) {
throw new Error('Invalid state');
}
return await Promise.race([
this.graph.invoke(initialState) as Promise<TState>,
new Promise<TState>((_, reject) =>
setTimeout(
() => reject(new Error(`Workflow timeout after ${this.config.timeout}ms`)),
this.config.timeout
)
),
]);
}
/**
* Get workflow name
*/
getName(): string {
return this.config.name;
}
/**
* Check if workflow requires approval
*/
requiresApproval(): boolean {
return this.config.requiresApproval || false;
}
/**
* Get approval nodes
*/
getApprovalNodes(): string[] {
return this.config.approvalNodes || [];
}
}


@@ -0,0 +1,20 @@
// Workflows exports
export {
BaseWorkflow,
BaseWorkflowState,
type WorkflowConfig,
type BaseWorkflowStateType,
type WorkflowNode,
type WorkflowEdgeCondition,
} from './base-workflow.js';
export {
StrategyValidationWorkflow,
createStrategyValidationWorkflow,
} from './strategy-validation/graph.js';
export {
TradingRequestWorkflow,
createTradingRequestWorkflow,
} from './trading-request/graph.js';


@@ -0,0 +1,19 @@
# Strategy Validation Workflow Configuration
name: strategy-validation
description: Validates trading strategies with code review, backtest, and risk assessment
# Workflow settings
timeout: 300000 # 5 minutes
maxRetries: 3
requiresApproval: true
approvalNodes:
- human_approval
# Validation loop settings
maxValidationRetries: 3 # Max times to retry fixing errors
minBacktestScore: 0.5 # Minimum Sharpe ratio to pass
# Model override (optional)
model: claude-3-5-sonnet-20241022
temperature: 0.3


@@ -0,0 +1,138 @@
import { StateGraph } from '@langchain/langgraph';
import { BaseWorkflow, type WorkflowConfig } from '../base-workflow.js';
import { StrategyValidationState, type StrategyValidationStateType } from './state.js';
import {
createCodeReviewNode,
createFixCodeNode,
createBacktestNode,
createRiskAssessmentNode,
createHumanApprovalNode,
createRecommendationNode,
} from './nodes.js';
import type { FastifyBaseLogger } from 'fastify';
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import type { CodeReviewerSubagent } from '../../subagents/code-reviewer/index.js';
/**
* Strategy Validation Workflow
*
* Multi-step workflow with validation loop:
* 1. Code Review (using CodeReviewerSubagent)
* 2. If issues found → Fix Code → Loop back to Code Review
* 3. Backtest (using user's MCP server)
* 4. If backtest fails → Fix Code → Loop back to Code Review
* 5. Risk Assessment
* 6. Human Approval (pause for user input)
* 7. Final Recommendation
*
* Features:
* - Validation loop with max retries
* - Human-in-the-loop approval gate
* - Multi-file memory from CodeReviewerSubagent
* - Comprehensive state tracking
*/
export class StrategyValidationWorkflow extends BaseWorkflow<StrategyValidationStateType> {
constructor(
config: WorkflowConfig,
private model: BaseChatModel,
private codeReviewer: CodeReviewerSubagent,
private mcpBacktestFn: (code: string, ticker: string, timeframe: string) => Promise<Record<string, unknown>>,
logger: FastifyBaseLogger
) {
super(config, logger);
}
buildGraph(): any {
const graph = new StateGraph(StrategyValidationState);
// Create nodes
const codeReviewNode = createCodeReviewNode(this.codeReviewer, this.logger);
const fixCodeNode = createFixCodeNode(this.model, this.logger);
const backtestNode = createBacktestNode(this.mcpBacktestFn, this.logger);
const riskAssessmentNode = createRiskAssessmentNode(this.model, this.logger);
const humanApprovalNode = createHumanApprovalNode(this.logger);
const recommendationNode = createRecommendationNode(this.model, this.logger);
// Add nodes to graph
graph
.addNode('code_review', codeReviewNode)
.addNode('fix_code', fixCodeNode)
.addNode('backtest', backtestNode)
.addNode('risk_assessment', riskAssessmentNode)
.addNode('human_approval', humanApprovalNode)
.addNode('recommendation', recommendationNode);
// Define edges
(graph as any).addEdge('__start__', 'code_review');
// Conditional: After code review, fix if needed or proceed to backtest
(graph as any).addConditionalEdges('code_review', (state: any) => {
if (state.needsFixing && state.validationRetryCount < 3) {
return 'fix_code';
}
if (state.needsFixing && state.validationRetryCount >= 3) {
return 'recommendation'; // Give up, generate rejection
}
return 'backtest';
});
// After fixing code, loop back to code review
(graph as any).addEdge('fix_code', 'code_review');
// Conditional: After backtest, fix if failed or proceed to risk assessment
(graph as any).addConditionalEdges('backtest', (state: any) => {
if (!state.backtestPassed && state.validationRetryCount < 3) {
return 'fix_code';
}
if (!state.backtestPassed && state.validationRetryCount >= 3) {
return 'recommendation'; // Give up
}
return 'risk_assessment';
});
// After risk assessment, go to human approval
(graph as any).addEdge('risk_assessment', 'human_approval');
// Conditional: After human approval, proceed to recommendation or reject
(graph as any).addConditionalEdges('human_approval', (state: any) => {
return state.humanApproved ? 'recommendation' : '__end__';
});
// Final recommendation is terminal
(graph as any).addEdge('recommendation', '__end__');
return graph;
}
}
/**
* Factory function to create and compile workflow
*/
export async function createStrategyValidationWorkflow(
model: BaseChatModel,
codeReviewer: CodeReviewerSubagent,
mcpBacktestFn: (code: string, ticker: string, timeframe: string) => Promise<Record<string, unknown>>,
logger: FastifyBaseLogger,
configPath: string
): Promise<StrategyValidationWorkflow> {
const { readFile } = await import('fs/promises');
const yaml = await import('js-yaml');
// Load config
const configContent = await readFile(configPath, 'utf-8');
const config = yaml.load(configContent) as WorkflowConfig;
// Create workflow
const workflow = new StrategyValidationWorkflow(
config,
model,
codeReviewer,
mcpBacktestFn,
logger
);
// Compile graph
workflow.compile();
return workflow;
}


@@ -0,0 +1,233 @@
import type { StrategyValidationStateType } from './state.js';
import type { FastifyBaseLogger } from 'fastify';
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import type { CodeReviewerSubagent } from '../../subagents/code-reviewer/index.js';
import { HumanMessage, SystemMessage } from '@langchain/core/messages';
/**
* Node: Code Review
* Reviews strategy code using CodeReviewerSubagent
*/
export function createCodeReviewNode(
codeReviewer: CodeReviewerSubagent,
logger: FastifyBaseLogger
) {
return async (state: StrategyValidationStateType): Promise<Partial<StrategyValidationStateType>> => {
logger.info('Strategy validation: Code review');
const review = await codeReviewer.execute(
{ userContext: state.userContext },
state.strategyCode
);
// Simple issue detection (in production, parse structured output)
const hasIssues = review.toLowerCase().includes('critical') ||
review.toLowerCase().includes('reject');
return {
codeReview: review,
codeIssues: hasIssues ? ['Issues detected in code review'] : [],
needsFixing: hasIssues,
};
};
}
/**
* Node: Fix Code Issues
* Uses LLM to fix issues identified in code review
*/
export function createFixCodeNode(
model: BaseChatModel,
logger: FastifyBaseLogger
) {
return async (state: StrategyValidationStateType): Promise<Partial<StrategyValidationStateType>> => {
logger.info('Strategy validation: Fixing code issues');
const systemPrompt = `You are a trading strategy developer.
Fix the issues identified in the code review while maintaining the strategy's logic.
Return only the corrected code without explanation.`;
const userPrompt = `Original code:
\`\`\`typescript
${state.strategyCode}
\`\`\`
Code review feedback:
${state.codeReview}
Provide the corrected code:`;
const response = await model.invoke([
new SystemMessage(systemPrompt),
new HumanMessage(userPrompt),
]);
const fixedCode = (response.content as string)
.replace(/```typescript\n?/g, '')
.replace(/```\n?/g, '')
.trim();
return {
strategyCode: fixedCode,
validationRetryCount: state.validationRetryCount + 1,
};
};
}
/**
* Node: Backtest Strategy
* Runs backtest using user's MCP server
*/
export function createBacktestNode(
mcpBacktestFn: (code: string, ticker: string, timeframe: string) => Promise<Record<string, unknown>>,
logger: FastifyBaseLogger
) {
return async (state: StrategyValidationStateType): Promise<Partial<StrategyValidationStateType>> => {
logger.info('Strategy validation: Running backtest');
try {
const results = await mcpBacktestFn(
state.strategyCode,
state.ticker,
state.timeframe
);
// Check if backtest passed (simplified)
const sharpeRatio = (results.sharpeRatio as number) || 0;
const passed = sharpeRatio > 0.5;
return {
backtestResults: results,
backtestPassed: passed,
needsFixing: !passed,
};
} catch (error) {
logger.error({ error }, 'Backtest failed');
return {
backtestResults: { error: (error as Error).message },
backtestPassed: false,
needsFixing: true,
};
}
};
}
/**
* Node: Risk Assessment
* Analyzes backtest results for risk
*/
export function createRiskAssessmentNode(
model: BaseChatModel,
logger: FastifyBaseLogger
) {
return async (state: StrategyValidationStateType): Promise<Partial<StrategyValidationStateType>> => {
logger.info('Strategy validation: Risk assessment');
const systemPrompt = `You are a risk management expert.
Analyze the strategy and backtest results to assess risk level.
Provide: risk level (low/medium/high) and detailed assessment.`;
const userPrompt = `Strategy code:
\`\`\`typescript
${state.strategyCode}
\`\`\`
Backtest results:
${JSON.stringify(state.backtestResults, null, 2)}
Provide risk assessment in format:
RISK_LEVEL: [low/medium/high]
ASSESSMENT: [detailed explanation]`;
const response = await model.invoke([
new SystemMessage(systemPrompt),
new HumanMessage(userPrompt),
]);
const assessment = response.content as string;
// Parse risk level (simplified)
let riskLevel: 'low' | 'medium' | 'high' = 'medium';
if (assessment.includes('RISK_LEVEL: low')) riskLevel = 'low';
if (assessment.includes('RISK_LEVEL: high')) riskLevel = 'high';
return {
riskAssessment: assessment,
riskLevel,
};
};
}
/**
* Node: Human Approval
* Pauses workflow for human review
*/
export function createHumanApprovalNode(logger: FastifyBaseLogger) {
return async (state: StrategyValidationStateType): Promise<Partial<StrategyValidationStateType>> => {
logger.info('Strategy validation: Awaiting human approval');
// In real implementation, this would:
// 1. Send approval request to user's channel
// 2. Store workflow state with interrupt
// 3. Wait for user response
// 4. Resume with approval decision
// For now, auto-approve if risk is low/medium and backtest passed
const autoApprove = state.backtestPassed &&
(state.riskLevel === 'low' || state.riskLevel === 'medium');
return {
humanApproved: autoApprove,
approvalComment: autoApprove ? 'Auto-approved: passed validation' : 'Needs manual review',
};
};
}
/**
* Node: Final Recommendation
* Generates final recommendation based on all steps
*/
export function createRecommendationNode(
model: BaseChatModel,
logger: FastifyBaseLogger
) {
return async (state: StrategyValidationStateType): Promise<Partial<StrategyValidationStateType>> => {
logger.info('Strategy validation: Generating recommendation');
const systemPrompt = `You are the final decision maker for strategy deployment.
Based on all validation steps, provide a clear recommendation: approve, reject, or revise.`;
const userPrompt = `Strategy validation summary:
Code Review: ${state.codeIssues.length === 0 ? 'Passed' : 'Issues found'}
Backtest: ${state.backtestPassed ? 'Passed' : 'Failed'}
Risk Level: ${state.riskLevel}
Human Approved: ${state.humanApproved}
Backtest Results:
${JSON.stringify(state.backtestResults, null, 2)}
Risk Assessment:
${state.riskAssessment}
Provide final recommendation (approve/reject/revise) and reasoning:`;
const response = await model.invoke([
new SystemMessage(systemPrompt),
new HumanMessage(userPrompt),
]);
const recommendation = response.content as string;
// Parse recommendation (simplified)
let decision: 'approve' | 'reject' | 'revise' = 'revise';
if (recommendation.toLowerCase().includes('approve')) decision = 'approve';
if (recommendation.toLowerCase().includes('reject')) decision = 'reject';
return {
recommendation: decision,
recommendationReason: recommendation,
output: recommendation,
};
};
}


@@ -0,0 +1,78 @@
import { Annotation } from '@langchain/langgraph';
import { BaseWorkflowState } from '../base-workflow.js';
/**
* Strategy validation workflow state
*
* Extends base workflow state with strategy-specific fields
*/
export const StrategyValidationState = Annotation.Root({
...BaseWorkflowState.spec,
// Input
strategyCode: Annotation<string>(),
ticker: Annotation<string>(),
timeframe: Annotation<string>(),
// Code review step
codeReview: Annotation<string | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
codeIssues: Annotation<string[]>({
value: (left, right) => right ?? left,
default: () => [],
}),
// Backtest step
backtestResults: Annotation<Record<string, unknown> | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
backtestPassed: Annotation<boolean>({
value: (left, right) => right ?? left,
default: () => false,
}),
// Risk assessment step
riskAssessment: Annotation<string | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
riskLevel: Annotation<'low' | 'medium' | 'high' | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
// Human approval step
humanApproved: Annotation<boolean>({
value: (left, right) => right ?? left,
default: () => false,
}),
approvalComment: Annotation<string | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
// Validation loop control
validationRetryCount: Annotation<number>({
value: (left, right) => right ?? left,
default: () => 0,
}),
needsFixing: Annotation<boolean>({
value: (left, right) => right ?? left,
default: () => false,
}),
// Final output
recommendation: Annotation<'approve' | 'reject' | 'revise' | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
recommendationReason: Annotation<string | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
});
export type StrategyValidationStateType = typeof StrategyValidationState.State;


@@ -0,0 +1,19 @@
# Trading Request Workflow Configuration
name: trading-request
description: Human-in-the-loop workflow for executing trading requests
# Workflow settings
timeout: 600000 # 10 minutes (includes human wait time)
maxRetries: 1
requiresApproval: true
approvalNodes:
- await_approval
# Trading limits
maxPositionPercent: 0.05 # 5% of portfolio max
minRiskRewardRatio: 2.0 # Minimum 2:1 risk/reward
# Model override (optional)
model: claude-3-5-sonnet-20241022
temperature: 0.2


@@ -0,0 +1,229 @@
import { StateGraph } from '@langchain/langgraph';
import { BaseWorkflow, type WorkflowConfig } from '../base-workflow.js';
import { TradingRequestState, type TradingRequestStateType } from './state.js';
import type { FastifyBaseLogger } from 'fastify';
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import { HumanMessage, SystemMessage } from '@langchain/core/messages';
/**
* Trading Request Workflow
*
* Human-in-the-loop workflow for executing trades:
* 1. Analyze market conditions
* 2. Calculate risk and position size
* 3. Request human approval (PAUSE HERE)
* 4. If approved → Execute trade
* 5. Generate execution summary
*
* Features:
* - Interrupt at approval node
* - Resume with user input
* - Risk validation
* - Multi-channel approval UI
*/
export class TradingRequestWorkflow extends BaseWorkflow<TradingRequestStateType> {
constructor(
config: WorkflowConfig,
private model: BaseChatModel,
private marketDataFn: (ticker: string) => Promise<{ price: number; [key: string]: unknown }>,
private executeTradeFn: (order: any) => Promise<{ orderId: string; status: string; price: number }>,
logger: FastifyBaseLogger
) {
super(config, logger);
}
buildGraph(): any {
const graph = new StateGraph(TradingRequestState);
// Node: Analyze market
const analyzeNode = async (state: TradingRequestStateType): Promise<Partial<TradingRequestStateType>> => {
this.logger.info('Trading request: Analyzing market');
const marketData = await this.marketDataFn(state.ticker);
const systemPrompt = `You are a market analyst. Analyze current conditions for a ${state.side} order.`;
const userPrompt = `Ticker: ${state.ticker}
Current Price: ${marketData.price}
Requested: ${state.side} ${state.amount} at ${state.price || 'market'}
Provide 2-3 sentence analysis:`;
const response = await this.model.invoke([
new SystemMessage(systemPrompt),
new HumanMessage(userPrompt),
]);
return {
marketAnalysis: response.content as string,
currentPrice: marketData.price,
};
};
// Node: Calculate risk
const calculateRiskNode = async (state: TradingRequestStateType): Promise<Partial<TradingRequestStateType>> => {
this.logger.info('Trading request: Calculating risk');
// Simplified risk calculation
const accountBalance = state.userContext.license.features.maxBacktestDays * 1000; // Mock
const maxPosition = accountBalance * 0.05; // 5% max
const positionValue = state.amount * (state.currentPrice || 0);
const positionSize = Math.min(positionValue, maxPosition);
// Mock risk/reward (in production, calculate from stop-loss and take-profit)
const riskRewardRatio = 2.5;
return {
riskAssessment: {
accountBalance,
maxPosition,
positionValue,
positionSize,
},
riskRewardRatio,
positionSize,
};
};
// Node: Request approval (INTERRUPT POINT)
const requestApprovalNode = async (state: TradingRequestStateType): Promise<Partial<TradingRequestStateType>> => {
this.logger.info('Trading request: Requesting approval');
// TODO: Send approval request to user's active channel
// In production, this would:
// 1. Format approval UI for the channel (buttons for Telegram, etc.)
// 2. Send message with trade details
// 3. Store workflow state
// 4. Return with interrupt signal
// 5. LangGraph will pause here until resumed with user input
// For now, mock approval
const approvalMessage = `
Trade Request Approval Needed:
- ${state.side.toUpperCase()} ${state.amount} ${state.ticker}
- Current Price: $${state.currentPrice}
- Position Size: $${state.positionSize}
- Risk/Reward: ${state.riskRewardRatio}:1
Market Analysis:
${state.marketAnalysis}
Reply 'approve' or 'reject'
`;
return {
approvalRequested: true,
approvalMessage,
approvalTimestamp: Date.now(),
// In production, this node would use Interrupt here
userApproved: false, // Wait for user input
};
};
// Node: Execute trade
const executeTradeNode = async (state: TradingRequestStateType): Promise<Partial<TradingRequestStateType>> => {
this.logger.info('Trading request: Executing trade');
try {
const order = {
ticker: state.ticker,
side: state.side,
amount: state.amount,
type: state.requestType,
price: state.price,
};
const result = await this.executeTradeFn(order);
return {
orderPlaced: true,
orderId: result.orderId,
executionPrice: result.price,
executionStatus: result.status as any,
};
} catch (error) {
this.logger.error({ error }, 'Trade execution failed');
return {
orderPlaced: false,
executionStatus: 'rejected',
error: (error as Error).message,
};
}
};
// Node: Generate summary
const summaryNode = async (state: TradingRequestStateType): Promise<Partial<TradingRequestStateType>> => {
this.logger.info('Trading request: Generating summary');
const summary = state.orderPlaced
? `Trade executed successfully:
- Order ID: ${state.orderId}
- ${state.side.toUpperCase()} ${state.amount} ${state.ticker}
- Execution Price: $${state.executionPrice}
- Status: ${state.executionStatus}`
: `Trade not executed:
- Reason: ${state.userApproved ? 'Execution failed' : 'User rejected'}`;
return {
summary,
output: summary,
};
};
// Add nodes
graph
.addNode('analyze', analyzeNode)
.addNode('calculate_risk', calculateRiskNode)
.addNode('request_approval', requestApprovalNode)
.addNode('execute_trade', executeTradeNode)
.addNode('summary', summaryNode);
// Define edges
(graph as any).addEdge('__start__', 'analyze');
(graph as any).addEdge('analyze', 'calculate_risk');
(graph as any).addEdge('calculate_risk', 'request_approval');
// Conditional: After approval, execute or reject
(graph as any).addConditionalEdges('request_approval', (state: any) => {
// In production, this would check if user approved via interrupt resume
return state.userApproved ? 'execute_trade' : 'summary';
});
(graph as any).addEdge('execute_trade', 'summary');
(graph as any).addEdge('summary', '__end__');
return graph;
}
}
/**
* Factory function to create and compile workflow
*/
export async function createTradingRequestWorkflow(
model: BaseChatModel,
marketDataFn: (ticker: string) => Promise<{ price: number; [key: string]: unknown }>,
executeTradeFn: (order: any) => Promise<{ orderId: string; status: string; price: number }>,
logger: FastifyBaseLogger,
configPath: string
): Promise<TradingRequestWorkflow> {
const { readFile } = await import('fs/promises');
const yaml = await import('js-yaml');
// Load config
const configContent = await readFile(configPath, 'utf-8');
const config = yaml.load(configContent) as WorkflowConfig;
// Create workflow
const workflow = new TradingRequestWorkflow(
config,
model,
marketDataFn,
executeTradeFn,
logger
);
// Compile graph
workflow.compile();
return workflow;
}


@@ -0,0 +1,89 @@
import { Annotation } from '@langchain/langgraph';
import { BaseWorkflowState } from '../base-workflow.js';
/**
* Trading request workflow state
*
* Handles human-in-the-loop approval for trade execution
*/
export const TradingRequestState = Annotation.Root({
...BaseWorkflowState.spec,
// Input
requestType: Annotation<'market_order' | 'limit_order' | 'stop_loss'>(),
ticker: Annotation<string>(),
side: Annotation<'buy' | 'sell'>(),
amount: Annotation<number>(), // Requested amount
price: Annotation<number | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
// Analysis step
marketAnalysis: Annotation<string | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
currentPrice: Annotation<number | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
// Risk calculation
riskAssessment: Annotation<Record<string, unknown> | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
riskRewardRatio: Annotation<number | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
positionSize: Annotation<number | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
// Human approval
approvalRequested: Annotation<boolean>({
value: (left, right) => right ?? left,
default: () => false,
}),
userApproved: Annotation<boolean>({
value: (left, right) => right ?? left,
default: () => false,
}),
approvalMessage: Annotation<string | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
approvalTimestamp: Annotation<number | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
// Execution
orderPlaced: Annotation<boolean>({
value: (left, right) => right ?? left,
default: () => false,
}),
orderId: Annotation<string | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
executionPrice: Annotation<number | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
executionStatus: Annotation<'pending' | 'filled' | 'rejected' | 'cancelled' | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
// Output
summary: Annotation<string | null>({
value: (left, right) => right ?? left,
default: () => null,
}),
});
export type TradingRequestStateType = typeof TradingRequestState.State;