# LangGraph Workflows for Trading

Complex, stateful workflows built with LangGraph for trading-specific tasks.
## Overview

LangGraph provides:

- **Stateful execution**: Workflow state persists across failures
- **Conditional branching**: Route based on market conditions, backtest results, etc.
- **Human-in-the-loop**: Pause for user approval before executing trades
- **Loops & retries**: Backtest with different parameters, retry failed operations
- **Multi-agent**: Different LLMs for different tasks (analysis, risk, execution)
## Workflows

### Strategy Analysis (`strategy-analysis.ts`)

Multi-step pipeline for analyzing trading strategies:

```typescript
import { buildStrategyAnalysisWorkflow } from './workflows/strategy-analysis.js';

const workflow = buildStrategyAnalysisWorkflow(model, logger, mcpBacktestFn);

const result = await workflow.invoke({
  strategyCode: userStrategy,
  ticker: 'BTC/USDT',
  timeframe: '1h',
});

console.log(result.recommendation); // Go/no-go decision
```
Steps:

1. **Code Review** - LLM analyzes the strategy code for bugs and logic errors
2. **Backtest** - Runs a backtest via the user's MCP server
3. **Risk Assessment** - LLM evaluates the results (drawdown, Sharpe, etc.)
4. **Human Approval** - Pauses for user review
5. **Recommendation** - Final go/no-go decision
Benefits:

- **Stateful**: Can resume if the server restarts
- **Human-in-the-loop**: User must approve before deployment
- **Multi-step reasoning**: Each step builds on the previous one
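The final two steps can be sketched as a pure decision gate. This is illustrative only: the `BacktestMetrics` shape and the thresholds are assumptions, not the workflow's actual types or risk policy.

```typescript
// Hypothetical metric shape coming out of the Risk Assessment step.
interface BacktestMetrics {
  sharpe: number;      // annualized Sharpe ratio
  maxDrawdown: number; // as a fraction, e.g. 0.25 = 25%
  winRate: number;     // fraction of winning trades
}

type Recommendation = 'go' | 'no-go';

// Hard risk limits first, then the human gate: even a passing
// backtest is a no-go without explicit approval.
function recommend(m: BacktestMetrics, humanApproved: boolean): Recommendation {
  if (m.maxDrawdown > 0.3) return 'no-go'; // illustrative drawdown cap
  if (m.sharpe < 1.0) return 'no-go';      // illustrative Sharpe floor
  return humanApproved ? 'go' : 'no-go';
}
```

The human-approval flag dominating the metrics is the point: the LLM's assessment informs the decision, but never overrides the user.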
## Future Workflows

### Market Scanner

Scan multiple tickers for trading opportunities:

```typescript
const scanner = buildMarketScannerWorkflow(model, logger);

const result = await scanner.invoke({
  tickers: ['BTC/USDT', 'ETH/USDT', 'SOL/USDT'],
  strategies: ['momentum', 'mean_reversion'],
  timeframe: '1h',
});
// Returns ranked opportunities
```
Steps:

1. **Fetch Data** - Get OHLC data for all tickers
2. **Apply Strategies** - Run each strategy on each ticker (in parallel)
3. **Rank Signals** - Score by confidence and risk/reward
4. **Filter** - Apply the user's risk limits
5. **Return Top N** - Best opportunities
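The last three steps reduce to a rank-filter-slice over whatever signal objects the strategies emit. A minimal sketch, where the `Signal` shape and the `confidence × reward/risk` scoring formula are assumptions:

```typescript
// Hypothetical signal emitted by the Apply Strategies step.
interface Signal {
  ticker: string;
  strategy: string;
  confidence: number; // 0..1, from the strategy
  rewardRisk: number; // expected reward / risk ratio
  risk: number;       // fraction of capital at risk
}

function topOpportunities(signals: Signal[], maxRisk: number, n: number): Signal[] {
  return signals
    .filter((s) => s.risk <= maxRisk)                          // Filter: user's risk limit
    .map((s) => ({ ...s, score: s.confidence * s.rewardRisk })) // Rank: simple composite score
    .sort((a, b) => b.score - a.score)                          // best first
    .slice(0, n);                                               // Return Top N
}
```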
### Portfolio Optimization

Optimize position sizing across multiple strategies:

```typescript
const optimizer = buildPortfolioOptimizerWorkflow(model, logger);

const result = await optimizer.invoke({
  strategies: [strategy1, strategy2, strategy3],
  totalCapital: 100000,
  maxRiskPerTrade: 0.02,
});
// Returns optimal allocation
```
Steps:

1. **Backtest All** - Run backtests for each strategy
2. **Correlation Analysis** - Check correlation between strategies
3. **Monte Carlo** - Simulate portfolio performance
4. **Optimize** - Find optimal weights (Sharpe maximization)
5. **Risk Check** - Validate against user limits
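To make the data flow concrete, here is a deliberately naive stand-in for the Optimize and Risk Check steps: weights proportional to each strategy's backtested Sharpe. A real optimizer would work over the covariance matrix from the Correlation Analysis step; all names here are illustrative.

```typescript
// Hypothetical per-strategy stats from the Backtest All step.
interface StrategyStats {
  name: string;
  sharpe: number;       // backtested Sharpe ratio
  riskPerTrade: number; // worst-case single-trade loss, fraction of allocated capital
}

function allocate(stats: StrategyStats[], totalCapital: number, maxRiskPerTrade: number) {
  // Drop strategies with non-positive Sharpe; they get no capital.
  const eligible = stats.filter((s) => s.sharpe > 0);
  const totalSharpe = eligible.reduce((sum, s) => sum + s.sharpe, 0);
  return eligible.map((s) => {
    const weight = s.sharpe / totalSharpe;
    // Risk Check: one trade's portfolio-level risk must stay under the limit.
    const withinRiskLimit = weight * s.riskPerTrade <= maxRiskPerTrade;
    return { name: s.name, weight, capital: weight * totalCapital, withinRiskLimit };
  });
}
```

Sharpe-proportional weighting ignores correlation entirely, which is exactly why the Correlation Analysis and Monte Carlo steps exist in the full workflow.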
### Trade Execution Monitor

Monitor trade execution and adapt to market conditions:

```typescript
const monitor = buildTradeExecutionWorkflow(model, logger, exchange);

const result = await monitor.invoke({
  tradeId: 'xyz',
  targetPrice: 45000,
  maxSlippage: 0.001,
  timeLimit: 60, // seconds
});
```
Steps:

1. **Place Order** - Submit the order to the exchange
2. **Monitor Fill** - Check fill status every second
3. **Adapt** - If not filling, adjust the price (within the slippage budget)
4. **Retry Logic** - If rejected, retry with backoff
5. **Timeout** - Cancel if the time limit is exceeded
6. **Report** - Final execution report
## Using Workflows in Gateway

### Simple Chat vs Complex Workflow

```typescript
// gateway/src/orchestrator.ts
export class MessageOrchestrator {
  async handleMessage(msg: InboundMessage) {
    // Route based on complexity
    if (this.isSimpleQuery(msg)) {
      // Use agent harness for streaming chat
      return this.harness.streamMessage(msg);
    }
    if (this.isWorkflowRequest(msg)) {
      // Use LangGraph for complex analysis
      return this.executeWorkflow(msg);
    }
  }

  async executeWorkflow(msg: InboundMessage) {
    const { type, params } = this.parseWorkflowRequest(msg);
    switch (type) {
      case 'analyze_strategy': {
        const workflow = buildStrategyAnalysisWorkflow(...);
        return await workflow.invoke(params);
      }
      case 'scan_market': {
        const scanner = buildMarketScannerWorkflow(...);
        return await scanner.invoke(params);
      }
      // ... more workflows
    }
  }
}
```
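One way `isSimpleQuery` / `isWorkflowRequest` could be implemented is a keyword heuristic. This is entirely hypothetical; a production router might instead ask a cheap LLM to classify the message:

```typescript
// Hypothetical keyword-based router over the raw message text.
const WORKFLOW_KEYWORDS = ['analyze', 'backtest', 'scan', 'optimize', 'execute'];

function isWorkflowRequest(text: string): boolean {
  const lower = text.toLowerCase();
  return WORKFLOW_KEYWORDS.some((kw) => lower.includes(kw));
}

function isSimpleQuery(text: string): boolean {
  // Anything that doesn't look like a workflow goes to the streaming harness.
  return !isWorkflowRequest(text);
}
```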
## Benefits for Trading

### vs Simple LLM Calls
| Scenario | Simple LLM | LangGraph Workflow |
|---|---|---|
| "What's the RSI?" | ✅ Fast, streaming | ❌ Overkill |
| "Analyze this strategy" | ❌ Limited context | ✅ Multi-step analysis |
| "Backtest 10 param combos" | ❌ No loops | ✅ Conditional loops |
| "Execute if approved" | ❌ No state | ✅ Human-in-the-loop |
| Server crashes mid-analysis | ❌ Lost progress | ✅ Resume from checkpoint |
### When to Use Workflows

Use LangGraph when:

- Multi-step analysis (backtest → risk → approval)
- Conditional logic (if bullish → momentum, else → mean reversion)
- Human approval is required (pause the workflow)
- Loops are needed (try different parameters)
- Long-running work (can survive restarts)

Use the agent harness when:

- Simple Q&A ("What is RSI?")
- A fast response is needed (streaming chat)
- A single tool call suffices ("Get my watchlist")
- Real-time interaction (Telegram, WebSocket)
## Implementation Notes

### State Persistence

LangGraph persists state through a checkpointer. `MemorySaver` keeps checkpoints in process memory (lost on restart); swap in a database-backed checkpointer for durability:

```typescript
import { MemorySaver } from '@langchain/langgraph';

const checkpointer = new MemorySaver();
const workflow = graph.compile({ checkpointer });

// Resume from checkpoint
const result = await workflow.invoke(input, {
  configurable: { thread_id: 'user-123-strategy-analysis' },
});
```
### Human-in-the-Loop

Pause the workflow for user input by declaring an interrupt point at compile time, then resume on the same thread:

```typescript
const workflow = graph
  .addNode('human_approval', humanApprovalNode)
  .compile({ checkpointer, interruptBefore: ['human_approval'] }); // pauses here

const config = { configurable: { thread_id: workflowId } };
await workflow.invoke(input, config); // runs until the interrupt

// User reviews in UI
const approved = await getUserApproval(workflowId);

// Inject the decision, then resume from the checkpoint
await workflow.updateState(config, { approved });
await workflow.invoke(null, config); // null input = continue where we stopped
```
### Multi-Agent

Use different models for different tasks:

```typescript
const analysisModel = new ChatAnthropic({ model: 'claude-3-opus' }); // Smart
const codeModel = new ChatOpenAI({ model: 'gpt-4o' }); // Good at code
const cheapModel = new ChatOpenAI({ model: 'gpt-4o-mini' }); // Fast

const workflow = graph
  .addNode('analyze', (state) => analysisModel.invoke(...))
  .addNode('code_review', (state) => codeModel.invoke(...))
  .addNode('summarize', (state) => cheapModel.invoke(...));
```
## Next Steps

- Implement the remaining workflows (scanner, optimizer, execution)
- Add state persistence (PostgreSQL checkpointer)
- Integrate human-in-the-loop with WebSocket
- Add a workflow monitoring dashboard
- Optimize performance (parallel execution)