backend redesign

This commit is contained in:
2026-03-11 18:47:11 -04:00
parent 8ff277c8c6
commit e99ef5d2dd
210 changed files with 12147 additions and 155 deletions


@@ -0,0 +1,139 @@
# Chart Utilities - Standard OHLC Plotting
## Overview
The `chart_utils.py` module provides convenience functions for creating beautiful, professional OHLC candlestick charts with a consistent look and feel. This is designed to be used by the LLM in `analyze_chart_data` scripts, eliminating the need to write custom matplotlib code for every chart.
## Key Features
- **Beautiful by default**: Uses mplfinance with seaborn-inspired aesthetics
- **Consistent styling**: Professional color scheme (teal green up, coral red down)
- **Easy to use**: Simple function calls instead of complex matplotlib code
- **Customizable**: Supports all mplfinance options via kwargs
- **Volume integration**: Optional volume subplot
## Installation
The required package `mplfinance` has been added to `requirements.txt`:
```bash
pip install mplfinance
```
## Available Functions
### 1. `plot_ohlc(df, title=None, volume=True, figsize=(14, 8), **kwargs)`
Main function for creating standard OHLC candlestick charts.
**Parameters:**
- `df`: pandas DataFrame with DatetimeIndex and OHLCV columns
- `title`: Optional chart title
- `volume`: Whether to include volume subplot (default: True)
- `figsize`: Figure size in inches (default: (14, 8))
- `**kwargs`: Additional mplfinance.plot() arguments
**Example:**
```python
fig = plot_ohlc(df, title='BTC/USDT 15min', volume=True)
```
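The only input contract here is the DataFrame shape: a DatetimeIndex plus lowercase OHLCV columns. A quick way to sanity-check that contract with synthetic data (the values below are made up for illustration):

```python
import numpy as np
import pandas as pd

# Build a DataFrame in the shape plot_ohlc() expects:
# a DatetimeIndex plus lowercase open/high/low/close/volume columns.
idx = pd.date_range("2024-03-01", periods=96, freq="15min")
rng = np.random.default_rng(42)
close = 50_000 + rng.normal(0, 50, len(idx)).cumsum()
df = pd.DataFrame({
    "open": close + rng.normal(0, 10, len(idx)),
    "high": close + rng.uniform(10, 60, len(idx)),   # always above close
    "low": close - rng.uniform(10, 60, len(idx)),    # always below close
    "close": close,
    "volume": rng.uniform(1, 20, len(idx)),
}, index=idx)

print(df.shape)  # (96, 5)
print(type(df.index).__name__)
```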
### 2. `add_indicators_to_plot(df, indicators, **plot_kwargs)`
Creates OHLC chart with technical indicators overlaid.
**Parameters:**
- `df`: DataFrame with OHLCV data and indicator columns
- `indicators`: Dict mapping indicator column names to display parameters
- `**plot_kwargs`: Additional arguments for plot_ohlc()
**Example:**
```python
df['SMA_20'] = df['close'].rolling(20).mean()
df['SMA_50'] = df['close'].rolling(50).mean()
fig = add_indicators_to_plot(
df,
indicators={
'SMA_20': {'color': 'blue', 'width': 1.5},
'SMA_50': {'color': 'red', 'width': 1.5}
},
title='Price with Moving Averages'
)
```
### 3. Preset Functions
- `plot_price_volume(df, title=None)` - Standard price + volume chart
- `plot_price_only(df, title=None)` - Candlesticks without volume
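The presets are presumably thin wrappers over `plot_ohlc()`. A minimal sketch of that delegation (the real `chart_utils` implementation may differ; `plot_ohlc` here is a stub so the wrappers can be shown in isolation):

```python
# Sketch only: this stub stands in for chart_utils.plot_ohlc so the
# preset wrappers are visible without mplfinance installed.
def plot_ohlc(df, title=None, volume=True, **kwargs):
    return {"title": title, "volume": volume, **kwargs}

def plot_price_volume(df, title=None):
    # Standard price + volume chart
    return plot_ohlc(df, title=title, volume=True)

def plot_price_only(df, title=None):
    # Candlesticks without the volume subplot
    return plot_ohlc(df, title=title, volume=False)

print(plot_price_only(None, title="BTC")["volume"])  # False
```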
## Integration with analyze_chart_data
These functions are automatically available in the `analyze_chart_data` tool's script environment:
```python
# In an analyze_chart_data script:
# df is already provided
# Simple usage
fig = plot_ohlc(df, title='Price Action')
# With indicators
df['SMA'] = df['close'].rolling(20).mean()
fig = add_indicators_to_plot(
df,
indicators={'SMA': {'color': 'blue', 'width': 1.5}},
title='Price with SMA'
)
# Return data for the assistant
df[['close', 'SMA']].tail(10)
```
## Styling
The default style includes:
- **Up candles**: Teal green (#26a69a)
- **Down candles**: Coral red (#ef5350)
- **Background**: Light gray with white axes
- **Grid**: Subtle dashed lines with 30% alpha
- **Professional fonts**: Clean, readable sizes
## Why This Matters
**Before:**
```python
# LLM had to write this every time
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(12, 6))
ax.plot(df.index, df['close'], label='Close')
# ... lots more code for styling, colors, etc.
```
**After:**
```python
# LLM can now just do this
fig = plot_ohlc(df, title='BTC/USDT')
```
Benefits:
- ✅ Less code to generate → faster response
- ✅ Consistent appearance across all charts
- ✅ Professional look out of the box
- ✅ Easier to maintain and customize
- ✅ Better use of mplfinance's candlestick rendering
## Example Output
See `chart_utils_example.py` for runnable examples demonstrating:
1. Basic OHLC chart with volume
2. OHLC chart with multiple indicators
3. Price-only chart
4. Custom styling options
## File Locations
- **Main module**: `backend/src/agent/tools/chart_utils.py`
- **Integration**: `backend/src/agent/tools/chart_tools.py` (lines 306-328)
- **Examples**: `backend/src/agent/tools/chart_utils_example.py`
- **Dependency**: `backend/requirements.txt` (mplfinance added)


@@ -0,0 +1,373 @@
# Agent Trigger Tools
Agent tools for automating tasks via the trigger system.
## Overview
These tools allow the agent to:
- **Schedule recurring tasks** - Run agent prompts on intervals or cron schedules
- **Execute one-time tasks** - Trigger sub-agent runs immediately
- **Manage scheduled jobs** - List and cancel scheduled triggers
- **React to events** - (Future) Connect data updates to agent actions
## Available Tools
### 1. `schedule_agent_prompt`
Schedule an agent to run with a specific prompt on a recurring schedule.
**Use Cases:**
- Daily market analysis reports
- Hourly portfolio rebalancing checks
- Weekly performance summaries
- Monitoring alerts
**Arguments:**
- `prompt` (str): The prompt to send to the agent when triggered
- `schedule_type` (str): "interval" or "cron"
- `schedule_config` (dict): Schedule configuration
- `name` (str, optional): Descriptive name for this task
**Schedule Config:**
*Interval-based:*
```json
{"minutes": 5}
{"hours": 1, "minutes": 30}
{"seconds": 30}
```
*Cron-based:*
```json
{"hour": "9", "minute": "0"} // Daily at 9:00 AM
{"hour": "9", "minute": "0", "day_of_week": "mon-fri"} // Weekdays at 9 AM
{"minute": "0"} // Every hour on the hour
{"hour": "*/6", "minute": "0"} // Every 6 hours
```
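The config keys mirror APScheduler's interval/cron trigger parameters (an assumption about the backing scheduler). A hypothetical pre-flight check before calling `schedule_agent_prompt` might look like:

```python
# Hypothetical helper (not part of the trigger tools): validate a
# schedule_config dict before scheduling. The accepted key names follow
# APScheduler's IntervalTrigger / CronTrigger parameters (an assumption).
INTERVAL_KEYS = {"weeks", "days", "hours", "minutes", "seconds"}
CRON_KEYS = {"year", "month", "day", "week", "day_of_week",
             "hour", "minute", "second"}

def validate_schedule(schedule_type: str, config: dict) -> bool:
    if schedule_type == "interval":
        # Non-empty, known keys only, positive numeric values
        return bool(config) and set(config) <= INTERVAL_KEYS and all(
            isinstance(v, (int, float)) and v > 0 for v in config.values()
        )
    if schedule_type == "cron":
        # Non-empty and known keys only; values are cron expressions
        return bool(config) and set(config) <= CRON_KEYS
    return False

print(validate_schedule("interval", {"hours": 1, "minutes": 30}))  # True
print(validate_schedule("cron", {"hour": "*/6", "minute": "0"}))   # True
print(validate_schedule("cron", {"minuet": "0"}))                  # False (typo)
```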
**Returns:**
```json
{
"job_id": "interval_123",
"message": "Scheduled 'daily_report' with job_id=interval_123",
"schedule_type": "cron",
"config": {"hour": "9", "minute": "0"}
}
```
**Examples:**
```python
# Every 5 minutes: check BTC price
schedule_agent_prompt(
prompt="Check current BTC price on Binance. If > $50k, alert me.",
schedule_type="interval",
schedule_config={"minutes": 5},
name="btc_price_monitor"
)
# Daily at 9 AM: market summary
schedule_agent_prompt(
prompt="Generate a comprehensive market summary for BTC, ETH, and SOL. Include price changes, volume, and notable events from the last 24 hours.",
schedule_type="cron",
schedule_config={"hour": "9", "minute": "0"},
name="daily_market_summary"
)
# Every hour on weekdays: portfolio check
schedule_agent_prompt(
prompt="Review current portfolio positions. Check if any rebalancing is needed based on target allocations.",
schedule_type="cron",
schedule_config={"minute": "0", "day_of_week": "mon-fri"},
name="hourly_portfolio_check"
)
```
### 2. `execute_agent_prompt_once`
Execute an agent prompt once, immediately (enqueued with priority).
**Use Cases:**
- Background analysis tasks
- One-time data processing
- Responding to specific events
- Sub-agent delegation
**Arguments:**
- `prompt` (str): The prompt to send to the agent
- `priority` (str): "high", "normal", or "low" (default: "normal")
**Returns:**
```json
{
"queue_seq": 42,
"message": "Enqueued agent prompt with priority=normal",
"prompt": "Analyze the last 100 BTC/USDT bars..."
}
```
**Examples:**
```python
# Immediate analysis with high priority
execute_agent_prompt_once(
prompt="Analyze the last 100 BTC/USDT 1m bars and identify key support/resistance levels",
priority="high"
)
# Background task with normal priority
execute_agent_prompt_once(
prompt="Research the latest news about Ethereum upgrades and summarize findings",
priority="normal"
)
# Low priority cleanup task
execute_agent_prompt_once(
prompt="Review and archive old chart drawings from last month",
priority="low"
)
```
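The priority semantics can be modeled with a plain `heapq` sketch. This is only an illustration of "high jumps the queue" with insertion-order tie-breaking; the actual `TriggerQueue` internals are not shown in this commit and may differ:

```python
import heapq
import itertools

# Toy model: lower rank pops first, ties break by insertion order.
PRIORITY_RANK = {"high": 0, "normal": 1, "low": 2}
counter = itertools.count()
queue = []

def enqueue(prompt: str, priority: str = "normal"):
    heapq.heappush(queue, (PRIORITY_RANK[priority], next(counter), prompt))

enqueue("archive old drawings", priority="low")
enqueue("summarize ETH news")                  # normal
enqueue("analyze BTC support levels", priority="high")

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)
# ['analyze BTC support levels', 'summarize ETH news', 'archive old drawings']
```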
### 3. `list_scheduled_triggers`
List all currently scheduled triggers.
**Returns:**
```json
[
{
"id": "cron_456",
"name": "Cron: daily_market_summary",
"next_run_time": "2024-03-05 09:00:00",
"trigger": "cron[hour='9', minute='0']"
},
{
"id": "interval_123",
"name": "Interval: btc_price_monitor",
"next_run_time": "2024-03-04 14:35:00",
"trigger": "interval[0:05:00]"
}
]
```
**Example:**
```python
jobs = list_scheduled_triggers()
for job in jobs:
print(f"{job['name']} - next run: {job['next_run_time']}")
```
### 4. `cancel_scheduled_trigger`
Cancel a scheduled trigger by its job ID.
**Arguments:**
- `job_id` (str): The job ID from `schedule_agent_prompt` or `list_scheduled_triggers`
**Returns:**
```json
{
"status": "success",
"message": "Cancelled job interval_123"
}
```
**Example:**
```python
# List jobs to find the ID
jobs = list_scheduled_triggers()
# Cancel specific job
cancel_scheduled_trigger("interval_123")
```
### 5. `on_data_update_run_agent`
**(Future)** Set up an agent to run whenever new data arrives for a specific symbol.
**Arguments:**
- `source_name` (str): Data source name (e.g., "binance")
- `symbol` (str): Trading pair (e.g., "BTC/USDT")
- `resolution` (str): Time resolution (e.g., "1m", "5m")
- `prompt_template` (str): Template with variables like {close}, {volume}, {symbol}
**Example:**
```python
on_data_update_run_agent(
source_name="binance",
symbol="BTC/USDT",
resolution="1m",
prompt_template="New bar on {symbol}: close={close}, volume={volume}. Check if price crossed any key levels."
)
```
### 6. `get_trigger_system_stats`
Get statistics about the trigger system.
**Returns:**
```json
{
"queue_depth": 3,
"queue_running": true,
"coordinator_stats": {
"current_seq": 1042,
"next_commit_seq": 1043,
"pending_commits": 1,
"total_executions": 1042,
"state_counts": {
"COMMITTED": 1038,
"EXECUTING": 2,
"WAITING_COMMIT": 1,
"FAILED": 1
}
}
}
```
**Example:**
```python
stats = get_trigger_system_stats()
print(f"Queue has {stats['queue_depth']} pending triggers")
print(f"System has processed {stats['coordinator_stats']['total_executions']} total triggers")
```
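Derived health numbers fall out of the payload directly. Using the example response shown above (copied into a literal dict here):

```python
# Derive simple health metrics from a get_trigger_system_stats() payload.
# The dict below copies the example response documented above.
stats = {
    "queue_depth": 3,
    "queue_running": True,
    "coordinator_stats": {
        "current_seq": 1042,
        "next_commit_seq": 1043,
        "pending_commits": 1,
        "total_executions": 1042,
        "state_counts": {
            "COMMITTED": 1038, "EXECUTING": 2,
            "WAITING_COMMIT": 1, "FAILED": 1,
        },
    },
}

counts = stats["coordinator_stats"]["state_counts"]
total = stats["coordinator_stats"]["total_executions"]
failure_rate = counts.get("FAILED", 0) / total
in_flight = counts.get("EXECUTING", 0) + counts.get("WAITING_COMMIT", 0)

print(f"failure rate: {failure_rate:.2%}")  # failure rate: 0.10%
print(f"in flight: {in_flight}")            # in flight: 3
```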
## Integration Example
Here's how these tools enable autonomous agent behavior:
```python
# Agent conversation:
# User: "Monitor BTC price and send me a summary every hour during market hours"
# Agent: I'll set that up for you using the trigger system.
# Agent uses tool:
schedule_agent_prompt(
prompt="""
Check the current BTC/USDT price on Binance.
Calculate the price change from 1 hour ago.
If price moved > 2%, provide a detailed analysis.
Otherwise, provide a brief status update.
Send results to user as a notification.
""",
schedule_type="cron",
schedule_config={
"minute": "0",
"hour": "9-17", # 9 AM to 5 PM
"day_of_week": "mon-fri"
},
name="btc_hourly_monitor"
)
# Agent: Done! I've scheduled an hourly BTC price monitor that runs during
# market hours (9 AM - 5 PM on weekdays). You'll receive updates every hour.
# Later...
# User: "Can you show me all my scheduled tasks?"
# Agent: Let me check what's scheduled.
# Agent uses tool:
jobs = list_scheduled_triggers()
# Agent: You have 3 scheduled tasks:
# 1. "btc_hourly_monitor" - runs every hour during market hours
# 2. "daily_market_summary" - runs daily at 9 AM
# 3. "portfolio_rebalance_check" - runs every 4 hours
# Would you like to modify or cancel any of these?
```
## Use Case: Autonomous Trading Bot
```python
# Step 1: Set up data monitoring
execute_agent_prompt_once(
prompt="""
Subscribe to BTC/USDT 1m bars from Binance.
When subscribed, set up the following:
1. Calculate RSI(14) on each new bar
2. If RSI > 70, execute prompt: "RSI overbought on BTC, check if we should sell"
3. If RSI < 30, execute prompt: "RSI oversold on BTC, check if we should buy"
""",
priority="high"
)
# Step 2: Schedule periodic portfolio review
schedule_agent_prompt(
prompt="""
Review current portfolio:
1. Calculate current allocation percentages
2. Compare to target allocation (60% BTC, 30% ETH, 10% stable)
3. If deviation > 5%, generate rebalancing trades
4. Submit trades for execution
""",
schedule_type="interval",
schedule_config={"hours": 4},
name="portfolio_rebalance"
)
# Step 3: Schedule daily risk check
schedule_agent_prompt(
prompt="""
Daily risk assessment:
1. Calculate portfolio VaR (Value at Risk)
2. Check current leverage across all positions
3. Review stop-loss placements
4. If risk exceeds threshold, alert and suggest adjustments
""",
schedule_type="cron",
schedule_config={"hour": "8", "minute": "0"},
name="daily_risk_check"
)
```
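Step 1 leans on RSI(14). For reference, Wilder's RSI can be sketched in plain pandas; this is a sketch only, since the running system computes it via `talib.RSI` or the indicator registry:

```python
import pandas as pd

def rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """Wilder's RSI: smoothed average gain vs. smoothed average loss."""
    delta = close.diff()
    gain = delta.clip(lower=0)
    loss = -delta.clip(upper=0)
    # Wilder's smoothing is an EMA with alpha = 1/period
    avg_gain = gain.ewm(alpha=1 / period, min_periods=period).mean()
    avg_loss = loss.ewm(alpha=1 / period, min_periods=period).mean()
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)

# Synthetic closes: an uptrend with periodic pullbacks
closes = pd.Series(
    [100 + i + (3 if i % 5 == 0 else 0) for i in range(40)], dtype=float
)
print(round(rsi(closes).iloc[-1], 1))
```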
## Benefits
- **Autonomous operation** - Agent can schedule its own tasks
- **Event-driven** - React to market data, time, or custom events
- **Flexible scheduling** - Interval or cron-based
- **Self-managing** - Agent can list and cancel its own jobs
- **Priority control** - High-priority tasks jump the queue
- **Future-proof** - Easy to add Python lambdas, strategy execution, etc.
## Future Enhancements
- **Python script execution** - Schedule arbitrary Python code
- **Strategy triggers** - Connect to strategy execution system
- **Event composition** - AND/OR logic for complex event patterns
- **Conditional execution** - Only run if conditions met (e.g., volatility > threshold)
- **Result chaining** - Use output of one trigger as input to another
- **Backtesting mode** - Test trigger logic on historical data
## Setup in main.py
```python
from agent.tools import set_trigger_queue, set_trigger_scheduler, set_coordinator
from trigger import TriggerQueue, CommitCoordinator
from trigger.scheduler import TriggerScheduler
# Initialize trigger system
coordinator = CommitCoordinator()
queue = TriggerQueue(coordinator)
scheduler = TriggerScheduler(queue)
await queue.start()
scheduler.start()
# Make available to agent tools
set_trigger_queue(queue)
set_trigger_scheduler(scheduler)
set_coordinator(coordinator)
# Add TRIGGER_TOOLS to agent's tool list
from agent.tools import TRIGGER_TOOLS
agent_tools = [..., *TRIGGER_TOOLS]
```
Now the agent has full control over the trigger system! 🚀


@@ -0,0 +1,64 @@
"""Agent tools for trading operations.
This package provides tools for:
- Synchronization stores (sync_tools)
- Data sources and market data (datasource_tools)
- Chart data access and analysis (chart_tools)
- Technical indicators (indicator_tools)
- Shape/drawing management (shape_tools)
- Trigger system and automation (trigger_tools)
"""
# Global registries that will be set by main.py
_registry = None
_datasource_registry = None
_indicator_registry = None
def set_registry(registry):
"""Set the global SyncRegistry instance for tools to use."""
global _registry
_registry = registry
def set_datasource_registry(datasource_registry):
"""Set the global DataSourceRegistry instance for tools to use."""
global _datasource_registry
_datasource_registry = datasource_registry
def set_indicator_registry(indicator_registry):
"""Set the global IndicatorRegistry instance for tools to use."""
global _indicator_registry
_indicator_registry = indicator_registry
# Import all tools from submodules
from .sync_tools import SYNC_TOOLS
from .datasource_tools import DATASOURCE_TOOLS
from .chart_tools import CHART_TOOLS
from .indicator_tools import INDICATOR_TOOLS
from .research_tools import RESEARCH_TOOLS
from .shape_tools import SHAPE_TOOLS
from .trigger_tools import (
TRIGGER_TOOLS,
set_trigger_queue,
set_trigger_scheduler,
set_coordinator,
)
__all__ = [
"set_registry",
"set_datasource_registry",
"set_indicator_registry",
"set_trigger_queue",
"set_trigger_scheduler",
"set_coordinator",
"SYNC_TOOLS",
"DATASOURCE_TOOLS",
"CHART_TOOLS",
"INDICATOR_TOOLS",
"RESEARCH_TOOLS",
"SHAPE_TOOLS",
"TRIGGER_TOOLS",
]


@@ -0,0 +1,454 @@
"""Chart data access and analysis tools."""
from typing import Dict, Any, Optional, Tuple
import io
import uuid
import logging
from pathlib import Path
from contextlib import redirect_stdout, redirect_stderr
from langchain_core.tools import tool
logger = logging.getLogger(__name__)
def _get_registry():
"""Get the global registry instance."""
from . import _registry
return _registry
def _get_datasource_registry():
"""Get the global datasource registry instance."""
from . import _datasource_registry
return _datasource_registry
def _get_indicator_registry():
"""Get the global indicator registry instance."""
from . import _indicator_registry
return _indicator_registry
def _get_order_store():
"""Get the global OrderStore instance."""
registry = _get_registry()
if registry and "OrderStore" in registry.entries:
return registry.entries["OrderStore"].model
return None
def _get_chart_store():
"""Get the global ChartStore instance."""
registry = _get_registry()
if registry and "ChartStore" in registry.entries:
return registry.entries["ChartStore"].model
return None
async def _get_chart_data_impl(countback: Optional[int] = None):
"""Internal implementation for getting chart data.
This is a helper function that can be called by both get_chart_data tool
and analyze_chart_data tool.
Returns:
Tuple of (HistoryResult, chart_context dict, source_name)
"""
registry = _get_registry()
datasource_registry = _get_datasource_registry()
if not registry:
raise ValueError("SyncRegistry not initialized - cannot read ChartStore")
if not datasource_registry:
raise ValueError("DataSourceRegistry not initialized - cannot query data")
# Read current chart state
chart_store = registry.entries.get("ChartStore")
if not chart_store:
raise ValueError("ChartStore not found in registry")
chart_state = chart_store.model.model_dump(mode="json")
chart_data = chart_state.get("chart_state", {})
symbol = chart_data.get("symbol", "")
interval = chart_data.get("interval", "15")
start_time = chart_data.get("start_time")
end_time = chart_data.get("end_time")
if not symbol:
raise ValueError(
"No chart visible - ChartStore symbol is None. "
"The user is likely on a narrow screen (mobile) where charts are hidden. "
"Let them know they can view charts on a wider screen, or use get_historical_data() "
"if they specify a symbol and timeframe."
)
# Parse the symbol to extract exchange/source and symbol name
# Format is "EXCHANGE:SYMBOL" (e.g., "BINANCE:BTC/USDT", "DEMO:BTC/USD")
if ":" not in symbol:
raise ValueError(
f"Invalid symbol format: '{symbol}'. Expected format is 'EXCHANGE:SYMBOL' "
f"(e.g., 'BINANCE:BTC/USDT' or 'DEMO:BTC/USD')"
)
exchange_prefix, symbol_name = symbol.split(":", 1)
source_name = exchange_prefix.lower()
# Get the data source
source = datasource_registry.get(source_name)
if not source:
available = datasource_registry.list_sources()
raise ValueError(
f"Data source '{source_name}' not found. Available sources: {available}. "
f"Make sure the exchange in the symbol '{symbol}' matches an available source."
)
# Determine time range - REQUIRE it to be set, no defaults
if start_time is None or end_time is None:
raise ValueError(
f"Chart time range not set in ChartStore. start_time={start_time}, end_time={end_time}. "
f"The user needs to load the chart first, or the frontend may not be sending the visible range. "
f"Wait for the chart to fully load before analyzing data."
)
from_time = int(start_time)
end_time = int(end_time)
logger.info(
f"Using ChartStore time range: from_time={from_time}, end_time={end_time}, "
f"countback={countback}"
)
logger.info(
f"Querying data source '{source_name}' for symbol '{symbol_name}', "
f"resolution '{interval}'"
)
# Query the data source
result = await source.get_bars(
symbol=symbol_name,
resolution=interval,
from_time=from_time,
to_time=end_time,
countback=countback
)
logger.info(
f"Received {len(result.bars)} bars from data source. "
f"First bar time: {result.bars[0].time if result.bars else 'N/A'}, "
f"Last bar time: {result.bars[-1].time if result.bars else 'N/A'}"
)
# Build chart context to return along with result
chart_context = {
"symbol": symbol,
"interval": interval,
"start_time": start_time,
"end_time": end_time
}
return result, chart_context, source_name
@tool
async def get_chart_data(countback: Optional[int] = None) -> Dict[str, Any]:
"""Get the candle/bar data for what the user is currently viewing on their chart.
This is a convenience tool that automatically:
1. Reads the ChartStore to see what chart the user is viewing
2. Parses the symbol to determine the data source (exchange prefix)
3. Queries the appropriate data source for that symbol's data
4. Returns the data for the visible time range and interval
This is the preferred way to access chart data when helping the user analyze
what they're looking at, since it automatically uses their current chart context.
**IMPORTANT**: This tool will fail if ChartStore.symbol is None (no chart visible).
This happens when the user is on a narrow screen (mobile) where charts are hidden.
In that case, let the user know charts are only visible on wider screens, or use
get_historical_data() if they specify a symbol and timeframe.
Args:
countback: Optional limit on number of bars to return. If not specified,
returns all bars in the visible time range.
Returns:
Dictionary containing:
- chart_context: Current chart state (symbol, interval, time range)
- symbol: The trading pair being viewed
- resolution: The chart interval
- bars: List of bar data with 'time' and 'data' fields
- columns: Schema describing available data columns
- source: Which data source was used
Raises:
ValueError: If ChartStore or DataSourceRegistry is not initialized,
if no chart is visible (symbol is None), or if the symbol format is invalid
Example:
# User is viewing BINANCE:BTC/USDT on 15min chart
data = get_chart_data()
# Returns BTC/USDT data from binance source at 15min resolution
# for the currently visible time range
"""
result, chart_context, source_name = await _get_chart_data_impl(countback)
# Return enriched result with chart context
response = result.model_dump()
response["chart_context"] = chart_context
response["source"] = source_name
return response
@tool
async def execute_python(code: str, countback: Optional[int] = None) -> Dict[str, Any]:
"""Execute Python code for technical analysis with automatic chart data loading.
**PRIMARY TOOL for all technical analysis, indicator computation, and chart generation.**
This is your go-to tool whenever the user asks about indicators, wants to see
a chart, or needs any computational analysis of market data.
Pre-loaded Environment:
- `pd` : pandas
- `np` : numpy
- `plt` : matplotlib.pyplot (figures auto-saved to plot_urls)
- `talib` : TA-Lib technical analysis library
- `indicator_registry`: 150+ registered indicators
- `plot_ohlc(df)` : Helper function for beautiful candlestick charts
- `registry` : SyncRegistry instance - access to all registered stores
- `datasource_registry`: DataSourceRegistry - access to data sources (binance, etc.)
- `order_store` : OrderStore instance - current orders list
- `chart_store` : ChartStore instance - current chart state
Auto-loaded when user has a chart visible (ChartStore.symbol is not None):
- `df` : pandas DataFrame with DatetimeIndex and columns:
open, high, low, close, volume (OHLCV data ready to use)
- `chart_context` : dict with symbol, interval, start_time, end_time
When NO chart is visible (narrow screen/mobile):
- `df` : None
- `chart_context` : None
If `df` is None, you can still load alternative data by:
- Using chart_store to see what symbol/timeframe is configured
- Using datasource_registry.get_source('binance') to access data sources
- Calling datasource.get_history(symbol, interval, start, end) to load any data
- This allows you to make plots of ANY chart even when not connected to chart view
The `plot_ohlc()` Helper:
Create professional candlestick charts instantly:
- `plot_ohlc(df)` - basic OHLC chart with volume
- `plot_ohlc(df, title='BTC 15min')` - with custom title
- `plot_ohlc(df, volume=False)` - price only, no volume
- Returns a matplotlib Figure that's automatically saved to plot_urls
Args:
code: Python code to execute
countback: Optional limit on number of bars to load (default: all visible bars)
Returns:
Dictionary with:
- script_output : printed output + last expression result
- result_dataframe : serialized DataFrame if last expression is a DataFrame
- plot_urls : list of image URLs (e.g., ["/uploads/plot_abc123.png"])
- chart_context : {symbol, interval, start_time, end_time} or None
- error : traceback if execution failed
Examples:
# RSI indicator with chart
execute_python(\"\"\"
df['RSI'] = talib.RSI(df['close'], 14)
fig = plot_ohlc(df, title='BTC/USDT with RSI')
print(f"Current RSI: {df['RSI'].iloc[-1]:.2f}")
df[['close', 'RSI']].tail(5)
\"\"\")
# Multiple indicators
execute_python(\"\"\"
df['SMA_20'] = df['close'].rolling(20).mean()
df['SMA_50'] = df['close'].rolling(50).mean()
df['BB_upper'] = df['close'].rolling(20).mean() + 2*df['close'].rolling(20).std()
df['BB_lower'] = df['close'].rolling(20).mean() - 2*df['close'].rolling(20).std()
fig = plot_ohlc(df, title=f"{chart_context['symbol']} - Bollinger Bands")
current_price = df['close'].iloc[-1]
sma20 = df['SMA_20'].iloc[-1]
print(f"Price: {current_price:.2f}, SMA20: {sma20:.2f}")
df[['close', 'SMA_20', 'BB_upper', 'BB_lower']].tail(10)
\"\"\")
# Pattern detection
execute_python(\"\"\"
# Find swing highs
df['swing_high'] = (df['high'] > df['high'].shift(1)) & (df['high'] > df['high'].shift(-1))
swing_highs = df[df['swing_high']][['high']].tail(5)
fig = plot_ohlc(df, title='Swing High Detection')
print("Recent swing highs:")
print(swing_highs)
\"\"\")
# Load alternative data when df is None or for different symbol/timeframe
execute_python(\"\"\"
from datetime import datetime, timedelta
# Get data source
binance = datasource_registry.get_source('binance')
# Load ETH data even if viewing BTC chart
end_time = datetime.now()
start_time = end_time - timedelta(days=7)
result = await binance.get_history(
symbol='ETH/USDT',
interval='1h',
start=int(start_time.timestamp()),
end=int(end_time.timestamp())
)
# Convert to DataFrame
rows = [{'time': pd.to_datetime(bar.time, unit='s'), **bar.data} for bar in result.bars]
eth_df = pd.DataFrame(rows).set_index('time')
# Calculate RSI and plot
eth_df['RSI'] = talib.RSI(eth_df['close'], 14)
fig = plot_ohlc(eth_df, title='ETH/USDT 1h - RSI Analysis')
print(f"ETH RSI: {eth_df['RSI'].iloc[-1]:.2f}")
\"\"\")
# Access chart store to see current state
execute_python(\"\"\"
print(f"Current symbol: {chart_store.chart_state.symbol}")
print(f"Current interval: {chart_store.chart_state.interval}")
print(f"Orders: {len(order_store.orders)}")
\"\"\")
"""
import pandas as pd
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
try:
import talib
except ImportError:
talib = None
logger.warning("TA-Lib not available in execute_python environment")
# --- Attempt to load chart data ---
df = None
chart_context = None
registry = _get_registry()
datasource_registry = _get_datasource_registry()
if registry and datasource_registry:
try:
result, chart_context, source_name = await _get_chart_data_impl(countback)
bars = result.bars
if bars:
rows = []
for bar in bars:
rows.append({'time': pd.to_datetime(bar.time, unit='s'), **bar.data})
df = pd.DataFrame(rows).set_index('time')
for col in ['open', 'high', 'low', 'close', 'volume']:
if col in df.columns:
df[col] = pd.to_numeric(df[col], errors='coerce')
logger.info(f"execute_python: loaded {len(df)} bars for {chart_context['symbol']}")
except Exception as e:
logger.info(f"execute_python: no chart data loaded ({e})")
# --- Import chart utilities ---
from .chart_utils import plot_ohlc
# --- Get indicator registry ---
indicator_registry = _get_indicator_registry()
# --- Get DataStores ---
order_store = _get_order_store()
chart_store = _get_chart_store()
# --- Build globals ---
script_globals: Dict[str, Any] = {
'pd': pd,
'np': np,
'plt': plt,
'talib': talib,
'indicator_registry': indicator_registry,
'registry': registry,
'datasource_registry': datasource_registry,
'order_store': order_store,
'chart_store': chart_store,
'df': df,
'chart_context': chart_context,
'plot_ohlc': plot_ohlc,
}
# --- Execute ---
uploads_dir = Path(__file__).parent.parent.parent.parent / "data" / "uploads"
uploads_dir.mkdir(parents=True, exist_ok=True)
stdout_capture = io.StringIO()
result_df = None
error_msg = None
plot_urls = []
try:
import ast
with redirect_stdout(stdout_capture), redirect_stderr(stdout_capture):
# Compile with top-level-await support so the docstring examples that
# use `await` (e.g. datasource calls) can run; plain exec() would
# raise a SyntaxError on top-level await.
code_obj = compile(code, '<execute_python>', 'exec', flags=ast.PyCF_ALLOW_TOP_LEVEL_AWAIT)
coro = eval(code_obj, script_globals)
if coro is not None:
await coro
# Capture last expression
lines = code.strip().splitlines()
if lines:
last = lines[-1].strip()
# Compare whole first words, so names like `format_df(...)` are not
# mistaken for the `for` keyword and skipped
if last and not last.startswith('#') and last.split(None, 1)[0].rstrip(':') not in (
'if', 'for', 'while', 'def', 'class', 'import',
'from', 'with', 'try', 'return'
):
try:
last_val = eval(last, script_globals)
if isinstance(last_val, pd.DataFrame):
result_df = last_val
elif last_val is not None:
stdout_capture.write(str(last_val))
except Exception:
pass
# Save plots
for fig_num in plt.get_fignums():
fig = plt.figure(fig_num)
filename = f"plot_{uuid.uuid4()}.png"
fig.savefig(uploads_dir / filename, format='png', bbox_inches='tight', dpi=100)
plot_urls.append(f"/uploads/{filename}")
plt.close(fig)
except Exception as e:
import traceback
error_msg = f"{type(e).__name__}: {e}\n{traceback.format_exc()}"
# --- Build response ---
response: Dict[str, Any] = {
'script_output': stdout_capture.getvalue(),
'chart_context': chart_context,
'plot_urls': plot_urls,
}
if result_df is not None:
response['result_dataframe'] = {
'columns': result_df.columns.tolist(),
'index': result_df.index.astype(str).tolist(),
'data': result_df.values.tolist(),
'shape': result_df.shape,
}
if error_msg:
response['error'] = error_msg
return response
CHART_TOOLS = [
get_chart_data,
execute_python
]


@@ -0,0 +1,224 @@
"""Chart plotting utilities for creating standard, beautiful OHLC charts."""
import pandas as pd
import matplotlib.pyplot as plt
from typing import Optional, Tuple
import logging
logger = logging.getLogger(__name__)
def plot_ohlc(
df: pd.DataFrame,
title: Optional[str] = None,
volume: bool = True,
figsize: Tuple[int, int] = (14, 8),
style: str = 'seaborn-v0_8-darkgrid',
**kwargs
) -> plt.Figure:
"""Create a beautiful standard OHLC candlestick chart.
This is a convenience function that generates a professional-looking candlestick
chart with consistent styling across all generated charts. It uses mplfinance
with seaborn aesthetics for a polished appearance.
Args:
df: pandas DataFrame with DatetimeIndex and columns: open, high, low, close, volume
title: Optional chart title. If None, uses symbol from chart context
volume: Whether to include volume subplot (default: True)
figsize: Figure size as (width, height) in inches (default: (14, 8))
style: Base matplotlib style to use (default: 'seaborn-v0_8-darkgrid')
**kwargs: Additional arguments to pass to mplfinance.plot()
Returns:
matplotlib.figure.Figure: The created figure object
Example:
```python
# Basic usage in analyze_chart_data script
fig = plot_ohlc(df, title='BTC/USDT 15min')
# Customize with additional indicators
fig = plot_ohlc(df, volume=True, title='Price Action')
# Add custom overlays after calling plot_ohlc
df['SMA20'] = df['close'].rolling(20).mean()
fig = plot_ohlc(df, title='With SMA')
# Note: For mplfinance overlays, use the mav or addplot parameters
```
Note:
The DataFrame must have a DatetimeIndex and the standard OHLCV columns.
Column names should be lowercase: open, high, low, close, volume
"""
try:
import mplfinance as mpf
except ImportError:
raise ImportError(
"mplfinance is required for plot_ohlc(). "
"Install it with: pip install mplfinance"
)
# Validate DataFrame structure
required_cols = ['open', 'high', 'low', 'close']
missing_cols = [col for col in required_cols if col not in df.columns]
if missing_cols:
raise ValueError(
f"DataFrame missing required columns: {missing_cols}. "
f"Required: {required_cols}"
)
if not isinstance(df.index, pd.DatetimeIndex):
raise ValueError(
"DataFrame must have a DatetimeIndex. "
"Convert with: df.index = pd.to_datetime(df.index)"
)
# Ensure volume column exists for volume plot
if volume and 'volume' not in df.columns:
logger.warning("volume=True but 'volume' column not found in DataFrame. Disabling volume.")
volume = False
# Create custom style with seaborn aesthetics
# Using a professional color scheme: green for up candles, red for down candles
mc = mpf.make_marketcolors(
up='#26a69a', # Teal green (calmer than bright green)
down='#ef5350', # Coral red (softer than pure red)
edge='inherit', # Match candle color for edges
wick='inherit', # Match candle color for wicks
volume='in', # Volume bars colored by price direction
alpha=0.9 # Slight transparency for elegance
)
s = mpf.make_mpf_style(
base_mpf_style='charles', # Clean base style
marketcolors=mc,
rc={
'font.size': 10,
'axes.labelsize': 11,
'axes.titlesize': 12,
'xtick.labelsize': 9,
'ytick.labelsize': 9,
'legend.fontsize': 10,
'figure.facecolor': '#f0f0f0',
'axes.facecolor': '#ffffff',
'axes.grid': True,
'grid.alpha': 0.3,
'grid.linestyle': '--',
}
)
# Prepare plot parameters
plot_params = {
'type': 'candle',
'style': s,
'volume': volume,
'figsize': figsize,
'tight_layout': True,
'returnfig': True,
'warn_too_much_data': 1000, # Warn if > 1000 candles for performance
}
# Add title if provided
if title:
plot_params['title'] = title
# Merge any additional kwargs
plot_params.update(kwargs)
# Create the plot
logger.info(
f"Creating OHLC chart with {len(df)} candles, "
f"date range: {df.index.min()} to {df.index.max()}, "
f"volume: {volume}"
)
fig, axes = mpf.plot(df, **plot_params)
return fig
def add_indicators_to_plot(
df: pd.DataFrame,
indicators: dict,
**plot_kwargs
) -> plt.Figure:
"""Create an OHLC chart with technical indicators overlaid.
This extends plot_ohlc() to include common technical indicators using
mplfinance's addplot functionality for proper overlay on candlestick charts.
Args:
df: pandas DataFrame with OHLCV data and indicator columns
indicators: Dictionary mapping indicator names to parameters
Example: {
'SMA_20': {'color': 'blue', 'width': 1.5},
'EMA_50': {'color': 'orange', 'width': 1.5}
}
**plot_kwargs: Additional arguments for plot_ohlc()
Returns:
matplotlib.figure.Figure: The created figure object
Example:
```python
# Calculate indicators
df['SMA_20'] = df['close'].rolling(20).mean()
df['SMA_50'] = df['close'].rolling(50).mean()
# Plot with indicators
fig = add_indicators_to_plot(
df,
indicators={
'SMA_20': {'color': 'blue', 'width': 1.5, 'label': '20 SMA'},
'SMA_50': {'color': 'red', 'width': 1.5, 'label': '50 SMA'}
},
title='BTC/USDT with Moving Averages'
)
```
"""
try:
import mplfinance as mpf
except ImportError:
raise ImportError(
"mplfinance is required. Install it with: pip install mplfinance"
)
# Build addplot list for indicators
addplots = []
for indicator_col, params in indicators.items():
if indicator_col not in df.columns:
logger.warning(f"Indicator column '{indicator_col}' not found in DataFrame. Skipping.")
continue
color = params.get('color', 'blue')
width = params.get('width', 1.0)
panel = params.get('panel', 0) # 0 = main panel with candles
ylabel = params.get('ylabel', '')
addplots.append(
mpf.make_addplot(
df[indicator_col],
color=color,
width=width,
panel=panel,
ylabel=ylabel
)
)
# Pass addplot to plot_ohlc via kwargs
if addplots:
plot_kwargs['addplot'] = addplots
return plot_ohlc(df, **plot_kwargs)
# Convenience presets for common chart types
def plot_price_volume(df: pd.DataFrame, title: Optional[str] = None) -> plt.Figure:
"""Create a standard price + volume chart."""
return plot_ohlc(df, title=title, volume=True, figsize=(14, 8))
def plot_price_only(df: pd.DataFrame, title: Optional[str] = None) -> plt.Figure:
"""Create a price-only candlestick chart without volume."""
return plot_ohlc(df, title=title, volume=False, figsize=(14, 6))
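The column and index validation performed by `plot_ohlc()` can be exercised without mplfinance installed. A minimal sketch, where `check_ohlc_frame` is a hypothetical stand-in for the checks inside `plot_ohlc`, not a function exported by this module:

```python
import pandas as pd

def check_ohlc_frame(df: pd.DataFrame) -> None:
    """Mirror plot_ohlc()'s input validation: required columns + DatetimeIndex."""
    required = ['open', 'high', 'low', 'close']
    missing = [c for c in required if c not in df.columns]
    if missing:
        raise ValueError(f"DataFrame missing required columns: {missing}")
    if not isinstance(df.index, pd.DatetimeIndex):
        raise ValueError("Convert with: df.index = pd.to_datetime(df.index)")

ok = pd.DataFrame(
    {'open': [1.0], 'high': [2.0], 'low': [0.5], 'close': [1.5]},
    index=pd.to_datetime(['2024-01-01']),
)
check_ohlc_frame(ok)  # passes silently; a frame missing 'close' would raise
```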


@@ -0,0 +1,154 @@
"""
Example usage of chart_utils.py plotting functions.
This demonstrates how the LLM can use the plot_ohlc() convenience function
in analyze_chart_data scripts to create beautiful, standard OHLC charts.
"""
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
def create_sample_data(days=30):
"""Create sample OHLCV data for testing."""
    dates = pd.date_range(end=datetime.now(), periods=days * 24, freq='1h')
# Simulate price movement
np.random.seed(42)
close = 50000 + np.cumsum(np.random.randn(len(dates)) * 100)
data = {
'open': close + np.random.randn(len(dates)) * 50,
'high': close + np.abs(np.random.randn(len(dates))) * 100,
'low': close - np.abs(np.random.randn(len(dates))) * 100,
'close': close,
'volume': np.abs(np.random.randn(len(dates))) * 1000000
}
df = pd.DataFrame(data, index=dates)
# Ensure high is highest and low is lowest
df['high'] = df[['open', 'high', 'low', 'close']].max(axis=1)
df['low'] = df[['open', 'high', 'low', 'close']].min(axis=1)
return df
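The row-wise max/min clamp above guarantees the OHLC invariant: after the two assignments, `high` is the row maximum and `low` the row minimum of the four price columns. A tiny standalone check of that behavior:

```python
import pandas as pd

# One deliberately inconsistent bar: high (9.0) below close (10.5).
df = pd.DataFrame({'open': [10.0], 'high': [9.0], 'low': [8.0], 'close': [10.5]})

# Same clamp as create_sample_data(): high is reassigned first, then low.
df['high'] = df[['open', 'high', 'low', 'close']].max(axis=1)
df['low'] = df[['open', 'high', 'low', 'close']].min(axis=1)
# high -> 10.5, low -> 8.0
```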
if __name__ == "__main__":
from chart_utils import plot_ohlc, add_indicators_to_plot, plot_price_volume
# Create sample data
df = create_sample_data(days=30)
print("=" * 60)
print("Example 1: Basic OHLC chart with volume")
print("=" * 60)
print("\nScript the LLM would generate:")
print("""
fig = plot_ohlc(df, title='BTC/USDT 1H', volume=True)
df.tail(5)
""")
# Execute it
fig = plot_ohlc(df, title='BTC/USDT 1H', volume=True)
print("\n✓ Chart created successfully!")
print(f" Figure size: {fig.get_size_inches()}")
print(f" Number of axes: {len(fig.axes)}")
print("\n" + "=" * 60)
print("Example 2: OHLC chart with indicators")
print("=" * 60)
print("\nScript the LLM would generate:")
print("""
# Calculate indicators
df['SMA_20'] = df['close'].rolling(20).mean()
df['SMA_50'] = df['close'].rolling(50).mean()
df['EMA_12'] = df['close'].ewm(span=12, adjust=False).mean()
# Plot with indicators
fig = add_indicators_to_plot(
df,
indicators={
'SMA_20': {'color': 'blue', 'width': 1.5},
'SMA_50': {'color': 'red', 'width': 1.5},
'EMA_12': {'color': 'green', 'width': 1.0}
},
title='BTC/USDT with Moving Averages',
volume=True
)
df[['close', 'SMA_20', 'SMA_50', 'EMA_12']].tail(5)
""")
# Execute it
df['SMA_20'] = df['close'].rolling(20).mean()
df['SMA_50'] = df['close'].rolling(50).mean()
df['EMA_12'] = df['close'].ewm(span=12, adjust=False).mean()
fig = add_indicators_to_plot(
df,
indicators={
'SMA_20': {'color': 'blue', 'width': 1.5},
'SMA_50': {'color': 'red', 'width': 1.5},
'EMA_12': {'color': 'green', 'width': 1.0}
},
title='BTC/USDT with Moving Averages',
volume=True
)
print("\n✓ Chart with indicators created successfully!")
print(f" Last close: ${df['close'].iloc[-1]:,.2f}")
print(f" SMA 20: ${df['SMA_20'].iloc[-1]:,.2f}")
print(f" SMA 50: ${df['SMA_50'].iloc[-1]:,.2f}")
print("\n" + "=" * 60)
print("Example 3: Price-only chart (no volume)")
print("=" * 60)
print("\nScript the LLM would generate:")
print("""
from chart_utils import plot_price_only
fig = plot_price_only(df, title='Clean Price Action')
""")
# Execute it
from chart_utils import plot_price_only
fig = plot_price_only(df, title='Clean Price Action')
print("\n✓ Price-only chart created successfully!")
print("\n" + "=" * 60)
print("Summary")
print("=" * 60)
print("""
The chart_utils module provides:
1. plot_ohlc() - Main function for beautiful candlestick charts
- Professional seaborn-inspired styling
- Consistent color scheme (teal up, coral down)
- Optional volume subplot
- Customizable figure size
2. add_indicators_to_plot() - OHLC charts with technical indicators
- Overlay multiple indicators
- Customizable colors and line widths
- Proper integration with mplfinance
3. Preset functions for common chart types:
- plot_price_volume() - Standard price + volume
- plot_price_only() - Candlesticks without volume
Benefits:
✓ Consistent look and feel across all charts
✓ Less code for the LLM to generate
✓ Professional appearance out of the box
✓ Easy to customize when needed
✓ Works seamlessly with analyze_chart_data tool
The LLM can now simply call plot_ohlc(df) instead of writing
custom matplotlib code for every chart request!
""")


@@ -0,0 +1,158 @@
"""Data source and market data tools."""
from typing import Dict, Any, List, Optional
from langchain_core.tools import tool
def _get_datasource_registry():
"""Get the global datasource registry instance."""
from . import _datasource_registry
return _datasource_registry
@tool
def list_data_sources() -> List[str]:
"""List all available data sources.
Returns:
List of data source names that can be queried for market data
"""
registry = _get_datasource_registry()
if not registry:
return []
return registry.list_sources()
@tool
async def search_symbols(
query: str,
type: Optional[str] = None,
exchange: Optional[str] = None,
limit: int = 30,
) -> Dict[str, Any]:
"""Search for trading symbols across all data sources.
Automatically searches all available data sources and returns aggregated results.
Use this to find symbols before calling get_symbol_info or get_historical_data.
Args:
query: Search query (e.g., "BTC", "AAPL", "EUR")
type: Optional filter by instrument type (e.g., "crypto", "stock", "forex")
exchange: Optional filter by exchange (e.g., "binance", "nasdaq")
limit: Maximum number of results per source (default: 30)
Returns:
Dictionary mapping source names to lists of matching symbols.
Each symbol includes: symbol, full_name, description, exchange, type.
Use the source name and symbol from results with get_symbol_info or get_historical_data.
Example response:
{
"demo": [
{
"symbol": "BTC/USDT",
"full_name": "Bitcoin / Tether USD",
"description": "Bitcoin perpetual futures",
"exchange": "demo",
"type": "crypto"
}
]
}
"""
registry = _get_datasource_registry()
if not registry:
raise ValueError("DataSourceRegistry not initialized")
# Always search all sources
results = await registry.search_all(query, type, exchange, limit)
return {name: [r.model_dump() for r in matches] for name, matches in results.items()}
@tool
async def get_symbol_info(source_name: str, symbol: str) -> Dict[str, Any]:
"""Get complete metadata for a trading symbol.
This retrieves full information about a symbol including:
- Description and type
- Supported time resolutions
- Available data columns (OHLCV, volume, funding rates, etc.)
- Trading session information
- Price scale and precision
Args:
source_name: Name of the data source (use list_data_sources to see available)
symbol: Symbol identifier (e.g., "BTC/USDT", "AAPL", "EUR/USD")
Returns:
Dictionary containing complete symbol metadata including column schema
Raises:
ValueError: If source_name or symbol is not found
"""
registry = _get_datasource_registry()
if not registry:
raise ValueError("DataSourceRegistry not initialized")
symbol_info = await registry.resolve_symbol(source_name, symbol)
return symbol_info.model_dump()
@tool
async def get_historical_data(
source_name: str,
symbol: str,
resolution: str,
from_time: int,
to_time: int,
countback: Optional[int] = None,
) -> Dict[str, Any]:
"""Get historical bar/candle data for a symbol.
Retrieves time-series data between the specified timestamps. The data
includes all columns defined for the symbol (OHLCV + any custom columns).
Args:
source_name: Name of the data source
symbol: Symbol identifier
resolution: Time resolution (e.g., "1" = 1min, "5" = 5min, "60" = 1hour, "1D" = 1day)
from_time: Start time as Unix timestamp in seconds
to_time: End time as Unix timestamp in seconds
countback: Optional limit on number of bars to return
Returns:
Dictionary containing:
- symbol: The requested symbol
- resolution: The time resolution
- bars: List of bar data with 'time' and 'data' fields
- columns: Schema describing available data columns
- nextTime: If present, indicates more data is available for pagination
Raises:
ValueError: If source, symbol, or resolution is invalid
Example:
# Get 1-hour BTC data for the last 24 hours
import time
to_time = int(time.time())
from_time = to_time - 86400 # 24 hours ago
data = get_historical_data("demo", "BTC/USDT", "60", from_time, to_time)
"""
registry = _get_datasource_registry()
if not registry:
raise ValueError("DataSourceRegistry not initialized")
source = registry.get(source_name)
if not source:
available = registry.list_sources()
raise ValueError(f"Data source '{source_name}' not found. Available sources: {available}")
result = await source.get_bars(symbol, resolution, from_time, to_time, countback)
return result.model_dump()
DATASOURCE_TOOLS = [
list_data_sources,
search_symbols,
get_symbol_info,
get_historical_data,
]
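The 24-hour window arithmetic from the `get_historical_data` docstring generalizes to a small stdlib helper. `last_n_hours_window` is illustrative only, not part of this module:

```python
import time
from typing import Optional, Tuple

def last_n_hours_window(hours: int, now: Optional[int] = None) -> Tuple[int, int]:
    """Return (from_time, to_time) Unix-second bounds covering the last `hours` hours."""
    to_time = int(time.time()) if now is None else now
    return to_time - hours * 3600, to_time

# e.g. get_historical_data("demo", "BTC/USDT", "60", *last_n_hours_window(24))
from_time, to_time = last_n_hours_window(24, now=1_700_000_000)
# from_time -> 1_699_913_600 (exactly 86_400 seconds earlier)
```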


@@ -0,0 +1,435 @@
"""Technical indicator tools.
These tools allow the agent to:
1. Discover available indicators (list, search, get info)
2. Add indicators to the chart
3. Update/remove indicators
4. Query currently applied indicators
"""
from typing import Dict, Any, List, Optional
from langchain_core.tools import tool
import logging
import time
logger = logging.getLogger(__name__)
def _get_indicator_registry():
"""Get the global indicator registry instance."""
from . import _indicator_registry
return _indicator_registry
def _get_registry():
"""Get the global sync registry instance."""
from . import _registry
return _registry
def _get_indicator_store():
"""Get the global IndicatorStore instance."""
registry = _get_registry()
if registry and "IndicatorStore" in registry.entries:
return registry.entries["IndicatorStore"].model
return None
@tool
def list_indicators() -> List[str]:
"""List all available technical indicators.
Returns:
List of indicator names that can be used in analysis and strategies
"""
registry = _get_indicator_registry()
if not registry:
return []
return registry.list_indicators()
@tool
def get_indicator_info(indicator_name: str) -> Dict[str, Any]:
"""Get detailed information about a specific indicator.
Retrieves metadata including description, parameters, category, use cases,
input/output schemas, and references.
Args:
indicator_name: Name of the indicator (e.g., "RSI", "SMA", "MACD")
Returns:
Dictionary containing:
- name: Indicator name
- display_name: Human-readable name
- description: What the indicator computes and why it's useful
- category: Category (momentum, trend, volatility, volume, etc.)
- parameters: List of configurable parameters with types and defaults
- use_cases: Common trading scenarios where this indicator helps
- tags: Searchable tags
- input_schema: Required input columns (e.g., OHLCV requirements)
- output_schema: Columns this indicator produces
Raises:
ValueError: If indicator_name is not found
"""
registry = _get_indicator_registry()
if not registry:
raise ValueError("IndicatorRegistry not initialized")
metadata = registry.get_metadata(indicator_name)
if not metadata:
total_count = len(registry.list_indicators())
raise ValueError(
f"Indicator '{indicator_name}' not found. "
f"Total available: {total_count} indicators. "
f"Use search_indicators() to find indicators by name, category, or tag."
)
input_schema = registry.get_input_schema(indicator_name)
output_schema = registry.get_output_schema(indicator_name)
result = metadata.model_dump()
result["input_schema"] = input_schema.model_dump() if input_schema else None
result["output_schema"] = output_schema.model_dump() if output_schema else None
return result
@tool
def search_indicators(
query: Optional[str] = None,
category: Optional[str] = None,
tag: Optional[str] = None
) -> List[Dict[str, Any]]:
"""Search for indicators by text query, category, or tag.
Returns lightweight summaries - use get_indicator_info() for full details on specific indicators.
Use this to discover relevant indicators for your trading strategy or analysis.
Can filter by category (momentum, trend, volatility, etc.) or search by keywords.
Args:
query: Optional text search across names, descriptions, and use cases
category: Optional category filter (momentum, trend, volatility, volume, pattern, etc.)
tag: Optional tag filter (e.g., "oscillator", "moving-average", "talib")
Returns:
List of lightweight indicator summaries. Each contains:
- name: Indicator name (use with get_indicator_info() for full details)
- display_name: Human-readable name
- description: Brief one-line description
- category: Category (momentum, trend, volatility, etc.)
Example:
# Find all momentum indicators
results = search_indicators(category="momentum")
# Returns [{name: "RSI", display_name: "RSI", description: "...", category: "momentum"}, ...]
# Then get details on interesting ones
rsi_details = get_indicator_info("RSI") # Full parameters, schemas, use cases
# Search for moving average indicators
search_indicators(query="moving average")
# Find all TA-Lib indicators
search_indicators(tag="talib")
"""
registry = _get_indicator_registry()
if not registry:
raise ValueError("IndicatorRegistry not initialized")
results = []
if query:
results = registry.search_by_text(query)
elif category:
results = registry.search_by_category(category)
elif tag:
results = registry.search_by_tag(tag)
else:
# Return all indicators if no filter
results = registry.get_all_metadata()
# Return lightweight summaries only
return [
{
"name": r.name,
"display_name": r.display_name,
"description": r.description,
"category": r.category
}
for r in results
]
@tool
def get_indicator_categories() -> Dict[str, int]:
"""Get all indicator categories and their counts.
Returns a summary of available indicator categories, useful for
exploring what types of indicators are available.
Returns:
Dictionary mapping category name to count of indicators in that category.
Example: {"momentum": 25, "trend": 15, "volatility": 8, ...}
"""
registry = _get_indicator_registry()
if not registry:
raise ValueError("IndicatorRegistry not initialized")
categories: Dict[str, int] = {}
for metadata in registry.get_all_metadata():
category = metadata.category
categories[category] = categories.get(category, 0) + 1
return categories
@tool
async def add_indicator_to_chart(
indicator_id: str,
talib_name: str,
parameters: Optional[Dict[str, Any]] = None,
symbol: Optional[str] = None
) -> Dict[str, Any]:
"""Add a technical indicator to the chart.
This will create a new indicator instance and display it on the TradingView chart.
The indicator will be synchronized with the frontend in real-time.
Args:
indicator_id: Unique identifier for this indicator instance (e.g., 'rsi_14', 'sma_50')
talib_name: Name of the TA-Lib indicator (e.g., 'RSI', 'SMA', 'MACD', 'BBANDS')
Use search_indicators() or get_indicator_info() to find available indicators
parameters: Optional dictionary of indicator parameters
Example for RSI: {'timeperiod': 14}
Example for SMA: {'timeperiod': 50}
Example for MACD: {'fastperiod': 12, 'slowperiod': 26, 'signalperiod': 9}
Example for BBANDS: {'timeperiod': 20, 'nbdevup': 2, 'nbdevdn': 2}
symbol: Optional symbol to apply the indicator to (defaults to current chart symbol)
Returns:
Dictionary with:
- status: 'created' or 'updated'
- indicator: The complete indicator object
Example:
# Add RSI(14)
await add_indicator_to_chart(
indicator_id='rsi_14',
talib_name='RSI',
parameters={'timeperiod': 14}
)
# Add 50-period SMA
await add_indicator_to_chart(
indicator_id='sma_50',
talib_name='SMA',
parameters={'timeperiod': 50}
)
# Add MACD with default parameters
await add_indicator_to_chart(
indicator_id='macd_default',
talib_name='MACD'
)
"""
from schema.indicator import IndicatorInstance
registry = _get_registry()
if not registry:
raise ValueError("SyncRegistry not initialized")
indicator_store = _get_indicator_store()
if not indicator_store:
raise ValueError("IndicatorStore not initialized")
# Verify the indicator exists
indicator_registry = _get_indicator_registry()
if not indicator_registry:
raise ValueError("IndicatorRegistry not initialized")
metadata = indicator_registry.get_metadata(talib_name)
if not metadata:
raise ValueError(
f"Indicator '{talib_name}' not found. "
f"Use search_indicators() to find available indicators."
)
# Check if updating existing indicator
existing_indicator = indicator_store.indicators.get(indicator_id)
is_update = existing_indicator is not None
# If symbol is not provided, try to get it from ChartStore
if symbol is None and "ChartStore" in registry.entries:
chart_store = registry.entries["ChartStore"].model
if hasattr(chart_store, 'chart_state') and hasattr(chart_store.chart_state, 'symbol'):
symbol = chart_store.chart_state.symbol
logger.info(f"Using current chart symbol for indicator: {symbol}")
now = int(time.time())
# Create indicator instance
indicator = IndicatorInstance(
id=indicator_id,
talib_name=talib_name,
instance_name=f"{talib_name}_{indicator_id}",
parameters=parameters or {},
visible=True,
pane='chart', # Most indicators go on the chart pane
symbol=symbol,
        created_at=existing_indicator.get('created_at', now) if existing_indicator else now,
modified_at=now
)
# Update the store
indicator_store.indicators[indicator_id] = indicator.model_dump(mode="json")
# Trigger sync
await registry.push_all()
logger.info(
f"{'Updated' if is_update else 'Created'} indicator '{indicator_id}' "
f"(TA-Lib: {talib_name}) with parameters: {parameters}"
)
return {
"status": "updated" if is_update else "created",
"indicator": indicator.model_dump(mode="json")
}
@tool
async def remove_indicator_from_chart(indicator_id: str) -> Dict[str, str]:
"""Remove an indicator from the chart.
Args:
indicator_id: ID of the indicator instance to remove
Returns:
Dictionary with status message
Raises:
ValueError: If indicator doesn't exist
Example:
await remove_indicator_from_chart('rsi_14')
"""
registry = _get_registry()
if not registry:
raise ValueError("SyncRegistry not initialized")
indicator_store = _get_indicator_store()
if not indicator_store:
raise ValueError("IndicatorStore not initialized")
if indicator_id not in indicator_store.indicators:
raise ValueError(f"Indicator '{indicator_id}' not found")
# Delete the indicator
del indicator_store.indicators[indicator_id]
# Trigger sync
await registry.push_all()
logger.info(f"Removed indicator '{indicator_id}'")
return {
"status": "success",
"message": f"Indicator '{indicator_id}' removed"
}
@tool
def list_chart_indicators(symbol: Optional[str] = None) -> List[Dict[str, Any]]:
"""List all indicators currently applied to the chart.
Args:
symbol: Optional filter by symbol (defaults to current chart symbol)
Returns:
List of indicator instances, each containing:
- id: Indicator instance ID
- talib_name: TA-Lib indicator name
- instance_name: Display name
- parameters: Current parameter values
- visible: Whether indicator is visible
- pane: Which pane it's displayed in
- symbol: Symbol it's applied to
Example:
# List all indicators on current symbol
indicators = list_chart_indicators()
# List indicators on specific symbol
btc_indicators = list_chart_indicators(symbol='BINANCE:BTC/USDT')
"""
indicator_store = _get_indicator_store()
if not indicator_store:
raise ValueError("IndicatorStore not initialized")
logger.info(f"list_chart_indicators: Raw store indicators: {indicator_store.indicators}")
# If symbol is not provided, try to get it from ChartStore
if symbol is None:
registry = _get_registry()
if registry and "ChartStore" in registry.entries:
chart_store = registry.entries["ChartStore"].model
if hasattr(chart_store, 'chart_state') and hasattr(chart_store.chart_state, 'symbol'):
symbol = chart_store.chart_state.symbol
indicators = list(indicator_store.indicators.values())
logger.info(f"list_chart_indicators: Converted to list: {indicators}")
logger.info(f"list_chart_indicators: Filtering by symbol: {symbol}")
# Filter by symbol if provided
if symbol:
indicators = [ind for ind in indicators if ind.get('symbol') == symbol]
logger.info(f"list_chart_indicators: Returning {len(indicators)} indicators")
return indicators
@tool
def get_chart_indicator(indicator_id: str) -> Dict[str, Any]:
"""Get details of a specific indicator on the chart.
Args:
indicator_id: ID of the indicator instance
Returns:
Dictionary containing the indicator data
Raises:
ValueError: If indicator doesn't exist
Example:
indicator = get_chart_indicator('rsi_14')
print(f"Indicator: {indicator['talib_name']}")
print(f"Parameters: {indicator['parameters']}")
"""
indicator_store = _get_indicator_store()
if not indicator_store:
raise ValueError("IndicatorStore not initialized")
indicator = indicator_store.indicators.get(indicator_id)
if not indicator:
raise ValueError(f"Indicator '{indicator_id}' not found")
return indicator
INDICATOR_TOOLS = [
# Discovery tools
list_indicators,
get_indicator_info,
search_indicators,
get_indicator_categories,
# Chart indicator management tools
add_indicator_to_chart,
remove_indicator_from_chart,
list_chart_indicators,
get_chart_indicator
]
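The counting loop inside `get_indicator_categories` is plain dictionary aggregation. Sketched here with plain dicts standing in for the registry's metadata objects:

```python
def count_categories(metadata_list):
    # Same aggregation as get_indicator_categories(), over plain dicts.
    categories = {}
    for m in metadata_list:
        categories[m['category']] = categories.get(m['category'], 0) + 1
    return categories

counts = count_categories(
    [{'category': 'momentum'}, {'category': 'trend'}, {'category': 'momentum'}]
)
# counts -> {'momentum': 2, 'trend': 1}
```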


@@ -0,0 +1,171 @@
"""Research and external data tools for trading analysis."""
from typing import Dict, Any, Optional
from langchain_core.tools import tool
from langchain_community.tools import (
ArxivQueryRun,
WikipediaQueryRun,
DuckDuckGoSearchRun
)
from langchain_community.utilities import (
ArxivAPIWrapper,
WikipediaAPIWrapper,
DuckDuckGoSearchAPIWrapper
)
@tool
def search_arxiv(query: str, max_results: int = 5) -> str:
"""Search arXiv for academic papers on quantitative finance, trading strategies, and machine learning.
Use this to find research papers on topics like:
- Market microstructure and order flow
- Algorithmic trading strategies
- Machine learning for finance
- Time series forecasting
- Risk management
- Portfolio optimization
Args:
query: Search query (e.g., "machine learning algorithmic trading", "deep learning stock prediction")
max_results: Maximum number of results to return (default: 5)
Returns:
Summary of papers including titles, authors, abstracts, and links
Example:
search_arxiv("reinforcement learning trading", max_results=3)
"""
arxiv = ArxivQueryRun(api_wrapper=ArxivAPIWrapper(top_k_results=max_results))
return arxiv.run(query)
@tool
def search_wikipedia(query: str) -> str:
"""Search Wikipedia for information on finance, trading, and economics concepts.
Use this to get background information on:
- Financial instruments and markets
- Economic indicators
- Trading terminology
- Technical analysis concepts
- Historical market events
Args:
query: Search query (e.g., "Black-Scholes model", "technical analysis", "options trading")
Returns:
Wikipedia article summary with key information
Example:
search_wikipedia("Bollinger Bands")
"""
wikipedia = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
return wikipedia.run(query)
@tool
def search_web(query: str, max_results: int = 5) -> str:
"""Search the web for current information on markets, news, and trading.
Use this to find:
- Latest market news and analysis
- Company announcements and earnings
- Economic events and indicators
- Cryptocurrency updates
- Exchange status and updates
- Trading strategy discussions
Args:
query: Search query (e.g., "Bitcoin price news", "Fed interest rate decision")
max_results: Maximum number of results to return (default: 5)
Returns:
Search results with titles, snippets, and links
Example:
search_web("Ethereum merge update", max_results=3)
"""
    # Lazy initialization to avoid hanging during import
    search = DuckDuckGoSearchRun(
        api_wrapper=DuckDuckGoSearchAPIWrapper(max_results=max_results)
    )
    return search.run(query)
@tool
def http_get(url: str, params: Optional[Dict[str, str]] = None) -> str:
"""Make HTTP GET request to fetch data from APIs or web pages.
Use this to retrieve:
- Exchange API data (if public endpoints)
- Market data from external APIs
- Documentation and specifications
- News articles and blog posts
- JSON/XML data from web services
Args:
url: The URL to fetch
params: Optional query parameters as a dictionary
Returns:
Response text from the URL
Raises:
ValueError: If the request fails
Example:
http_get("https://api.coingecko.com/api/v3/simple/price",
params={"ids": "bitcoin", "vs_currencies": "usd"})
"""
import requests
try:
response = requests.get(url, params=params, timeout=10)
response.raise_for_status()
return response.text
except requests.RequestException as e:
raise ValueError(f"HTTP GET request failed: {str(e)}")
@tool
def http_post(url: str, data: Dict[str, Any]) -> str:
"""Make HTTP POST request to send data to APIs.
Use this to:
- Submit data to external APIs
- Trigger webhooks
- Post analysis results
- Interact with exchange APIs (if authenticated)
Args:
url: The URL to post to
data: Dictionary of data to send in the request body
Returns:
Response text from the server
Raises:
ValueError: If the request fails
Example:
http_post("https://webhook.site/xxx", {"message": "Trade executed"})
"""
    import requests
try:
response = requests.post(url, json=data, timeout=10)
response.raise_for_status()
return response.text
except requests.RequestException as e:
raise ValueError(f"HTTP POST request failed: {str(e)}")
# Export tools list
RESEARCH_TOOLS = [
search_arxiv,
search_wikipedia,
search_web,
http_get,
http_post
]
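`http_get` hands `params` to requests, which percent-encodes them into the query string; for simple string dicts the stdlib's `urlencode` produces the same result, so the final URL of the docstring's example call can be previewed offline:

```python
from urllib.parse import urlencode

# Preview of the query string requests builds for the http_get docstring example.
params = {"ids": "bitcoin", "vs_currencies": "usd"}
url = "https://api.coingecko.com/api/v3/simple/price?" + urlencode(params)
# url -> "https://api.coingecko.com/api/v3/simple/price?ids=bitcoin&vs_currencies=usd"
```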


@@ -0,0 +1,475 @@
"""Shape/drawing tools for chart analysis."""
from typing import Dict, Any, List, Optional
from langchain_core.tools import tool
import logging
logger = logging.getLogger(__name__)
# Map legacy/common shape type names to TradingView's native names
SHAPE_TYPE_ALIASES: Dict[str, str] = {
'trendline': 'trend_line',
'fibonacci': 'fib_retracement',
'fibonacci_extension': 'fib_trend_ext',
'gann_fan': 'gannbox_fan',
}
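A likely consumption pattern for the alias table: look the name up and fall through to the original when no alias exists. `normalize_shape_type` is a hypothetical helper sketched for illustration, not a function defined in this file:

```python
# Map legacy/common shape type names to TradingView's native names
# (mirrors the module-level SHAPE_TYPE_ALIASES table).
SHAPE_TYPE_ALIASES = {
    'trendline': 'trend_line',
    'fibonacci': 'fib_retracement',
    'fibonacci_extension': 'fib_trend_ext',
    'gann_fan': 'gannbox_fan',
}

def normalize_shape_type(name: str) -> str:
    # Names already in native form pass through unchanged.
    return SHAPE_TYPE_ALIASES.get(name, name)

normalize_shape_type('trendline')   # 'trend_line'
normalize_shape_type('rectangle')   # 'rectangle' (no alias needed)
```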
def _get_registry():
"""Get the global registry instance."""
from . import _registry
return _registry
def _get_shape_store():
"""Get the global ShapeStore instance."""
registry = _get_registry()
if registry and "ShapeStore" in registry.entries:
return registry.entries["ShapeStore"].model
return None
@tool
def search_shapes(
start_time: Optional[int] = None,
end_time: Optional[int] = None,
shape_type: Optional[str] = None,
symbol: Optional[str] = None,
shape_ids: Optional[List[str]] = None,
original_ids: Optional[List[str]] = None
) -> List[Dict[str, Any]]:
"""Search for shapes/drawings using flexible filters.
This tool can search shapes by:
- Time range (finds shapes that overlap the range)
- Shape type (e.g., 'trendline', 'horizontal_line')
- Symbol (e.g., 'BINANCE:BTC/USDT')
- Specific shape IDs (TradingView's assigned IDs)
- Original IDs (the IDs you specified when creating shapes)
Args:
start_time: Optional start of time range (Unix timestamp in seconds)
end_time: Optional end of time range (Unix timestamp in seconds)
shape_type: Optional filter by shape type (e.g., 'trend_line', 'horizontal_line', 'rectangle')
symbol: Optional filter by symbol (e.g., 'BINANCE:BTC/USDT')
shape_ids: Optional list of specific shape IDs to retrieve (searches both id and original_id fields)
original_ids: Optional list of original IDs to search for (the IDs you specified when creating)
Returns:
List of matching shapes, each as a dictionary with:
- id: Shape identifier (TradingView's assigned ID)
- original_id: The ID you specified when creating the shape (if applicable)
- type: Shape type
- points: List of control points with time and price
- color, line_width, line_style: Visual properties
- properties: Additional shape-specific properties
- symbol: Symbol the shape is drawn on
- created_at, modified_at: Timestamps
Examples:
# Find all shapes in the currently visible chart range
shapes = search_shapes(
start_time=chart_state.start_time,
end_time=chart_state.end_time
)
# Find only trendlines in a specific time range
trendlines = search_shapes(
start_time=1640000000,
end_time=1650000000,
shape_type='trend_line'
)
# Find shapes for a specific symbol
btc_shapes = search_shapes(
start_time=1640000000,
end_time=1650000000,
symbol='BINANCE:BTC/USDT'
)
# Get specific shapes by TradingView ID or original ID
# This searches both the 'id' and 'original_id' fields
selected = search_shapes(
shape_ids=['trendline-1', 'support-42k', 'fib-retracement-1']
)
# Get shapes by the original IDs you specified when creating them
my_shapes = search_shapes(
original_ids=['my-support-line', 'my-resistance-line']
)
# Get all trendlines (no time filter)
all_trendlines = search_shapes(shape_type='trend_line')
"""
shape_store = _get_shape_store()
if not shape_store:
raise ValueError("ShapeStore not initialized")
shapes_dict = shape_store.shapes
matching_shapes = []
# If specific shape IDs are requested, search by both id and original_id
if shape_ids:
for requested_id in shape_ids:
# First try direct ID lookup
shape = shapes_dict.get(requested_id)
if shape:
# Still apply other filters if specified
if symbol and shape.get('symbol') != symbol:
continue
if shape_type and shape.get('type') != shape_type:
continue
matching_shapes.append(shape)
else:
# If not found by ID, search by original_id
for shape_id, shape in shapes_dict.items():
if shape.get('original_id') == requested_id:
# Still apply other filters if specified
if symbol and shape.get('symbol') != symbol:
continue
if shape_type and shape.get('type') != shape_type:
continue
matching_shapes.append(shape)
break
logger.info(
f"Found {len(matching_shapes)} shapes by ID filter (requested {len(shape_ids)} IDs)"
+ (f" for type '{shape_type}'" if shape_type else "")
+ (f" on symbol '{symbol}'" if symbol else "")
)
return matching_shapes
# If specific original IDs are requested, search by original_id only
if original_ids:
for original_id in original_ids:
for shape_id, shape in shapes_dict.items():
if shape.get('original_id') == original_id:
# Still apply other filters if specified
if symbol and shape.get('symbol') != symbol:
continue
if shape_type and shape.get('type') != shape_type:
continue
matching_shapes.append(shape)
break
logger.info(
f"Found {len(matching_shapes)} shapes by original_id filter (requested {len(original_ids)} IDs)"
+ (f" for type '{shape_type}'" if shape_type else "")
+ (f" on symbol '{symbol}'" if symbol else "")
)
return matching_shapes
# Otherwise, search all shapes with filters
for shape_id, shape in shapes_dict.items():
# Filter by symbol if specified
if symbol and shape.get('symbol') != symbol:
continue
# Filter by type if specified
if shape_type and shape.get('type') != shape_type:
continue
# Filter by time range if specified
if start_time is not None and end_time is not None:
# Check if any control point falls within the time range
# or if the shape spans across the time range
points = shape.get('points', [])
if not points:
continue
# Get min and max times from shape's control points
shape_times = [point['time'] for point in points]
shape_min_time = min(shape_times)
shape_max_time = max(shape_times)
# Check for overlap: shape overlaps if its range intersects with query range
if not (shape_max_time >= start_time and shape_min_time <= end_time):
continue
matching_shapes.append(shape)
logger.info(
f"Found {len(matching_shapes)} shapes"
        + (f" in time range {start_time}-{end_time}" if start_time is not None and end_time is not None else "")
+ (f" for type '{shape_type}'" if shape_type else "")
+ (f" on symbol '{symbol}'" if symbol else "")
)
return matching_shapes
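The time-range filter above treats a shape as matching when the span of its control-point times intersects the query window at all, not only when it is fully contained. A minimal standalone sketch of that overlap test (hypothetical helper, not part of this module):

```python
def span_overlaps(shape_times, start_time, end_time):
    """True if [min(shape_times), max(shape_times)] intersects [start_time, end_time]."""
    shape_min, shape_max = min(shape_times), max(shape_times)
    # Two closed intervals overlap iff neither lies entirely before the other.
    return shape_max >= start_time and shape_min <= end_time

# A shape spanning 100-200 overlaps a 150-300 window, but not a 250-300 window.
assert span_overlaps([100, 200], 150, 300) is True
assert span_overlaps([100, 200], 250, 300) is False
# A shape that fully contains the window also matches.
assert span_overlaps([100, 400], 150, 300) is True
```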
@tool
async def create_or_update_shape(
shape_id: str,
shape_type: str,
points: List[Dict[str, Any]],
color: Optional[str] = None,
line_width: Optional[int] = None,
line_style: Optional[str] = None,
properties: Optional[Dict[str, Any]] = None,
symbol: Optional[str] = None
) -> Dict[str, Any]:
"""Create a new shape or update an existing shape on the chart.
This tool allows the agent to draw shapes on the user's chart or modify
existing shapes. Shapes are synchronized to the frontend in real-time.
IMPORTANT - Shape ID Mapping:
When you create a shape, TradingView will assign its own internal ID that differs
from the shape_id you provide. The shape will be updated in the store with:
- id: TradingView's assigned ID
- original_id: The shape_id you provided
To find your shape later, use search_shapes() and filter by original_id field.
Example:
# Create a shape
await create_or_update_shape(shape_id='my-support', ...)
# Later, find it by original_id
shapes = search_shapes(symbol='BINANCE:BTC/USDT')
my_shape = next((s for s in shapes if s.get('original_id') == 'my-support'), None)
Args:
shape_id: Unique identifier for the shape (use existing ID to update, new ID to create)
Note: TradingView will assign its own ID; your ID will be stored in original_id
shape_type: Type of shape using TradingView's native names.
Single-point shapes (use 1 point):
- 'horizontal_line': Horizontal support/resistance line
- 'vertical_line': Vertical time marker
- 'text': Text label
- 'anchored_text': Anchored text annotation
- 'anchored_note': Anchored note
- 'note': Note annotation
- 'emoji': Emoji marker
- 'icon': Icon marker
- 'sticker': Sticker marker
- 'arrow_up': Upward arrow marker
- 'arrow_down': Downward arrow marker
- 'flag': Flag marker
- 'long_position': Long position marker
- 'short_position': Short position marker
Multi-point shapes (use 2+ points):
- 'trend_line': Trendline (2 points)
- 'rectangle': Rectangle (2 points: top-left, bottom-right)
- 'fib_retracement': Fibonacci retracement (2 points)
- 'fib_trend_ext': Fibonacci extension (3 points)
- 'parallel_channel': Parallel channel (3 points)
- 'arrow': Arrow (2 points)
- 'circle': Circle/ellipse (2-3 points)
- 'path': Free drawing path (3+ points)
            - 'pitchfork': Andrews' pitchfork (3 points)
- 'gannbox_fan': Gann fan (2 points)
- 'head_and_shoulders': Head and shoulders pattern (5 points)
points: List of control points, each with 'time' (Unix seconds) and 'price' fields
color: Optional color (hex like '#FF0000' or name like 'red')
line_width: Optional line width in pixels (default: 1)
line_style: Optional line style: 'solid', 'dashed', 'dotted' (default: 'solid')
properties: Optional dict of additional shape-specific properties
symbol: Optional symbol to associate with the shape (defaults to current chart symbol)
Returns:
Dictionary with:
- status: 'created' or 'updated'
- shape: The complete shape object (initially with your ID, will be updated to TV ID)
Examples:
# Draw a trendline between two points
await create_or_update_shape(
shape_id='my-trendline-1',
shape_type='trend_line',
points=[
{'time': 1640000000, 'price': 45000.0},
{'time': 1650000000, 'price': 50000.0}
],
color='#00FF00',
line_width=2
)
# Draw a horizontal support line
await create_or_update_shape(
shape_id='support-1',
shape_type='horizontal_line',
points=[{'time': 1640000000, 'price': 42000.0}],
color='blue',
line_style='dashed'
)
# Find your shape after creation using original_id
shapes = search_shapes(symbol='BINANCE:BTC/USDT')
my_shape = next((s for s in shapes if s.get('original_id') == 'support-1'), None)
if my_shape:
print(f"TradingView assigned ID: {my_shape['id']}")
"""
from schema.shape import Shape, ControlPoint
import time as time_module
registry = _get_registry()
if not registry:
raise ValueError("SyncRegistry not initialized")
shape_store = _get_shape_store()
if not shape_store:
raise ValueError("ShapeStore not initialized")
# Normalize shape type (handle legacy names)
normalized_type = SHAPE_TYPE_ALIASES.get(shape_type, shape_type)
if normalized_type != shape_type:
logger.info(f"Normalized shape type '{shape_type}' -> '{normalized_type}'")
# Convert points to ControlPoint objects
control_points = []
for p in points:
point_data = {
'time': p['time'],
'price': p['price']
}
# Only include channel if it's actually provided
if 'channel' in p and p['channel'] is not None:
point_data['channel'] = p['channel']
control_points.append(ControlPoint(**point_data))
# Check if updating existing shape
existing_shape = shape_store.shapes.get(shape_id)
is_update = existing_shape is not None
# If symbol is not provided, try to get it from ChartStore
if symbol is None and "ChartStore" in registry.entries:
chart_store = registry.entries["ChartStore"].model
if hasattr(chart_store, 'chart_state') and hasattr(chart_store.chart_state, 'symbol'):
symbol = chart_store.chart_state.symbol
logger.info(f"Using current chart symbol for shape: {symbol}")
now = int(time_module.time())
# Create shape object
shape = Shape(
id=shape_id,
type=normalized_type,
points=control_points,
color=color,
line_width=line_width,
line_style=line_style,
properties=properties or {},
symbol=symbol,
created_at=existing_shape.get('created_at') if existing_shape else now,
modified_at=now
)
# Update the store
shape_store.shapes[shape_id] = shape.model_dump(mode="json")
# Trigger sync
await registry.push_all()
logger.info(
        f"{'Updated' if is_update else 'Created'} shape '{shape_id}' "
        f"of type '{normalized_type}' with {len(points)} points"
)
return {
"status": "updated" if is_update else "created",
"shape": shape.model_dump(mode="json")
}
@tool
async def delete_shape(shape_id: str) -> Dict[str, str]:
"""Delete a shape from the chart.
Args:
shape_id: ID of the shape to delete
Returns:
Dictionary with status message
Raises:
ValueError: If shape doesn't exist
Example:
await delete_shape('my-trendline-1')
"""
registry = _get_registry()
if not registry:
raise ValueError("SyncRegistry not initialized")
shape_store = _get_shape_store()
if not shape_store:
raise ValueError("ShapeStore not initialized")
if shape_id not in shape_store.shapes:
raise ValueError(f"Shape '{shape_id}' not found")
# Delete the shape
del shape_store.shapes[shape_id]
# Trigger sync
await registry.push_all()
logger.info(f"Deleted shape '{shape_id}'")
return {
"status": "success",
"message": f"Shape '{shape_id}' deleted"
}
@tool
def get_shape(shape_id: str) -> Dict[str, Any]:
"""Get details of a specific shape by ID.
Args:
shape_id: ID of the shape to retrieve
Returns:
Dictionary containing the shape data
Raises:
ValueError: If shape doesn't exist
Example:
shape = get_shape('my-trendline-1')
print(f"Shape type: {shape['type']}")
print(f"Points: {shape['points']}")
"""
shape_store = _get_shape_store()
if not shape_store:
raise ValueError("ShapeStore not initialized")
shape = shape_store.shapes.get(shape_id)
if not shape:
raise ValueError(f"Shape '{shape_id}' not found")
return shape
@tool
def list_all_shapes() -> List[Dict[str, Any]]:
"""List all shapes currently on the chart.
Returns:
List of all shapes as dictionaries
Example:
shapes = list_all_shapes()
print(f"Total shapes: {len(shapes)}")
for shape in shapes:
print(f" - {shape['id']}: {shape['type']}")
"""
shape_store = _get_shape_store()
if not shape_store:
raise ValueError("ShapeStore not initialized")
return list(shape_store.shapes.values())
SHAPE_TOOLS = [
search_shapes,
create_or_update_shape,
delete_shape,
get_shape,
list_all_shapes
]
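Because TradingView reassigns shape IDs on creation, callers depend on the two-step lookup used in `search_shapes` above: try the store ID directly, then fall back to `original_id`. A self-contained sketch of that pattern over plain dicts:

```python
def find_shape(shapes_dict, requested_id):
    """Look up a shape by its store ID first, then fall back to original_id."""
    shape = shapes_dict.get(requested_id)
    if shape:
        return shape
    for candidate in shapes_dict.values():
        if candidate.get('original_id') == requested_id:
            return candidate
    return None

# The store is keyed by TradingView's assigned ID; original_id holds the caller's ID.
shapes = {'tv-123': {'id': 'tv-123', 'original_id': 'my-support', 'type': 'horizontal_line'}}
assert find_shape(shapes, 'tv-123')['type'] == 'horizontal_line'
assert find_shape(shapes, 'my-support')['id'] == 'tv-123'
assert find_shape(shapes, 'missing') is None
```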
@@ -0,0 +1,138 @@
"""Synchronization store tools."""
from typing import Dict, Any, List
from langchain_core.tools import tool
def _get_registry():
"""Get the global registry instance."""
from . import _registry
return _registry
@tool
def list_sync_stores() -> List[str]:
"""List all available synchronization stores.
Returns:
List of store names that can be read/written
"""
registry = _get_registry()
if not registry:
return []
return list(registry.entries.keys())
@tool
def read_sync_state(store_name: str) -> Dict[str, Any]:
"""Read the current state of a synchronization store.
Args:
store_name: Name of the store to read (e.g., "TraderState", "StrategyState")
Returns:
Dictionary containing the current state of the store
Raises:
ValueError: If store_name doesn't exist
"""
registry = _get_registry()
if not registry:
raise ValueError("SyncRegistry not initialized")
entry = registry.entries.get(store_name)
if not entry:
available = list(registry.entries.keys())
raise ValueError(f"Store '{store_name}' not found. Available stores: {available}")
return entry.model.model_dump(mode="json")
@tool
async def write_sync_state(store_name: str, updates: Dict[str, Any]) -> Dict[str, str]:
"""Update the state of a synchronization store.
This will apply the updates to the store and trigger synchronization
with all connected clients.
Args:
store_name: Name of the store to update
updates: Dictionary of field updates (field_name: new_value)
Returns:
Dictionary with status and updated fields
Raises:
ValueError: If store_name doesn't exist or updates are invalid
"""
registry = _get_registry()
if not registry:
raise ValueError("SyncRegistry not initialized")
entry = registry.entries.get(store_name)
if not entry:
available = list(registry.entries.keys())
raise ValueError(f"Store '{store_name}' not found. Available stores: {available}")
try:
# Get current state
current_state = entry.model.model_dump(mode="json")
# Apply updates
new_state = {**current_state, **updates}
# Update the model
registry._update_model(entry.model, new_state)
# Trigger sync
await registry.push_all()
return {
"status": "success",
"store": store_name,
"updated_fields": list(updates.keys())
}
    except Exception as e:
        raise ValueError(f"Failed to update store '{store_name}': {e}") from e
@tool
def get_store_schema(store_name: str) -> Dict[str, Any]:
"""Get the schema/structure of a synchronization store.
This shows what fields are available and their types.
Args:
store_name: Name of the store
Returns:
Dictionary describing the store's schema
Raises:
ValueError: If store_name doesn't exist
"""
registry = _get_registry()
if not registry:
raise ValueError("SyncRegistry not initialized")
entry = registry.entries.get(store_name)
if not entry:
available = list(registry.entries.keys())
raise ValueError(f"Store '{store_name}' not found. Available stores: {available}")
# Get model schema
schema = entry.model.model_json_schema()
return {
"store_name": store_name,
"schema": schema
}
SYNC_TOOLS = [
list_sync_stores,
read_sync_state,
write_sync_state,
get_store_schema
]
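`write_sync_state` applies updates as a shallow dict merge over the current state, so a top-level key in `updates` replaces any nested structure wholesale rather than deep-merging into it. A minimal sketch of that behavior:

```python
current_state = {"symbol": "BTC/USDT", "settings": {"theme": "dark", "grid": True}}
updates = {"settings": {"theme": "light"}}

# Shallow merge: keys in `updates` win at the top level only.
new_state = {**current_state, **updates}

assert new_state["symbol"] == "BTC/USDT"
# The nested dict was replaced entirely; the 'grid' key is gone.
assert new_state["settings"] == {"theme": "light"}
```

Callers who want to preserve nested fields should read the store first and send the full nested value back.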
@@ -0,0 +1,366 @@
"""
Agent tools for trigger system.
Allows agents to:
- Schedule recurring tasks (cron-style)
- Execute one-time triggers
- Manage scheduled triggers (list, cancel)
- Connect events to sub-agent runs or lambdas
"""
import logging
from typing import Any, Dict, List, Optional
from langchain_core.tools import tool
logger = logging.getLogger(__name__)
# Global references set by main.py
_trigger_queue = None
_trigger_scheduler = None
_coordinator = None
def set_trigger_queue(queue):
"""Set the global TriggerQueue instance for tools to use."""
global _trigger_queue
_trigger_queue = queue
def set_trigger_scheduler(scheduler):
"""Set the global TriggerScheduler instance for tools to use."""
global _trigger_scheduler
_trigger_scheduler = scheduler
def set_coordinator(coordinator):
"""Set the global CommitCoordinator instance for tools to use."""
global _coordinator
_coordinator = coordinator
def _get_trigger_queue():
"""Get the global trigger queue instance."""
if not _trigger_queue:
raise ValueError("TriggerQueue not initialized")
return _trigger_queue
def _get_trigger_scheduler():
"""Get the global trigger scheduler instance."""
if not _trigger_scheduler:
raise ValueError("TriggerScheduler not initialized")
return _trigger_scheduler
def _get_coordinator():
"""Get the global coordinator instance."""
if not _coordinator:
raise ValueError("CommitCoordinator not initialized")
return _coordinator
@tool
async def schedule_agent_prompt(
prompt: str,
schedule_type: str,
schedule_config: Dict[str, Any],
name: Optional[str] = None,
) -> Dict[str, str]:
"""Schedule an agent to run with a specific prompt on a recurring schedule.
This allows you to set up automated tasks where the agent runs periodically
with a predefined prompt. Useful for:
- Daily market analysis reports
- Hourly portfolio rebalancing checks
- Weekly performance summaries
- Monitoring alerts
Args:
prompt: The prompt to send to the agent when triggered
schedule_type: Type of schedule - "interval" or "cron"
schedule_config: Schedule configuration:
For "interval": {"minutes": 5} or {"hours": 1, "minutes": 30}
For "cron": {"hour": "9", "minute": "0"} for 9:00 AM daily
{"hour": "9", "minute": "0", "day_of_week": "mon-fri"}
name: Optional descriptive name for this scheduled task
Returns:
Dictionary with job_id and confirmation message
Examples:
# Run every 5 minutes
schedule_agent_prompt(
prompt="Check BTC price and alert if > $50k",
schedule_type="interval",
schedule_config={"minutes": 5}
)
# Run daily at 9 AM
schedule_agent_prompt(
prompt="Generate daily market summary",
schedule_type="cron",
schedule_config={"hour": "9", "minute": "0"}
)
# Run hourly on weekdays
schedule_agent_prompt(
prompt="Monitor portfolio for rebalancing opportunities",
schedule_type="cron",
schedule_config={"minute": "0", "day_of_week": "mon-fri"}
)
"""
from trigger.handlers import LambdaHandler
from trigger import Priority
scheduler = _get_trigger_scheduler()
queue = _get_trigger_queue()
if not name:
name = f"agent_prompt_{hash(prompt) % 10000}"
# Create a lambda that enqueues an agent trigger with the prompt
async def agent_prompt_lambda():
from trigger.handlers import AgentTriggerHandler
# Create agent trigger (will use current session's context)
# In production, you'd want to specify which session/user this belongs to
trigger = AgentTriggerHandler(
session_id="scheduled", # Special session for scheduled tasks
message_content=prompt,
coordinator=_get_coordinator(),
)
await queue.enqueue(trigger)
return [] # No direct commit intents
# Wrap in lambda handler
lambda_trigger = LambdaHandler(
name=f"scheduled_{name}",
func=agent_prompt_lambda,
priority=Priority.TIMER,
)
# Schedule based on type
if schedule_type == "interval":
job_id = scheduler.schedule_interval(
lambda_trigger,
seconds=schedule_config.get("seconds"),
minutes=schedule_config.get("minutes"),
hours=schedule_config.get("hours"),
priority=Priority.TIMER,
)
elif schedule_type == "cron":
job_id = scheduler.schedule_cron(
lambda_trigger,
minute=schedule_config.get("minute"),
hour=schedule_config.get("hour"),
day=schedule_config.get("day"),
month=schedule_config.get("month"),
day_of_week=schedule_config.get("day_of_week"),
priority=Priority.TIMER,
)
else:
raise ValueError(f"Invalid schedule_type: {schedule_type}. Use 'interval' or 'cron'")
return {
"job_id": job_id,
"message": f"Scheduled '{name}' with job_id={job_id}",
"schedule_type": schedule_type,
"config": schedule_config,
}
@tool
async def execute_agent_prompt_once(
prompt: str,
priority: str = "normal",
) -> Dict[str, Any]:
"""Execute an agent prompt once, immediately (enqueued with priority).
Use this to trigger a sub-agent with a specific task without waiting for
a user message. Useful for:
- Background analysis tasks
- One-time data processing
- Responding to specific events
Args:
prompt: The prompt to send to the agent
priority: Priority level - "high", "normal", or "low"
Returns:
Confirmation that the prompt was enqueued
Example:
execute_agent_prompt_once(
prompt="Analyze the last 100 BTC/USDT bars and identify support levels",
priority="high"
)
"""
from trigger.handlers import AgentTriggerHandler
from trigger import Priority
queue = _get_trigger_queue()
# Map string priority to enum
priority_map = {
"high": Priority.USER_AGENT, # Same priority as user messages
"normal": Priority.SYSTEM,
"low": Priority.LOW,
}
priority_enum = priority_map.get(priority.lower(), Priority.SYSTEM)
# Create agent trigger
trigger = AgentTriggerHandler(
session_id="oneshot",
message_content=prompt,
coordinator=_get_coordinator(),
)
# Enqueue with priority override
queue_seq = await queue.enqueue(trigger, priority_enum)
return {
"queue_seq": queue_seq,
"message": f"Enqueued agent prompt with priority={priority}",
"prompt": prompt[:100] + "..." if len(prompt) > 100 else prompt,
}
@tool
def list_scheduled_triggers() -> List[Dict[str, Any]]:
"""List all currently scheduled triggers.
Returns:
List of dictionaries with job information (id, name, next_run_time)
Example:
jobs = list_scheduled_triggers()
for job in jobs:
print(f"{job['id']}: {job['name']} - next run at {job['next_run_time']}")
"""
scheduler = _get_trigger_scheduler()
jobs = scheduler.get_jobs()
result = []
for job in jobs:
result.append({
"id": job.id,
"name": job.name,
"next_run_time": str(job.next_run_time) if job.next_run_time else None,
"trigger": str(job.trigger),
})
return result
@tool
def cancel_scheduled_trigger(job_id: str) -> Dict[str, str]:
"""Cancel a scheduled trigger by its job ID.
Args:
job_id: The job ID returned from schedule_agent_prompt or list_scheduled_triggers
Returns:
Confirmation message
Example:
cancel_scheduled_trigger("interval_123")
"""
scheduler = _get_trigger_scheduler()
success = scheduler.remove_job(job_id)
if success:
return {
"status": "success",
"message": f"Cancelled job {job_id}",
}
else:
return {
"status": "error",
"message": f"Job {job_id} not found",
}
@tool
async def on_data_update_run_agent(
source_name: str,
symbol: str,
resolution: str,
prompt_template: str,
) -> Dict[str, str]:
"""Set up an agent to run whenever new data arrives for a specific symbol.
The prompt_template can include {variables} that will be filled with bar data:
- {time}: Bar timestamp
- {open}, {high}, {low}, {close}, {volume}: OHLCV values
- {symbol}: Trading pair symbol
- {source}: Data source name
Args:
source_name: Name of data source (e.g., "binance")
symbol: Trading pair (e.g., "BTC/USDT")
resolution: Time resolution (e.g., "1m", "5m", "1h")
prompt_template: Template string for agent prompt
Returns:
Confirmation with subscription details
Example:
on_data_update_run_agent(
source_name="binance",
symbol="BTC/USDT",
resolution="1m",
prompt_template="New bar on {symbol}: close={close}. Check if we should trade."
)
Note:
This is a simplified version. Full implementation would wire into
DataSource subscription system to trigger on every bar update.
"""
# TODO: Implement proper DataSource subscription integration
# For now, return placeholder
return {
"status": "not_implemented",
"message": "Data-driven agent triggers coming soon",
"config": {
"source": source_name,
"symbol": symbol,
"resolution": resolution,
"prompt_template": prompt_template,
},
}
@tool
def get_trigger_system_stats() -> Dict[str, Any]:
"""Get statistics about the trigger system.
Returns:
Dictionary with queue depth, execution stats, etc.
Example:
stats = get_trigger_system_stats()
print(f"Queue depth: {stats['queue_depth']}")
print(f"Current seq: {stats['current_seq']}")
"""
queue = _get_trigger_queue()
coordinator = _get_coordinator()
return {
"queue_depth": queue.get_queue_size(),
"queue_running": queue.is_running(),
"coordinator_stats": coordinator.get_stats(),
}
# Export tools list
TRIGGER_TOOLS = [
schedule_agent_prompt,
execute_agent_prompt_once,
list_scheduled_triggers,
cancel_scheduled_trigger,
on_data_update_run_agent,
get_trigger_system_stats,
]
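`schedule_agent_prompt` dispatches on `schedule_type`, forwarding interval configs (`seconds`/`minutes`/`hours`) or cron fields (`minute`/`hour`/`day`/`month`/`day_of_week`) to the scheduler. A hypothetical standalone helper sketching that branching, useful for validating or logging a config before scheduling:

```python
def describe_schedule(schedule_type, schedule_config):
    """Render a human-readable summary of a schedule config (hypothetical helper)."""
    if schedule_type == "interval":
        parts = [f"{v} {k}" for k, v in schedule_config.items() if v]
        return "every " + ", ".join(parts)
    elif schedule_type == "cron":
        return "cron: " + " ".join(f"{k}={v}" for k, v in schedule_config.items())
    raise ValueError(f"Invalid schedule_type: {schedule_type}. Use 'interval' or 'cron'")

assert describe_schedule("interval", {"hours": 1, "minutes": 30}) == "every 1 hours, 30 minutes"
assert describe_schedule("cron", {"hour": "9", "minute": "0"}) == "cron: hour=9 minute=0"
```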