sandbox connected and streaming
@@ -1,25 +1,29 @@
# Agent Harness

Comprehensive agent orchestration system for Dexorder AI platform, built on LangChain.js and LangGraph.js.
Comprehensive agent orchestration system for the Dexorder AI platform, built on the LangChain.js deep agents architecture.

## Architecture Overview

```
gateway/src/harness/
├── memory/              # Storage layer (Redis + Iceberg + Qdrant)
├── skills/              # Individual capabilities (markdown + TypeScript)
├── subagents/           # Specialized agents with multi-file memory
├── workflows/           # LangGraph state machines
├── tools/               # Platform tools (non-MCP)
├── config/              # Configuration files
└── index.ts             # Main exports
gateway/src/
├── harness/
│   ├── memory/          # Storage layer (Redis + Iceberg + Qdrant)
│   ├── subagents/       # Specialized agents with multi-file memory
│   ├── workflows/       # LangGraph state machines
│   ├── prompts/         # System prompts
│   ├── agent-harness.ts # Main orchestrator
│   └── index.ts         # Exports
└── tools/               # LangChain tools (platform + MCP)
    ├── platform/        # Local platform tools
    ├── mcp/             # Remote MCP tool wrappers
    └── tool-registry.ts # Tool-to-agent routing
```

## Core Components

### 1. Memory Layer (`memory/`)

Tiered storage architecture as per [architecture discussion](/chat/harness-rag.txt):
Tiered storage architecture:

- **Redis**: Hot state (active sessions, checkpoints)
- **Iceberg**: Cold storage (durable conversations, analytics)
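The hot/cold split above can be sketched with plain data structures. `TieredConversationStore` below is a hypothetical stand-in for illustration only; the real layer uses Redis and Iceberg clients, and the names here are not from the codebase.

```typescript
// Hypothetical sketch of the tiered read/write path: writes land in the hot
// tier, session end archives to the cold tier, and reads prefer hot state.
// Maps stand in for the real Redis / Iceberg clients.
type Message = { role: string; content: string };

class TieredConversationStore {
  private hot = new Map<string, Message[]>();   // Redis stand-in (active sessions)
  private cold = new Map<string, Message[]>();  // Iceberg stand-in (durable history)

  save(sessionId: string, msg: Message): void {
    const msgs = this.hot.get(sessionId) ?? [];
    msgs.push(msg);
    this.hot.set(sessionId, msgs);
  }

  // Called on session end: move durable history to cold storage.
  archive(sessionId: string): void {
    const msgs = this.hot.get(sessionId);
    if (msgs) {
      this.cold.set(sessionId, msgs);
      this.hot.delete(sessionId);
    }
  }

  recent(sessionId: string, limit: number): Message[] {
    const msgs = this.hot.get(sessionId) ?? this.cold.get(sessionId) ?? [];
    return msgs.slice(-limit);
  }
}
```

The point of the split is that per-turn reads never touch cold storage for an active session; cold storage only serves analytics and session resumption.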
@@ -32,27 +36,32 @@ Tiered storage architecture as per [architecture discussion](/chat/harness-rag.t
- `embedding-service.ts`: Text→vector conversion
- `session-context.ts`: User context with channel metadata

### 2. Skills (`skills/`)
### 2. Tools (`../tools/`)

Self-contained capabilities with markdown definitions:
Standard LangChain tools following deep agents best practices:

- `*.skill.md`: Human-readable documentation
- `*.ts`: Implementation extending `BaseSkill`
- Input validation and error handling
- Can use LLM, MCP tools, or platform tools
**Platform Tools** (local services):
- `symbol_lookup`: Symbol search and metadata resolution
- `get_chart_data`: OHLCV data with workspace defaults

**MCP Tools** (remote, per-user):
- Dynamically discovered from user's MCP server
- Wrapped as standard LangChain `DynamicStructuredTool`
- Filtered per-agent via `ToolRegistry`

**Example:**
```typescript
import { MarketAnalysisSkill } from './skills';
import { getToolRegistry } from '../tools';

const skill = new MarketAnalysisSkill(logger, model);
const result = await skill.execute({
  context: userContext,
  parameters: { ticker: 'BTC/USDT', period: '4h' }
});
const toolRegistry = getToolRegistry();
const tools = await toolRegistry.getToolsForAgent(
  'main',
  mcpClient,
  availableMCPTools
);
```

See [skills/README.md](skills/README.md) for authoring guide.
See `../tools/tool-registry.ts` for tool configuration.
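`tool-registry.ts` itself is not shown in this diff. As an illustration only, per-agent filtering of discovered MCP tools against glob-style patterns could look like the sketch below; the function and type names here are hypothetical, not the registry's actual API.

```typescript
// Hypothetical sketch of per-agent MCP tool filtering. Patterns such as
// 'category_*' are treated as simple trailing-wildcard globs; anything else
// must match the tool name exactly.
interface ToolInfo { name: string }

function matchesPattern(name: string, pattern: string): boolean {
  if (pattern.endsWith('*')) {
    return name.startsWith(pattern.slice(0, -1));
  }
  return name === pattern;
}

function filterMCPTools(available: ToolInfo[], patterns: string[]): ToolInfo[] {
  return available.filter(tool =>
    patterns.some(p => matchesPattern(tool.name, p))
  );
}
```

With a config of `mcp: ['category_*']`, an agent would receive a (hypothetical) `category_list` tool but not an unrelated `fetch_chart` tool.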
### 3. Subagents (`subagents/`)

@@ -75,11 +84,20 @@ subagents/
- Split memory into logical files (better organization)
- Model overrides
- Capability tagging
- Configurable tool access via ToolRegistry

**Tool Configuration** (in `config.yaml`):
```yaml
tools:
  platform: ['symbol_lookup'] # Platform tools
  mcp: ['category_*']         # MCP tool patterns
```

**Example:**
```typescript
const codeReviewer = await createCodeReviewerSubagent(model, logger, basePath);
const review = await codeReviewer.execute({ userContext }, strategyCode);
const tools = await toolRegistry.getToolsForAgent('research', mcpClient, availableMCPTools);
const subagent = await createResearchSubagent(model, logger, basePath, mcpClient, tools);
const result = await subagent.execute({ userContext }, instruction);
```

### 4. Workflows (`workflows/`)
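The contents of the workflows section are elided from this hunk. As a rough illustration only (not the project's actual LangGraph.js code), a LangGraph-style state machine reduces to named nodes that transform a shared state object plus edges that name each node's successor:

```typescript
// Hypothetical sketch of a LangGraph-style state machine; real workflows
// use LangGraph.js StateGraph, which this hand-rolled runner only mimics.
type State = { input: string; steps: string[]; done: boolean };
type GraphNode = (s: State) => State;

function runGraph(
  nodes: Record<string, GraphNode>,
  edges: Record<string, string>,
  start: string,
  state: State
): State {
  let current: string | undefined = start;
  while (current && !state.done) {
    state = nodes[current](state); // node transforms shared state
    current = edges[current];      // edge picks the next node
  }
  return state;
}

const nodes: Record<string, GraphNode> = {
  plan: s => ({ ...s, steps: [...s.steps, 'plan'] }),
  act: s => ({ ...s, steps: [...s.steps, 'act'], done: true }),
};

const result = runGraph(nodes, { plan: 'act' }, 'plan', {
  input: 'q',
  steps: [],
  done: false,
});
```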
@@ -1,22 +1,56 @@

import type { BaseMessage } from '@langchain/core/messages';
import { HumanMessage, AIMessage, SystemMessage } from '@langchain/core/messages';
import { HumanMessage, SystemMessage, ToolMessage } from '@langchain/core/messages';
import type { FastifyBaseLogger } from 'fastify';
import type { UserLicense } from '../types/user.js';
import type { License } from '../types/user.js';
import { ChannelType } from '../types/user.js';
import type { ConversationStore } from './memory/conversation-store.js';
import type { InboundMessage, OutboundMessage } from '../types/messages.js';
import { MCPClientConnector } from './mcp-client.js';
import { CONTEXT_URIS, type ResourceContent } from '../types/resources.js';
import { LLMProviderFactory, type ProviderConfig } from '../llm/provider.js';
import { ModelRouter, RoutingStrategy } from '../llm/router.js';
import type { WorkspaceManager } from '../workspace/workspace-manager.js';
import type { ChannelAdapter } from '../workspace/index.js';
import type { ResearchSubagent } from './subagents/research/index.js';
import type { DynamicStructuredTool } from '@langchain/core/tools';
import { getToolRegistry } from '../tools/tool-registry.js';
import type { MCPToolInfo } from '../tools/mcp/mcp-tool-wrapper.js';
import { createResearchAgentTool } from '../tools/platform/research-agent.tool.js';
import { createUserContext } from './memory/session-context.js';
import { readFile } from 'fs/promises';
import { join, dirname } from 'path';
import { fileURLToPath } from 'url';

export interface AgentHarnessConfig {
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

/**
 * Session-specific config provided by channel handlers.
 * Contains only per-connection details — no infrastructure dependencies.
 */
export interface HarnessSessionConfig {
  userId: string;
  sessionId: string;
  license: UserLicense;
  providerConfig: ProviderConfig;
  license: License;
  mcpServerUrl: string;
  logger: FastifyBaseLogger;
  workspaceManager?: WorkspaceManager;
  channelAdapter?: ChannelAdapter;
  channelType?: ChannelType;
  channelUserId?: string;
}

/**
 * Factory function type for creating AgentHarness instances.
 * Created in main.ts with infrastructure (storage, providerConfig) captured in closure.
 * Channel handlers call this factory without knowing about Redis or Iceberg.
 */
export type HarnessFactory = (sessionConfig: HarnessSessionConfig) => AgentHarness;

export interface AgentHarnessConfig extends HarnessSessionConfig {
  providerConfig: ProviderConfig;
  conversationStore?: ConversationStore;
  historyLimit: number;
  researchSubagent?: ResearchSubagent;
}

/**
@@ -27,32 +61,59 @@ export interface AgentHarnessConfig {
 * 1. Fetches context from user's MCP resources
 * 2. Routes to appropriate LLM model
 * 3. Calls LLM with embedded context
 * 4. Routes tool calls to user's MCP or platform tools
 * 4. Routes tool calls to platform tools or user's MCP tools
 * 5. Saves messages back to user's MCP
 */
export class AgentHarness {
  private static systemPromptTemplate: string | null = null;

  private config: AgentHarnessConfig;
  private modelFactory: LLMProviderFactory;
  private modelRouter: ModelRouter;
  private mcpClient: MCPClientConnector;
  private workspaceManager?: WorkspaceManager;
  private lastWorkspaceSeq: number = 0;
  private channelAdapter?: ChannelAdapter;
  private isFirstMessage: boolean = true;
  private researchSubagent?: ResearchSubagent;
  private availableMCPTools: MCPToolInfo[] = [];
  private researchImageCapture: Array<{ data: string; mimeType: string }> = [];
  private conversationStore?: ConversationStore;

  constructor(config: AgentHarnessConfig) {
    this.config = config;
    this.workspaceManager = config.workspaceManager;
    this.channelAdapter = config.channelAdapter;
    this.researchSubagent = config.researchSubagent;

    this.modelFactory = new LLMProviderFactory(config.providerConfig, config.logger);
    this.modelRouter = new ModelRouter(this.modelFactory, config.logger);
    this.conversationStore = config.conversationStore;

    this.mcpClient = new MCPClientConnector({
      userId: config.userId,
      mcpServerUrl: config.license.mcpServerUrl,
      mcpServerUrl: config.mcpServerUrl,
      logger: config.logger,
    });
  }

  /**
   * Load system prompt template from file (cached)
   */
  private static async loadSystemPromptTemplate(): Promise<string> {
    if (!AgentHarness.systemPromptTemplate) {
      const templatePath = join(__dirname, 'prompts', 'system-prompt.md');
      AgentHarness.systemPromptTemplate = await readFile(templatePath, 'utf-8');
    }
    return AgentHarness.systemPromptTemplate;
  }

  /**
   * Set the channel adapter (can be called after construction)
   */
  setChannelAdapter(adapter: ChannelAdapter): void {
    this.channelAdapter = adapter;
  }

  /**
   * Initialize harness and connect to user's MCP server
   */
@@ -64,6 +125,13 @@ export class AgentHarness {

    try {
      await this.mcpClient.connect();

      // Discover available MCP tools from user's server
      await this.discoverMCPTools();

      // Initialize research subagent if not provided
      await this.initializeResearchSubagent();

      this.config.logger.info('Agent harness initialized');
    } catch (error) {
      this.config.logger.error({ error }, 'Failed to initialize agent harness');
@@ -71,46 +139,384 @@ export class AgentHarness {
    }
  }

  /**
   * Discover available MCP tools from user's server
   */
  private async discoverMCPTools(): Promise<void> {
    try {
      this.config.logger.debug('Discovering MCP tools from user server');

      // Call MCP client to list tools
      const tools = await this.mcpClient.listTools();

      // Convert to MCPToolInfo format
      this.availableMCPTools = tools.map(tool => ({
        name: tool.name,
        description: tool.description,
        inputSchema: tool.inputSchema as any,
      }));

      this.config.logger.info(
        {
          toolCount: this.availableMCPTools.length,
          toolNames: this.availableMCPTools.map(t => t.name),
        },
        'MCP tools discovered'
      );
    } catch (error) {
      this.config.logger.warn(
        {
          error,
          errorMessage: (error as Error)?.message,
          errorName: (error as Error)?.name,
          errorCode: (error as any)?.code,
        },
        'Failed to discover MCP tools - continuing without remote tools'
      );
      // Don't throw - MCP tools are optional, agent can still work with platform tools
      this.availableMCPTools = [];
    }
  }

  /**
   * Initialize research subagent
   */
  private async initializeResearchSubagent(): Promise<void> {
    if (this.researchSubagent) {
      this.config.logger.debug('Research subagent already provided');
      return;
    }

    this.config.logger.debug('Creating research subagent for session');

    try {
      const { createResearchSubagent } = await import('./subagents/research/index.js');

      // Create a model for the research subagent
      const model = await this.modelRouter.route(
        'research analysis', // dummy query
        this.config.license,
        RoutingStrategy.COMPLEXITY,
        this.config.userId
      );

      // Get tools for research subagent from registry
      // Images from MCP responses are captured via onImage and routed to the subagent
      const toolRegistry = getToolRegistry();
      const researchTools = await toolRegistry.getToolsForAgent(
        'research',
        this.mcpClient,
        this.availableMCPTools,
        this.workspaceManager,
        (img) => this.researchImageCapture.push(img)
      );

      // Path resolution: use the compiled output path
      const researchSubagentPath = join(__dirname, 'subagents', 'research');
      this.config.logger.debug({ researchSubagentPath }, 'Using research subagent path');

      this.researchSubagent = await createResearchSubagent(
        model,
        this.config.logger,
        researchSubagentPath,
        this.mcpClient,
        researchTools,
        this.researchImageCapture
      );

      this.config.logger.info(
        {
          toolCount: researchTools.length,
          toolNames: researchTools.map(t => t.name),
        },
        'Research subagent created successfully'
      );
    } catch (error) {
      this.config.logger.error(
        { error, errorMessage: (error as Error).message, stack: (error as Error).stack },
        'Failed to create research subagent'
      );
      // Don't throw - research subagent is optional
    }
  }

  /**
   * Execute model with tool calling loop
   * Handles multi-turn tool calls until the model produces a final text response
   */
  private async executeWithToolCalling(
    model: any,
    messages: BaseMessage[],
    tools: DynamicStructuredTool[],
    maxIterations: number = 2
  ): Promise<string> {
    this.config.logger.info(
      { toolCount: tools.length, maxIterations },
      'Starting tool calling loop'
    );

    const messagesCopy = [...messages];
    let iterations = 0;

    while (iterations < maxIterations) {
      iterations++;
      this.config.logger.info(
        {
          iteration: iterations,
          messageCount: messagesCopy.length,
          lastMessageType: messagesCopy[messagesCopy.length - 1]?.constructor.name,
        },
        'Tool calling loop iteration'
      );

      this.config.logger.debug('Streaming model response...');
      let response: any = null;
      try {
        const stream = await model.stream(messagesCopy);
        for await (const chunk of stream) {
          if (typeof chunk.content === 'string' && chunk.content.length > 0) {
            this.channelAdapter?.sendChunk(chunk.content);
          } else if (Array.isArray(chunk.content)) {
            for (const block of chunk.content) {
              if (block.type === 'text' && block.text) {
                this.channelAdapter?.sendChunk(block.text);
              }
            }
          }
          response = response ? response.concat(chunk) : chunk;
        }
      } catch (invokeError: any) {
        this.config.logger.error(
          {
            error: invokeError,
            errorMessage: invokeError?.message,
            errorStack: invokeError?.stack,
            iteration: iterations,
            messageCount: messagesCopy.length,
          },
          'Model streaming failed in tool calling loop'
        );
        throw invokeError;
      }

      this.config.logger.info(
        {
          hasContent: !!response.content,
          contentLength: typeof response.content === 'string' ? response.content.length : 0,
          hasToolCalls: !!response.tool_calls,
          toolCallCount: response.tool_calls?.length || 0,
        },
        'Model response received'
      );

      // Check if model wants to call tools
      if (!response.tool_calls || response.tool_calls.length === 0) {
        // No tool calls - return final response
        let finalContent: string;
        if (typeof response.content === 'string') {
          finalContent = response.content;
        } else if (Array.isArray(response.content)) {
          finalContent = response.content
            .filter((block: any) => block.type === 'text')
            .map((block: any) => block.text || '')
            .join('');
        } else {
          finalContent = JSON.stringify(response.content);
        }
        this.config.logger.info(
          { finalContentLength: finalContent.length, iterations },
          'Tool calling loop complete - no more tool calls'
        );
        return finalContent;
      }

      this.config.logger.info(
        { toolCalls: response.tool_calls.map((tc: any) => tc.name) },
        'Processing tool calls'
      );

      // Add assistant message with tool calls to history
      messagesCopy.push(response);

      // Execute each tool call
      for (const toolCall of response.tool_calls) {
        this.config.logger.info(
          { tool: toolCall.name, args: toolCall.args },
          'Executing tool call'
        );

        const tool = tools.find(t => t.name === toolCall.name);

        if (!tool) {
          this.config.logger.warn({ tool: toolCall.name }, 'Tool not found');
          messagesCopy.push(
            new ToolMessage({
              content: `Error: Tool '${toolCall.name}' not found`,
              tool_call_id: toolCall.id,
            })
          );
          continue;
        }

        try {
          this.channelAdapter?.sendToolCall?.(toolCall.name, this.getToolLabel(toolCall.name));
          const result = await tool.func(toolCall.args);

          // Process result to extract images and send them via channel adapter
          const processedResult = this.processToolResult(result, toolCall.name);

          this.config.logger.debug(
            {
              tool: toolCall.name,
              originalResultLength: result.length,
              processedResultLength: processedResult.length,
            },
            'Tool result processed'
          );

          messagesCopy.push(
            new ToolMessage({
              content: processedResult,
              tool_call_id: toolCall.id,
            })
          );

          this.config.logger.info(
            { tool: toolCall.name, resultLength: processedResult.length },
            'Tool execution completed'
          );
        } catch (error) {
          this.config.logger.error(
            {
              error,
              errorMessage: (error as Error)?.message,
              errorStack: (error as Error)?.stack,
              tool: toolCall.name,
              args: toolCall.args,
            },
            'Tool execution failed'
          );

          messagesCopy.push(
            new ToolMessage({
              content: `Error: ${error}`,
              tool_call_id: toolCall.id,
            })
          );
        }
      }
    }

    // Max iterations reached - return what we have
    this.config.logger.warn('Max tool calling iterations reached');
    return 'I apologize, but I encountered an issue processing your request. Please try rephrasing your question.';
  }

  /**
   * Handle incoming message from user
   */
  async handleMessage(message: InboundMessage): Promise<OutboundMessage> {
    this.config.logger.info(
      { messageId: message.messageId, userId: message.userId },
      { messageId: message.messageId, userId: message.userId, content: message.content.substring(0, 100) },
      'Processing user message'
    );

    try {
      // 1. Fetch context resources from user's MCP server
      this.config.logger.debug('Fetching context resources from MCP');
      const contextResources = await this.fetchContextResources();
      // 1. Build system prompt from template
      this.config.logger.debug('Building system prompt');
      const systemPrompt = await this.buildSystemPrompt();
      this.config.logger.debug({ systemPromptLength: systemPrompt.length }, 'System prompt built');

      // 2. Build system prompt from resources
      const systemPrompt = this.buildSystemPrompt(contextResources);
      // 2. Load recent conversation history
      const channelKey = this.config.channelType ?? ChannelType.WEBSOCKET;
      const storedMessages = this.conversationStore
        ? await this.conversationStore.getRecentMessages(
            this.config.userId, this.config.sessionId, this.config.historyLimit, channelKey
          )
        : [];
      const history = this.conversationStore
        ? this.conversationStore.toLangChainMessages(storedMessages)
        : [];
      this.config.logger.debug({ historyLength: history.length }, 'Conversation history loaded');

      // 3. Build messages with conversation context from MCP
      const messages = this.buildMessages(message, contextResources);

      // 4. Route to appropriate model
      // 4. Get the configured model
      this.config.logger.debug('Routing to model');
      const model = await this.modelRouter.route(
        message.content,
        this.config.license,
        RoutingStrategy.COMPLEXITY
        RoutingStrategy.COMPLEXITY,
        this.config.userId
      );
      this.config.logger.info({ modelName: model.constructor.name }, 'Model selected');

      // 5. Build LangChain messages
      const langchainMessages = this.buildLangChainMessages(systemPrompt, messages);
      const langchainMessages = this.buildLangChainMessages(systemPrompt, history, message.content);
      this.config.logger.debug({ messageCount: langchainMessages.length }, 'LangChain messages built');

      // 6. Call LLM with streaming
      this.config.logger.debug('Invoking LLM');
      const response = await model.invoke(langchainMessages);
      // 6. Get tools for main agent from registry
      const toolRegistry = getToolRegistry();
      const tools = await toolRegistry.getToolsForAgent(
        'main',
        this.mcpClient,
        this.availableMCPTools,
        this.workspaceManager // Pass session workspace manager
      );

      // 7. Extract text response (tool handling TODO)
      const assistantMessage = response.content as string;
      // Add research subagent as a tool if available
      if (this.researchSubagent) {
        const subagentContext = {
          userContext: createUserContext({
            userId: this.config.userId,
            sessionId: this.config.sessionId,
            license: this.config.license,
            channelType: this.config.channelType ?? ChannelType.WEBSOCKET,
            channelUserId: this.config.channelUserId ?? this.config.userId,
          }),
        };

        // TODO: Save messages to Iceberg conversation table instead of MCP
        // Should batch-insert periodically or on session end to avoid many small Parquet files
        // await icebergConversationStore.appendMessages([...]);
        tools.push(createResearchAgentTool({
          researchSubagent: this.researchSubagent,
          context: subagentContext,
          logger: this.config.logger,
        }));
      }

      this.config.logger.info(
        {
          toolCount: tools.length,
          toolNames: tools.map(t => t.name),
        },
        'Tools loaded for main agent'
      );

      // 7. Bind tools to model
      const modelWithTools = tools.length > 0 && model.bindTools ? model.bindTools(tools) : model;

      if (tools.length > 0) {
        this.config.logger.info(
          { modelType: modelWithTools.constructor.name, toolsBound: tools.length > 0 && !!model.bindTools },
          'Model bound with tools'
        );
      }

      // 8. Call LLM with tool calling loop
      this.config.logger.info('Invoking LLM with tool support');
      const assistantMessage = await this.executeWithToolCalling(modelWithTools, langchainMessages, tools);

      this.config.logger.info(
        { responseLength: assistantMessage.length },
        'LLM response received'
      );

      // Save user message and assistant response to conversation store
      if (this.conversationStore) {
        await this.conversationStore.saveMessage(
          this.config.userId, this.config.sessionId, 'user', message.content, undefined, channelKey
        );
        await this.conversationStore.saveMessage(
          this.config.userId, this.config.sessionId, 'assistant', assistantMessage, undefined, channelKey
        );
      }

      // Mark first message as processed
      if (this.isFirstMessage) {
@@ -129,214 +535,174 @@ export class AgentHarness {
    }
  }

  /**
   * Stream response from LLM
   */
  async *streamMessage(message: InboundMessage): AsyncGenerator<string> {
    try {
      // Fetch context
      const contextResources = await this.fetchContextResources();
      const systemPrompt = this.buildSystemPrompt(contextResources);
      const messages = this.buildMessages(message, contextResources);

      // Route to model
      const model = await this.modelRouter.route(
        message.content,
        this.config.license,
        RoutingStrategy.COMPLEXITY
      );

      // Build messages
      const langchainMessages = this.buildLangChainMessages(systemPrompt, messages);

      // Stream response
      const stream = await model.stream(langchainMessages);

      let fullResponse = '';
      for await (const chunk of stream) {
        const content = chunk.content as string;
        fullResponse += content;
        yield content;
      }

      // TODO: Save messages to Iceberg conversation table instead of MCP
      // Should batch-insert periodically or on session end to avoid many small Parquet files
      // await icebergConversationStore.appendMessages([
      //   { role: 'user', content: message.content, timestamp: message.timestamp },
      //   { role: 'assistant', content: fullResponse, timestamp: new Date() }
      // ]);

      // Mark first message as processed
      if (this.isFirstMessage) {
        this.isFirstMessage = false;
      }
    } catch (error) {
      this.config.logger.error({ error }, 'Error streaming message');
      throw error;
    }
  }

  /**
   * Fetch context resources from user's MCP server
   */
  private async fetchContextResources(): Promise<ResourceContent[]> {
    const contextUris = [
      CONTEXT_URIS.USER_PROFILE,
      CONTEXT_URIS.CONVERSATION_SUMMARY,
      CONTEXT_URIS.WORKSPACE_STATE,
      CONTEXT_URIS.SYSTEM_PROMPT,
    ];

    const resources = await Promise.all(
      contextUris.map(async (uri) => {
        try {
          return await this.mcpClient.readResource(uri);
        } catch (error) {
          this.config.logger.warn({ error, uri }, 'Failed to fetch resource, using empty');
          return { uri, text: '' };
        }
      })
    );

    return resources;
  }

  /**
   * Build messages array with context from resources
   */
  private buildMessages(
    currentMessage: InboundMessage,
    contextResources: ResourceContent[]
  ): Array<{ role: string; content: string }> {
    const conversationSummary = contextResources.find(
      (r) => r.uri === CONTEXT_URIS.CONVERSATION_SUMMARY
    );

    const messages: Array<{ role: string; content: string }> = [];

    // Add conversation context as a system-like user message
    if (conversationSummary?.text) {
      messages.push({
        role: 'user',
        content: `[Previous Conversation Context]\n${conversationSummary.text}`,
      });
      messages.push({
        role: 'assistant',
        content: 'I understand the context from our previous conversations.',
      });
    }

    // Add workspace delta (for subsequent turns)
    const workspaceDelta = this.buildWorkspaceDelta();
    if (workspaceDelta) {
      messages.push({
        role: 'user',
        content: workspaceDelta,
      });
    }

    // Add current user message
    messages.push({
      role: 'user',
      content: currentMessage.content,
    });

    return messages;
  }

  /**
   * Convert to LangChain message format
   */
  private buildLangChainMessages(
    systemPrompt: string,
    messages: Array<{ role: string; content: string }>
    history: BaseMessage[],
    currentUserMessage: string
  ): BaseMessage[] {
    const langchainMessages: BaseMessage[] = [new SystemMessage(systemPrompt)];

    for (const msg of messages) {
      if (msg.role === 'user') {
        langchainMessages.push(new HumanMessage(msg.content));
      } else if (msg.role === 'assistant') {
        langchainMessages.push(new AIMessage(msg.content));
      }
    }

    return langchainMessages;
    return [
      new SystemMessage(systemPrompt),
      ...history,
      new HumanMessage(currentUserMessage),
    ];
  }

  /**
   * Build system prompt from platform base + user resources
   * Build system prompt from template
   */
  private buildSystemPrompt(contextResources: ResourceContent[]): string {
    const userProfile = contextResources.find((r) => r.uri === CONTEXT_URIS.USER_PROFILE);
    const customPrompt = contextResources.find((r) => r.uri === CONTEXT_URIS.SYSTEM_PROMPT);
    const workspaceState = contextResources.find((r) => r.uri === CONTEXT_URIS.WORKSPACE_STATE);

    // Base platform prompt
    let prompt = `You are a helpful AI assistant for Dexorder, an AI-first trading platform.
You help users research markets, develop indicators and strategies, and analyze trading data.

User license: ${this.config.license.licenseType}
Available features: ${JSON.stringify(this.config.license.features, null, 2)}`;

    // Add user profile context
    if (userProfile?.text) {
      prompt += `\n\n# User Profile\n${userProfile.text}`;
    }

    // Add workspace context from MCP resource (if available)
    if (workspaceState?.text) {
      prompt += `\n\n# Current Workspace (from MCP)\n${workspaceState.text}`;
    }
  private async buildSystemPrompt(): Promise<string> {
    // Load template and populate with license info
    const template = await AgentHarness.loadSystemPromptTemplate();
    let prompt = template
      .replace('{{licenseType}}', this.config.license.licenseType)
      .replace('{{features}}', JSON.stringify(this.config.license.features, null, 2));

    // Add full workspace state from WorkspaceManager (first message only)
    if (this.isFirstMessage && this.workspaceManager) {
      const workspaceJSON = this.workspaceManager.serializeState();
      prompt += `\n\n# Workspace State (JSON)\n\`\`\`json\n${workspaceJSON}\n\`\`\``;

      // Record current workspace sequence for delta tracking
      this.lastWorkspaceSeq = this.workspaceManager.getCurrentSeq();
    }

    // Add user's custom instructions (highest priority)
    if (customPrompt?.text) {
      prompt += `\n\n# User Instructions\n${customPrompt.text}`;
      prompt += `\n\n# Current Workspace State\n\`\`\`json\n${workspaceJSON}\n\`\`\``;
    }

    return prompt;
  }

  /**
   * Build workspace delta message for subsequent turns.
   * Returns null if no changes since last message.
   * Map tool names to user-friendly status labels.
   */
  private buildWorkspaceDelta(): string | null {
    if (!this.workspaceManager || this.isFirstMessage) {
      return null;
    }

    const changes = this.workspaceManager.getChangesSince(this.lastWorkspaceSeq);

    if (Object.keys(changes).length === 0) {
      return null;
    }

    // Format changes as JSON
    const deltaJSON = JSON.stringify(changes, null, 2);

    // Update sequence marker
    this.lastWorkspaceSeq = this.workspaceManager.getCurrentSeq();

    return `[Workspace Changes Since Last Turn]\n\`\`\`json\n${deltaJSON}\n\`\`\``;
  private getToolLabel(toolName: string): string {
    const labels: Record<string, string> = {
      research_agent: 'Researching...',
      get_chart_data: 'Fetching chart data...',
      symbol_lookup: 'Looking up symbol...',
    };
    return labels[toolName] ?? `Running ${toolName}...`;
  }

  /**
   * Process tool result to extract images and send via channel adapter.
   * Returns text-only version for LLM context (no base64 image data).
   */
  private processToolResult(result: string, toolName: string): string {
    // Most tools return plain strings - only process JSON results
    if (!result || typeof result !== 'string') {
      return String(result || '');
    }

    // Try to parse as JSON
    let parsedResult: any;
    try {
      parsedResult = JSON.parse(result);
    } catch {
      // Not JSON, return as-is
      return result;
    }

    // Check if result has images array (from ResearchSubagent)
    if (parsedResult && Array.isArray(parsedResult.images) && parsedResult.images.length > 0) {
      this.config.logger.info(
        { tool: toolName, imageCount: parsedResult.images.length },
        'Extracting images from tool result'
      );

      // Send each image via channel adapter
      for (const image of parsedResult.images) {
        if (image.data && image.mimeType) {
          if (this.channelAdapter) {
            this.config.logger.debug({ mimeType: image.mimeType }, 'Sending image to channel');
            this.channelAdapter.sendImage({
              data: image.data,
              mimeType: image.mimeType,
|
||||
caption: undefined,
|
||||
});
|
||||
} else {
|
||||
this.config.logger.warn('No channel adapter set, cannot send image');
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Create text-only version for LLM
|
||||
const textOnlyResult = {
|
||||
...parsedResult,
|
||||
images: undefined,
|
||||
imageCount: parsedResult.images.length,
|
||||
};
|
||||
|
||||
// Clean up undefined values
|
||||
Object.keys(textOnlyResult).forEach(key => {
|
||||
if (textOnlyResult[key] === undefined) {
|
||||
delete textOnlyResult[key];
|
||||
}
|
||||
});
|
||||
|
||||
return JSON.stringify(textOnlyResult);
|
||||
}
|
||||
|
||||
// Check for nested chart_images object
|
||||
if (parsedResult && parsedResult.chart_images && typeof parsedResult.chart_images === 'object') {
|
||||
this.config.logger.info(
|
||||
{ tool: toolName, chartCount: Object.keys(parsedResult.chart_images).length },
|
||||
'Extracting chart images from tool result'
|
||||
);
|
||||
|
||||
// Send each chart image via channel adapter
|
||||
for (const [chartId, chartData] of Object.entries(parsedResult.chart_images)) {
|
||||
const chart = chartData as any;
|
||||
if (chart.type === 'image' && chart.data) {
|
||||
if (this.channelAdapter) {
|
||||
this.config.logger.debug({ chartId }, 'Sending chart image to channel');
|
||||
this.channelAdapter.sendImage({
|
||||
data: chart.data,
|
||||
mimeType: 'image/png',
|
||||
caption: undefined,
|
||||
});
|
||||
} else {
|
||||
this.config.logger.warn('No channel adapter set, cannot send chart image');
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Create text-only version for LLM
|
||||
const textOnlyResult = {
|
||||
...parsedResult,
|
||||
chart_images: undefined,
|
||||
chartCount: Object.keys(parsedResult.chart_images).length,
|
||||
};
|
||||
|
||||
// Clean up undefined values
|
||||
Object.keys(textOnlyResult).forEach(key => {
|
||||
if (textOnlyResult[key] === undefined) {
|
||||
delete textOnlyResult[key];
|
||||
}
|
||||
});
|
||||
|
||||
return JSON.stringify(textOnlyResult);
|
||||
}
|
||||
|
||||
// No images found, return stringified result
|
||||
return result;
|
||||
}
|
||||
|
||||
/**
|
||||
* Cleanup resources
|
||||
* End the session: flush conversation to cold storage, then release resources.
|
||||
* Called by channel handlers on disconnect, session expiry, or graceful shutdown.
|
||||
*/
|
||||
async cleanup(): Promise<void> {
|
||||
this.config.logger.info('Cleaning up agent harness');
|
||||
|
||||
if (this.conversationStore) {
|
||||
const channelKey = this.config.channelType ?? ChannelType.WEBSOCKET;
|
||||
try {
|
||||
await this.conversationStore.flushToIceberg(
|
||||
this.config.userId, this.config.sessionId, this.config.historyLimit, channelKey
|
||||
);
|
||||
} catch (error) {
|
||||
this.config.logger.error({ error }, 'Failed to flush conversation to Iceberg during cleanup');
|
||||
}
|
||||
}
|
||||
|
||||
await this.mcpClient.disconnect();
|
||||
}
|
||||
}
|
||||
|
||||
@@ -3,9 +3,6 @@
// Memory
export * from './memory/index.js';

// Skills
export * from './skills/index.js';

// Subagents
export * from './subagents/index.js';

@@ -88,7 +88,7 @@ export class MCPClientConnector {

  /**
   * List available tools from user's MCP server
   * Filters to only return tools marked as agent_accessible
   * Returns all available tools from the MCP server
   */
  async listTools(): Promise<Array<{ name: string; description?: string; inputSchema?: any }>> {
    if (!this.client || !this.connected) {
@@ -96,36 +96,54 @@
    }

    try {
      this.config.logger.debug('Requesting tool list from MCP server');
      const response = await this.client.listTools();

      // Filter tools to only include agent-accessible ones
      const tools = response.tools
        .filter((tool: any) => {
          // Check if tool has agent_accessible annotation
          const annotations = tool.annotations || {};
          return annotations.agent_accessible === true;
        })
        .map((tool: any) => ({
          name: tool.name,
          description: tool.description,
          inputSchema: tool.inputSchema,
        }));
      this.config.logger.debug(
        {
          hasTools: !!response.tools,
          toolCount: response.tools?.length || 0,
        },
        'Received tool list response'
      );

      // Handle case where response.tools might be undefined
      if (!response.tools || !Array.isArray(response.tools)) {
        this.config.logger.warn('MCP server returned no tools array');
        return [];
      }

      // Return all tools - agent-to-tool binding is handled by the tool registry
      const tools = response.tools.map((tool: any) => ({
        name: tool.name,
        description: tool.description,
        inputSchema: tool.inputSchema,
      }));

      this.config.logger.debug(
        { totalTools: response.tools.length, agentAccessibleTools: tools.length },
        'Listed MCP tools with filtering'
        { toolCount: tools.length },
        'Listed MCP tools'
      );

      return tools;
    } catch (error) {
      this.config.logger.error({ error }, 'Failed to list MCP tools');
      this.config.logger.error(
        {
          error,
          errorMessage: (error as Error)?.message,
          errorName: (error as Error)?.name,
          errorCode: (error as any)?.code,
          errorStack: (error as Error)?.stack,
        },
        'Failed to list MCP tools'
      );
      throw error;
    }
  }

  /**
   * List available resources from user's MCP server
   * Filters to only return resources marked as agent_accessible
   * Returns all available resources from the MCP server
   */
  async listResources(): Promise<Array<{ uri: string; name: string; description?: string; mimeType?: string }>> {
    if (!this.client || !this.connected) {
@@ -135,23 +153,17 @@ MCPClientConnector {
    try {
      const response = await this.client.listResources();

      // Filter resources to only include agent-accessible ones
      const resources = response.resources
        .filter((resource: any) => {
          // Check if resource has agent_accessible annotation
          const annotations = resource.annotations || {};
          return annotations.agent_accessible === true;
        })
        .map((resource: any) => ({
          uri: resource.uri,
          name: resource.name,
          description: resource.description,
          mimeType: resource.mimeType,
        }));
      // Return all resources - agent-to-resource binding is handled by the tool registry
      const resources = response.resources.map((resource: any) => ({
        uri: resource.uri,
        name: resource.name,
        description: resource.description,
        mimeType: resource.mimeType,
      }));

      this.config.logger.debug(
        { totalResources: response.resources.length, agentAccessibleResources: resources.length },
        'Listed MCP resources with filtering'
        { resourceCount: resources.length },
        'Listed MCP resources'
      );

      return resources;

@@ -2,6 +2,7 @@ import type Redis from 'ioredis';
import type { FastifyBaseLogger } from 'fastify';
import type { BaseMessage } from '@langchain/core/messages';
import { HumanMessage, AIMessage, SystemMessage } from '@langchain/core/messages';
import type { IcebergClient } from '../../clients/iceberg-client.js';

/**
 * Message record for storage
@@ -17,36 +18,36 @@ export interface StoredMessage {
}

/**
 * Conversation store: Redis (hot) + Iceberg (cold)
 * Conversation store: Redis (hot) + Iceberg/Parquet (cold)
 *
 * Hot path: Recent messages in Redis for fast access
 * Cold path: Full history in Iceberg for durability and analytics
 * Hot path: Recent messages in Redis for fast context loading
 * Cold path: Full session flushed as a single Parquet file at session end
 *
 * Architecture:
 * - Redis stores last N messages per session with TTL
 * - Iceberg stores all messages partitioned by user_id, session_id
 * - Supports time-travel queries for debugging and analysis
 * - Parquet file written to S3 at session close (one file per session)
 * - Cold read falls back to Parquet scan when Redis TTL has expired
 */
export class ConversationStore {
  private readonly HOT_MESSAGE_LIMIT = 50; // Keep last 50 messages in Redis
  private readonly HOT_MESSAGE_LIMIT = 50; // Redis buffer ceiling
  private readonly HOT_TTL_SECONDS = 3600; // 1 hour

  constructor(
    private redis: Redis,
    private logger: FastifyBaseLogger
    // TODO: Add Iceberg catalog
    // private iceberg: IcebergCatalog
    private logger: FastifyBaseLogger,
    private icebergClient?: IcebergClient
  ) {}

  /**
   * Save a message to both Redis and Iceberg
   * Save a message to Redis hot path
   */
  async saveMessage(
    userId: string,
    sessionId: string,
    role: 'user' | 'assistant' | 'system',
    content: string,
    metadata?: Record<string, unknown>
    metadata?: Record<string, unknown>,
    channelType?: string
  ): Promise<void> {
    const message: StoredMessage = {
      id: `${userId}:${sessionId}:${Date.now()}`,
@@ -60,20 +61,10 @@ export class ConversationStore {

    this.logger.debug({ userId, sessionId, role }, 'Saving message');

    // Hot: Add to Redis list (LPUSH for newest first)
    const key = this.getRedisKey(userId, sessionId);
    const key = this.getRedisKey(userId, sessionId, channelType);
    await this.redis.lpush(key, JSON.stringify(message));

    // Trim to keep only recent messages
    await this.redis.ltrim(key, 0, this.HOT_MESSAGE_LIMIT - 1);

    // Set TTL
    await this.redis.expire(key, this.HOT_TTL_SECONDS);

    // Cold: Async append to Iceberg
    this.appendToIceberg(message).catch((error) => {
      this.logger.error({ error, userId, sessionId }, 'Failed to append message to Iceberg');
    });
  }

  /**
@@ -82,9 +73,10 @@ export class ConversationStore {
  async getRecentMessages(
    userId: string,
    sessionId: string,
    limit: number = 20
    limit: number,
    channelType?: string
  ): Promise<StoredMessage[]> {
    const key = this.getRedisKey(userId, sessionId);
    const key = this.getRedisKey(userId, sessionId, channelType);
    const messages = await this.redis.lrange(key, 0, limit - 1);

    return messages
@@ -101,37 +93,70 @@ export class ConversationStore {
  }

  /**
   * Get full conversation history from Iceberg (cold path)
   * Get full conversation history — Redis first, falls back to Iceberg cold path
   */
  async getFullHistory(
    userId: string,
    sessionId: string,
    limit: number,
    channelType?: string,
    timeRange?: { start: number; end: number }
  ): Promise<StoredMessage[]> {
    this.logger.debug({ userId, sessionId, timeRange }, 'Loading full history from Iceberg');
    this.logger.debug({ userId, sessionId }, 'Loading full history');

    // TODO: Implement Iceberg query
    // const table = this.iceberg.loadTable('gateway.conversations');
    // const filters = [
    //   EqualTo('user_id', userId),
    //   EqualTo('session_id', sessionId),
    // ];
    //
    // if (timeRange) {
    //   filters.push(GreaterThanOrEqual('timestamp', timeRange.start));
    //   filters.push(LessThanOrEqual('timestamp', timeRange.end));
    // }
    //
    // const df = await table.scan({
    //   row_filter: And(...filters)
    // }).to_pandas();
    //
    // if (!df.empty) {
    //   return df.sort_values('timestamp').to_dict('records');
    // }
    // Try Redis hot path first
    const hot = await this.getRecentMessages(userId, sessionId, limit, channelType);
    if (hot.length > 0) {
      return hot;
    }

    // Fallback to Redis if Iceberg not available
    return await this.getRecentMessages(userId, sessionId, 1000);
    // Fall back to Iceberg cold path (post-TTL recovery)
    if (this.icebergClient) {
      this.logger.debug({ userId, sessionId }, 'Redis miss, querying Iceberg cold path');
      const coldMessages = await this.icebergClient.queryMessages(userId, sessionId, {
        startTime: timeRange?.start,
        endTime: timeRange?.end,
        limit,
      });
      return coldMessages.map((m) => ({
        id: m.id,
        userId: m.user_id,
        sessionId: m.session_id,
        role: m.role as StoredMessage['role'],
        content: m.content,
        timestamp: m.timestamp,
      }));
    }

    return [];
  }

  /**
   * Flush the full session from Redis to Iceberg as a single Parquet file.
   * Called once at session end — prevents small-file fragmentation.
   */
  async flushToIceberg(userId: string, sessionId: string, limit: number, channelType?: string): Promise<void> {
    if (!this.icebergClient) {
      return;
    }

    const messages = await this.getRecentMessages(userId, sessionId, limit, channelType);
    if (messages.length === 0) {
      return;
    }

    const icebergMessages = messages.map((m) => ({
      id: m.id,
      user_id: m.userId,
      session_id: m.sessionId,
      role: m.role,
      content: m.content,
      metadata: JSON.stringify(m.metadata || {}),
      timestamp: m.timestamp,
    }));

    await this.icebergClient.appendMessages(userId, sessionId, icebergMessages);
    this.logger.info({ userId, sessionId, count: icebergMessages.length }, 'Conversation flushed to Iceberg');
  }

  /**
@@ -155,9 +180,9 @@ export class ConversationStore {
  /**
   * Delete all messages for a session (Redis only, Iceberg handled separately)
   */
  async deleteSession(userId: string, sessionId: string): Promise<void> {
  async deleteSession(userId: string, sessionId: string, channelType?: string): Promise<void> {
    this.logger.info({ userId, sessionId }, 'Deleting session from Redis');
    const key = this.getRedisKey(userId, sessionId);
    const key = this.getRedisKey(userId, sessionId, channelType);
    await this.redis.del(key);
  }

@@ -167,62 +192,22 @@ export class ConversationStore {
  async deleteUserData(userId: string): Promise<void> {
    this.logger.info({ userId }, 'Deleting all user messages for GDPR compliance');

    // Delete from Redis
    const pattern = `conv:${userId}:*`;
    const keys = await this.redis.keys(pattern);
    if (keys.length > 0) {
      await this.redis.del(...keys);
    }

    // Delete from Iceberg
    // Note: For GDPR compliance, we need to either:
    // 1. Send a delete command via Kafka, OR
    // 2. Use the Iceberg REST API to delete rows (if supported), OR
    // 3. Coordinate with a Flink job to handle deletes
    //
    // Iceberg delete flow:
    // - Mark rows for deletion (equality delete files)
    // - Run compaction to physically remove them
    // - Expire old snapshots

    this.logger.info({ userId }, 'User messages deleted from Redis - Iceberg GDPR delete not yet implemented');
  }

  /**
   * Get Redis key for conversation
   * Get Redis key for conversation, namespaced by channel type
   */
  private getRedisKey(userId: string, sessionId: string): string {
    return `conv:${userId}:${sessionId}`;
  }

  /**
   * Append message to Iceberg for durable storage
   *
   * Note: For production, send to Kafka topic that Flink consumes:
   * - Topic: gateway_conversations
   * - Flink job writes to gateway.conversations Iceberg table
   * - Ensures consistent write pattern with rest of system
   */
  private async appendToIceberg(message: StoredMessage): Promise<void> {
    // TODO: Send to Kafka topic for Flink processing
    // const kafkaMessage = {
    //   id: message.id,
    //   user_id: message.userId,
    //   session_id: message.sessionId,
    //   role: message.role,
    //   content: message.content,
    //   metadata: JSON.stringify(message.metadata || {}),
    //   timestamp: message.timestamp,
    // };
    // await this.kafkaProducer.send({
    //   topic: 'gateway_conversations',
    //   messages: [{ value: JSON.stringify(kafkaMessage) }]
    // });

    this.logger.debug(
      { messageId: message.id, userId: message.userId, sessionId: message.sessionId },
      'Message append to Iceberg (via Kafka) not yet implemented'
    );
  private getRedisKey(userId: string, sessionId: string, channelType?: string): string {
    return channelType
      ? `conv:${channelType}:${userId}:${sessionId}`
      : `conv:${userId}:${sessionId}`;
  }

  /**
@@ -241,7 +226,7 @@ export class ConversationStore {
  }

  const messages = await this.getRecentMessages(userId, sessionId, count);
  const timestamps = messages.map((m) => m.timestamp / 1000); // Convert to seconds
  const timestamps = messages.map((m) => m.timestamp / 1000);

  return {
    messageCount: count,

@@ -1,4 +1,4 @@
import type { UserLicense, ChannelType } from '../../types/user.js';
import type { License, ChannelType } from '../../types/user.js';
import type { BaseMessage } from '@langchain/core/messages';

/**
@@ -62,7 +62,7 @@ export interface UserContext {
  // Identity
  userId: string;
  sessionId: string;
  license: UserLicense;
  license: License;

  // Channel context (for multi-channel routing)
  activeChannel: ActiveChannel;
@@ -146,7 +146,7 @@ export function getDefaultCapabilities(channelType: ChannelType): ChannelCapabilities {
export function createUserContext(params: {
  userId: string;
  sessionId: string;
  license: UserLicense;
  license: License;
  channelType: ChannelType;
  channelUserId: string;
  channelCapabilities?: Partial<ChannelCapabilities>;

99
gateway/src/harness/prompts/system-prompt.md
Normal file
@@ -0,0 +1,99 @@
# Dexorder AI Assistant System Prompt

You are a helpful AI assistant for Dexorder, an AI-first trading platform.
You help users research markets, develop indicators and strategies, and analyze trading data.

**User License:** {{licenseType}}

**Available Features:**
{{features}}

---

# Important Instructions

## Task Delegation
- For ANY research questions, deep analysis, statistical analysis, charting requests, plotting, ML tasks, or market data queries that require computation, you MUST use the 'research' tool
- The research tool creates and runs Python scripts that generate charts and perform analysis
- Use 'research' for anything involving: plotting, statistics, calculations, correlations, patterns, volume analysis, technical indicators, or any non-trivial data processing
- NEVER write Python code directly in your responses to the user
- NEVER show code to the user - delegate to the research tool instead
- NEVER attempt to do analysis yourself - let the research subagent handle it

## Available Tools
You have access to the following tools:

### research
**This is your PRIMARY tool for any analysis, computation, charting, or plotting tasks.**

Creates and runs Python research scripts via a specialized research subagent.
The subagent autonomously writes code, executes it, handles errors, and generates charts.

**ALWAYS use research for:**
- Any plotting, charting, or visualization requests
- Price action analysis and correlations
- Technical indicators and overlays
- Statistical analysis of market data
- Volume analysis and patterns
- Machine learning or predictive modeling
- Any data-intensive computations
- Multi-symbol comparisons
- Custom calculations or transformations
- Deep analysis requiring Python libraries (pandas, numpy, scipy, matplotlib, etc.)

**NEVER attempt to do analysis yourself in the chat.**
Let the research subagent write and execute the Python code.

**Examples of when to use research:**
- "Plot BTC with volume overlay" → use research
- "Calculate correlation between ETH and BTC" → use research
- "Show me RSI divergences" → use research
- "Analyze Monday price patterns" → use research
- "Does volume predict price movement?" → use research

Parameters:
- instruction: Natural language description of the analysis to perform (be specific!)
- name: A unique name for the research script (e.g., "BTC Weekly Analysis")

Example usage:
- User: "Does Friday price action correlate with Monday?"
- You: Call research tool with instruction="Analyze correlation between Friday and Monday price action during NY trading hours (9:30-4:00 ET)", name="Friday-Monday Correlation"

### symbol-lookup
Look up trading symbols and get metadata.
Use this when users mention tickers or need symbol information.

### get-chart-data
**IMPORTANT: This is for QUICK, CASUAL information ONLY. This tool just returns raw data - it does NOT create charts or plots.**

Use ONLY when the user wants to:
- Quickly glance at recent price data
- Get a rough sense of current market conditions
- Check basic OHLC values
- Retrieve raw data without any processing

**DO NOT use get-chart-data for:**
- Plotting, charting, or any visualization
- Statistical analysis or correlations
- Calculations or data transformations
- Multi-symbol comparisons
- Volume analysis or patterns
- Any non-trivial computation
- Technical indicators or overlays

**For anything beyond casual data retrieval, use the 'research' tool instead.**
The research tool can create proper analysis with charts, statistics, and computations.

**Time Parameters:** Both from_time and to_time accept:
- Unix timestamps as numbers (e.g., 1774126800)
- Unix timestamps as strings (e.g., "1774126800")
- Date strings (e.g., "2 days ago", "2024-01-01", "yesterday")

## Workspace Tools (MCP)
You also have access to workspace persistence tools via MCP:

- **workspace_read(store_name)**: Read a workspace store (returns JSON object)
- **workspace_write(store_name, data)**: Write/overwrite a workspace store
- **workspace_patch(store_name, patch)**: Apply JSON patch to a workspace store

These are useful for persisting user preferences, analysis results, and custom data across sessions.
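
For example, a sketch of persisting a preference with workspace_write (the `preferences` store name and its field are illustrative, not a fixed schema):

```json
{
  "tool": "workspace_write",
  "arguments": {
    "store_name": "preferences",
    "data": { "default_timeframe": "4h" }
  }
}
```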

@@ -1,146 +0,0 @@
# Skills

Skills are individual capabilities that the agent can use to accomplish tasks. Each skill is a self-contained unit with:

- A markdown definition file (`*.skill.md`)
- A TypeScript implementation extending `BaseSkill`
- Clear input/output contracts
- Parameter validation
- Error handling

## Skill Structure

```
skills/
├── base-skill.ts          # Base class
├── {skill-name}.skill.md  # Definition
├── {skill-name}.ts        # Implementation
└── README.md              # This file
```

## Creating a New Skill

### 1. Create the Definition File

Create `{skill-name}.skill.md`:

```markdown
# My Skill

**Version:** 1.0.0
**Author:** Your Name
**Tags:** category1, category2

## Description
What does this skill do?

## Inputs
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| param1 | string | Yes | What it does |

## Outputs
What does it return?

## Example Usage
Show code example
```

### 2. Create the Implementation

Create `{skill-name}.ts`:

```typescript
import { BaseSkill, SkillInput, SkillResult, SkillMetadata } from './base-skill.js';

export class MySkill extends BaseSkill {
  getMetadata(): SkillMetadata {
    return {
      name: 'my-skill',
      description: 'What it does',
      version: '1.0.0',
    };
  }

  getParametersSchema(): Record<string, unknown> {
    return {
      type: 'object',
      required: ['param1'],
      properties: {
        param1: { type: 'string' },
      },
    };
  }

  validateInput(parameters: Record<string, unknown>): boolean {
    return typeof parameters.param1 === 'string';
  }

  async execute(input: SkillInput): Promise<SkillResult> {
    this.logStart(input);

    try {
      // Your implementation here
      const result = this.success({ data: 'result' });
      this.logEnd(result);
      return result;
    } catch (error) {
      return this.error(error as Error);
    }
  }
}
```

### 3. Register the Skill

Add to `index.ts`:

```typescript
export { MySkill } from './my-skill.js';
```

## Using Skills in Workflows

Skills can be used in LangGraph workflows:

```typescript
import { MarketAnalysisSkill } from '../skills/market-analysis.js';

const analyzeNode = async (state) => {
  const skill = new MarketAnalysisSkill(logger, model);
  const result = await skill.execute({
    context: state.userContext,
    parameters: {
      ticker: state.ticker,
      period: '4h',
    },
  });

  return {
    analysis: result.data,
  };
};
```

## Best Practices

1. **Single Responsibility**: Each skill should do one thing well
2. **Validation**: Always validate inputs thoroughly
3. **Error Handling**: Use try/catch and return meaningful errors
4. **Logging**: Use `logStart()` and `logEnd()` helpers
5. **Documentation**: Keep the `.skill.md` file up to date
6. **Testing**: Write unit tests for skill logic
7. **Idempotency**: Skills should be safe to retry

## Available Skills

- **market-analysis**: Analyze market conditions and trends
- *(Add more as you build them)*

## Skill Categories

- **Market Data**: Query and analyze market information
- **Trading**: Execute trades, manage positions
- **Analysis**: Technical and fundamental analysis
- **Risk**: Risk assessment and management
- **Utilities**: Helper functions and utilities

@@ -1,128 +0,0 @@
|
||||
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
|
||||
import type { FastifyBaseLogger } from 'fastify';
|
||||
import type { UserContext } from '../memory/session-context.js';

/**
 * Skill metadata
 */
export interface SkillMetadata {
  name: string;
  description: string;
  version: string;
  author?: string;
  tags?: string[];
}

/**
 * Skill input parameters
 */
export interface SkillInput {
  context: UserContext;
  parameters: Record<string, unknown>;
}

/**
 * Skill execution result
 */
export interface SkillResult {
  success: boolean;
  data?: unknown;
  error?: string;
  metadata?: Record<string, unknown>;
}

/**
 * Base skill interface
 *
 * Skills are individual capabilities that the agent can use.
 * Each skill is defined by:
 * - A markdown file (*.skill.md) describing purpose, inputs, outputs
 * - A TypeScript implementation extending BaseSkill
 *
 * Skills can use:
 * - LLM calls for reasoning
 * - User's MCP server tools
 * - Platform tools (market data, charts, etc.)
 */
export abstract class BaseSkill {
  protected logger: FastifyBaseLogger;
  protected model?: BaseChatModel;

  constructor(logger: FastifyBaseLogger, model?: BaseChatModel) {
    this.logger = logger;
    this.model = model;
  }

  /**
   * Get skill metadata
   */
  abstract getMetadata(): SkillMetadata;

  /**
   * Validate input parameters
   */
  abstract validateInput(parameters: Record<string, unknown>): boolean;

  /**
   * Execute the skill
   */
  abstract execute(input: SkillInput): Promise<SkillResult>;

  /**
   * Get required parameters schema (JSON Schema format)
   */
  abstract getParametersSchema(): Record<string, unknown>;

  /**
   * Helper: Log skill execution start
   */
  protected logStart(input: SkillInput): void {
    const metadata = this.getMetadata();
    this.logger.info(
      {
        skill: metadata.name,
        userId: input.context.userId,
        sessionId: input.context.sessionId,
        parameters: input.parameters,
      },
      'Starting skill execution'
    );
  }

  /**
   * Helper: Log skill execution end
   */
  protected logEnd(result: SkillResult): void {
    const metadata = this.getMetadata();
    this.logger.info(
      {
        skill: metadata.name,
        success: result.success,
        error: result.error,
      },
      'Skill execution completed'
    );
  }

  /**
   * Helper: Create success result
   */
  protected success(data: unknown, metadata?: Record<string, unknown>): SkillResult {
    return {
      success: true,
      data,
      metadata,
    };
  }

  /**
   * Helper: Create error result
   */
  protected error(error: string | Error, metadata?: Record<string, unknown>): SkillResult {
    return {
      success: false,
      error: error instanceof Error ? error.message : error,
      metadata,
    };
  }
}
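The abstract contract above can be exercised with a throwaway implementation; a self-contained sketch (EchoSkill and the trimmed-down interfaces are illustrative only, not part of the codebase — the real code imports SkillInput/SkillResult from base-skill.ts):

```typescript
// Self-contained sketch of the BaseSkill contract above.
// EchoSkill and these trimmed-down interfaces are illustrative only.
interface SkillInput {
  parameters: Record<string, unknown>;
}

interface SkillResult {
  success: boolean;
  data?: unknown;
  error?: string;
}

class EchoSkill {
  // Mirrors validateInput(): reject calls missing the expected parameter
  validateInput(parameters: Record<string, unknown>): boolean {
    return typeof parameters.message === 'string';
  }

  // Mirrors execute(): validate, then return a success/error SkillResult
  async execute(input: SkillInput): Promise<SkillResult> {
    if (!this.validateInput(input.parameters)) {
      return { success: false, error: 'message parameter is required' };
    }
    return { success: true, data: { echoed: input.parameters.message } };
  }
}

const result = await new EchoSkill().execute({ parameters: { message: 'hello' } });
console.log(result.success); // true
```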
@@ -1,10 +0,0 @@
// Skills exports

export {
  BaseSkill,
  type SkillMetadata,
  type SkillInput,
  type SkillResult,
} from './base-skill.js';

export { MarketAnalysisSkill } from './market-analysis.js';
@@ -1,78 +0,0 @@
# Market Analysis Skill

**Version:** 1.0.0
**Author:** Dexorder AI Platform
**Tags:** market-data, analysis, trading

## Description

Analyzes market conditions for a given ticker and timeframe. Provides insights on:
- Price trends and patterns
- Volume analysis
- Support and resistance levels
- Market sentiment indicators

## Inputs

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `ticker` | string | Yes | Market identifier (e.g., "BINANCE:BTC/USDT") |
| `period` | string | Yes | Analysis period ("1h", "4h", "1d", "1w") |
| `startTime` | number | No | Start timestamp (microseconds), defaults to 7 days ago |
| `endTime` | number | No | End timestamp (microseconds), defaults to now |
| `indicators` | string[] | No | Additional indicators to include (e.g., ["RSI", "MACD"]) |

## Outputs

```typescript
{
  success: true,
  data: {
    ticker: string,
    period: string,
    timeRange: { start: number, end: number },
    trend: "bullish" | "bearish" | "neutral",
    priceChange: number,
    volumeProfile: {
      average: number,
      recent: number,
      trend: "increasing" | "decreasing" | "stable"
    },
    supportLevels: number[],
    resistanceLevels: number[],
    indicators: Record<string, unknown>,
    analysis: string // LLM-generated natural language analysis
  }
}
```

## Example Usage

```typescript
const skill = new MarketAnalysisSkill(logger, model);

const result = await skill.execute({
  context: userContext,
  parameters: {
    ticker: "BINANCE:BTC/USDT",
    period: "4h",
    indicators: ["RSI", "MACD"]
  }
});

console.log(result.data.analysis);
// "Bitcoin is showing bullish momentum with RSI at 65 and MACD crossing above signal line..."
```

## Implementation Notes

- Queries OHLC data from Iceberg warehouse
- Uses LLM for natural language analysis
- Caches results for 5 minutes to reduce computation
- Falls back to reduced analysis if Iceberg unavailable

## Dependencies

- Iceberg client (market data)
- LLM model (analysis generation)
- User's MCP server (optional custom indicators)
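The Inputs table documents microsecond timestamps with defaults of "7 days ago" and "now". Since JavaScript's `Date.now()` returns milliseconds, computing those defaults involves a unit conversion worth spelling out; a sketch (`defaultTimeRange` is an illustrative helper, not part of the skill):

```typescript
// Default analysis window in microseconds, per the Inputs table above.
// Date.now() is milliseconds, hence the x1000 conversion.
const MICROS_PER_MS = 1000;
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

function defaultTimeRange(nowMs: number = Date.now()): { startTime: number; endTime: number } {
  return {
    startTime: (nowMs - SEVEN_DAYS_MS) * MICROS_PER_MS,
    endTime: nowMs * MICROS_PER_MS,
  };
}
```

Microsecond epoch values stay well under `Number.MAX_SAFE_INTEGER` (about 9e15), so plain numbers are safe here.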
@@ -1,198 +0,0 @@
import { BaseSkill, type SkillInput, type SkillResult, type SkillMetadata } from './base-skill.js';
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import type { FastifyBaseLogger } from 'fastify';
import { HumanMessage, SystemMessage } from '@langchain/core/messages';

/**
 * Market analysis skill implementation
 *
 * See market-analysis.skill.md for full documentation
 */
export class MarketAnalysisSkill extends BaseSkill {
  constructor(logger: FastifyBaseLogger, model?: BaseChatModel) {
    super(logger, model);
  }

  getMetadata(): SkillMetadata {
    return {
      name: 'market-analysis',
      description: 'Analyze market conditions for a given ticker and timeframe',
      version: '1.0.0',
      author: 'Dexorder AI Platform',
      tags: ['market-data', 'analysis', 'trading'],
    };
  }

  getParametersSchema(): Record<string, unknown> {
    return {
      type: 'object',
      required: ['ticker', 'period'],
      properties: {
        ticker: {
          type: 'string',
          description: 'Market identifier (e.g., "BINANCE:BTC/USDT")',
        },
        period: {
          type: 'string',
          enum: ['1h', '4h', '1d', '1w'],
          description: 'Analysis period',
        },
        startTime: {
          type: 'number',
          description: 'Start timestamp in microseconds',
        },
        endTime: {
          type: 'number',
          description: 'End timestamp in microseconds',
        },
        indicators: {
          type: 'array',
          items: { type: 'string' },
          description: 'Additional indicators to include',
        },
      },
    };
  }

  validateInput(parameters: Record<string, unknown>): boolean {
    if (!parameters.ticker || typeof parameters.ticker !== 'string') {
      return false;
    }
    if (!parameters.period || typeof parameters.period !== 'string') {
      return false;
    }
    return true;
  }

  async execute(input: SkillInput): Promise<SkillResult> {
    this.logStart(input);

    if (!this.validateInput(input.parameters)) {
      return this.error('Invalid parameters: ticker and period are required');
    }

    try {
      const ticker = input.parameters.ticker as string;
      const period = input.parameters.period as string;
      const indicators = (input.parameters.indicators as string[]) || [];

      // 1. Fetch OHLC data from Iceberg
      // TODO: Implement Iceberg query
      // const ohlcData = await this.fetchOHLCData(ticker, period, startTime, endTime);
      const ohlcData = this.getMockOHLCData(); // Placeholder

      // 2. Calculate technical indicators
      const analysis = this.calculateAnalysis(ohlcData, indicators);

      // 3. Generate natural language analysis using LLM
      let narrativeAnalysis = '';
      if (this.model) {
        narrativeAnalysis = await this.generateNarrativeAnalysis(
          ticker,
          period,
          analysis
        );
      }

      const result = this.success({
        ticker,
        period,
        timeRange: {
          start: ohlcData.startTime,
          end: ohlcData.endTime,
        },
        trend: analysis.trend,
        priceChange: analysis.priceChange,
        volumeProfile: analysis.volumeProfile,
        supportLevels: analysis.supportLevels,
        resistanceLevels: analysis.resistanceLevels,
        indicators: analysis.indicators,
        analysis: narrativeAnalysis,
      });

      this.logEnd(result);
      return result;
    } catch (error) {
      const result = this.error(error as Error);
      this.logEnd(result);
      return result;
    }
  }

  /**
   * Calculate technical analysis from OHLC data
   */
  private calculateAnalysis(
    ohlcData: any,
    _requestedIndicators: string[]
  ): any {
    // TODO: Implement proper technical analysis
    // This is a simplified placeholder

    const priceChange = ((ohlcData.close - ohlcData.open) / ohlcData.open) * 100;
    const trend = priceChange > 1 ? 'bullish' : priceChange < -1 ? 'bearish' : 'neutral';

    return {
      trend,
      priceChange,
      volumeProfile: {
        average: ohlcData.avgVolume,
        recent: ohlcData.currentVolume,
        trend: ohlcData.currentVolume > ohlcData.avgVolume ? 'increasing' : 'decreasing',
      },
      supportLevels: [ohlcData.low * 0.98, ohlcData.low * 0.95],
      resistanceLevels: [ohlcData.high * 1.02, ohlcData.high * 1.05],
      indicators: {},
    };
  }

  /**
   * Generate natural language analysis using LLM
   */
  private async generateNarrativeAnalysis(
    ticker: string,
    period: string,
    analysis: any
  ): Promise<string> {
    if (!this.model) {
      return 'LLM not available for narrative analysis';
    }

    const systemPrompt = `You are a professional market analyst.
Provide concise, actionable market analysis based on technical data.
Focus on key insights and avoid jargon.`;

    const userPrompt = `Analyze the following market data for ${ticker} (${period}):

Trend: ${analysis.trend}
Price Change: ${analysis.priceChange.toFixed(2)}%
Volume: ${analysis.volumeProfile.trend}
Support Levels: ${analysis.supportLevels.join(', ')}
Resistance Levels: ${analysis.resistanceLevels.join(', ')}

Provide a 2-3 sentence analysis suitable for a trading decision.`;

    const response = await this.model.invoke([
      new SystemMessage(systemPrompt),
      new HumanMessage(userPrompt),
    ]);

    return response.content as string;
  }

  /**
   * Mock OHLC data (placeholder until Iceberg integration)
   */
  private getMockOHLCData(): any {
    return {
      // NOTE: Date.now() is milliseconds; the documented schema uses microseconds
      startTime: Date.now() - 7 * 24 * 60 * 60 * 1000,
      endTime: Date.now(),
      open: 50000,
      high: 52000,
      low: 49000,
      close: 51500,
      avgVolume: 1000000,
      currentVolume: 1200000,
    };
  }
}
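The trend classification in `calculateAnalysis` reduces to a threshold on percent price change; isolated as a pure function it is easy to verify (`classifyTrend` is an illustrative name, not part of the skill):

```typescript
// Threshold-based trend classification, mirroring calculateAnalysis above:
// more than +1% is bullish, less than -1% is bearish, otherwise neutral.
function classifyTrend(open: number, close: number): 'bullish' | 'bearish' | 'neutral' {
  const priceChange = ((close - open) / open) * 100;
  return priceChange > 1 ? 'bullish' : priceChange < -1 ? 'bearish' : 'neutral';
}

console.log(classifyTrend(50000, 51500)); // bullish (+3%)
```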
@@ -3,6 +3,8 @@ import type { BaseMessage } from '@langchain/core/messages';
import { SystemMessage, HumanMessage } from '@langchain/core/messages';
import type { FastifyBaseLogger } from 'fastify';
import type { UserContext } from '../memory/session-context.js';
import type { MCPClientConnector } from '../mcp-client.js';
import type { DynamicStructuredTool } from '@langchain/core/tools';
import { readFile } from 'fs/promises';
import { join } from 'path';

@@ -17,6 +19,10 @@ export interface SubagentConfig {
  memoryFiles: string[]; // Memory files to load from memory/ directory
  capabilities: string[];
  systemPromptFile?: string; // Path to system-prompt.md
  tools?: {
    platform?: string[]; // Platform tool names
    mcp?: string[]; // MCP tool patterns/names
  };
}

/**
@@ -52,15 +58,21 @@ export abstract class BaseSubagent {
  protected config: SubagentConfig;
  protected systemPrompt?: string;
  protected memoryContext: string[] = [];
  protected mcpClient?: MCPClientConnector;
  protected tools: DynamicStructuredTool[] = [];

  constructor(
    config: SubagentConfig,
    model: BaseChatModel,
    logger: FastifyBaseLogger
    logger: FastifyBaseLogger,
    mcpClient?: MCPClientConnector,
    tools?: DynamicStructuredTool[]
  ) {
    this.config = config;
    this.model = model;
    this.logger = logger;
    this.mcpClient = mcpClient;
    this.tools = tools || [];
  }

  /**
@@ -176,4 +188,56 @@ export abstract class BaseSubagent {
  hasCapability(capability: string): boolean {
    return this.config.capabilities.includes(capability);
  }

  /**
   * Call a tool on the user's MCP server
   *
   * @param name Tool name
   * @param args Tool arguments
   * @returns Tool result
   * @throws Error if MCP client not available or tool call fails
   */
  protected async callMCPTool(name: string, args: Record<string, unknown>): Promise<unknown> {
    if (!this.mcpClient) {
      throw new Error('MCP client not available for this subagent');
    }

    try {
      this.logger.debug({ tool: name, args }, 'Calling MCP tool from subagent');
      const result = await this.mcpClient.callTool(name, args);
      return result;
    } catch (error) {
      this.logger.error({ error, tool: name }, 'MCP tool call failed');
      throw error;
    }
  }

  /**
   * Check if MCP client is available
   */
  protected hasMCPClient(): boolean {
    return this.mcpClient !== undefined;
  }

  /**
   * Get tools available to this subagent
   */
  getTools(): DynamicStructuredTool[] {
    return this.tools;
  }

  /**
   * Set tools for this subagent (used during initialization)
   */
  setTools(tools: DynamicStructuredTool[]): void {
    this.tools = tools;
    this.logger.debug(
      {
        subagent: this.config.name,
        toolCount: tools.length,
        toolNames: tools.map(t => t.name),
      },
      'Tools set for subagent'
    );
  }
}
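The `tools.mcp` entries in SubagentConfig are described as patterns (e.g. `category_*`), which implies glob-style matching somewhere in the registry. A sketch of such a matcher, assuming only `*` wildcards (the function names are illustrative; the actual routing lives in tool-registry.ts, which is not shown here):

```typescript
// Glob-style tool-name matching for SubagentConfig.tools.mcp patterns.
// Only "*" wildcards are supported in this sketch.
function matchesPattern(toolName: string, pattern: string): boolean {
  // Escape regex metacharacters except "*", then expand "*" to ".*"
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
  const regex = new RegExp('^' + escaped.replace(/\*/g, '.*') + '$');
  return regex.test(toolName);
}

// Keep only the available tools that match at least one configured pattern
function selectTools(available: string[], patterns: string[]): string[] {
  return available.filter(name => patterns.some(p => matchesPattern(name, p)));
}
```

With the research config above, `['category_*', 'execute_research']` would select every `category_` tool plus the script execution tool and nothing else.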
@@ -19,8 +19,8 @@ import type { FastifyBaseLogger } from 'fastify';
 * - best-practices.md: Industry standards
 */
export class CodeReviewerSubagent extends BaseSubagent {
  constructor(config: SubagentConfig, model: BaseChatModel, logger: FastifyBaseLogger) {
    super(config, model, logger);
  constructor(config: SubagentConfig, model: BaseChatModel, logger: FastifyBaseLogger, mcpClient?: any, tools?: any[]) {
    super(config, model, logger, mcpClient, tools);
  }

  /**
@@ -72,7 +72,9 @@ export class CodeReviewerSubagent extends BaseSubagent {
export async function createCodeReviewerSubagent(
  model: BaseChatModel,
  logger: FastifyBaseLogger,
  basePath: string
  basePath: string,
  mcpClient?: any,
  tools?: any[]
): Promise<CodeReviewerSubagent> {
  const { readFile } = await import('fs/promises');
  const { join } = await import('path');
@@ -84,7 +86,7 @@ export async function createCodeReviewerSubagent(
  const config = yaml.load(configContent) as SubagentConfig;

  // Create and initialize subagent
  const subagent = new CodeReviewerSubagent(config, model, logger);
  const subagent = new CodeReviewerSubagent(config, model, logger, mcpClient, tools);
  await subagent.initialize(basePath);

  return subagent;
@@ -10,3 +10,9 @@ export {
  CodeReviewerSubagent,
  createCodeReviewerSubagent,
} from './code-reviewer/index.js';

export {
  ResearchSubagent,
  createResearchSubagent,
  type ResearchResult,
} from './research/index.js';
2 gateway/src/harness/subagents/research/.gitignore vendored Normal file
@@ -0,0 +1,2 @@
# Auto-generated at build time by bin/build
api-source/
31 gateway/src/harness/subagents/research/config.yaml Normal file
@@ -0,0 +1,31 @@
# Research Subagent Configuration

name: research
description: Creates and runs Python research scripts for market analysis, charting, and statistical analysis

# Model configuration
model: claude-sonnet-4-6
temperature: 0.3
maxTokens: 8192

# Memory files to load from memory/ directory
memoryFiles:
  - api-reference.md
  - usage-examples.md

# System prompt file
systemPromptFile: system-prompt.md

# Capabilities this subagent provides
capabilities:
  - research_scripting
  - data_analysis
  - charting
  - statistical_analysis

# Tools available to this subagent
tools:
  platform: [] # No platform tools needed (works at script level)
  mcp:
    - category_* # All category_ tools (write, edit, read, list)
    - execute_research # Script execution tool
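The factory functions in this commit cast `yaml.load` output directly to SubagentConfig, so a malformed config.yaml would only fail later. A light runtime guard would catch that early; a sketch using the required fields from the interface (`isSubagentConfig` and the trimmed shape are illustrative, not part of the codebase):

```typescript
// Minimal runtime validation for a parsed SubagentConfig-like object.
// Only the fields visible in the SubagentConfig interface are checked.
interface SubagentConfigShape {
  name: string;
  description: string;
  memoryFiles: string[];
  capabilities: string[];
}

function isSubagentConfig(value: unknown): value is SubagentConfigShape {
  if (typeof value !== 'object' || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.name === 'string' &&
    typeof v.description === 'string' &&
    Array.isArray(v.memoryFiles) &&
    Array.isArray(v.capabilities)
  );
}
```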
209 gateway/src/harness/subagents/research/index.ts Normal file
@@ -0,0 +1,209 @@
import { BaseSubagent, type SubagentConfig, type SubagentContext } from '../base-subagent.js';
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';
import { SystemMessage } from '@langchain/core/messages';
import { createReactAgent } from '@langchain/langgraph/prebuilt';
import type { FastifyBaseLogger } from 'fastify';
import type { MCPClientConnector } from '../../mcp-client.js';

/**
 * Result from research subagent execution
 */
export interface ResearchResult {
  text: string;
  images: Array<{
    data: string;
    mimeType: string;
  }>;
}

/**
 * Research Subagent
 *
 * Specialized agent for creating and running Python research scripts.
 * Uses category_* MCP tools to:
 * - Create/edit research scripts with DataAPI and ChartingAPI
 * - Execute scripts and capture matplotlib charts
 * - Iterate on errors with autonomous coding loop
 *
 * The subagent has direct access to MCP tools and handles the full
 * coding loop without requiring skill-level orchestration.
 *
 * Images from script execution are extracted and returned separately
 * but are NOT loaded into the LLM context (pass-through only).
 */
export class ResearchSubagent extends BaseSubagent {
  private lastImages: Array<{data: string; mimeType: string}> = [];
  // Shared with the MCP tool wrappers — populated as tools run, cleared per execution
  private imageCapture: Array<{data: string; mimeType: string}> = [];

  constructor(
    config: SubagentConfig,
    model: BaseChatModel,
    logger: FastifyBaseLogger,
    mcpClient?: MCPClientConnector,
    tools?: any[]
  ) {
    super(config, model, logger, mcpClient, tools);
  }

  setImageCapture(capture: Array<{data: string; mimeType: string}>): void {
    this.imageCapture = capture;
  }

  /**
   * Execute research request using LangGraph's createReactAgent.
   * This is the standard LangChain pattern for agents with tool access —
   * createReactAgent handles the tool calling loop automatically.
   */
  async execute(context: SubagentContext, instruction: string): Promise<string> {
    this.logger.info(
      {
        subagent: this.getName(),
        userId: context.userContext.userId,
        instruction: instruction.substring(0, 200),
        toolCount: this.tools.length,
        toolNames: this.tools.map(t => t.name),
      },
      'Research subagent starting'
    );

    if (!this.hasMCPClient()) {
      throw new Error('MCP client not available for research subagent');
    }

    if (this.tools.length === 0) {
      this.logger.warn('Research subagent has no tools — cannot write or execute scripts');
    }

    // Clear previous images (in-place so tool wrappers keep the same array reference)
    this.imageCapture.length = 0;
    this.lastImages = [];

    // Build system prompt (with memory context appended)
    const initialMessages = this.buildMessages(context, instruction);
    // buildMessages returns [SystemMessage, ...history, HumanMessage]
    // Extract system content for createReactAgent's prompt parameter
    const systemMessage = initialMessages[0];
    const humanMessage = initialMessages[initialMessages.length - 1];

    // createReactAgent is the standard LangChain/LangGraph pattern for tool-using agents.
    // It manages the tool calling loop, message accumulation, and termination automatically.
    const agent = createReactAgent({
      llm: this.model,
      tools: this.tools,
      prompt: systemMessage as SystemMessage,
    });

    const result = await agent.invoke(
      { messages: [humanMessage] },
      { recursionLimit: 20 }
    );

    // The final message in the graph output is the agent's last AIMessage
    const allMessages: any[] = result.messages ?? [];

    this.logger.info(
      { messageCount: allMessages.length },
      'Research subagent graph completed'
    );

    // Images were captured in real-time by the MCP tool wrappers into this.imageCapture
    this.lastImages = [...this.imageCapture];

    // Return the final AI response
    const lastAI = [...allMessages].reverse().find(
      (m: any) => m.constructor?.name === 'AIMessage' || m._getType?.() === 'ai'
    );

    const finalText = lastAI
      ? (typeof lastAI.content === 'string' ? lastAI.content : JSON.stringify(lastAI.content))
      : 'Research completed.';

    this.logger.info(
      { textLength: finalText.length, imageCount: this.lastImages.length },
      'Research subagent finished'
    );

    return finalText;
  }

  /**
   * Execute with full result including images
   * This is the method that ResearchSkill should use
   */
  async executeWithImages(context: SubagentContext, instruction: string): Promise<ResearchResult> {
    const text = await this.execute(context, instruction);
    return {
      text,
      images: this.lastImages,
    };
  }

  /**
   * Get images from last execution
   */
  getLastImages(): Array<{data: string; mimeType: string}> {
    return this.lastImages;
  }

  /**
   * Stream research execution
   */
  async *stream(context: SubagentContext, instruction: string): AsyncGenerator<string> {
    this.logger.info(
      {
        subagent: this.getName(),
        userId: context.userContext.userId,
      },
      'Streaming research request'
    );

    if (!this.hasMCPClient()) {
      throw new Error('MCP client not available for research subagent');
    }

    // Clear previous images
    this.lastImages = [];

    const messages = this.buildMessages(context, instruction);

    const stream = await this.model.stream(messages);

    for await (const chunk of stream) {
      if (typeof chunk.content === 'string') {
        yield chunk.content;
      }
    }
  }
}

/**
 * Factory function to create and initialize ResearchSubagent
 */
export async function createResearchSubagent(
  model: BaseChatModel,
  logger: FastifyBaseLogger,
  basePath: string,
  mcpClient?: MCPClientConnector,
  tools?: any[],
  imageCapture?: Array<{data: string; mimeType: string}>
): Promise<ResearchSubagent> {
  const { readFile } = await import('fs/promises');
  const { join } = await import('path');
  const yaml = await import('js-yaml');

  // Load config
  const configPath = join(basePath, 'config.yaml');
  const configContent = await readFile(configPath, 'utf-8');
  const config = yaml.load(configContent) as SubagentConfig;

  // Create and initialize subagent
  const subagent = new ResearchSubagent(config, model, logger, mcpClient, tools);
  if (imageCapture !== undefined) {
    subagent.setImageCapture(imageCapture);
  }
  await subagent.initialize(basePath);

  return subagent;
}
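The final-message extraction at the end of `execute()` can be sketched in isolation (the `SimpleMessage` shape below is a simplified stand-in for LangChain's message classes):

```typescript
// Find the last AI message in a graph result and extract its text,
// mirroring the fallback logic in ResearchSubagent.execute() above.
interface SimpleMessage {
  type: 'ai' | 'human' | 'tool';
  content: string | object;
}

function finalText(messages: SimpleMessage[]): string {
  const lastAI = [...messages].reverse().find(m => m.type === 'ai');
  if (!lastAI) return 'Research completed.';
  return typeof lastAI.content === 'string'
    ? lastAI.content
    : JSON.stringify(lastAI.content);
}
```

Note that the copy-then-reverse (`[...messages].reverse()`) avoids mutating the graph's message array in place.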
480
gateway/src/harness/subagents/research/memory/api-reference.md
Normal file
480
gateway/src/harness/subagents/research/memory/api-reference.md
Normal file
@@ -0,0 +1,480 @@
|
||||
# Dexorder Research API Reference
|
||||
|
||||
This file contains the complete Python API source code with full docstrings.
|
||||
These files are copied verbatim from `sandbox/dexorder/api/`.
|
||||
|
||||
The API provides access to market data and charting capabilities for research scripts.
|
||||
|
||||
---
|
||||
|
||||
## Overview
|
||||
|
||||
Research scripts access the API via:
|
||||
```python
|
||||
from dexorder.api import get_api
|
||||
api = get_api()
|
||||
```
|
||||
|
||||
The API instance provides:
|
||||
- `api.data` - DataAPI for fetching OHLC market data
|
||||
- `api.charting` - ChartingAPI for creating financial charts
|
||||
|
||||
---
|
||||
|
||||
## Complete API Source Code
|
||||
|
||||
The following sections contain the verbatim Python source files with complete
|
||||
type hints, docstrings, and examples.
|
||||
|
||||
|
||||
### api.py
|
||||
```python
|
||||
"""
|
||||
Main DexOrder API - provides access to market data and charting.
|
||||
"""
|
||||
|
||||
import logging
|
||||
|
||||
from .charting_api import ChartingAPI
|
||||
from .data_api import DataAPI
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class API:
|
||||
"""
|
||||
Main API for accessing market data and creating charts.
|
||||
|
||||
This is the primary interface for research scripts and trading strategies.
|
||||
Access this via get_api() in research scripts.
|
||||
|
||||
Attributes:
|
||||
data: DataAPI for fetching historical and current market data
|
||||
charting: ChartingAPI for creating candlestick charts and visualizations
|
||||
|
||||
Example:
|
||||
from dexorder.api import get_api
|
||||
import asyncio
|
||||
|
||||
api = get_api()
|
||||
|
||||
# Fetch data
|
||||
df = asyncio.run(api.data.historical_ohlc(
|
||||
ticker="BINANCE:BTC/USDT",
|
||||
period_seconds=3600,
|
||||
start_time="2021-12-20",
|
||||
end_time="2021-12-21"
|
||||
))
|
||||
|
||||
# Create chart
|
||||
fig, ax = api.charting.plot_ohlc(df, title="BTC/USDT 1H")
|
||||
"""
|
||||
|
||||
def __init__(self, charting: ChartingAPI, data: DataAPI):
|
||||
self.charting: ChartingAPI = charting
|
||||
self.data: DataAPI = data
|
||||
```
|
||||
|
||||
|
||||
### data_api.py
|
||||
```python
|
||||
from abc import ABC, abstractmethod
|
||||
from typing import Optional, List
|
||||
|
||||
import pandas as pd
|
||||
|
||||
from dexorder.utils import TimestampInput
|
||||
|
||||
|
||||
class DataAPI(ABC):
|
||||
"""
|
||||
API for accessing market data.
|
||||
|
||||
Provides methods to query OHLC (Open, High, Low, Close) candlestick data
|
||||
for cryptocurrency markets.
|
||||
"""
|
||||
|
||||
@abstractmethod
|
||||
async def historical_ohlc(
|
||||
self,
|
||||
ticker: str,
|
||||
period_seconds: int,
|
||||
start_time: TimestampInput,
|
||||
end_time: TimestampInput,
|
||||
extra_columns: Optional[List[str]] = None,
|
||||
) -> pd.DataFrame:
|
||||
"""
|
||||
Fetch historical OHLC candlestick data for a market.
|
||||
|
||||
Args:
|
||||
ticker: Market identifier in format "EXCHANGE:SYMBOL"
|
||||
Examples: "BINANCE:BTC/USDT", "COINBASE:ETH/USD"
|
||||
period_seconds: Candle period in seconds
|
||||
Common values:
|
||||
- 60 (1 minute)
|
||||
- 300 (5 minutes)
|
||||
- 900 (15 minutes)
|
||||
- 3600 (1 hour)
|
||||
- 86400 (1 day)
|
||||
- 604800 (1 week)
|
||||
start_time: Start of time range. Accepts:
|
||||
- Unix timestamp in seconds (int/float): 1640000000
|
||||
- Date string: "2021-12-20" or "2021-12-20 12:00:00"
|
||||
- datetime object: datetime(2021, 12, 20)
|
||||
- pandas Timestamp: pd.Timestamp("2021-12-20")
|
||||
end_time: End of time range. Same formats as start_time.
|
||||
extra_columns: Optional additional columns to include beyond the standard
|
||||
OHLC columns. Available options:
|
||||
- "volume" - Total volume (decimal float)
|
||||
- "buy_vol" - Buy-side volume (decimal float)
|
||||
- "sell_vol" - Sell-side volume (decimal float)
|
||||
- "open_time", "high_time", "low_time", "close_time" (timestamps)
|
||||
- "open_interest" (for futures markets)
|
||||
- "ticker", "period_seconds"
|
||||
|
||||
Returns:
|
||||
DataFrame with candlestick data sorted by timestamp (ascending).
|
||||
Standard columns (always included):
|
||||
- timestamp: Period start time in microseconds
|
||||
- open: Opening price (decimal float)
|
||||
- high: Highest price (decimal float)
|
||||
- low: Lowest price (decimal float)
|
||||
- close: Closing price (decimal float)
|
||||
|
||||
Plus any columns specified in extra_columns.
|
||||
|
||||
All prices and volumes are automatically converted to decimal floats
|
||||
using market metadata. No manual conversion is needed.
|
||||
|
||||
Returns empty DataFrame if no data is available.
|
||||
|
||||
Examples:
|
||||
# Basic OHLC with Unix timestamp
|
||||
df = await api.historical_ohlc(
|
||||
ticker="BINANCE:BTC/USDT",
|
||||
period_seconds=3600,
|
||||
start_time=1640000000,
|
||||
end_time=1640086400
|
||||
)
|
||||
|
||||
# Using date strings with volume
|
||||
df = await api.historical_ohlc(
|
||||
ticker="BINANCE:BTC/USDT",
|
||||
period_seconds=3600,
|
||||
start_time="2021-12-20",
|
||||
end_time="2021-12-21",
|
||||
extra_columns=["volume"]
|
||||
)
|
||||
|
||||
# Using datetime objects
|
||||
from datetime import datetime
|
||||
df = await api.historical_ohlc(
|
||||
ticker="COINBASE:ETH/USD",
|
||||
period_seconds=300,
|
||||
start_time=datetime(2021, 12, 20, 9, 30),
|
||||
end_time=datetime(2021, 12, 20, 16, 30),
|
||||
extra_columns=["volume", "buy_vol", "sell_vol"]
|
||||
)
|
||||
"""
|
||||
pass

    @abstractmethod
    async def latest_ohlc(
        self,
        ticker: str,
        period_seconds: int,
        length: int = 1,
        extra_columns: Optional[List[str]] = None,
    ) -> pd.DataFrame:
        """
        Query the most recent OHLC candles for a ticker.

        This method fetches the latest N completed candles without needing to
        specify exact timestamps. Useful for real-time analysis and indicators.

        Args:
            ticker: Market identifier in format "EXCHANGE:SYMBOL"
                Examples: "BINANCE:BTC/USDT", "COINBASE:ETH/USD"
            period_seconds: OHLC candle period in seconds
                Common values: 60 (1m), 300 (5m), 900 (15m), 3600 (1h),
                86400 (1d), 604800 (1w)
            length: Number of most recent candles to return (default: 1)
            extra_columns: Optional list of additional column names to include.
                Same column options as historical_ohlc:
                - "volume", "buy_vol", "sell_vol"
                - "open_time", "high_time", "low_time", "close_time"
                - "open_interest", "ticker", "period_seconds"

        Returns:
            Pandas DataFrame with the same column structure as historical_ohlc,
            containing the N most recent completed candles sorted by timestamp.
            Returns an empty DataFrame if no data is available.

        Examples:
            # Get the last candle
            df = await api.latest_ohlc(
                ticker="BINANCE:BTC/USDT",
                period_seconds=3600
            )
            # Returns: timestamp, open, high, low, close

            # Get the last 50 5-minute candles with volume
            df = await api.latest_ohlc(
                ticker="COINBASE:ETH/USD",
                period_seconds=300,
                length=50,
                extra_columns=["volume", "buy_vol", "sell_vol"]
            )

            # Get recent candles with all timing data
            df = await api.latest_ohlc(
                ticker="BINANCE:BTC/USDT",
                period_seconds=60,
                length=100,
                extra_columns=["open_time", "high_time", "low_time", "close_time"]
            )

        Note:
            This method returns only completed candles. The current (incomplete)
            candle is not included.
        """
        pass
```


### charting_api.py

```python
import logging
from abc import abstractmethod, ABC
from typing import Optional, Tuple, List

import pandas as pd
from matplotlib import pyplot as plt
from matplotlib.figure import Figure


class ChartingAPI(ABC):
    """
    API for creating financial charts and visualizations.

    Provides methods to create candlestick charts, add technical indicator panels,
    and build custom visualizations. All figures are automatically captured and
    returned to the client as images.

    Basic workflow:
        1. Create a chart with plot_ohlc() → returns Figure and Axes
        2. Optionally overlay indicators on the main axes (e.g., moving averages)
        3. Optionally add indicator panels below with add_indicator_panel()
        4. Figures are automatically captured (no need to save manually)
    """

    @abstractmethod
    def plot_ohlc(
        self,
        df: pd.DataFrame,
        title: Optional[str] = None,
        volume: bool = False,
        style: str = "charles",
        figsize: Tuple[int, int] = (12, 8),
        **kwargs
    ) -> Tuple[Figure, plt.Axes]:
        """
        Create a candlestick chart from OHLC data.

        Args:
            df: DataFrame with OHLC data. Required columns: open, high, low, close.
                Column names are case-insensitive.
            title: Chart title (optional)
            volume: If True, shows volume bars below the candlesticks (requires 'volume' column)
            style: Visual style for the chart. Available styles:
                "charles" (default), "binance", "blueskies", "brasil", "checkers",
                "classic", "mike", "nightclouds", "sas", "starsandstripes", "yahoo"
            figsize: Figure size as (width, height) in inches. Default: (12, 8)
            **kwargs: Additional styling arguments

        Returns:
            Tuple of (Figure, Axes):
            - Figure: matplotlib Figure object
            - Axes: Main candlestick axes (use for overlaying indicators)

        Examples:
            # Basic chart
            fig, ax = api.plot_ohlc(df)

            # With volume and title
            fig, ax = api.plot_ohlc(
                df,
                title="BTC/USDT 1H",
                volume=True,
                style="binance"
            )

            # Overlay moving average
            fig, ax = api.plot_ohlc(df)
            ax.plot(df.index, df['sma_20'], label="SMA 20", color="blue")
            ax.legend()
        """
        pass

    @abstractmethod
    def add_indicator_panel(
        self,
        fig: Figure,
        df: pd.DataFrame,
        columns: Optional[List[str]] = None,
        ylabel: Optional[str] = None,
        height_ratio: float = 0.3,
        ylim: Optional[Tuple[float, float]] = None,
        **kwargs
    ) -> plt.Axes:
        """
        Add an indicator panel below the chart with time-aligned x-axis.

        Use this to display indicators that should be shown separately from the
        price chart (e.g., RSI, MACD, volume).

        Args:
            fig: Figure object from plot_ohlc()
            df: DataFrame with indicator data (must have same index as OHLC data)
            columns: Column names to plot. If None, plots all numeric columns.
            ylabel: Y-axis label (e.g., "RSI", "MACD")
            height_ratio: Panel height relative to main chart (default: 0.3 = 30%)
            ylim: Y-axis limits as (min, max). If None, auto-scales.
            **kwargs: Line styling options (color, linewidth, linestyle, alpha)

        Returns:
            Axes object for the new panel (use for further customization)

        Examples:
            # Add RSI panel with reference lines
            fig, ax = api.plot_ohlc(df)
            rsi_ax = api.add_indicator_panel(
                fig, df,
                columns=["rsi"],
                ylabel="RSI",
                ylim=(0, 100)
            )
            rsi_ax.axhline(30, color='green', linestyle='--', alpha=0.5)
            rsi_ax.axhline(70, color='red', linestyle='--', alpha=0.5)

            # Add MACD panel
            fig, ax = api.plot_ohlc(df)
            api.add_indicator_panel(
                fig, df,
                columns=["macd", "macd_signal"],
                ylabel="MACD"
            )
        """
        pass

    @abstractmethod
    def create_figure(
        self,
        figsize: Tuple[int, int] = (12, 8),
        style: str = "charles"
    ) -> Tuple[Figure, plt.Axes]:
        """
        Create a styled figure for custom visualizations.

        Use this when you want to create charts other than candlesticks
        (e.g., histograms, scatter plots, heatmaps).

        Args:
            figsize: Figure size as (width, height) in inches. Default: (12, 8)
            style: Style name for consistent theming. Default: "charles"

        Returns:
            Tuple of (Figure, Axes) ready for plotting

        Examples:
            # Histogram
            fig, ax = api.create_figure()
            ax.hist(returns, bins=50)
            ax.set_title("Return Distribution")

            # Heatmap
            fig, ax = api.create_figure(figsize=(10, 10))
            import seaborn as sns
            sns.heatmap(correlation_matrix, ax=ax)
            ax.set_title("Correlation Matrix")
        """
        pass
```


### __init__.py

```python
"""
DexOrder API - market data and charting for research and trading.

For research scripts, import and use get_api() to access the API:

    from dexorder.api import get_api
    import asyncio

    api = get_api()
    df = asyncio.run(api.data.historical_ohlc(...))
    fig, ax = api.charting.plot_ohlc(df)
"""

import logging
from typing import Optional

from dexorder.api.api import API
from dexorder.api.charting_api import ChartingAPI
from dexorder.api.data_api import DataAPI

log = logging.getLogger(__name__)

# Global API instance - managed by main.py
_global_api: Optional[API] = None


def get_api() -> API:
    """
    Get the global API instance for accessing market data and charts.

    Use this in research scripts to access the data and charting APIs.

    Returns:
        API instance with data and charting capabilities

    Raises:
        RuntimeError: If called before API initialization (should not happen in research scripts)

    Example:
        from dexorder.api import get_api
        import asyncio

        api = get_api()

        # Fetch data
        df = asyncio.run(api.data.historical_ohlc(
            ticker="BINANCE:BTC/USDT",
            period_seconds=3600,
            start_time="2021-12-20",
            end_time="2021-12-21"
        ))

        # Create chart
        fig, ax = api.charting.plot_ohlc(df, title="BTC/USDT")
    """
    if _global_api is None:
        raise RuntimeError("API not initialized")
    return _global_api


def set_api(api: API) -> None:
    """Set the global API instance. Internal use only."""
    global _global_api
    _global_api = api


__all__ = ['API', 'ChartingAPI', 'DataAPI', 'get_api', 'set_api']
```


---

For practical usage patterns and complete working examples, see `usage-examples.md`.

gateway/src/harness/subagents/research/memory/usage-examples.md (new file, +221 lines)

# Research Script API Usage

Research scripts executed via the `execute_research` MCP tool have access to the global API instance, which provides both data fetching and charting capabilities.

## Accessing the API

```python
from dexorder.api import get_api
import asyncio

# Get the global API instance
api = get_api()
```

## Using the Data API

The data API provides access to historical OHLC (Open, High, Low, Close) market data with smart caching via Iceberg.

### Fetching Historical Data

The API accepts flexible timestamp formats for convenience:

```python
from dexorder.api import get_api
import asyncio
from datetime import datetime

api = get_api()

# Method 1: Using Unix timestamps (seconds)
df = asyncio.run(api.data.historical_ohlc(
    ticker="BINANCE:BTC/USDT",
    period_seconds=3600,    # 1 hour candles
    start_time=1640000000,  # Unix timestamp in seconds
    end_time=1640086400,
    extra_columns=["volume"]
))

# Method 2: Using date strings
df = asyncio.run(api.data.historical_ohlc(
    ticker="BINANCE:BTC/USDT",
    period_seconds=3600,
    start_time="2021-12-20",  # Simple date string
    end_time="2021-12-21",
    extra_columns=["volume"]
))

# Method 3: Using date strings with time
df = asyncio.run(api.data.historical_ohlc(
    ticker="BINANCE:BTC/USDT",
    period_seconds=3600,
    start_time="2021-12-20 00:00:00",
    end_time="2021-12-20 23:59:59",
    extra_columns=["volume"]
))

# Method 4: Using datetime objects
df = asyncio.run(api.data.historical_ohlc(
    ticker="BINANCE:BTC/USDT",
    period_seconds=3600,
    start_time=datetime(2021, 12, 20),
    end_time=datetime(2021, 12, 21),
    extra_columns=["volume"]
))

print(f"Loaded {len(df)} candles")
print(df.head())
```

### Available Extra Columns

- `"volume"` - Total volume
- `"buy_vol"` - Buy-side volume
- `"sell_vol"` - Sell-side volume
- `"open_time"`, `"high_time"`, `"low_time"`, `"close_time"` - Timestamps for each price point
- `"open_interest"` - Open interest (for futures)
- `"ticker"` - Market identifier
- `"period_seconds"` - Period in seconds

## Using the Charting API

The charting API provides styled financial charts with OHLC candlesticks and technical indicators.

### Creating a Basic Candlestick Chart

```python
from dexorder.api import get_api
import asyncio

api = get_api()

# Fetch data
df = asyncio.run(api.data.historical_ohlc(
    ticker="BINANCE:BTC/USDT",
    period_seconds=3600,
    start_time="2021-12-20",
    end_time="2021-12-21",
    extra_columns=["volume"]
))

# Create candlestick chart (synchronous)
fig, ax = api.charting.plot_ohlc(
    df,
    title="BTC/USDT 1H",
    volume=True,     # Show volume bars
    style="charles"  # Chart style
)

# The figure is automatically captured and returned to the MCP client
```

### Adding Indicator Panels

```python
from dexorder.api import get_api
import asyncio

api = get_api()

# Fetch data
df = asyncio.run(api.data.historical_ohlc(
    ticker="BINANCE:BTC/USDT",
    period_seconds=3600,
    start_time="2021-12-20",
    end_time="2021-12-21"
))

# Calculate a simple moving average
df['sma_20'] = df['close'].rolling(window=20).mean()

# Create chart
fig, ax = api.charting.plot_ohlc(df, title="BTC/USDT with SMA")

# Overlay the SMA on the price chart
ax.plot(df.index, df['sma_20'], label="SMA 20", color="blue", linewidth=2)
ax.legend()

# Add RSI indicator panel below
df['rsi'] = calculate_rsi(df['close'], 14)  # Your RSI calculation
rsi_ax = api.charting.add_indicator_panel(
    fig, df,
    columns=["rsi"],
    ylabel="RSI",
    ylim=(0, 100)
)
rsi_ax.axhline(70, color='red', linestyle='--', alpha=0.5)
rsi_ax.axhline(30, color='green', linestyle='--', alpha=0.5)
```
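The snippet above leaves `calculate_rsi` to you. A minimal sketch using simple rolling means (a common simplification of Wilder's smoothing, assumed here rather than taken from the Dexorder codebase) could be:

```python
import pandas as pd

def calculate_rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """RSI from closing prices using plain rolling averages of gains and losses."""
    delta = close.diff()
    avg_gain = delta.clip(lower=0).rolling(window=period).mean()
    avg_loss = (-delta.clip(upper=0)).rolling(window=period).mean()
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)
```

Classic RSI uses Wilder's exponential smoothing (`ewm(alpha=1/period)`); the rolling-mean variant above is simpler and adequate for exploratory charts.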

## Complete Example

```python
from dexorder.api import get_api
import asyncio

# Get API instance
api = get_api()

# Fetch historical data using date strings (easiest for research);
# one week of hourly candles gives the 50-period SMA enough data
df = asyncio.run(api.data.historical_ohlc(
    ticker="BINANCE:BTC/USDT",
    period_seconds=3600,  # 1 hour
    start_time="2021-12-14",
    end_time="2021-12-21",
    extra_columns=["volume"]
))

# Add some analysis
df['sma_20'] = df['close'].rolling(window=20).mean()
df['sma_50'] = df['close'].rolling(window=50).mean()

# Create chart with volume
fig, ax = api.charting.plot_ohlc(
    df,
    title="BTC/USDT Analysis",
    volume=True,
    style="charles"
)

# Overlay moving averages
ax.plot(df.index, df['sma_20'], label="SMA 20", color="blue", linewidth=1.5)
ax.plot(df.index, df['sma_50'], label="SMA 50", color="red", linewidth=1.5)
ax.legend()

# Print summary statistics
print(f"Period: {len(df)} candles")
print(f"High: {df['high'].max()}")
print(f"Low: {df['low'].min()}")
print(f"Mean Volume: {df['volume'].mean():.2f}")
```

## Notes

- **Async vs Sync**: Data API methods are async and require `asyncio.run()`. Charting API methods are synchronous.
- **Figure Capture**: All matplotlib figures created during script execution are automatically captured and returned as PNG images.
- **Print Statements**: All `print()` output is captured and returned as text content.
- **Errors**: Exceptions are caught and reported in the execution results.
- **Timestamps**: The API accepts flexible timestamp formats:
  - Unix timestamps in **seconds** (int or float) - e.g., `1640000000`
  - Date strings - e.g., `"2021-12-20"` or `"2021-12-20 12:00:00"`
  - datetime objects - e.g., `datetime(2021, 12, 20)`
  - pandas Timestamp objects
  - Internally, the system uses microseconds since epoch, but you don't need to worry about this conversion.
- **Price/Volume Values**: All prices and volumes are returned as decimal floats, automatically converted from internal storage format using market metadata. No manual conversion is needed.
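To make the "microseconds since epoch" note concrete, here is how the accepted formats all reduce to one internal value. `to_epoch_us` is an illustrative helper built on pandas, not part of the API:

```python
from datetime import datetime
import pandas as pd

def to_epoch_us(ts) -> int:
    """Illustrative only: normalize any accepted timestamp form to microseconds since epoch."""
    if isinstance(ts, (int, float)):       # Unix seconds
        ts = pd.Timestamp(ts, unit="s")
    return pd.Timestamp(ts).value // 1_000  # pandas stores nanoseconds internally

# All four forms denote the same instant (1640000000 s = 2021-12-20 11:33:20 UTC)
forms = [
    1640000000,
    "2021-12-20 11:33:20",
    datetime(2021, 12, 20, 11, 33, 20),
    pd.Timestamp("2021-12-20 11:33:20"),
]
assert len({to_epoch_us(f) for f in forms}) == 1
```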

## Available Chart Styles

- `"charles"` (default)
- `"binance"`
- `"blueskies"`
- `"brasil"`
- `"checkers"`
- `"classic"`
- `"mike"`
- `"nightclouds"`
- `"sas"`
- `"starsandstripes"`
- `"yahoo"`

gateway/src/harness/subagents/research/system-prompt.md (new file, +138 lines)

# Research Script Assistant

You are a specialized assistant that creates Python research scripts for market data analysis and visualization.

## Your Purpose

Create Python scripts that:
- Fetch historical market data using the Dexorder DataAPI
- Perform statistical analysis and calculations
- Generate professional charts using matplotlib via the ChartingAPI

All matplotlib figures are automatically captured and sent to the user as images.

## Available Tools

You have direct access to these MCP tools:

- **category_write**: Create a new research script
  - Required: category="research", name, description, code
  - Optional: metadata (with conda_packages list if needed)
  - Automatically executes the script after writing
  - Returns validation results and execution output (text + images)

- **category_edit**: Update an existing research script
  - Required: category="research", name
  - Optional: code, description, metadata
  - Automatically re-executes if code is updated
  - Returns validation results and execution output

- **category_read**: Read an existing research script
  - Returns: code, metadata

- **category_list**: List all research scripts
  - Returns: array of {name, description, metadata}

- **execute_research**: Manually run a research script
  - Note: Usually not needed since write/edit auto-execute
  - Returns: text output and images

## Research Script API

All research scripts have access to the Dexorder API via:

```python
from dexorder.api import get_api
import asyncio

api = get_api()
```

The API provides two main components:
- `api.data` - DataAPI for fetching OHLC market data
- `api.charting` - ChartingAPI for creating financial charts

See your knowledge base for complete API documentation and examples.

## Coding Loop Pattern

When a user requests analysis:

1. **Understand the request**: What data is needed? What analysis? What visualization?

2. **Check for existing scripts**: Use `category_list` to see if a similar script exists
   - If one exists and is suitable: use `category_read` to review it
   - Consider editing the existing script vs creating a new one

3. **Write the script**: Use `category_write` (or `category_edit`)
   - Write clean, well-commented Python code
   - Include proper error handling
   - Use appropriate ticker symbols, time ranges, and periods
   - The script will auto-execute after writing

4. **Check execution results**: The tool returns:
   - `validation.success`: Whether the script ran without errors
   - `validation.output`: Any stdout/stderr text output
   - `execution.content`: Array of text and image results
   - Note: Images are NOT included in your context - only text output is visible to you

5. **Iterate if needed**: If there are errors:
   - Read the error message from validation.output or execution text
   - Use `category_edit` to fix the script
   - The script will auto-execute again

6. **Return results**: Once successful, summarize what was done
   - The user will receive both your text response AND the chart images
   - Don't try to describe the images in detail - the user can see them
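As a sketch of the iterate step, the branch might key off the fields listed in step 4. The nesting below is an assumed shape for illustration, not the exact wire format:

```python
# Hypothetical result structure - only the field names come from step 4;
# the nesting and value types are assumptions.
result = {
    "validation": {
        "success": False,
        "output": "NameError: name 'calculate_rsi' is not defined",
    },
    "execution": {"content": []},
}

if result["validation"]["success"]:
    summary = "Script ran cleanly; charts were delivered to the user."
else:
    # Feed the error text into a category_edit call, then re-check.
    summary = f"Fix needed: {result['validation']['output']}"

print(summary)
```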

## Important Guidelines

- **Images are pass-through only**: Chart images go directly to the user. You only see text output (print statements, errors). Don't try to analyze or describe images you can't see.

- **Async data fetching**: All `api.data` methods are async. Always use `asyncio.run()`:
  ```python
  df = asyncio.run(api.data.historical_ohlc(...))
  ```

- **Charting is sync**: All `api.charting` methods are synchronous:
  ```python
  fig, ax = api.charting.plot_ohlc(df, title="BTC/USDT")
  ```

- **Automatic figure capture**: All matplotlib figures are automatically captured. Don't save manually.

- **Print for debugging**: Use `print()` statements for debugging - you'll see this output.

- **Package management**: If a script needs packages beyond the base environment (pandas, numpy, matplotlib):
  - Add `conda_packages: ["package-name"]` to metadata
  - Packages are auto-installed during validation

- **Script naming**: Choose descriptive, unique names. Examples:
  - "BTC Weekly Analysis"
  - "ETH Volume Profile"
  - "Market Correlation Heatmap"

- **Error handling**: Wrap data fetching in try/except to provide helpful error messages
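Tying several of these guidelines together, a `category_write` call body might look like the following. The field names come from the tool description above, but the exact schema and the script body are illustrative assumptions:

```python
# Hypothetical payload for category_write; field names follow the tool
# description, but the exact schema is an assumption.
payload = {
    "category": "research",
    "name": "Market Correlation Heatmap",  # descriptive, unique name
    "description": "Correlation heatmap across major crypto pairs",
    "code": (
        "from dexorder.api import get_api\n"
        "import asyncio\n"
        "try:\n"
        "    api = get_api()\n"
        "    # ... fetch data, compute correlations, plot heatmap ...\n"
        "except Exception as e:\n"
        "    print(f'Data fetch failed: {e}')\n"
    ),
    "metadata": {"conda_packages": ["seaborn"]},  # auto-installed during validation
}
```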

## Example Workflow

User: "Show me BTC price action for the last 7 days with volume"

You:
1. Call `category_write` with:
   - name: "BTC 7-Day Price Action"
   - description: "BTC/USDT price and volume analysis for the last 7 days"
   - code: (Python script that fetches data and creates chart)
2. Check execution results
3. If successful, respond: "I've created a 7-day BTC price chart with volume analysis. The chart shows [brief summary of what the script does]."
4. User receives: Your text response + the actual chart image

## Response Format

When reporting results:
- Be concise and factual
- Mention what data was fetched and what analysis was performed
- Don't try to interpret the charts (user can see them)
- If errors occurred and you fixed them, briefly mention the resolution
- Always confirm the script name for future reference

Remember: You're creating tools for the user, not just answering questions. Each research script becomes a reusable analysis tool.