backend redesign

This commit is contained in:
2026-03-11 18:47:11 -04:00
parent 8ff277c8c6
commit e99ef5d2dd
210 changed files with 12147 additions and 155 deletions

test/README.md Normal file

@@ -0,0 +1,109 @@
# Test Clients
Test clients for the DexOrder trading system.
## History Client
Tests the historical OHLC data request/response pattern between clients, Flink, and ingestors.
### Quick Start
```bash
cd history_client
./run-test.sh
```
This will:
1. Start all required services (Kafka, Flink, Ingestor)
2. Wait for services to initialize
3. Run the test client to query historical data
4. Display the results
### What it tests
- **ZMQ Communication**: Client → Flink REQ/REP pattern (port 5559)
- **Work Distribution**: Flink → Ingestor PUB/SUB with exchange prefix filtering (port 5555)
- **Response Channel**: Ingestor → Flink DEALER/ROUTER pattern (port 5556)
- **Data Flow**: Request → Ingestor fetches data → Response back to Flink → Response to client
### Expected Flow
1. **Client** sends OHLCRequest to Flink (REQ/REP)
- Ticker: `BINANCE:BTC/USDT`
- Period: 3600s (1 hour)
- Range: Jan 1-7, 2026
2. **Flink** publishes DataRequest to ingestor work queue (PUB/SUB)
- Topic prefix: `BINANCE:`
- Any ingestor subscribed to BINANCE can respond
3. **Ingestor** receives request, fetches data, sends back response
- Uses CCXT to fetch from exchange
- Sends DataResponse via DEALER socket
- Also writes to Kafka for Flink processing
4. **Flink** receives response, sends back to client
- Matches response by request_id
- Returns data or error to waiting client
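The exchange-prefix routing in step 2 relies on ZMQ's SUB-side filtering, which is a plain byte-prefix match on the topic frame. That rule can be illustrated without a broker (a minimal sketch; `topic_matches` is a hypothetical helper, not part of the codebase):

```python
# ZMQ SUB subscriptions are byte-prefix filters: a subscriber that calls
# socket.subscribe(b"BINANCE:") receives every message whose topic frame
# starts with those bytes. The same rule in pure Python:

def topic_matches(subscription: bytes, topic: bytes) -> bool:
    """Mimic ZMQ's SUB-side prefix filter."""
    return topic.startswith(subscription)

subscription = b"BINANCE:"
print(topic_matches(subscription, b"BINANCE:BTC/USDT"))    # → True (delivered)
print(topic_matches(subscription, b"BINANCE:ETH/USDT"))    # → True (delivered)
print(topic_matches(subscription, b"COINBASE:BTC/USD"))    # → False (filtered out)
```

This is why any ingestor subscribed to `BINANCE:` can pick up the request: the filter matches every ticker under that exchange.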
### Manual Testing
Run the Python client directly:
```bash
cd history_client
pip install pyzmq
python client.py
```
Edit `client.py` to customize:
- Flink hostname and port
- Ticker symbol
- Time range
- Period (e.g., 3600 for 1h, 86400 for 1d)
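When picking a time range and period, the expected candle count follows directly from the two (a sketch for sanity-checking a query; `expected_candles` is a hypothetical helper, not part of the client):

```python
def expected_candles(start_us: int, end_us: int, period_seconds: int) -> int:
    """Number of whole OHLC candles covered by [start_us, end_us) — timestamps in microseconds."""
    return (end_us - start_us) // (period_seconds * 1_000_000)

# 7 days of 1-hour candles
start_us = 1767225600 * 1_000_000          # 2026-01-01 00:00:00 UTC
end_us = start_us + 7 * 86400 * 1_000_000  # 7 days later
print(expected_candles(start_us, end_us, 3600))   # → 168
print(expected_candles(start_us, end_us, 86400))  # → 7
```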
## Docker Compose Profiles
The test client is gated behind a Docker Compose profile so it does not start automatically with the rest of the stack:
```bash
# Start all services
docker-compose up -d
# Run test client
docker-compose --profile test up history-test-client
# Or start everything including test
docker-compose --profile test up
```
## Troubleshooting
### Service logs
```bash
docker-compose logs -f ingestor
docker-compose logs -f flink-jobmanager
```
### Check ZMQ ports
```bash
# From inside Flink container
netstat -tlnp | grep 555
```
### Verify ingestor subscriptions
Check ingestor logs for:
```
Subscribed to exchange prefix: BINANCE:
Subscribed to exchange prefix: COINBASE:
```
### Test without Docker
1. Start Kafka: `docker-compose up -d kafka`
2. Build and run Flink app locally
3. Run ingestor: `cd ingestor && npm start`
4. Run test: `cd test/history_client && python client.py`


@@ -0,0 +1,23 @@
FROM python:3.11-slim
WORKDIR /app

# Install dependencies for the OHLCClient library.
# Version specifiers are quoted so the shell does not treat ">" as a redirect.
RUN pip install --no-cache-dir \
    pyzmq \
    "protobuf>=4.25.0" \
    "pyiceberg>=0.6.0" \
    "pyarrow>=14.0.0" \
    "pandas>=2.0.0" \
    "pyyaml>=6.0"

# Copy test scripts
COPY client.py .
COPY client_async.py .
COPY client_ohlc_api.py .

# Make them executable
RUN chmod +x *.py

# Default command uses the new OHLCClient-based test
CMD ["python", "client_ohlc_api.py"]


@@ -0,0 +1,46 @@
# Historical Data Test Client
Simple ZMQ client to test historical OHLC data retrieval from Flink.
## Usage
### Run with Docker Compose
The client is included in the docker-compose.yml. To run it:
```bash
cd redesign
docker-compose up history-test-client
```
### Run locally
```bash
pip install pyzmq
python client.py
```
## What it does
1. Connects to Flink's client request endpoint (REQ/REP on port 5559)
2. Requests 1-hour OHLC candles for BINANCE:BTC/USDT
3. Time range: January 1-7, 2026 (168 candles)
4. Waits for Flink to respond (up to 30 seconds)
5. Displays the response status and sample data
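The microsecond timestamps used in the request can be derived from calendar dates with the standard library (a sketch, not part of the client; `utc_us` is a hypothetical helper):

```python
from datetime import datetime, timezone

def utc_us(year, month, day, hour=0, minute=0, second=0) -> int:
    """Microseconds since the Unix epoch for a UTC calendar time."""
    dt = datetime(year, month, day, hour, minute, second, tzinfo=timezone.utc)
    return int(dt.timestamp() * 1_000_000)

start_us = utc_us(2026, 1, 1)            # Jan 1, 2026 00:00:00 UTC
end_us = utc_us(2026, 1, 7, 23, 59, 59)  # Jan 7, 2026 23:59:59 UTC
print(start_us)  # → 1767225600000000
```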
## Protocol
Requests use a two-frame ZMQ message format:
- Frame 1: Protocol version byte (0x01)
- Frame 2: Message type (0x07 = OHLCRequest) + protobuf payload
Expected response:
- Frame 1: Protocol version byte (0x01)
- Frame 2: Message type (0x08 = Response) + protobuf payload
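The two-frame wire format above can be exercised without a socket; this sketch packs and unpacks the version and type bytes exactly as described (JSON stands in for the protobuf payload, as it does in the test client; the helper names are illustrative):

```python
import json
import struct

PROTOCOL_VERSION = 0x01
MSG_TYPE_OHLC_REQUEST = 0x07

def encode_request(payload: dict) -> list:
    """Build the two ZMQ frames: [version byte], [type byte + payload]."""
    version_frame = struct.pack('B', PROTOCOL_VERSION)
    message_frame = struct.pack('B', MSG_TYPE_OHLC_REQUEST) + json.dumps(payload).encode('utf-8')
    return [version_frame, message_frame]

def decode_frames(frames: list) -> tuple:
    """Split the frames back into (version, message type, payload dict)."""
    version = struct.unpack('B', frames[0])[0]
    msg_type = frames[1][0]
    payload = json.loads(frames[1][1:].decode('utf-8'))
    return version, msg_type, payload

frames = encode_request({'ticker': 'BINANCE:BTC/USDT', 'period_seconds': 3600})
version, msg_type, payload = decode_frames(frames)
print(version, hex(msg_type), payload['ticker'])  # → 1 0x7 BINANCE:BTC/USDT
```

The same pack/unpack logic appears in `client.py`, which sends these frames with `send_multipart`.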
## Configuration
Edit `client.py` to change:
- `flink_host`: Flink hostname (default: 'localhost')
- `client_request_port`: Port number (default: 5559)
- Query parameters: ticker, time range, period, limit


@@ -0,0 +1,200 @@
#!/usr/bin/env python3
"""
Simple ZMQ client to query historical OHLC data via the Relay gateway.
Tests the request-response pattern for historical data retrieval.
"""
import zmq
import struct
import json
import time
from datetime import datetime, timezone

# Protocol constants
PROTOCOL_VERSION = 0x01
MSG_TYPE_OHLC_REQUEST = 0x07
MSG_TYPE_RESPONSE = 0x08


class HistoryClient:
    def __init__(self, relay_host='relay', client_request_port=5559):
        self.context = zmq.Context()
        self.socket = None
        self.relay_endpoint = f"tcp://{relay_host}:{client_request_port}"

    def connect(self):
        """Connect to Relay's client request endpoint (REQ/REP)"""
        self.socket = self.context.socket(zmq.REQ)
        self.socket.connect(self.relay_endpoint)
        print(f"Connected to Relay at {self.relay_endpoint}")

    def request_historical_ohlc(self, ticker, start_time, end_time, period_seconds, limit=None):
        """
        Request historical OHLC data via Relay.

        Args:
            ticker: Market identifier (e.g., "BINANCE:BTC/USDT")
            start_time: Start timestamp in microseconds since epoch
            end_time: End timestamp in microseconds since epoch
            period_seconds: OHLC period in seconds (e.g., 3600 for 1h)
            limit: Optional limit on number of candles

        Returns:
            Response dict with status, data, etc.
        """
        request_id = f"test-{int(time.time() * 1000)}"

        # Build OHLCRequest message (simplified - would use protobuf in production)
        request = {
            'request_id': request_id,
            'ticker': ticker,
            'start_time': start_time,
            'end_time': end_time,
            'period_seconds': period_seconds
        }
        if limit:
            request['limit'] = limit

        print("\n=== Sending OHLCRequest ===")
        print(f"Request ID: {request_id}")
        print(f"Ticker: {ticker}")
        print(f"Period: {period_seconds}s ({period_seconds // 3600}h)")
        print(f"Start: {datetime.fromtimestamp(start_time / 1_000_000, tz=timezone.utc).isoformat()}")
        print(f"End: {datetime.fromtimestamp(end_time / 1_000_000, tz=timezone.utc).isoformat()}")
        if limit:
            print(f"Limit: {limit}")

        # Encode request (placeholder - would use actual protobuf)
        request_data = json.dumps(request).encode('utf-8')

        # Send message: [version byte] [type byte + data]
        version_frame = struct.pack('B', PROTOCOL_VERSION)
        message_frame = struct.pack('B', MSG_TYPE_OHLC_REQUEST) + request_data
        self.socket.send_multipart([version_frame, message_frame])

        print("\n⏳ Waiting for response via Relay...")

        # Receive response with timeout
        if self.socket.poll(30000):  # 30 second timeout
            response_frames = self.socket.recv_multipart()
            return self._parse_response(response_frames)
        else:
            print("❌ Request timed out (30s)")
            return None

    def _parse_response(self, frames):
        """Parse response frames via Relay"""
        if len(frames) != 2:
            print(f"❌ Invalid response: expected 2 frames, got {len(frames)}")
            return None

        version_frame = frames[0]
        message_frame = frames[1]

        if len(version_frame) != 1:
            print(f"❌ Invalid version frame length: {len(version_frame)}")
            return None
        version = struct.unpack('B', version_frame)[0]
        if version != PROTOCOL_VERSION:
            print(f"❌ Unsupported protocol version: {version}")
            return None

        if len(message_frame) < 1:
            print(f"❌ Invalid message frame length: {len(message_frame)}")
            return None
        msg_type = message_frame[0]
        msg_data = message_frame[1:]

        print("\n=== Received Response ===")
        print(f"Protocol version: {version}")
        print(f"Message type: 0x{msg_type:02x}")

        if msg_type != MSG_TYPE_RESPONSE:
            print(f"❌ Unexpected message type: expected 0x{MSG_TYPE_RESPONSE:02x}, got 0x{msg_type:02x}")
            return None

        # Parse response (placeholder - would use actual protobuf)
        try:
            response = json.loads(msg_data.decode('utf-8'))
            print(f"Request ID: {response.get('request_id', 'N/A')}")
            print(f"Status: {response.get('status', 'UNKNOWN')}")
            if response.get('error_message'):
                print(f"Error: {response['error_message']}")
            data = response.get('data', [])
            total_records = response.get('total_records', len(data))
            print(f"Total records: {total_records}")
            print(f"Is final: {response.get('is_final', True)}")
            if data:
                print("\n📊 Sample data (first 3 records):")
                for i, record in enumerate(data[:3]):
                    print(f"  {i+1}. {record}")
            return response
        except json.JSONDecodeError as e:
            print(f"❌ Failed to parse response JSON: {e}")
            print(f"Raw data: {msg_data[:100]}...")
            return None

    def close(self):
        """Close the connection"""
        if self.socket:
            self.socket.close()
        self.context.term()
        print("\n🔌 Connection closed")


def main():
    """Test the historical data request"""
    # Create client
    client = HistoryClient(relay_host='relay', client_request_port=5559)
    try:
        # Connect to Relay
        client.connect()

        # Request BINANCE:BTC/USDT 1h candles for the first 7 days of January 2026
        # January 1, 2026 00:00:00 UTC = 1767225600 seconds = 1767225600000000 microseconds
        # January 7, 2026 23:59:59 UTC = 1767830399 seconds = 1767830399000000 microseconds
        start_time_us = 1767225600 * 1_000_000  # Jan 1, 2026 00:00:00 UTC
        end_time_us = 1767830399 * 1_000_000    # Jan 7, 2026 23:59:59 UTC

        response = client.request_historical_ohlc(
            ticker='BINANCE:BTC/USDT',
            start_time=start_time_us,
            end_time=end_time_us,
            period_seconds=3600,  # 1 hour
            limit=168             # 7 days * 24 hours = 168 candles
        )

        if response:
            print("\n✅ Request completed successfully!")
            status = response.get('status', 'UNKNOWN')
            if status == 'OK':
                print(f"📈 Received {response.get('total_records', 0)} candles")
            else:
                print(f"⚠️ Request status: {status}")
        else:
            print("\n❌ Request failed!")
    except KeyboardInterrupt:
        print("\n\n⚠️ Interrupted by user")
    except Exception as e:
        print(f"\n❌ Error: {e}")
        import traceback
        traceback.print_exc()
    finally:
        client.close()


if __name__ == '__main__':
    main()


@@ -0,0 +1,308 @@
#!/usr/bin/env python3
"""
Async ZMQ client for historical OHLC data requests via Relay gateway.
Uses async pub/sub pattern: submit request → wait for notification → query Iceberg
"""
import zmq
import struct
import json
import time
import uuid
from datetime import datetime, timezone

# Protocol constants
PROTOCOL_VERSION = 0x01
MSG_TYPE_SUBMIT_REQUEST = 0x10
MSG_TYPE_SUBMIT_RESPONSE = 0x11
MSG_TYPE_HISTORY_READY = 0x12


class AsyncHistoryClient:
    def __init__(self, relay_host='relay', request_port=5559, data_port=5558):
        self.context = zmq.Context()
        self.request_socket = None
        self.subscribe_socket = None
        self.relay_endpoint_req = f"tcp://{relay_host}:{request_port}"
        self.relay_endpoint_sub = f"tcp://{relay_host}:{data_port}"
        self.client_id = f"client-{uuid.uuid4().hex[:8]}"

    def connect(self):
        """Connect to Relay endpoints"""
        # REQ socket for submitting requests (gets immediate ack)
        self.request_socket = self.context.socket(zmq.REQ)
        self.request_socket.connect(self.relay_endpoint_req)
        print(f"Connected REQ socket to Relay at {self.relay_endpoint_req}")

        # SUB socket for receiving notifications
        self.subscribe_socket = self.context.socket(zmq.SUB)
        self.subscribe_socket.connect(self.relay_endpoint_sub)

        # CRITICAL: Subscribe to our client-specific response topic BEFORE submitting any
        # requests. This prevents a race condition where the notification arrives before
        # we subscribe. The notification topic is deterministic: RESPONSE:{client_id}
        # (we generate client_id ourselves).
        response_topic = f"RESPONSE:{self.client_id}"
        self.subscribe_socket.subscribe(response_topic.encode())
        print(f"Connected SUB socket to Relay at {self.relay_endpoint_sub}")
        print(f"Subscribed to topic: {response_topic}")
        print("✓ Safe to submit requests - already subscribed to notifications")

    def request_historical_ohlc(self, ticker, start_time, end_time, period_seconds, limit=None, timeout_secs=60):
        """
        Request historical OHLC data (async pattern).

        Flow:
            1. Submit request → get immediate ack with request_id
            2. Wait for HistoryReadyNotification on pub/sub
            3. Query Iceberg with the table information (or notification includes data)

        Args:
            ticker: Market identifier (e.g., "BINANCE:BTC/USDT")
            start_time: Start timestamp in microseconds since epoch
            end_time: End timestamp in microseconds since epoch
            period_seconds: OHLC period in seconds (e.g., 3600 for 1h)
            limit: Optional limit on number of candles
            timeout_secs: How long to wait for notification (default 60s)

        Returns:
            Notification dict or None on timeout
        """
        # Generate request ID
        request_id = f"{self.client_id}-{int(time.time() * 1000)}"

        # Build SubmitHistoricalRequest
        request = {
            'request_id': request_id,
            'ticker': ticker,
            'start_time': start_time,
            'end_time': end_time,
            'period_seconds': period_seconds,
            'client_id': self.client_id,  # For response routing
        }
        if limit:
            request['limit'] = limit

        print("\n=== Step 1: Submitting Request ===")
        print(f"Request ID: {request_id}")
        print(f"Ticker: {ticker}")
        print(f"Period: {period_seconds}s ({period_seconds // 3600}h)")
        print(f"Start: {datetime.fromtimestamp(start_time / 1_000_000, tz=timezone.utc).isoformat()}")
        print(f"End: {datetime.fromtimestamp(end_time / 1_000_000, tz=timezone.utc).isoformat()}")
        print(f"Client ID: {self.client_id}")
        if limit:
            print(f"Limit: {limit}")

        # Encode request
        request_data = json.dumps(request).encode('utf-8')

        # Send: [version byte] [type byte + data]
        version_frame = struct.pack('B', PROTOCOL_VERSION)
        message_frame = struct.pack('B', MSG_TYPE_SUBMIT_REQUEST) + request_data
        self.request_socket.send_multipart([version_frame, message_frame])

        # Receive immediate SubmitResponse
        if self.request_socket.poll(5000):  # 5 second timeout for ack
            response_frames = self.request_socket.recv_multipart()
            submit_response = self._parse_submit_response(response_frames)
            if not submit_response or submit_response.get('status') != 'QUEUED':
                print(f"❌ Request submission failed: {submit_response}")
                return None
            print("\n✅ Request queued successfully")
            print(f"Notification topic: {submit_response.get('notification_topic')}")
        else:
            print("❌ Timeout waiting for submit response")
            return None

        # Step 2: Wait for HistoryReadyNotification
        print("\n=== Step 2: Waiting for Notification ===")
        print(f"⏳ Waiting up to {timeout_secs}s for HistoryReadyNotification...")
        print("   (Ingestor fetches → Kafka → Flink → Iceberg → Notification)")

        if self.subscribe_socket.poll(timeout_secs * 1000):
            notification_frames = self.subscribe_socket.recv_multipart()
            notification = self._parse_history_ready(notification_frames)
            if notification:
                print("\n=== Step 3: Notification Received ===")
                return notification
            else:
                print("❌ Failed to parse notification")
                return None
        else:
            print(f"\n❌ Timeout waiting for notification ({timeout_secs}s)")
            print("   Possible reasons:")
            print("   - Ingestor still fetching data from exchange")
            print("   - Flink still processing Kafka stream")
            print("   - Flink writing to Iceberg")
            return None

    def _parse_submit_response(self, frames):
        """Parse SubmitResponse from relay"""
        if len(frames) != 2:
            print(f"❌ Invalid submit response: expected 2 frames, got {len(frames)}")
            return None
        version_frame = frames[0]
        message_frame = frames[1]

        if len(version_frame) != 1:
            return None
        version = struct.unpack('B', version_frame)[0]
        if version != PROTOCOL_VERSION:
            print(f"❌ Unsupported protocol version: {version}")
            return None

        if len(message_frame) < 1:
            return None
        msg_type = message_frame[0]
        msg_data = message_frame[1:]

        if msg_type != MSG_TYPE_SUBMIT_RESPONSE:
            print(f"❌ Unexpected message type: 0x{msg_type:02x}")
            return None

        try:
            return json.loads(msg_data.decode('utf-8'))
        except json.JSONDecodeError as e:
            print(f"❌ Failed to parse response: {e}")
            return None

    def _parse_history_ready(self, frames):
        """Parse HistoryReadyNotification from Flink via relay"""
        # Expected layout: [topic][version][message]
        if len(frames) < 3:
            print(f"❌ Invalid notification: expected at least 3 frames, got {len(frames)}")
            return None
        topic_frame = frames[0]
        version_frame = frames[1]
        message_frame = frames[2]

        topic = topic_frame.decode('utf-8')
        print(f"📬 Received on topic: {topic}")

        if len(version_frame) != 1:
            print("❌ Invalid version frame")
            return None
        version = struct.unpack('B', version_frame)[0]
        if version != PROTOCOL_VERSION:
            print(f"❌ Unsupported protocol version: {version}")
            return None

        if len(message_frame) < 1:
            print("❌ Empty message frame")
            return None
        msg_type = message_frame[0]
        msg_data = message_frame[1:]
        print(f"Message type: 0x{msg_type:02x}")

        if msg_type != MSG_TYPE_HISTORY_READY:
            print(f"⚠️ Unexpected message type: expected 0x{MSG_TYPE_HISTORY_READY:02x}, got 0x{msg_type:02x}")

        try:
            notification = json.loads(msg_data.decode('utf-8'))
            print(f"\nRequest ID: {notification.get('request_id')}")
            print(f"Status: {notification.get('status')}")
            print(f"Ticker: {notification.get('ticker')}")
            print(f"Period: {notification.get('period_seconds')}s")
            if notification.get('error_message'):
                print(f"❌ Error: {notification['error_message']}")
            if notification.get('status') == 'OK':
                print("✅ Data ready in Iceberg")
                print(f"   Namespace: {notification.get('iceberg_namespace', 'N/A')}")
                print(f"   Table: {notification.get('iceberg_table', 'N/A')}")
                print(f"   Row count: {notification.get('row_count', 0)}")
                completed_at = notification.get('completed_at')
                if completed_at:
                    ts = datetime.fromtimestamp(completed_at / 1_000_000, tz=timezone.utc)
                    print(f"   Completed at: {ts.isoformat()}")
            return notification
        except json.JSONDecodeError as e:
            print(f"❌ Failed to parse notification: {e}")
            print(f"Raw data: {msg_data[:200]}...")
            return None

    def close(self):
        """Close connections"""
        if self.request_socket:
            self.request_socket.close()
        if self.subscribe_socket:
            self.subscribe_socket.close()
        self.context.term()
        print("\n🔌 Connection closed")


def main():
    """Test the async historical data request pattern"""
    client = AsyncHistoryClient(relay_host='relay', request_port=5559, data_port=5558)
    try:
        # Connect
        client.connect()

        # Request BINANCE:BTC/USDT 1h candles for the first 7 days of January 2026
        start_time_us = 1767225600 * 1_000_000  # Jan 1, 2026 00:00:00 UTC
        end_time_us = 1767830399 * 1_000_000    # Jan 7, 2026 23:59:59 UTC

        notification = client.request_historical_ohlc(
            ticker='BINANCE:BTC/USDT',
            start_time=start_time_us,
            end_time=end_time_us,
            period_seconds=3600,  # 1 hour
            limit=168,            # 7 days * 24 hours
            timeout_secs=60
        )

        if notification:
            status = notification.get('status')
            if status == 'OK':
                print("\n🎉 Success! Data is ready in Iceberg")
                print(f"📊 Query Iceberg to retrieve {notification.get('row_count', 0)} records")
                print("\nNext steps:")
                print("  1. Connect to Iceberg")
                print(f"  2. Query table: {notification.get('iceberg_table')}")
                print("  3. Filter by time range and ticker")
            elif status == 'NOT_FOUND':
                print("\n⚠️ No data found for the requested period")
            elif status == 'ERROR':
                print(f"\n❌ Error: {notification.get('error_message')}")
            elif status == 'TIMEOUT':
                print("\n⏱️ Request timed out on server side")
        else:
            print("\n❌ Request failed or timed out")
    except KeyboardInterrupt:
        print("\n\n⚠️ Interrupted by user")
    except Exception as e:
        print(f"\n❌ Error: {e}")
        import traceback
        traceback.print_exc()
    finally:
        client.close()


if __name__ == '__main__':
    main()


@@ -0,0 +1,126 @@
#!/usr/bin/env python3
"""
Simple test client using the high-level OHLCClient API.
Demonstrates smart caching - checks Iceberg first, requests missing data automatically.
"""
import asyncio
import sys
import os
from datetime import datetime, timezone

# Add client library to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../../client-py'))
from dexorder import OHLCClient


async def main():
    """
    Test the high-level OHLC client API with smart caching.
    """
    print("=== DexOrder OHLC Client Test ===\n")

    # Initialize client
    client = OHLCClient(
        iceberg_catalog_uri="http://localhost:8181",
        relay_endpoint="tcp://localhost:5559",         # Client request port
        notification_endpoint="tcp://localhost:5558",  # Market data pub port
        namespace="trading",
        s3_endpoint="http://localhost:9000",           # Port-forwarded MinIO
        s3_access_key="minio",
        s3_secret_key="minio123",
    )

    try:
        # Start background notification listener
        await client.start()
        print("✅ Client started\n")

        # Request parameters
        ticker = "BINANCE:BTC/USDT"
        period_seconds = 3600  # 1-hour candles

        # Request 7 days of data (Jan 1-7, 2026)
        start_time_us = 1767225600 * 1_000_000  # Jan 1, 2026 00:00:00 UTC
        end_time_us = 1767830399 * 1_000_000    # Jan 7, 2026 23:59:59 UTC

        start_dt = datetime.fromtimestamp(start_time_us / 1_000_000, tz=timezone.utc)
        end_dt = datetime.fromtimestamp(end_time_us / 1_000_000, tz=timezone.utc)
        print("Requesting data:")
        print(f"  Ticker: {ticker}")
        print(f"  Period: {period_seconds}s ({period_seconds // 3600}h)")
        print(f"  Start: {start_dt.isoformat()}")
        print(f"  End: {end_dt.isoformat()}")
        print(f"  Expected candles: ~{(end_time_us - start_time_us) // (period_seconds * 1_000_000)}")
        print()

        # Fetch OHLC data (automatically handles caching)
        print("⏳ Fetching data (checking cache, requesting if needed)...\n")
        df = await client.fetch_ohlc(
            ticker=ticker,
            period_seconds=period_seconds,
            start_time=start_time_us,
            end_time=end_time_us,
            request_timeout=60.0
        )

        # Display results
        print(f"✅ Success! Fetched {len(df)} candles\n")
        if not df.empty:
            print("First 5 candles:")
            print(df[['timestamp', 'open', 'high', 'low', 'close', 'volume']].head())
            print()
            print("Last 5 candles:")
            print(df[['timestamp', 'open', 'high', 'low', 'close', 'volume']].tail())
            print()

            # Data quality check
            expected_count = (end_time_us - start_time_us) // (period_seconds * 1_000_000)
            actual_count = len(df)
            coverage = (actual_count / expected_count) * 100 if expected_count > 0 else 0
            print(f"Data coverage: {coverage:.1f}% ({actual_count}/{expected_count} candles)")
            if coverage < 100:
                print(f"⚠️ Missing {expected_count - actual_count} candles")
            else:
                print("✅ Complete data coverage")
        else:
            print("⚠️ No data returned")
    except asyncio.TimeoutError:
        print("\n❌ Request timed out")
        print("Possible reasons:")
        print("  - Ingestor still fetching from exchange")
        print("  - Flink processing backlog")
        print("  - Network issues")
    except ValueError as e:
        print(f"\n❌ Request failed: {e}")
    except ConnectionError as e:
        print(f"\n❌ Connection error: {e}")
        print("Make sure relay and Flink are running")
    except KeyboardInterrupt:
        print("\n\n⚠️ Interrupted by user")
    except Exception as e:
        print(f"\n❌ Unexpected error: {e}")
        import traceback
        traceback.print_exc()
    finally:
        await client.stop()
        print("\n🔌 Client stopped")


if __name__ == '__main__':
    asyncio.run(main())

test/history_client/run-test.sh Executable file

@@ -0,0 +1,29 @@
#!/bin/bash
# Script to run the historical data test
echo "Starting test environment..."
echo "This will start Kafka, Flink, and the Ingestor services"
echo ""
cd ../..
echo "Step 1: Starting core services (Kafka, Flink, Ingestor)..."
docker-compose up -d zookeeper kafka postgres flink-jobmanager flink-taskmanager ingestor
echo ""
echo "Step 2: Waiting for services to be ready (30 seconds)..."
sleep 30
echo ""
echo "Step 3: Running test client..."
docker-compose --profile test up history-test-client
echo ""
echo "Test complete!"
echo ""
echo "To view logs:"
echo " docker-compose logs ingestor"
echo " docker-compose logs flink-jobmanager"
echo ""
echo "To stop all services:"
echo " docker-compose down"