MiroFish Options Decision Support System

March 22, 2026

Blueprint for IBKR-Informed Trading

Research Date: 2026-03-17

Requested by: Jack


What You're Building

A decision-support tool that runs MiroFish simulations on upcoming market events and delivers structured sentiment briefings to your phone (via Telegram) before you trade. You make the call. The system provides the behavioral intelligence.

Not an automated trading bot. A research assistant that tells you: "Here's how 200 simulated market participants reacted to this scenario, here's where sentiment clustered, here's the asymmetry."


System Architecture

┌─────────────────────────────────────────────────────┐
│                  EVENT CALENDAR                      │
│  (FOMC, CPI, earnings, geopolitical catalysts)      │
│  Source: economic calendar API + manual additions    │
└──────────────────────┬──────────────────────────────┘
                       │
                       ▼
┌─────────────────────────────────────────────────────┐
│              SEED DATA COLLECTOR                     │
│  Pulls relevant context for upcoming event:         │
│  - Recent news articles (RSS/API)                   │
│  - Current market positioning (put/call ratios,     │
│    options flow, VIX level)                          │
│  - Consensus expectations                           │
│  - Contrarian indicators                            │
│  Packages into MiroFish seed document               │
└──────────────────────┬──────────────────────────────┘
                       │
                       ▼
┌─────────────────────────────────────────────────────┐
│              MIROFISH ENGINE                         │
│  localhost:5001 (Docker)                             │
│                                                     │
│  For each event, runs 2-3 scenario simulations:     │
│  Scenario A: Consensus outcome                      │
│  Scenario B: Hawkish/bearish surprise               │
│  Scenario C: Dovish/bullish surprise                │
│                                                     │
│  Each sim: ~200 agents, 15-20 rounds                │
│  Agent types: retail traders, institutional PMs,    │
│  algo/quant desks, media/pundits, options MMs       │
└──────────────────────┬──────────────────────────────┘
                       │
                       ▼
┌─────────────────────────────────────────────────────┐
│           REPORT PARSER + TRANSLATOR                 │
│  Extracts from MiroFish report:                     │
│  - Dominant sentiment direction per scenario        │
│  - Sentiment clustering (consensus vs. divergence)  │
│  - Agent behavior patterns (panic, FOMO, rotation)  │
│  - Second-order effects identified                  │
│  - Asymmetry score (which scenario has outsized     │
│    behavioral impact?)                              │
│                                                     │
│  Translates into options-relevant output:           │
│  - Directional bias + confidence                    │
│  - Implied volatility alignment (is IV pricing the  │
│    right scenarios?)                                │
│  - Suggested strategy type (not specific strikes)   │
│  - Key risk the simulation flagged                  │
└──────────────────────┬──────────────────────────────┘
                       │
                       ▼
┌─────────────────────────────────────────────────────┐
│            TELEGRAM BRIEFING                         │
│  Delivers structured message to Jack:               │
│                                                     │
│  📋 FOMC Decision — March 19                        │
│  Sim run: 3 scenarios, 200 agents, 15 rounds        │
│                                                     │
│  CONSENSUS (hold): Muted. Agents priced it in.      │
│  Mild drift lower in vol.                           │
│                                                     │
│  SURPRISE CUT: Strong bullish cascade. Retail       │
│  agents FOMO into calls. Institutional agents       │
│  rotate into risk. Sentiment divergence: LOW        │
│  (crowd moves together = strong move likely)        │
│                                                     │
│  SURPRISE HOLD + HAWKISH LANGUAGE: Panic cluster    │
│  in retail agents. Institutional agents hedge.      │
│  Second-order: credit spread widening narrative     │
│  emerges. Sentiment divergence: HIGH                │
│                                                     │
│  ASYMMETRY: Bearish surprise has 2.3x behavioral    │
│  impact vs. bullish. Crowd is positioned for        │
│  dovish outcome.                                    │
│                                                     │
│  SUGGESTED LENS: Consider put spreads or bearish    │
│  structures if you believe hawkish surprise is      │
│  underpriced. IV appears to underweight downside    │
│  tail.                                              │
│                                                     │
│  ⚠️ Confidence: MODERATE. 2 of 3 sims showed       │
│  consistent patterns. Limited historical            │
│  calibration.                                       │
└─────────────────────────────────────────────────────┘

Concrete Build Plan

Phase 1: MiroFish Deployment (Day 1)

What: Get MiroFish running locally in Docker.

Steps:

git clone https://github.com/666ghj/MiroFish.git
cd MiroFish
cp .env.example .env

Configure .env:

# Use a cheap model. Qwen-plus is recommended and ~5-10x cheaper than OpenAI
LLM_API_KEY=<your_alibaba_bailian_key>
LLM_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
LLM_MODEL_NAME=qwen-plus

# OR use OpenAI-compatible (Claude, GPT-4o-mini for cost, etc.)
# LLM_API_KEY=<your_key>
# LLM_BASE_URL=https://api.openai.com/v1
# LLM_MODEL_NAME=gpt-4o-mini

ZEP_API_KEY=<your_zep_cloud_key>  # Free tier at app.getzep.com
docker compose up -d
# Frontend: localhost:3000
# Backend API: localhost:5001

Verify: Hit http://localhost:5001/health -- should return OK.

Cost: $0 (infrastructure is local Docker; LLM costs start when you run sims)

Phase 2: Manual Testing (Days 2-5)

What: Run simulations manually through the web UI to understand the output quality before building anything.

Use the frontend at localhost:3000:

  1. Upload a seed document (copy-paste a recent FOMC statement + market commentary into a .txt file)
  2. Set simulation requirement: "Simulate how different market participants (retail investors, institutional portfolio managers, options market makers, financial media) react to this Federal Reserve decision. Focus on sentiment shifts, positioning changes, and second-order narrative effects."
  3. Run simulation with default settings (<40 rounds as recommended)
  4. Read the generated report
  5. Compare against what actually happened

Do this for 3-5 events before writing any code. You need to understand:

  • How good are the behavioral insights?
  • What does the report format look like?
  • Where does it add value vs. where is it noise?
  • How long does a simulation take?

Phase 3: Seed Data Pipeline (Days 6-10)

What: Build automated seed document generation for market events.

Python script: seed_builder.py

"""
Builds MiroFish seed documents from market event data.
Pulls news, positioning data, and consensus expectations.
"""

import requests
from datetime import datetime, timedelta

class SeedBuilder:
    def __init__(self):
        # Data sources (pick what you have access to)
        self.news_sources = []  # RSS feeds, newsapi.org, etc.
    
    def build_fomc_seed(self, meeting_date: str) -> str:
        """Build seed document for FOMC meeting."""
        sections = []
        
        # 1. Current Fed Funds rate and market expectations
        sections.append(self._get_fed_expectations())
        
        # 2. Recent economic data (CPI, jobs, GDP)
        sections.append(self._get_recent_econ_data())
        
        # 3. Recent Fed commentary / speeches
        sections.append(self._get_fed_commentary())
        
        # 4. Current market positioning
        sections.append(self._get_market_positioning())
        
        # 5. Options market data (VIX, put/call ratio, skew)
        sections.append(self._get_options_context())
        
        seed_doc = "\n\n".join(sections)
        return seed_doc
    
    def build_earnings_seed(self, ticker: str, report_date: str) -> str:
        """Build seed document for earnings report."""
        sections = []
        
        # Company background, consensus estimates, recent price action
        # Analyst sentiment, options IV vs. historical
        # Sector context, peer performance
        
        seed_doc = "\n\n".join(sections)
        return seed_doc
    
    def build_cpi_seed(self, release_date: str) -> str:
        """Build seed document for CPI release."""
        # Similar structure: expectations, recent data trend,
        # market positioning, Fed implications
        pass

Data sources you can use (free/cheap):

  • Economic calendar: FRED API (free), Trading Economics (freemium)
  • News: NewsAPI.org ($449/mo for production, free for dev), RSS feeds (free)
  • Market data: Yahoo Finance (yfinance library, free), Alpha Vantage (free tier)
  • Options data: CBOE VIX data (free), options flow requires paid service (Unusual Whales, etc.)
  • Fed expectations: CME FedWatch (scrape or manual input)
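As a concrete example of one SeedBuilder stub, here is a sketch of the options-context section builder. The yfinance calls are an assumption (the free Yahoo Finance wrapper, `pip install yfinance`); the formatting step is kept pure so you can test it without network access.

```python
def format_options_context(latest_vix: float, week_ago_vix: float,
                           put_volume: int, call_volume: int) -> str:
    """Render the '## Options Market Context' block of a seed document."""
    pcr = put_volume / max(call_volume, 1)  # avoid divide-by-zero on thin days
    return "\n".join([
        "## Options Market Context",
        f"VIX: {latest_vix:.1f} ({latest_vix - week_ago_vix:+.1f} over 5 sessions)",
        f"Put/call volume ratio (nearest expiry): {pcr:.2f}",
    ])

def get_options_context() -> str:
    """Fetch live data and format it. The import is deferred so the
    module still loads where yfinance is not installed."""
    import yfinance as yf
    vix_close = yf.Ticker("^VIX").history(period="5d")["Close"]
    spy = yf.Ticker("SPY")
    chain = spy.option_chain(spy.options[0])  # nearest listed expiry
    return format_options_context(
        latest_vix=float(vix_close.iloc[-1]),
        week_ago_vix=float(vix_close.iloc[0]),
        put_volume=int(chain.puts["volume"].sum()),
        call_volume=int(chain.calls["volume"].sum()),
    )
```

The other `_get_*` stubs can follow the same pattern: a pure formatter plus a thin fetch wrapper per data source.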

Phase 4: MiroFish API Integration (Days 11-15)

What: Script that submits seed docs to MiroFish API and retrieves reports programmatically.

The MiroFish backend exposes a Flask REST API. Based on the source code, here's the actual workflow:

"""
MiroFish API client for market simulations.
"""

import requests
import time
import json

MIROFISH_URL = "http://localhost:5001"

class MiroFishClient:
    
    def run_market_simulation(self, seed_text: str, scenario_name: str, 
                               simulation_requirement: str) -> dict:
        """
        Full pipeline: upload seed → build graph → create sim → 
        prepare → start → wait → generate report → return report
        """
        
        # Step 1: Create project and generate ontology
        # POST /api/graph/ontology/generate (multipart/form-data)
        # Upload seed text as a file, provide simulation_requirement
        
        files = {
            'files': (f'{scenario_name}.txt', seed_text, 'text/plain')
        }
        data = {
            'simulation_requirement': simulation_requirement,
            'project_name': f'Market-{scenario_name}-{int(time.time())}'
        }
        
        resp = requests.post(f"{MIROFISH_URL}/api/graph/ontology/generate",
                           files=files, data=data)
        result = resp.json()
        project_id = result['data']['project_id']
        
        # Step 2: Build knowledge graph
        # POST /api/graph/build
        resp = requests.post(f"{MIROFISH_URL}/api/graph/build",
                           json={"project_id": project_id})
        task_id = resp.json()['data']['task_id']
        
        # Poll until graph is built
        self._wait_for_task("/api/graph/build/status", task_id)
        
        # Step 3: Create simulation
        # POST /api/simulation/create
        resp = requests.post(f"{MIROFISH_URL}/api/simulation/create",
                           json={
                               "project_id": project_id,
                               "enable_twitter": True,
                               "enable_reddit": True
                           })
        simulation_id = resp.json()['data']['simulation_id']
        
        # Step 4: Prepare simulation (generates agent profiles, config)
        # POST /api/simulation/prepare
        resp = requests.post(f"{MIROFISH_URL}/api/simulation/prepare",
                           json={"simulation_id": simulation_id})
        task_id = resp.json()['data'].get('task_id')
        if task_id:
            self._wait_for_task("/api/simulation/prepare/status", task_id)
        
        # Step 5: Start simulation
        # POST /api/simulation/start
        resp = requests.post(f"{MIROFISH_URL}/api/simulation/start",
                           json={
                               "simulation_id": simulation_id,
                               "platform": "parallel",
                               "force": True,
                               "enable_graph_memory_update": True
                           })
        
        # Poll until simulation completes
        self._wait_for_simulation(simulation_id)
        
        # Step 6: Generate report
        # POST /api/report/generate
        resp = requests.post(f"{MIROFISH_URL}/api/report/generate",
                           json={"simulation_id": simulation_id})
        report_data = resp.json()['data']
        report_id = report_data['report_id']
        
        if not report_data.get('already_generated'):
            task_id = report_data['task_id']
            self._wait_for_task("/api/report/generate/status", task_id,
                              extra_params={"simulation_id": simulation_id})
        
        # Step 7: Get report
        # GET /api/report/<report_id>
        resp = requests.get(f"{MIROFISH_URL}/api/report/{report_id}")
        report = resp.json()['data']
        
        return report
    
    def _wait_for_task(self, status_endpoint, task_id, 
                       extra_params=None, timeout=600):
        """Poll task status until complete."""
        start = time.time()
        while time.time() - start < timeout:
            payload = {"task_id": task_id}
            if extra_params:
                payload.update(extra_params)
            resp = requests.post(f"{MIROFISH_URL}{status_endpoint}",
                               json=payload)
            data = resp.json().get('data', {})
            status = data.get('status', '')
            if status == 'completed':
                return data
            if status == 'failed':
                raise Exception(f"Task failed: {data.get('error')}")
            time.sleep(5)
        raise TimeoutError(f"Task {task_id} timed out")
    
    def _wait_for_simulation(self, simulation_id, timeout=900):
        """Poll simulation run status."""
        start = time.time()
        while time.time() - start < timeout:
            resp = requests.get(
                f"{MIROFISH_URL}/api/simulation/{simulation_id}/run-status")
            data = resp.json().get('data', {})
            status = data.get('status', '')
            if status in ('completed', 'stopped'):
                return data
            if status == 'failed':
                raise Exception(f"Simulation {simulation_id} failed")
            time.sleep(10)
        raise TimeoutError(f"Simulation {simulation_id} timed out")
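One robustness note on the client above: every step assumes the JSON envelope contains a 'data' key, so a server-side failure dies with an unhelpful KeyError. A tiny envelope helper (hypothetical name, not part of MiroFish) makes those failures explicit:

```python
def unwrap(payload: dict) -> dict:
    """Return the 'data' field of a parsed MiroFish API response,
    raising a clear error when it is missing (i.e., the call failed)."""
    if "data" not in payload:
        raise RuntimeError(f"Unexpected MiroFish response: {payload}")
    return payload["data"]
```

Inside the client, `project_id = result['data']['project_id']` then becomes `project_id = unwrap(resp.json())['project_id']`, and a failed graph build surfaces the server's own error body instead of a bare KeyError.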

Phase 5: Report Parser + Options Translator (Days 16-20)

What: Parse MiroFish reports and translate behavioral patterns into options-relevant insights.

"""
Translates MiroFish simulation reports into options-trading-relevant insights.
"""

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ScenarioInsight:
    scenario_name: str
    sentiment_direction: str      # bullish / bearish / neutral
    sentiment_strength: float     # 0.0 - 1.0
    sentiment_divergence: str     # low / medium / high
    dominant_behavior: str        # e.g., "panic selling", "FOMO buying", "rotation"
    second_order_effects: List[str]
    agent_consensus_pct: float    # what % of agents aligned on direction
    raw_summary: str

@dataclass  
class TradingBriefing:
    event_name: str
    event_date: str
    scenarios: List[ScenarioInsight]
    asymmetry_note: str           # which direction has outsized behavioral impact
    iv_alignment: str             # does implied vol seem to price these scenarios correctly
    suggested_lens: str           # strategy type suggestion (not specific trades)
    confidence: str               # low / moderate / high
    confidence_reasoning: str
    warnings: List[str]


class ReportTranslator:
    """
    Takes MiroFish markdown report and extracts trading-relevant signals.
    Uses an LLM to parse the unstructured report into structured insights.
    """
    
    def translate(self, mirofish_report_markdown: str, 
                  event_context: dict,
                  scenarios: List[str]) -> TradingBriefing:
        """
        Parse report and generate trading briefing.
        
        This uses an LLM call (cheap model like gpt-4o-mini) to extract
        structured data from the MiroFish narrative report.
        """
        
        prompt = f"""
You are a senior options trader analyzing a behavioral simulation report.
The simulation modeled how market participants react to: {event_context['event_name']}

The report below was generated by MiroFish, a multi-agent simulation engine
where hundreds of AI agents with different personalities (retail traders,
institutional PMs, options market makers, media pundits) interacted and
reacted to injected market scenarios.

Extract the following for each scenario that was simulated:
1. Dominant sentiment direction (bullish/bearish/neutral)
2. Sentiment strength (0.0-1.0, how strong was the directional move)
3. Sentiment divergence (low=crowd moves together, high=crowd is split)
4. Dominant behavioral pattern (e.g., panic, FOMO, orderly rotation, indifference)
5. Second-order effects (non-obvious consequences the agents surfaced)
6. What percentage of agents aligned on the dominant direction

Then provide:
- Asymmetry assessment: which scenario direction has outsized behavioral impact?
- Does the current options implied volatility (IV) seem to correctly price
  these behavioral scenarios, or is one tail underpriced?
- A suggested strategic lens (NOT specific trades, but strategy type:
  e.g., "consider put spreads", "straddle may be appropriate", "directional
  call buying if you agree with the bullish scenario")
- Confidence level with reasoning
- Any warnings or caveats

SIMULATION REPORT:
{mirofish_report_markdown}

CURRENT MARKET CONTEXT:
VIX: {event_context.get('vix', 'unknown')}
Put/Call Ratio: {event_context.get('pcr', 'unknown')}
IV Rank: {event_context.get('iv_rank', 'unknown')}

Respond in JSON format matching the TradingBriefing schema.
"""
        
        # Call LLM to parse (use a cheap, fast model)
        # response = openai_client.chat.completions.create(...)
        # Parse JSON response into TradingBriefing dataclass
        
        pass


    def format_telegram_message(self, briefing: TradingBriefing) -> str:
        """Format briefing for Telegram delivery."""
        
        lines = []
        lines.append(f"EVENT: {briefing.event_name} -- {briefing.event_date}")
        lines.append("")
        
        for s in briefing.scenarios:
            emoji = {"bullish": "↑", "bearish": "↓", "neutral": "→"}
            direction = emoji.get(s.sentiment_direction, "?")
            lines.append(
                f"{direction} {s.scenario_name}: {s.sentiment_direction.upper()} "
                f"(strength: {s.sentiment_strength:.0%}, "
                f"consensus: {s.agent_consensus_pct:.0%})")
            lines.append(f"  Behavior: {s.dominant_behavior}")
            lines.append(f"  Divergence: {s.sentiment_divergence}")
            if s.second_order_effects:
                lines.append(f"  2nd-order: {'; '.join(s.second_order_effects)}")
            lines.append("")
        
        lines.append(f"ASYMMETRY: {briefing.asymmetry_note}")
        lines.append(f"IV READ: {briefing.iv_alignment}")
        lines.append(f"LENS: {briefing.suggested_lens}")
        lines.append(f"CONFIDENCE: {briefing.confidence} -- {briefing.confidence_reasoning}")
        
        if briefing.warnings:
            lines.append("")
            for w in briefing.warnings:
                lines.append(f"WARNING: {w}")
        
        return "\n".join(lines)
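The prompt in translate() asks the model to "respond in JSON format", but chat models frequently wrap JSON in markdown fences or add a sentence of prose around it. A small extraction helper (hypothetical, for report_translator.py) makes the parsing step robust before you load the result into the dataclasses:

```python
import json

def extract_json(llm_reply: str) -> dict:
    """Pull the first top-level JSON object out of an LLM reply,
    tolerating ```json fences and surrounding prose."""
    start = llm_reply.find("{")
    end = llm_reply.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON object found in LLM reply")
    return json.loads(llm_reply[start:end + 1])
```

The dict it returns maps onto the TradingBriefing fields; missing keys are easier to handle there than mid-parse.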

Phase 6: Event Scheduler + Telegram Delivery (Days 21-25)

What: Cron-based system that knows your event calendar and runs simulations overnight.

"""
Scheduler that runs MiroFish simulations for upcoming market events
and delivers briefings via Telegram.
"""

import schedule
import time
from datetime import datetime, timedelta

class EventScheduler:
    
    def __init__(self):
        self.mirofish = MiroFishClient()
        self.seed_builder = SeedBuilder()
        self.translator = ReportTranslator()
        self.events = EventCalendar()
    
    def run_daily_briefing(self):
        """
        Run each evening at 8 PM MST (3 AM UTC).
        Checks for events in next 24-48 hours.
        Runs simulations and delivers briefings by morning.
        """
        
        upcoming = self.events.get_events_next_48h()
        
        for event in upcoming:
            scenarios = self._define_scenarios(event)
            reports = []
            
            for scenario in scenarios:
                seed = self.seed_builder.build_seed(
                    event_type=event.type,
                    scenario=scenario
                )
                
                requirement = (
                    f"Simulate how market participants react to: "
                    f"{event.name} - {scenario.description}. "
                    f"Include retail traders, institutional portfolio managers, "
                    f"options market makers, quantitative trading desks, and "
                    f"financial media commentators. Focus on: sentiment shifts, "
                    f"positioning changes, panic/FOMO dynamics, and "
                    f"second-order narrative effects."
                )
                
                report = self.mirofish.run_market_simulation(
                    seed_text=seed,
                    scenario_name=scenario.name,
                    simulation_requirement=requirement
                )
                reports.append((scenario, report))
            
            # Parse all scenario reports into unified briefing
            combined_report = "\n\n---\n\n".join([
                f"## Scenario: {s.name}\n{r['markdown_content']}" 
                for s, r in reports
            ])
            
            briefing = self.translator.translate(
                mirofish_report_markdown=combined_report,
                event_context=self._get_market_context(),
                scenarios=[s.name for s in scenarios]
            )
            
            message = self.translator.format_telegram_message(briefing)
            self._send_telegram(message)
    
    def _define_scenarios(self, event):
        """Define 2-3 scenarios per event type."""
        
        if event.type == "FOMC":
            return [
                Scenario("Consensus", "Fed holds rates as expected, neutral guidance"),
                Scenario("Hawkish Surprise", "Fed holds but signals delayed cuts, hawkish dot plot"),
                Scenario("Dovish Surprise", "Fed signals imminent cuts, dovish language")
            ]
        
        elif event.type == "EARNINGS":
            return [
                Scenario("Beat + Raise", f"{event.ticker} beats estimates and raises guidance"),
                Scenario("Miss + Lower", f"{event.ticker} misses estimates and lowers guidance"),
                Scenario("Beat + Lower", f"{event.ticker} beats quarter but guides lower (mixed)")
            ]
        
        elif event.type == "CPI":
            return [
                Scenario("In-Line", "CPI comes in at consensus"),
                Scenario("Hot", "CPI 0.2%+ above consensus, re-ignites inflation fears"),
                Scenario("Cool", "CPI below consensus, rate cut path cleared")
            ]
        
        # Add more event types as needed
        return []  # unknown event type: skip it rather than return None


# Cron schedule
if __name__ == "__main__":
    scheduler = EventScheduler()
    
    # Run daily at 03:00 -- note that the schedule library uses the
    # server's local clock, so set the server timezone to UTC for this
    # to fire at 8 PM MST
    schedule.every().day.at("03:00").do(scheduler.run_daily_briefing)
    
    while True:
        schedule.run_pending()
        time.sleep(60)

Agent Profile Design for Financial Sims

This is where MiroFish's value lives or dies. The default agent profiles are generic. For market simulations, you want to customize them. MiroFish generates profiles from the seed data via LLM, but you can shape what it creates through your simulation_requirement prompt.

Agent archetypes to request in your simulation requirement:

Agent Type              | Personality                            | Behavioral Logic
Retail day trader       | Impulsive, follows momentum, FOMO-prone | Reacts to headlines, buys calls on dips, panic sells
Institutional PM        | Risk-averse, benchmark-conscious        | Slow to move, hedges first, rotates between sectors
Options market maker    | Delta-neutral, vol-focused              | Adjusts spreads based on flow, widens on uncertainty
Quant/algo desk         | Rules-based, momentum-following         | Triggers on technical levels, amplifies moves
Financial media pundit  | Narrative-driven, contrarian-leaning    | Creates and amplifies narratives that move retail
Hedge fund macro trader | Conviction-driven, leveraged            | Takes large directional bets, can move markets
Passive index fund      | Mechanical, flow-driven                 | Rebalances regardless of sentiment, dampens volatility

Cost Model (Detailed)

Per-Event Cost

Each event requires 2-3 scenario simulations. Each simulation involves:

Step                                    | LLM Calls        | Est. Cost (Qwen-plus) | Est. Cost (GPT-4o-mini)
Ontology generation                     | 1-2              | $0.02                 | $0.05
Graph building                          | 5-20 (chunked)   | $0.10-0.30            | $0.25-0.75
Agent profile generation                | 1 per agent (×200) | $0.50-1.00          | $1.50-3.00
Simulation (15-20 rounds × 200 agents)  | ~3,000-4,000     | $1.50-3.00            | $5.00-10.00
Report generation                       | 5-10             | $0.10-0.20            | $0.25-0.50
Report translation (your LLM)           | 1                | $0.02                 | $0.05
Total per scenario                      | --               | $2-5                  | $7-15
Total per event (3 scenarios)           | --               | $6-15                 | $20-45

Monthly Cost Projection

Usage Level                   | Events/Month | Monthly LLM Cost (Qwen) | Monthly LLM Cost (GPT-4o-mini)
Light (FOMC + CPI only)       | 2-3          | $15-45                  | $50-135
Medium (+ major earnings)     | 8-10         | $50-150                 | $160-450
Heavy (daily event scanning)  | 20-30        | $120-450                | $400-1,350

Add: Zep Cloud ($0 free tier), VPS if not running locally ($20-50/mo), news API if used ($0-50/mo).

My recommendation: Start with Qwen-plus and light usage. Your total monthly cost should land around $50-100 including infrastructure. Scale up only after you've validated that the insights are useful.
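The monthly figures are straight multiplication (events × scenarios per event × per-scenario cost), which makes them easy to re-derive when your per-scenario cost changes:

```python
def monthly_llm_cost(events: int, scenarios_per_event: int = 3,
                     scenario_cost: tuple = (2.0, 5.0)) -> tuple:
    """Low/high monthly LLM spend given per-scenario cost bounds
    (defaults are the Qwen-plus estimates above)."""
    lo, hi = scenario_cost
    return (events * scenarios_per_event * lo,
            events * scenarios_per_event * hi)
```

For example, 3 events a month at Qwen pricing comes out to $18-45, and swapping in the GPT-4o-mini bounds `(7.0, 15.0)` scales everything accordingly.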


What This System CAN Tell You

  1. Behavioral asymmetry -- "If CPI comes in hot, the simulated crowd panics 3x harder than they rally on a cool print. Downside behavioral risk is underpriced."

  2. Consensus fragility -- "87% of agents converge on 'buy the dip' in the bullish scenario, but only 45% agree on what to do in the bearish case. High divergence = choppy, indecisive market if things go south."

  3. Second-order narratives -- "In the hawkish FOMC scenario, agents didn't just sell equities -- they started a 'credit spread widening' narrative that cascaded into financial sector selling. This wasn't in the obvious first-order reaction."

  4. Positioning crowding -- "Most simulated retail agents were already positioned bullish before the event. The consensus scenario produces almost no move because it's priced in. The surprise scenarios produce outsized moves because of crowded positioning."

What This System CANNOT Tell You

  1. Specific strike prices or expiry dates to trade
  2. Exact magnitude of market moves
  3. Precise timing of moves within a session
  4. Whether YOUR specific trade will be profitable
  5. Anything about after-hours/pre-market microstructure
  6. Real options flow or dark pool activity

Validation Plan (Critical)

Before trusting any of this with real money:

Month 1: Paper scoring

  • Run simulations for every major event
  • After each event, grade the simulation output:
    • Did it correctly identify the dominant behavioral pattern? (Y/N)
    • Did it correctly identify asymmetry? (Y/N)
    • Did it surface any second-order effects that actually manifested? (Y/N)
    • Would following its "lens" suggestion have been profitable? (Y/N)
  • Track hit rate across 10+ events
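The grading loop above is worth automating from day one. A sketch of validation_tracker.py (field names are suggestions) that appends one JSON line per graded event and computes per-question hit rates:

```python
import json
from pathlib import Path

GRADE_FIELDS = ("behavior_correct", "asymmetry_correct",
                "second_order_manifested", "lens_profitable")

def record_grade(event_name: str, grades: dict,
                 path: str = "validation/grades.jsonl") -> None:
    """Append one event's Y/N grades (booleans) as a JSON line."""
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    with p.open("a") as f:
        f.write(json.dumps({"event": event_name, **grades}) + "\n")

def hit_rates(path: str = "validation/grades.jsonl") -> dict:
    """Fraction of graded events answered 'yes', per question."""
    rows = [json.loads(line) for line in Path(path).read_text().splitlines()]
    return {field: sum(bool(r.get(field)) for r in rows) / len(rows)
            for field in GRADE_FIELDS}
```

JSONL keeps the history append-only and greppable; the Month 2 gate then reduces to checking `hit_rates()` after 10+ events.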

Month 2: Paper trading

  • If Month 1 hit rate > 60% on directional calls:
    • Paper trade on IBKR using simulation insights
    • Use small position sizes (2-3% of portfolio per trade)
    • Track Sharpe ratio of simulation-informed trades vs. your baseline

Month 3: Small live

  • If paper trading shows positive edge:
    • Go live with real money, small sizes
    • Strict risk management: max loss per trade, max daily loss
    • Continue tracking simulation accuracy

File Structure

mirofish-options/
├── config.py              # API keys, MiroFish URL, Telegram bot token
├── seed_builder.py         # Builds seed documents from market data
├── mirofish_client.py      # MiroFish API wrapper
├── report_translator.py    # Parses reports → trading insights
├── telegram_sender.py      # Sends briefings to your Telegram
├── scheduler.py            # Cron scheduler for overnight runs
├── event_calendar.py       # Manages upcoming events
├── models.py               # Data classes (ScenarioInsight, TradingBriefing)
├── validation_tracker.py   # Tracks prediction accuracy over time
├── seeds/                  # Generated seed documents (archived)
├── reports/                # MiroFish reports (archived)
├── briefings/              # Generated trading briefings (archived)
└── validation/             # Accuracy tracking data
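A minimal config.py to anchor the tree (variable names are suggestions): read everything from environment variables so no secrets land in the repo.

```python
import os

# MiroFish backend (Docker) and messaging endpoints
MIROFISH_URL = os.environ.get("MIROFISH_URL", "http://localhost:5001")
TELEGRAM_BOT_TOKEN = os.environ.get("TELEGRAM_BOT_TOKEN", "")
TELEGRAM_CHAT_ID = os.environ.get("TELEGRAM_CHAT_ID", "")

# Cheap/fast LLM used by the report translator
TRANSLATOR_API_KEY = os.environ.get("TRANSLATOR_API_KEY", "")
TRANSLATOR_MODEL = os.environ.get("TRANSLATOR_MODEL", "gpt-4o-mini")

# Simulation knobs
SIM_AGENT_COUNT = int(os.environ.get("SIM_AGENT_COUNT", "200"))
SIM_ROUNDS = int(os.environ.get("SIM_ROUNDS", "15"))
```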

Quick Start Path (Minimum Viable Version)

If you want the fastest path to testing whether this is useful at all:

  1. Deploy MiroFish Docker (~30 min)
  2. Sign up for Alibaba Bailian (qwen-plus) + Zep Cloud free tier (~15 min)
  3. Manually create a seed document for the next FOMC meeting (~20 min of copy-pasting news/data into a .txt)
  4. Upload via the web UI, run simulation, read the report (~30 min for the sim to run)
  5. Compare the report against what actually happens after the event

Total time to first test: ~2 hours. Total cost: ~$5 in LLM API calls.

If the report gives you an insight you wouldn't have had otherwise, build the rest. If it doesn't, you've spent $5 and an afternoon finding out.