Dispatch Pipeline
Every prompt sent through Mia passes through a middleware pipeline before reaching the AI plugin and after receiving a response. This pipeline handles context injection, tracing, verification, and memory extraction.
Pipeline Stages
```
User Prompt
         │
         ▼
┌──────────────────┐
│ ContextPreparer  │  Builds workspace context, memory facts, git state
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│ TraceLogger      │  Records dispatch start time, prompt, model
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│ Plugin Dispatch  │  Sends to AI backend, streams response
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│ PostDispatch     │  Optional semantic verification of response
│ Verifier         │
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│ MemoryExtractor  │  Auto-extracts facts from response (async)
└────────┬─────────┘
         │
         ▼
      Response
```
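The stage chain above can be sketched as a small middleware composer. This is an illustrative shape only (the names `Dispatch`, `Middleware`, and `compose` are assumptions, not Mia's actual internals): each stage receives the dispatch state and a `next` continuation, and stages run in declaration order.

```ts
// Hypothetical middleware shape mirroring the diagram above.
interface Dispatch {
  prompt: string;
  context?: string;
  response?: string;
  trace: string[]; // records which stages ran, in order
}

type Middleware = (d: Dispatch, next: () => void) => void;

// Compose right-to-left so stages execute in the order they are listed.
function compose(stages: Middleware[]): (d: Dispatch) => void {
  return (d) =>
    stages.reduceRight<() => void>(
      (next, stage) => () => stage(d, next),
      () => {}
    )();
}

const pipeline = compose([
  (d, next) => { d.context = `ctx for: ${d.prompt}`; d.trace.push("ContextPreparer"); next(); },
  (d, next) => { d.trace.push("TraceLogger"); next(); },
  (d, next) => { d.response = "..."; d.trace.push("PluginDispatch"); next(); },
]);
```

Each stage can inspect or mutate the dispatch state before and after calling `next()`, which is what lets TraceLogger record both start and end of a dispatch.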
Context Preparation

The ContextPreparer middleware assembles the full context payload sent to the AI plugin. Sections are ordered for prompt-caching efficiency: static content first, volatile content last.
Context Sections (in order)
- Personality & system prompt — Static. Defines Mia’s behavior. Highly cache-friendly.
- User profile — Semi-static. User preferences, name, timezone.
- Codebase summary — Semi-static. Languages, frameworks, file count. Cached per daemon lifetime.
- Workspace snapshot — Semi-stable. Git branch, recent commits, file structure. Refreshed every ~30 minutes.
- Memory facts — Semi-stable. Relevant facts retrieved via BM25 full-text search over SQLite FTS5. Top 5 by default.
- Conversation history — Volatile. Prior messages in the current session.
- User prompt — Volatile. The current message.
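The ordering above can be sketched as a fixed section list, assembled static-first so a prefix cache can reuse the unchanged leading sections across turns. The section names and the `assemble` helper are illustrative assumptions, not Mia's actual identifiers.

```ts
// Illustrative: the section order from the list above, most static first.
const SECTION_ORDER = [
  "personality",          // static
  "userProfile",          // semi-static
  "codebaseSummary",      // semi-static
  "workspaceSnapshot",    // semi-stable
  "memoryFacts",          // semi-stable
  "conversationHistory",  // volatile
  "userPrompt",           // volatile
] as const;

type SectionName = (typeof SECTION_ORDER)[number];

// Join whichever sections are present, always in cache-friendly order.
function assemble(sections: Partial<Record<SectionName, string>>): string {
  return SECTION_ORDER
    .map((name) => sections[name])
    .filter((s): s is string => Boolean(s))
    .join("\n\n");
}
```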
Token Budgeting
Each context section has an estimated token count. The builder tracks total usage against a budget and:
- Skips sections that would exceed the budget
- Truncates oversized sections
- Reserves 4096 tokens for model completion
- Uses tiktoken for accurate token estimation
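The skip-and-reserve rules above can be sketched as a simple budgeter. This is a minimal sketch under assumed names (`Section`, `fitSections`): token counts are supplied by the caller, and the truncation rule is omitted for brevity.

```ts
// Hypothetical budgeter: drop any section that would overflow the budget,
// after reserving 4096 tokens for the model's completion.
interface Section {
  name: string;
  text: string;
  tokens: number; // estimated, e.g. via tiktoken in the real builder
}

const COMPLETION_RESERVE = 4096;

function fitSections(sections: Section[], modelLimit: number): Section[] {
  const budget = modelLimit - COMPLETION_RESERVE;
  const kept: Section[] = [];
  let used = 0;
  for (const s of sections) {
    if (used + s.tokens > budget) continue; // skip section that would exceed budget
    kept.push(s);
    used += s.tokens;
  }
  return kept;
}
```

Because earlier sections are the most static ones, skipping happens from the volatile end of the context first when the budget is tight.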
Plugin Context Object
```ts
interface PluginContext {
  memoryFacts: string[]          // From SQLite FTS5
  codebaseContext: string        // Languages, frameworks summary
  gitContext: string             // Branch, recent commits, changes
  workspaceSnapshot: string      // File structure overview
  projectInstructions: string    // .claude-code-instructions if present
  conversationSummary?: string   // Compacted prior messages
}
```

Trace Logging
The TraceLogger middleware records every dispatch for debugging and analytics:
- Start: Timestamp, prompt, model, plugin name
- End: Duration, token usage (input/output/cache), tool calls, status
- Storage: JSONL files at ~/.mia/traces/YYYY-MM-DD.jsonl
- Retention: Configurable (default 30 days)
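A trace record with the start/end fields listed above might look like the sketch below; the field names are assumptions (the actual JSONL schema is not documented here), and JSONL simply means one JSON object per line.

```ts
// Illustrative trace record; real field names may differ.
interface TraceRecord {
  ts: string;        // dispatch start timestamp (ISO 8601)
  prompt: string;
  model: string;
  plugin: string;
  durationMs?: number;
  tokens?: { input: number; output: number; cache: number };
  status?: "ok" | "error";
}

// Serialize records as JSONL: one JSON object per line.
function toJsonl(records: TraceRecord[]): string {
  return records.map((r) => JSON.stringify(r)).join("\n") + "\n";
}
```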
View traces via:
```sh
mia log            # Recent dispatches
mia log --n 20     # Last 20 dispatches
mia usage          # Aggregated token usage
mia usage --week   # This week's usage
```

Post-Dispatch Verification
The optional PostDispatchVerifier validates the AI’s response:
- Semantic check: Uses a secondary LLM call to verify the response addresses the prompt
- Retry on failure: If verification fails, retries with fallback plugins
- Configurable: Disabled by default (adds latency and cost)
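The verify-then-retry flow above can be sketched as follows. This is a simplified synchronous stand-in: `plugins` and `verify` are hypothetical callbacks (the real semantic check is a secondary LLM call), and the fallback order is assumed to follow the configured plugin list.

```ts
// Sketch: try each plugin in order until one response passes verification;
// if every attempt fails the check, return the last response anyway.
function dispatchWithVerification(
  prompt: string,
  plugins: Array<(p: string) => string>,
  verify: (prompt: string, response: string) => boolean
): string {
  let last = "";
  for (const plugin of plugins) {
    last = plugin(prompt);
    if (verify(prompt, last)) return last; // passed the semantic check
  }
  return last; // fallbacks exhausted
}
```

This structure makes the latency/cost trade-off visible: every failed verification adds at least one extra model call, which is why verification is disabled by default.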
```json
{
  "pluginDispatch": {
    "verification": {
      "enabled": true,
      "semanticCheck": true,
      "retryOnFailure": true
    }
  }
}
```

Memory Extraction
After a successful dispatch, the MemoryExtractor fires asynchronously:
- Analyzes the conversation for extractable facts
- Uses the active plugin for LLM-based extraction
- Stores facts to SQLite with timestamps and metadata
- Fire-and-forget — doesn’t block the response
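The fire-and-forget behavior can be sketched like this. `extractFacts` and `storeFacts` are hypothetical stand-ins for the plugin-backed extraction and the SQLite write; the function returns its promise only so it can be tested, and the dispatcher would call it without awaiting.

```ts
// Sketch: run extraction after the response is already returned.
// Errors are swallowed so they never affect the user-facing reply.
function scheduleMemoryExtraction(
  conversation: string,
  extractFacts: (text: string) => Promise<string[]>,
  storeFacts: (facts: string[]) => Promise<void>
): Promise<void> {
  return extractFacts(conversation)
    .then((facts) => storeFacts(facts.slice(0, 5))) // cap at maxFacts
    .catch(() => { /* extraction failures are non-fatal by design */ });
}
```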
Configuration
```json
{
  "pluginDispatch": {
    "memoryExtraction": {
      "enabled": true,
      "minDurationMs": 5000,
      "maxFacts": 5
    }
  }
}
```

- minDurationMs: Only extract from dispatches that took at least this long (skips trivial queries)
- maxFacts: Maximum number of facts to extract per dispatch
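Applied together, the two settings gate extraction roughly as follows; `shouldExtract` and the config type are hypothetical names for illustration.

```ts
// Hypothetical gate: extraction runs only when enabled and when the
// dispatch took at least minDurationMs (trivial queries are skipped).
interface MemoryExtractionConfig {
  enabled: boolean;
  minDurationMs: number;
  maxFacts: number;
}

function shouldExtract(durationMs: number, cfg: MemoryExtractionConfig): boolean {
  return cfg.enabled && durationMs >= cfg.minDurationMs;
}
```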
Git Change Capture
After each dispatch, the pipeline checks for git changes made by the AI:
- Detects new/modified/deleted files
- Records changes in the trace log
- Useful for auditing what the AI actually did
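One way to detect these changes is parsing `git status --porcelain` output, whose format is stable across git versions. The sketch below covers parsing only (invoking git is left out), and the `GitChange` shape is an assumption, not Mia's actual record format.

```ts
// Classify entries from `git status --porcelain` output into the
// new/modified/deleted buckets recorded in the trace log.
interface GitChange {
  path: string;
  kind: "added" | "modified" | "deleted";
}

function parsePorcelain(output: string): GitChange[] {
  const kinds: Record<string, GitChange["kind"]> = {
    "?": "added",    // untracked file
    "A": "added",
    "M": "modified",
    "D": "deleted",
  };
  return output
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => {
      // First two columns are the status code, e.g. " M", "??", " D".
      const status = line.slice(0, 2).trim()[0];
      return { path: line.slice(3), kind: kinds[status] ?? "modified" };
    });
}
```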