
Structured Context Delivers Expert-Quality Answers — From Any Model

We gave three AI models the same questions about a data metric. With scattered documentation, answer quality depended heavily on how powerful (and expensive) the model was. With meta context — a structured YAML summary extracted once from those same docs — even the cheapest model matched the most expensive one.

Eval: 3 models × 6 context conditions = 18 runs, each answering 5 questions, scored on a 5-point quality scale (Unreliable → Expert)

Cheapest Model + Meta Context
Near-Expert
Haiku scores 4.7 / 5 — pennies per query
Most Expensive Model + Best Docs
Good
Opus scores 4.6 / 5 — dollars per query
Model Quality Gap
Nearly Gone
0.2 points with meta (was 0.8 without)
The Convergence: How Context Erases the Model Gap
Each line is a different AI model. On the left, without context, there's a wide quality gap between cheap and expensive models. As context improves from left to right, the lines converge — until with meta context, they all reach near-Expert quality.
Key finding
Cheap model + meta context beats expensive model + documentation
Haiku (the cheapest model, pennies per query) with structured meta context outscored Opus (the most expensive, dollars per query) working from the same scattered docs that teams typically rely on.
How it works
Extract once, answer forever
A powerful model reads your documentation once and distills it into a structured YAML block — 5.6× smaller than the source docs. Every future question reads that block instead of re-processing scattered files.
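As a sketch of what such a distilled block might contain: the field names and values below are illustrative assumptions, not a published schema; the metric name and numbers are invented, and the thresholds simply echo the 12% and 18% figures used elsewhere in this article.

```yaml
# Hypothetical meta block -- all fields and values are illustrative.
metric: checkout_error_rate
last_validated: 2026-01-15
owner: payments-oncall
thresholds:
  warning: 0.12        # "is 12% concerning?" -> yes, at the warning level
  critical: 0.18       # raised from 0.15; the change is in git history
sla:
  escalate_within: 30m
investigation:
  - check upstream gateway status page
  - compare against the deploy timeline
relationships:
  upstream: [payment_gateway_latency]
```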
Why it works
Meta context pre-digests the hard reasoning
Cheap models struggle to find thresholds across 7 docs, reconcile contradictions, and judge severity. Meta context does that work in advance — so even a small model can give expert answers.
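A minimal sketch of what "pre-digested" means at query time: the cheap model's prompt is just the meta block plus the question, with no retrieval or reconciliation step. The function name and prompt wording here are hypothetical.

```python
def build_prompt(question: str, meta_yaml: str) -> str:
    """Assemble a query-time prompt: the pre-digested meta block plus
    the user's question. No doc retrieval, no reconciliation."""
    return (
        "You are answering questions about a data metric.\n"
        "Everything known about the metric is in this YAML block:\n\n"
        f"{meta_yaml}\n\n"
        f"Question: {question}\n"
    )

prompt = build_prompt("Is 12% concerning?", "thresholds:\n  critical: 0.18")
```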
Supporting Evidence
Less Context, Better Answers
Meta context is 5.6× smaller than the documentation it was extracted from, yet produces higher quality answers. Word count and resulting answer quality for each context condition:

| Context condition | Contents | Word count | Answer quality |
| --- | --- | --- | --- |
| Docs + Distractors | 10 docs + 3 irrelevant files | 5,815 | Fair – Good |
| Full Documentation | 7 scattered documents | 4,743 | Good |
| Basic Docs | wiki + runbook only | 1,789 | Fair – Good |
| Meta Context | structured YAML block | 1,157 | Good – Expert |
| No Context | metric name + schema only | 371 | Poor |

(The Meta Context row is produced from the documentation by one-time extraction, a 5.6× compression.)
Quality Lift by Model Tier
Weaker models get a larger boost from meta context. The meta pre-digests reasoning that smaller models can't do at query time.
Haiku: +2.4 points (Poor → Near-Expert)
Sonnet: +1.9 points (Fair → Expert)
Opus: +1.8 points (Fair → Expert)
Meta Context Advantage Over Best Available Documentation
Haiku: +0.6 points (Good → Near-Expert)
Sonnet: +0.4 points (Good → Expert)
Opus: +0.3 points (Good → Expert)
Where Meta Context Matters Most
Calibration ("is 12% concerning?") and decision ("should I escalate?") show the largest quality gaps — and the largest recovery with meta context. These questions require threshold precision and SLA knowledge that scattered docs fail to deliver cleanly.
In the accompanying chart, solid lines show scores with meta context; dashed lines show scores with no context.
The Economics: Pay Once, Answer Forever
Meta context converts per-query reasoning into one-time authoring. The cost structure inverts as you scale.
Without Meta Context
Cost = N_queries × Opus_per_query (doc retrieval + reconciliation on every question)
Every question pays the full reasoning tax: find the docs, extract the relevant parts, reconcile contradictions, synthesize an answer. 500 analysts × 200 metrics = 100,000 expensive inferences.
With Meta Context
Cost = M_metrics × Opus_once + N_queries × Haiku_per_query
Pay the heavy inference once per metric. Every subsequent query reads structured YAML at Haiku prices. 200 extractions once, then 100,000 cheap reads — indefinitely.
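The inversion is easy to make concrete with toy numbers. The per-inference prices below are invented for illustration only (they are not real API rates); the 500 analysts × 200 metrics figures come from the article.

```python
# Hypothetical per-inference prices -- illustrative only, not real API rates.
OPUS_PER_QUERY = 0.50    # heavy model reasoning over scattered docs
OPUS_EXTRACTION = 1.00   # one-time meta authoring per metric
HAIKU_PER_QUERY = 0.01   # cheap model reading structured YAML

def cost_without_meta(n_queries: int) -> float:
    # Every query pays the full Opus reconciliation tax.
    return n_queries * OPUS_PER_QUERY

def cost_with_meta(n_queries: int, m_metrics: int) -> float:
    # Pay Opus once per metric, then serve every query at Haiku prices.
    return m_metrics * OPUS_EXTRACTION + n_queries * HAIKU_PER_QUERY

queries = 500 * 200  # 500 analysts x 200 metrics = 100,000 queries
print(cost_without_meta(queries))    # 50000.0
print(cost_with_meta(queries, 200))  # 1200.0
```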
The Frontier Slides Forward. The Meta Doesn't Expire.
2026
Opus 4.6 authors meta → Haiku 4.5 answers at scale
2027
Opus 5 re-validates → Haiku 5 answers at scale. Same YAML, new models, lower prices.
2028
Opus 7.3 re-validates → Haiku 6 answers at scale. The meta block is model-agnostic YAML; it doesn't care what reads it.
An organization with 500 analysts asking questions about 200 metrics isn't paying for 100,000 document-reconciliation tasks. It's paying for 200 extraction tasks once, then serving answers at Haiku prices — indefinitely.
The Deeper Win: You Can Audit What the AI Knew
Meta context isn't just faster and cheaper. Because it lives in version-controlled YAML, you can answer a question that scattered docs never could:

“Can your system tell me what a specific agent knew at 2:14 PM last Tuesday when it made a specific decision?”

— Justin Johnson, on the test that knowledge graphs must pass
meta:
Structured state. The meta block captures what is known about a metric right now: thresholds, owners, investigation paths, relationships. One canonical source.
git log
Change history. Every edit — who changed the critical threshold from 15% to 18%, when, in what PR — is recorded. last_validated + git blame = complete audit trail.
t−1
Point-in-time knowledge. git show HEAD~30:schema.yml shows exactly what the AI knew 30 commits ago (pin a calendar date with git rev-list -1 --before="30 days ago" HEAD). What thresholds were active. What investigation path it would have followed.
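Point-in-time retrieval of that epistemic state can be scripted. A minimal sketch, assuming the meta block lives in a file named `schema.yml` in a git repository (the helper name `meta_at` is an invention of this example):

```python
import subprocess

def meta_at(commit: str, path: str = "schema.yml", repo: str = ".") -> str:
    """Return the meta block exactly as it existed at `commit` --
    i.e. what a model reading that file would have known at that point."""
    result = subprocess.run(
        ["git", "-C", repo, "show", f"{commit}:{path}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Example: what did the AI know 30 commits back?
# print(meta_at("HEAD~30"))
```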

With scattered documentation, Johnson's question is unanswerable. Which version of which wiki page was in the context window? Which threshold did the AI use?

With meta context in version-controlled YAML: yes. The meta block is the AI's epistemic state, frozen in git, queryable at any point in time.