We gave three AI models the same questions about a data metric. With scattered documentation, answer quality depended heavily on how powerful (and expensive) the model was. With meta context — a structured YAML summary extracted once from those same docs — even the cheapest model matched the most expensive one.
Eval: 3 models × 6 context conditions = 18 runs, each answering the same 5 questions, scored on a 5-point quality scale (Unreliable → Expert)
“Can your system tell me what a specific agent knew at 2:14 PM last Tuesday when it made a specific decision?”
— Justin Johnson, on the test that knowledge graphs must pass
last_validated + git blame = complete audit trail.
git show HEAD~30:schema.yml tells you exactly what the AI knew 30 commits back. What thresholds were active. What investigation path it would have followed.
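A minimal sketch of that audit trail in practice. The repo layout, file contents, and commit history here are illustrative assumptions; only `git show <rev>:<path>` and `git blame` are the real mechanism.

```shell
set -eu

# Illustrative setup: a throwaway repo with two versions of schema.yml.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email ci@example.com
git config user.name ci

# Commit 1: the old meta block (hypothetical contents).
printf 'threshold: 0.90\nlast_validated: 2024-01-01\n' > schema.yml
git add schema.yml
git commit -qm "initial meta block"

# Commit 2: the threshold changes.
printf 'threshold: 0.95\nlast_validated: 2024-02-01\n' > schema.yml
git commit -qam "raise threshold"

# What did the AI "know" one commit ago? Prints the old meta block.
git show HEAD~1:schema.yml

# Who last touched each line, and in which commit?
git blame -- schema.yml
```

`git show HEAD~1:schema.yml` reads the file as it existed at that revision, so no checkout or branch switching is needed to reconstruct past state.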
With scattered documentation, Johnson's question is unanswerable. Which version of which wiki page was in the context window? Which threshold did the AI use?
With meta context in version-controlled YAML: yes. The meta block is the AI's epistemic state, frozen in git, queryable at any point in time.
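For concreteness, a sketch of what such a meta block might look like. The field names and values are illustrative assumptions, not a fixed schema; the source only establishes that the block is YAML, lives in git, and carries a last_validated field and thresholds.

```yaml
# schema.yml -- illustrative meta block (field names are assumptions)
metric: weekly_active_users
definition: distinct user_ids with at least one session in the trailing 7 days
owner: data-platform
thresholds:
  alert_if_drop_pct: 15     # hypothetical alerting threshold
last_validated: 2024-02-01  # paired with git blame, this is the audit trail
```

Because the block is a single version-controlled file, "what did the AI know at 2:14 PM last Tuesday" reduces to finding the commit that was HEAD at that timestamp and reading this file from it.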