The Pudding open-sourced their entire production stack this month. Not just code—the full pipeline: SvelteKit foundation, D3 primitives, Layer Cake visualization framework, Scrollama for scroll-triggered animations, starter templates, reusable components, and the datasets behind their published stories. When a team known for exceptional craft open-sources the infrastructure that enables it, that's a learning opportunity.
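To make "scroll-triggered" concrete, here is the core Scrollama pattern: register the prose steps, then update the visual when a step enters the viewport. A minimal TypeScript sketch; the selector and the updateChart callback are placeholders of mine, not taken from The Pudding's templates.

```ts
// A minimal sketch of the Scrollama pattern; ".scrolly .step" and
// updateChart are placeholders, not The Pudding's actual code.
import scrollama from "scrollama";

const scroller = scrollama();

// All story-specific work lives here: swap data, transition marks, etc.
function updateChart(stepIndex: number, direction: string): void {
  console.log(`entered step ${stepIndex} while scrolling ${direction}`);
}

scroller
  .setup({
    step: ".scrolly .step", // each prose step in the article
    offset: 0.5,            // trigger when a step crosses mid-viewport
  })
  .onStepEnter((response) => {
    updateChart(response.index, response.direction);
  });

// Recompute trigger positions when the layout changes.
window.addEventListener("resize", () => scroller.resize());
```

The division of labor is the whole trick: the scroller only reports which step is active and in which direction the reader is moving, while every story-specific decision lives in the callback.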
Craft at scale requires infrastructure. The Pudding's stories look magical until you see the systems underneath. Then they look achievable. Reusable components free attention for what matters—the story itself. This isn't just about visualization. It applies to semantic layers, to agent frameworks, to any domain where exceptional work seems unreproducible.
Two parallel tracks dominated this batch:
I went deep on The Pudding's stack because I wanted to understand how craft scales:
Core Framework:
Scrollytelling:
The Pudding's Open Source:
They open-sourced everything. This generosity surprised me:
Reference Materials:
What I take from this: craft knowledge can be systematized, and seeing the systems is what closes the gap between "magical" and "achievable."
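A concrete (and hypothetical) example of what that systematization looks like: a helper extracted once so each new story doesn't re-derive scales and path generation from D3's primitives. The names and data shape here are mine, not from The Pudding's component library.

```ts
// Hypothetical helper extracted once and reused across stories: given a
// series and a pixel box, return an SVG path string built from D3 primitives.
import { extent } from "d3-array";
import { scaleLinear } from "d3-scale";
import { line, curveMonotoneX } from "d3-shape";

export interface Point {
  x: number;
  y: number;
}

export function linePath(data: Point[], width: number, height: number): string {
  // Fit linear scales to the data's extent.
  const x = scaleLinear()
    .domain(extent(data, (d) => d.x) as [number, number])
    .range([0, width]);
  const y = scaleLinear()
    .domain(extent(data, (d) => d.y) as [number, number])
    .range([height, 0]); // SVG y grows downward

  // Generate the "d" attribute for an SVG <path>.
  const path = line<Point>()
    .x((d) => x(d.x))
    .y((d) => y(d.y))
    .curve(curveMonotoneX);

  return path(data) ?? "";
}
```

Layer Cake plays a similar role one level up in their stack, handling the chart container and scales so per-story components stay small.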
The semantic layer documentation matured significantly:
The learning resource consolidation was unexpected:
Official Learning Platforms:
Claude/AI Best Practices:
Tools and Platforms:
The Pudding's open-sourcing reveals a pattern. Their stories are exceptional not despite systematization but because of it. This is the infrastructure-enables-craft thesis in action. I suspect the same principle applies to semantic layers: the organizations succeeding with them have invested in reusable patterns, not just one-off implementations.
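To make "reusable patterns" concrete, a hypothetical sketch (not tied to any specific semantic-layer product): define a metric once as data and let every consumer derive its query from that single definition.

```ts
// Hypothetical illustration, not any specific semantic-layer product:
// define a metric once as data, then derive queries from that single source.
interface MetricDef {
  name: string;
  sql: string;          // canonical aggregation expression
  dimensions: string[]; // approved slice-by fields
  description: string;
}

const activeUsers: MetricDef = {
  name: "active_users",
  sql: "COUNT(DISTINCT user_id)",
  dimensions: ["signup_cohort", "plan", "region"],
  description: "Users with at least one session in the period.",
};

// Consumers build queries from the definition instead of restating the
// aggregation, so the logic can't drift between dashboards and reports.
function buildQuery(metric: MetricDef, dimension: string, table: string): string {
  if (!metric.dimensions.includes(dimension)) {
    throw new Error(`"${dimension}" is not an approved dimension for ${metric.name}`);
  }
  return `SELECT ${dimension}, ${metric.sql} AS ${metric.name}
          FROM ${table}
          GROUP BY ${dimension}`;
}

// buildQuery(activeUsers, "plan", "events") -> one consistent SQL statement
```

The payoff mirrors The Pudding's components: the definition absorbs the fiddly decisions once, so individual reports can't quietly drift apart.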
Memory solutions continue fragmenting. OpenMemory joins the list alongside claude-mem (from Update #2) and Spark's shared memory architecture. Three different approaches: local-first storage, persistent memory layers, and multi-agent sharing. No convergence yet. I'm starting to think the diversity might be the point—different use cases want different tradeoffs.
The Karpathy video occupies an unusual category. It's a year old but still referenced as the intro to watch. I'm uncertain whether that reflects its quality, the slow arrival of better alternatives, or a network effect where being first makes you canonical.
The semantic layer documentation is comprehensive, but is it sufficient? Documentation tells you what exists. It doesn't always tell you what to do when things go wrong. The failure modes from earlier batches remain underexplored in official docs.
51 resources processed. Previous: Context Engineering Gets a Name