Guides
Applied context engineering. Deep dives into specific patterns, framework guides, and case studies from production systems.
Deep Dives
Extended analysis of individual patterns: implementation details, data, and comparisons.
Memory Architectures for AI Agents
Compare memory implementations across systems. Flat files, structured databases, vector stores, and hybrid approaches. Map MemGPT, Claude, ChatGPT, and coding agents to episodic, semantic, and procedural memory concepts.
Context Rot Across Models
Data-driven comparison of how different models handle long context. NoLiMa and RULER benchmarks reveal which models maintain quality and which degrade fastest across GPT-4o, Claude, Gemini, Llama, and Mistral.
Recursive Delegation in Swarm, CrewAI, and LangGraph
How OpenAI Swarm, CrewAI, and LangGraph implement recursive delegation. Each framework handles context passing, result aggregation, and agent spawning differently.
Guides
How to apply context patterns with specific frameworks, domains, and use cases.
Context Engineering for RAG Pipelines
Most RAG implementations fail not because retrieval is bad, but because nobody thinks about what happens after retrieval. Bad chunking, no re-ranking, and no context budgeting waste the tokens you spent on retrieval.
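One post-retrieval step the blurb above mentions is context budgeting. A minimal sketch, assuming re-ranker scores are already attached to each chunk; the `Chunk` shape and the rough 4-characters-per-token estimate are illustrative assumptions, not a specific library's API:

```python
# Hypothetical sketch of post-retrieval context budgeting: take re-ranked
# chunks and greedily pack the highest-scoring ones into a token budget.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    score: float  # relevance score from a re-ranker (assumed given)

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # A real pipeline would use the model's tokenizer instead.
    return max(1, len(text) // 4)

def pack_context(chunks: list[Chunk], budget_tokens: int) -> list[Chunk]:
    """Keep the highest-scoring chunks that fit within the budget."""
    packed, used = [], 0
    for chunk in sorted(chunks, key=lambda c: c.score, reverse=True):
        cost = estimate_tokens(chunk.text)
        if used + cost <= budget_tokens:
            packed.append(chunk)
            used += cost
    return packed

chunks = [
    Chunk("Relevant passage about refunds." * 3, 0.92),
    Chunk("Tangential passage about shipping." * 3, 0.41),
    Chunk("Highly relevant refund policy text." * 3, 0.97),
]
selected = pack_context(chunks, budget_tokens=60)
```

Greedy packing by score is the simplest policy; a production system might also deduplicate near-identical chunks or reserve budget for instructions and conversation history.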
Context Engineering for Coding Agents
Configure Claude Code, Cursor, and Windsurf for better results. Structure your AGENTS.md and .cursorrules files to provide the right context at the right time.
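To make the AGENTS.md idea concrete, here is an illustrative fragment; the section structure, file paths, and commands are hypothetical examples, not a required schema:

```markdown
# AGENTS.md — illustrative fragment

## Project overview
TypeScript monorepo: API in `packages/api`, web client in `packages/web`.

## Commands
- `pnpm test` — run the test suite before committing
- `pnpm lint --fix` — auto-fix style issues

## Conventions
- Use the existing `Result<T, E>` type for fallible functions; do not throw.
- New endpoints go in `packages/api/src/routes/` and need an integration test.
```

The point is to front-load the context an agent would otherwise have to rediscover on every run: where things live, how to verify changes, and which conventions are non-negotiable.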
Context Engineering for Code Generation
Include types, interfaces, and existing patterns in your context. Without them, the model generates code that matches its training data instead of your codebase.
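A minimal sketch of that idea: assemble the generation prompt so type definitions and an existing pattern precede the task. The helper name, section headings, and the `User`/`get_user` snippets are hypothetical stand-ins for content you would pull from your own codebase:

```python
# Illustrative sketch: prepend repository types and a reference pattern to a
# code-generation prompt so output matches the codebase, not the training data.

def build_codegen_prompt(task: str, type_defs: str, example: str) -> str:
    """Assemble context: types first, an existing pattern, then the task."""
    return "\n\n".join([
        "## Type definitions (follow these exactly)",
        type_defs,
        "## Existing pattern to imitate",
        example,
        "## Task",
        task,
    ])

# Hypothetical snippets, as they might be extracted from a real repository.
type_defs = """\
class User(TypedDict):
    id: str
    email: str"""

example = """\
def get_user(user_id: str) -> User | None:
    row = db.fetch_one("SELECT * FROM users WHERE id = ?", user_id)
    return User(**row) if row else None"""

prompt = build_codegen_prompt(
    "Write delete_user(user_id) following the pattern above.",
    type_defs,
    example,
)
```

Ordering matters here: putting the constraints before the task means the model reads them as context for the request rather than as an afterthought.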
Context Engineering vs Prompt Engineering
Prompt engineering was about crafting the right question for a single turn. Context engineering is about assembling the right information environment for complex multi-turn systems. The shift reflects how LLM applications matured from chatbots to agents.