Context Engineering Cheat Sheet
Nov 20, 2025
Context Engineering In One Cheatsheet.
Your agent starts strong → performs a few tool calls → suddenly gets confused → outputs garbage.
Sound familiar?
Here's what's really happening: context failure.
As your agent runs longer tasks, its context window fills up with tool feedback, memories, and instructions. Eventually, it drowns in its own data.
Enter: Context Engineering
Andrej Karpathy nailed the definition:
"The delicate art and science of filling the context window with just the right information for the next step."
Think of it like this:
》 LLM = CPU
》 Context Window = RAM (limited capacity)
》 Context Engineering = Managing what fits in that RAM
The 4 Pillars of Context Engineering:
1️⃣ WRITING Context Save information OUTSIDE the context window.
✸ Scratch Pads: Take notes during task execution (like Anthropic's multi-agent researcher saving its plan to memory)
✸ Long-term Memory: Persist learnings across multiple sessions (like ChatGPT's memory feature)
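Here's a minimal scratch-pad sketch in Python (the file name and helper functions are just illustrative, not from any specific framework): the agent's notes live on disk, and only a short pointer string goes back into the prompt.

```python
# Minimal scratch-pad sketch: notes live on disk, outside the context window.
# Only the short strings these functions return go back into the prompt.
import json
from pathlib import Path

SCRATCH = Path("scratchpad.json")  # illustrative storage location

def write_note(key: str, value: str) -> str:
    """Persist a note outside the context window; return a one-line pointer."""
    notes = json.loads(SCRATCH.read_text()) if SCRATCH.exists() else {}
    notes[key] = value
    SCRATCH.write_text(json.dumps(notes, indent=2))
    return f"[saved note '{key}' to scratchpad]"

def read_note(key: str) -> str:
    """Pull a note back into context only when the next step needs it."""
    notes = json.loads(SCRATCH.read_text()) if SCRATCH.exists() else {}
    return notes.get(key, "[no such note]")

# The agent saves its plan early, then re-reads it after many tool calls.
print(write_note("plan", "1) collect sources 2) summarize 3) draft answer"))
print(read_note("plan"))
```

The point: the full notes never re-enter the prompt unless the agent explicitly reads them back.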
2️⃣ SELECTING Context Pull only relevant information INTO the context window.
✸ Smart Tool Selection: Research shows agents fail after ~100 tools. Solution? Use RAG over tool descriptions to fetch only relevant tools
✸ Memory Types: Facts (semantic), past experiences (episodic), instructions (procedural)
✸ Knowledge Retrieval: Code agents like Cursor use parsing + embeddings + knowledge graphs + LLM-based ranking
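A toy sketch of RAG over tool descriptions (the word-overlap score is a stand-in for real embedding similarity, and the tool registry is made up): only the top-k tool specs get written into the prompt.

```python
# Toy sketch of RAG over tool descriptions: instead of stuffing every tool spec
# into the prompt, retrieve only the top-k tools relevant to the current query.
TOOLS = {
    "search_web": "search the public web for recent news and articles",
    "query_sql":  "run read-only sql queries against the sales database",
    "send_email": "draft and send an email to a contact",
    "plot_chart": "render a chart from tabular data",
}

def score(query: str, description: str) -> float:
    """Crude relevance score: fraction of query words found in the description."""
    q, d = set(query.lower().split()), set(description.lower().split())
    return len(q & d) / max(len(q), 1)

def select_tools(query: str, k: int = 2) -> list[str]:
    """Expose only the k most relevant tools to the model."""
    ranked = sorted(TOOLS, key=lambda name: score(query, TOOLS[name]), reverse=True)
    return ranked[:k]

print(select_tools("run a sql query against the sales database"))
# ['query_sql', 'search_web'] -- only these tool specs enter the prompt
```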
3️⃣ COMPRESSING Context Retain only essential tokens.
✸ Summarization: Claude Code auto-compacts at 95% of 200K token limit
✸ Trimming: Remove irrelevant messages using heuristics or learned approaches
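A rough auto-compaction sketch (rough_tokens() and summarize() are placeholders for a real tokenizer and an LLM summarization call): once the history crosses a budget, older messages collapse into a summary while recent turns stay verbatim.

```python
# Rough auto-compaction sketch: when the history nears a token budget, collapse
# the oldest messages into a summary and keep the most recent turns untouched.
MAX_TOKENS = 2000   # hypothetical budget, e.g. ~95% of the real context limit
KEEP_RECENT = 4     # always keep the last few turns verbatim

def rough_tokens(text: str) -> int:
    return max(1, len(text) // 4)          # crude ~4 chars-per-token estimate

def summarize(messages: list[str]) -> str:
    # Stand-in for an LLM summarization call over the old messages.
    return "SUMMARY of earlier steps: " + " | ".join(m[:30] for m in messages)

def compact(history: list[str]) -> list[str]:
    if sum(rough_tokens(m) for m in history) <= MAX_TOKENS:
        return history                      # under budget: leave as-is
    old, recent = history[:-KEEP_RECENT], history[-KEEP_RECENT:]
    return [summarize(old)] + recent        # compress the old tail

history = [f"tool result {i}: " + "x" * 500 for i in range(20)]
print(len(compact(history)), "messages after compaction")   # 5 messages
```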
4️⃣ ISOLATING Context Split context across multiple spaces.
✸ Multi-Agent Systems: Each sub-agent gets its own context window (Anthropic's researcher processes more total tokens this way)
✸ Sandboxing: Execute code in isolated environments - keep heavy objects (images, audio) away from LLM context
✸ State Objects: Use Pydantic models with separate fields for different context types
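A small sketch of the state-object idea with Pydantic (field names are illustrative, not tied to any framework): heavy tool outputs and binary artifacts live in their own fields, and only selected fields are rendered into the prompt.

```python
# Sketch of context isolation via a state object, using Pydantic.
from pydantic import BaseModel, Field

class AgentState(BaseModel):
    task: str                                                   # shown to the LLM
    plan: str = ""                                              # shown to the LLM
    raw_tool_outputs: list[str] = Field(default_factory=list)   # kept out of the prompt
    artifacts: dict[str, bytes] = Field(default_factory=dict)   # images/audio stay isolated

def build_prompt(state: AgentState) -> str:
    """Only the selected fields cross into the context window."""
    return f"Task: {state.task}\nPlan: {state.plan}"

state = AgentState(task="Summarize Q3 sales", plan="1) query DB 2) aggregate 3) write summary")
state.raw_tool_outputs.append("thousands of rows of raw SQL output")  # never enters the prompt
print(build_prompt(state))
```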
Why This Matters:
According to Cognition: "Context engineering is effectively the #1 job of engineers building AI agents."
Without it, you hit:
》 Context poisoning (hallucinations slipping into context and getting re-used)
》 Context distraction (so much accumulated context that the model loses focus)
》 Context clash (conflicting information in the context)
Real-World Impact: production code agents combine:
》 Semantic code chunking (not random blocks)
》 Multiple retrieval techniques combined
》 LLM-based ranking on top
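A toy sketch of that pipeline (every scorer here is a stub; a real system would use a code-aware chunker, a search index, an embedding model, and an actual LLM call for the re-rank): cheap signals build a shortlist, then the LLM orders the final candidates.

```python
# Toy sketch: combine cheap retrieval signals, then LLM-rank the shortlist.
def keyword_score(query: str, chunk: str) -> float:
    return float(len(set(query.lower().split()) & set(chunk.lower().split())))

def embed_score(query: str, chunk: str) -> float:
    # Proxy for cosine similarity between embeddings.
    return keyword_score(query, chunk) / (len(chunk.split()) + 1)

def llm_rank(query: str, candidates: list[str]) -> list[str]:
    # Stand-in for asking an LLM to order candidates by relevance.
    return sorted(candidates, key=lambda c: keyword_score(query, c), reverse=True)

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # 1) cheap first pass combining several signals -> shortlist
    shortlist = sorted(chunks, key=lambda c: keyword_score(query, c) + embed_score(query, c),
                       reverse=True)[:k * 2]
    # 2) expensive LLM re-rank only on the shortlist
    return llm_rank(query, shortlist)[:k]

chunks = [
    "parse_config reads the yaml config file and returns settings",
    "SalesReport aggregates quarterly revenue by region",
    "setup instructions: install dependencies and run the tests",
]
print(retrieve("where is the config file parsed", chunks, k=1))
```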
Get clear on three things:
what goes IN,
what stays OUT,
and what gets COMPRESSED in your agent's context window.
Master context engineering = Master AI agents.
👉 Watch this video from LangChain for more on Context Engineering.
----------------------------
🎓 New to AI Agents? Start with my free training and learn the fundamentals of building production-ready agents with LangGraph, CrewAI, and modern frameworks. 👉 Get Free Training
🚀 Ready to Master AI Agents? Join AI Agents Mastery and learn to build enterprise-grade multi-agent systems with 20+ years of real-world AI experience. 👉 Join 5-in-1 AI Agents Mastery
⭐⭐⭐⭐⭐ (5/5) 1500+ enrolled
👩💻 Written by Dr. Maryam Miradi
CEO & Chief AI Scientist
I train STEM professionals to master real-world AI Agents.
