Complete Technical Handbook
A complete guide to building production-grade AI applications with Claude Code, the Agent SDK, the Claude API, and Model Context Protocol. Five domains. Real patterns. No fluff.
The Curriculum
Each domain maps directly to what it takes to architect production systems with Claude — not toy demos, not tutorials, but real systems that handle real load.
Master the agentic loop lifecycle, hub-and-spoke multi-agent patterns, subagent context isolation, workflow enforcement via hooks, and task decomposition strategies.
Tool descriptions as the primary selection mechanism, structured error responses, tool_choice configuration, MCP server scoping, and built-in tool sequencing.
CLAUDE.md hierarchy, path-specific rules with glob patterns, custom slash commands and skills, plan mode vs direct execution, and CI/CD integration.
Explicit categorical criteria, few-shot prompting for consistency, tool_use with JSON schemas, validation-retry loops, batch processing, and multi-instance review.
Persistent case facts, the lost-in-the-middle effect, escalation triggers, structured error propagation, codebase exploration strategies, and information provenance.
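One pattern from the list above — tool_use with JSON schemas plus a validation-retry loop — can be sketched in a few lines. This is an illustrative sketch, not the handbook's implementation: `call_model` stands in for a structured-output request, and the required fields are an assumed example schema.

```python
# Sketch of a validation-retry loop for structured extraction.
# call_model is a stand-in for a tool_use request constrained by a JSON schema;
# REQUIRED_KEYS is an illustrative, assumed schema.

REQUIRED_KEYS = {"category", "confidence"}

def extract_with_retry(call_model, text, max_retries=2):
    """Validate model output against the schema; on failure, feed the
    validation error back into the prompt and retry."""
    prompt = text
    for _ in range(max_retries + 1):
        output = call_model(prompt)
        missing = REQUIRED_KEYS - output.keys()
        if not missing:
            return output
        prompt = (f"{text}\n\nPrevious output was missing fields: "
                  f"{sorted(missing)}. Return all required fields.")
    raise ValueError("extraction failed validation after retries")
```

Feeding the concrete validation error back — rather than silently retrying the same prompt — is what makes the retry loop converge instead of repeating the same mistake.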
Reference Scenarios
All five domains are examined through six concrete production scenarios. Understanding them deeply accelerates learning across the entire curriculum.
Scenario 01
A production customer support agent that resolves tickets using the Agent SDK, integrates external data via MCP, and escalates to humans under specific conditions. This scenario tests workflow enforcement, handoff protocols, and the distinction between prompt-based guidance and programmatic enforcement for financial operations.
Critical Patterns
These are the highest-leverage ideas in the handbook — the ones that separate architects who build systems that work from those who build systems that sometimes work.
The agentic loop terminates when stop_reason equals "end_turn". Parsing natural language for completion signals, setting arbitrary iteration caps, or checking for assistant text are all unreliable anti-patterns that the exam — and production systems — will punish.
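A minimal sketch of that loop, assuming the Messages API response shape (`stop_reason`, `content` blocks, `tool_use_id`). `ScriptedClient` is a stand-in for a real Anthropic client so the sketch is self-contained:

```python
class ScriptedClient:
    """Stand-in for the Messages API client, replaying canned responses."""
    def __init__(self, responses):
        self._responses = iter(responses)

    def create(self, messages):
        return next(self._responses)


def run_agent(client, messages, execute_tool):
    """Drive the agentic loop; terminate only on stop_reason == "end_turn"."""
    while True:
        response = client.create(messages=messages)
        messages.append({"role": "assistant", "content": response["content"]})
        if response["stop_reason"] == "end_turn":
            # The reliable termination signal: not text parsing, not an
            # arbitrary iteration cap, not the presence of assistant prose.
            return messages
        # stop_reason == "tool_use": execute each requested tool and return
        # results keyed by tool_use_id, as the Messages API expects.
        results = [
            {"type": "tool_result", "tool_use_id": block["id"],
             "content": execute_tool(block["name"], block["input"])}
            for block in response["content"] if block["type"] == "tool_use"
        ]
        messages.append({"role": "user", "content": results})
```

Note that the loop has exactly one exit condition, and it is the model's own signal — everything else (tool execution, result echoing) is plumbing around it.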
Subagents do not inherit the coordinator's conversation history. They do not share memory between invocations. Every piece of information a subagent needs must be explicitly included in its prompt. This is the single most commonly misunderstood concept in multi-agent architecture.
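In practice this means the coordinator must serialize every relevant fact into the subagent's prompt. A minimal sketch — the field names and constraints are illustrative, not from the handbook:

```python
# Sketch: a subagent starts with an empty context, so the coordinator
# must pack everything it needs into a self-contained prompt.

def build_subagent_prompt(task, case_facts, constraints):
    """Assemble a prompt that assumes NOTHING is inherited implicitly."""
    facts = "\n".join(f"- {k}: {v}" for k, v in case_facts.items())
    rules = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Task: {task}\n\n"
        f"Known facts (you have NO other context):\n{facts}\n\n"
        f"Constraints:\n{rules}"
    )
```

Anything left out of `case_facts` simply does not exist from the subagent's point of view — there is no shared memory to fall back on.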
Prompt-based guidance has a non-zero failure rate. When the consequence of a single failure is financial loss, security breach, or compliance violation, programmatic enforcement via hooks and prerequisite gates is mandatory. Prompts are for preferences, not guarantees.
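The enforcement side can be sketched as a pure prerequisite gate — the kind of check a pre-tool hook would run before allowing a call through. The tool name, limit, and required fields here are illustrative assumptions:

```python
# Sketch of a programmatic prerequisite gate (e.g. the body of a
# pre-tool-use hook). Threshold and tool name are illustrative.

REFUND_LIMIT = 100.00  # assumed policy limit, not from the handbook

def pre_tool_gate(tool_name, tool_input):
    """Return (allowed, reason). Enforced in code, not in the prompt."""
    if tool_name == "issue_refund":
        amount = float(tool_input.get("amount", 0))
        if amount > REFUND_LIMIT:
            return False, (f"refund {amount:.2f} exceeds limit "
                           f"{REFUND_LIMIT:.2f}; escalate to a human")
        if not tool_input.get("ticket_id"):
            return False, "refund requires a linked ticket_id"
    return True, "ok"
```

The point is that this check runs regardless of what the model was prompted to do: a prompt can be ignored on a bad sample, but a gate in code cannot.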
Instructions like "be conservative" or "only report high-confidence findings" do not work. Specific categorical criteria — "flag when claimed behaviour contradicts actual code behaviour; skip style preferences" — produce reliable, consistent results. Few-shot examples with reasoning are the highest-leverage technique.
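Putting those two ideas together — explicit categorical criteria plus few-shot examples with reasoning — might look like this prompt builder. The criteria text and both examples are illustrative, not taken from the handbook:

```python
# Sketch: explicit categorical criteria plus few-shot examples with
# reasoning, instead of vague instructions like "be conservative".

CRITERIA = (
    "Flag ONLY when claimed behaviour contradicts actual code behaviour.\n"
    "Skip style preferences, naming choices, and formatting."
)

FEW_SHOT = [  # illustrative examples, each with an explicit verdict + reasoning
    {"finding": "Docstring says 'returns None on error' but the function raises",
     "verdict": "flag",
     "reasoning": "Documented behaviour contradicts actual behaviour."},
    {"finding": "Variable name 'tmp' is unclear",
     "verdict": "skip",
     "reasoning": "Style preference, not a behavioural contradiction."},
]

def build_review_prompt(diff):
    """Compose criteria, worked examples, and the diff under review."""
    shots = "\n\n".join(
        f"Finding: {s['finding']}\nVerdict: {s['verdict']}\nReasoning: {s['reasoning']}"
        for s in FEW_SHOT
    )
    return f"{CRITERIA}\n\nExamples:\n\n{shots}\n\nReview this diff:\n{diff}"
```

The negative example ("skip") matters as much as the positive one: it shows the model where the category boundary lies, which a lone positive example cannot do.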
Agentic Architecture and Orchestration accounts for 27% of the total curriculum. Every other domain builds on its foundations. Start here.