5 Core Domains
30+ Task Statements
6 Reference Scenarios
720 Exam Pass Score
100% Applicable Knowledge

The Curriculum

Five Domains.
Everything That Matters.

Each domain maps directly to what it takes to architect production systems with Claude — not toy demos, not tutorials, but real systems that handle real load.

Domain 01

Agentic Architecture & Orchestration

Master the agentic loop lifecycle, hub-and-spoke multi-agent patterns, subagent context isolation, workflow enforcement via hooks, and task decomposition strategies.

Domain 02

Tool Design & MCP Integration

Tool descriptions as the primary selection mechanism, structured error responses, tool_choice configuration, MCP server scoping, and built-in tool sequencing.

Domain 03

Claude Code Configuration & Workflows

CLAUDE.md hierarchy, path-specific rules with glob patterns, custom slash commands and skills, plan mode vs direct execution, and CI/CD integration.
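The CI/CD side of this domain centres on non-interactive runs. A sketch of the print-mode invocation (the `-p` flag the curriculum references); the exact flag names should be verified against the current Claude Code CLI docs, and the prompt text here is illustrative:

```shell
# Non-interactive ("print") mode for pipelines: -p runs one prompt and exits;
# --output-format json emits structured output a later CI step can parse.
claude -p "Summarise uncommitted changes and flag risky diffs" \
  --output-format json > review.json
```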

Domain 04

Prompt Engineering & Structured Output

Explicit categorical criteria, few-shot prompting for consistency, tool_use with JSON schemas, validation-retry loops, batch processing, and multi-instance review.

Domain 05

Context Management & Reliability

Persistent case facts, the lost-in-the-middle effect, escalation triggers, structured error propagation, codebase exploration strategies, and information provenance.

Reference Scenarios

Six Scenarios.
Every Domain.

All five domains are examined through six concrete production scenarios. Understanding them deeply accelerates learning across the entire curriculum.

Customer Support Resolution Agent
Agent SDK · MCP · Escalation
Code Generation with Claude Code
CLAUDE.md · Plan Mode · Slash Commands
Multi-Agent Research System
Coordinator · Subagent · Orchestration
Developer Productivity Tools
Built-in Tools · MCP Servers
Claude Code for CI/CD
Non-interactive · Structured Output · -p flag
Structured Data Extraction
JSON Schema · tool_use · Validation Loops

Scenario 01

Customer Support Resolution Agent

A production customer support agent that resolves tickets using the Agent SDK, integrates external data via MCP, and escalates to humans under specific conditions. This scenario tests workflow enforcement, handoff protocols, and the distinction between prompt-based guidance and programmatic enforcement for financial operations.

Agent SDK MCP Integration Hooks Escalation Logic Domain 1 Domain 5

Critical Patterns

The Concepts That Matter Most

These are the highest-leverage ideas in the handbook — the ones that separate architects who build systems that work from those who build systems that sometimes work.

Agentic Loops

Always Check stop_reason — Never Natural Language

The agentic loop terminates when stop_reason equals "end_turn". Parsing natural language for completion signals, setting arbitrary iteration caps, or checking for assistant text are all unreliable anti-patterns that the exam — and production systems — will punish.

// ✓ Correct termination check
if (response.stop_reason === "end_turn") break;

// ✗ All of these are wrong
if (response.content[0].type === "text") break;
if (iterations >= 10) break;
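The correct check above sits inside the full agentic loop. A minimal sketch, with a hypothetical `step` callback standing in for the real messages-API call and `executeTool` for tool dispatch:

```typescript
type LoopResponse = {
  stop_reason: "end_turn" | "tool_use" | "max_tokens";
  tool_calls?: { name: string; args: unknown }[];
};

async function runAgentLoop(
  step: (toolResults: unknown[]) => Promise<LoopResponse>,
  executeTool: (name: string, args: unknown) => Promise<unknown>,
): Promise<number> {
  let toolResults: unknown[] = [];
  let turns = 0;
  while (true) {
    const response = await step(toolResults);
    turns++;
    // Terminate on stop_reason, never on message content or a turn cap.
    if (response.stop_reason === "end_turn") break;
    if (response.stop_reason !== "tool_use" || !response.tool_calls) {
      throw new Error(`Unexpected stop_reason: ${response.stop_reason}`);
    }
    // Execute the requested tools and feed results into the next iteration.
    toolResults = await Promise.all(
      response.tool_calls.map((call) => executeTool(call.name, call.args)),
    );
  }
  return turns;
}
```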

Multi-Agent Systems

Subagents Are Fully Isolated — Pass Everything Explicitly

Subagents do not inherit the coordinator's conversation history. They do not share memory between invocations. Every piece of information a subagent needs must be explicitly included in its prompt. This is the single most commonly misunderstood concept in multi-agent architecture.

// ✓ Explicit context passing
subagentPrompt = `
Prior findings: ${webSearchResults}
Source metadata: ${sourceList}
Task: synthesise the above
`;

Workflow Enforcement

High Stakes = Hooks. Low Stakes = Prompts.

Prompt-based guidance has a non-zero failure rate. When the consequence of a single failure is financial loss, security breach, or compliance violation, programmatic enforcement via hooks and prerequisite gates is mandatory. Prompts are for preferences, not guarantees.

// ✓ Programmatic gate — deterministic enforcement
preToolHook("process_refund", async (args) => {
  if (!(await verifyOwnership(args.accountId))) {
    throw new Error("Ownership check required");
  }
});

Prompt Engineering

Specific Categorical Criteria Beat Vague Confidence Thresholds

Instructions like "be conservative" or "only report high-confidence findings" do not work. Specific categorical criteria — "flag when claimed behaviour contradicts actual code behaviour; skip style preferences" — produce reliable, consistent results. Few-shot examples with reasoning are the highest-leverage technique.

// ✓ Explicit categorical criteria
"Flag: bugs, security vulns, logic errors.
Skip: style preferences, local conventions.
Example: [see attached examples]"

Start with Domain 1.
The largest. The most important.

Agentic Architecture and Orchestration accounts for 27% of the total curriculum. Every other domain builds on its foundations. Start here.

Begin Domain 1 →