Before You Begin
Executive Summary
Read This First — Three Anti-Patterns to Reject on Sight
Anti-pattern 1: Parsing natural language to determine loop termination. The stop_reason field exists for this. Natural language is unreliable.
Anti-pattern 2: Arbitrary iteration caps as the primary stopping mechanism. The model signals completion. Caps either cut off work or waste compute.
Anti-pattern 3: Checking assistant text content as a completion indicator. The model can return text and tool_use blocks simultaneously.
The Most Important Principle
Subagents do not share memory with the coordinator. They do not inherit conversation history. Every piece of information a subagent needs must be passed explicitly in its prompt. This is the single most commonly misunderstood concept in multi-agent architecture.
The High-Stakes Enforcement Rule
When consequences are financial, security-related, or compliance-related — prompt instructions alone are insufficient. Programmatic enforcement via hooks and prerequisite gates is mandatory. The exam will present prompt-based solutions for high-stakes scenarios. Reject them.
TASK STATEMENT 1.1
Agentic Loops
The agentic loop is the heartbeat of every Claude-powered agent. Understand its lifecycle exactly — not approximately.
The Complete Loop Lifecycle
- Send a request to Claude via the Messages API.
- Inspect the stop_reason field in the response.
- If stop_reason === "tool_use": execute the requested tools, append results to conversation history, send the updated conversation back to Claude.
- If stop_reason === "end_turn": the agent has finished. Present the final response.
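The lifecycle above can be sketched as a loop that keys only on stop_reason. This is a minimal sketch: `send` stands in for a Messages API call (e.g. client.messages.create), and `execute_tool` and the tool names are illustrative, not part of any SDK.

```python
# Minimal agentic loop driven solely by stop_reason (sketch).
# `send` stands in for a Messages API call; tools are illustrative.

def run_agent_loop(send, execute_tool, messages):
    """Loop until the model signals completion via stop_reason."""
    while True:
        response = send(messages)
        if response["stop_reason"] == "end_turn":
            # The model is done: return the final text blocks.
            return [b["text"] for b in response["content"] if b["type"] == "text"]
        if response["stop_reason"] == "tool_use":
            # Append the assistant turn, then one tool_result per tool_use block.
            messages.append({"role": "assistant", "content": response["content"]})
            results = [
                {"type": "tool_result", "tool_use_id": b["id"],
                 "content": execute_tool(b["name"], b["input"])}
                for b in response["content"] if b["type"] == "tool_use"
            ]
            messages.append({"role": "user", "content": results})
        else:
            raise RuntimeError(f"unexpected stop_reason: {response['stop_reason']}")
```

Note there is no iteration cap and no text parsing: the model's stop_reason alone drives termination.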
Model-Driven vs Programmatic Decision-Making
The exam favours model-driven approaches for flexibility in general operation, but switches to programmatic enforcement for critical business logic. Recognise which mode a scenario demands before selecting an answer.
Practice Scenario 1.1
Premature Loop Termination
A developer's agent sometimes terminates prematurely. The code checks response.content[0].type === "text" to determine completion. Identify the bug and fix it.
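The bug and its fix, side by side. The response shape is a sketch; the point is that a response can carry text and tool_use blocks at once, so content inspection misreads "still working" as "done".

```python
# Buggy: inspects the first content block's type. A response can contain
# text AND tool_use blocks simultaneously, so this terminates early.
def is_done_buggy(response):
    return response["content"][0]["type"] == "text"

# Fixed: stop_reason is the sole reliable termination signal.
def is_done(response):
    return response["stop_reason"] == "end_turn"
```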
Fix: Always check stop_reason === "end_turn" as the sole termination signal.
TASK STATEMENT 1.2
Multi-Agent Orchestration
The hub-and-spoke architecture is the canonical pattern for multi-agent systems with Claude. Every responsibility of the coordinator and every constraint on subagents must be understood precisely.
Hub-and-Spoke Architecture
- A coordinator agent sits at the centre of the system.
- Subagents are spokes invoked for specialised tasks.
- All communication flows through the coordinator. Subagents never communicate directly with each other.
- The coordinator handles: task decomposition, subagent selection, context passing, result aggregation, error handling, and information routing.
Critical Isolation Principle — The Most Important Rule
Subagents do not automatically inherit the coordinator's conversation history. Subagents do not share memory between invocations. Every piece of information a subagent needs must be explicitly included in its prompt. This is the single most commonly misunderstood concept in multi-agent systems.
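One way to make the isolation principle concrete: the coordinator assembles everything the subagent needs into its prompt. This is a sketch; build_subagent_prompt and its field names are illustrative, not an SDK API.

```python
# Subagents see only what the coordinator puts in their prompt (sketch).
# They inherit no history and no memory from prior invocations.

def build_subagent_prompt(task, prior_findings, constraints):
    """Package every needed fact explicitly -- the subagent inherits nothing."""
    sections = [
        f"Task: {task}",
        "Prior findings (from other subagents):",
        *[f"- {f}" for f in prior_findings],
        f"Constraints: {constraints}",
    ]
    return "\n".join(sections)
```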
Coordinator Responsibilities
- Analyse query requirements and dynamically select which subagents to invoke — not always routing through the full pipeline.
- Partition research scope across subagents to minimise duplication.
- Implement iterative refinement loops: evaluate synthesis output for gaps, re-delegate with targeted queries, repeat until coverage is sufficient.
- Route all communication through the coordinator for observability and consistent error handling.
Narrow Decomposition Failure
When a research output is missing entire categories of information, the root cause is the coordinator's decomposition logic, not any downstream subagent: a subagent can only research the subtopics it was asked to cover. The exam expects precise root-cause tracing.
Practice Scenario 1.2
Incomplete Research Report
A multi-agent research system produces a report on "renewable energy technologies" covering only solar and wind. Geothermal, tidal, biomass, and nuclear fusion are absent. Four answer options target different system components. Which component is the root cause?
Answer: the coordinator's task decomposition. The subagents covered exactly the subtopics they were given; entire missing categories mean the decomposition was too narrow.
TASK STATEMENT 1.3
Subagent Invocation and Context Passing
The Task Tool
- The Task tool is the mechanism for spawning subagents from a coordinator.
- The coordinator's allowedTools must include "Task" — without it, spawning is impossible.
- Each subagent has an AgentDefinition with description, system prompt, and tool restrictions.
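A sketch of the configuration shape this implies. The dict keys mirror the fields named above (description, system prompt, tool restrictions) for illustration only; they are not an exact SDK signature, and the agent names and prompts are invented.

```python
# Illustrative configuration shape only -- field names follow the
# description above, not a specific SDK signature.
coordinator_config = {
    # Without "Task" in allowedTools the coordinator cannot spawn subagents.
    "allowedTools": ["Task"],
    "agents": {
        "web-search": {
            "description": "Searches the web for current information.",
            "prompt": "You are a web research specialist. Return sourced findings.",
            "tools": ["WebSearch"],
        },
        "doc-analysis": {
            "description": "Analyses provided documents.",
            "prompt": "You analyse documents and cite page numbers.",
            "tools": ["Read"],
        },
    },
}
```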
Context Passing Best Practices
- Include complete findings from prior agents directly in the subagent's prompt.
- Use structured data formats that separate content from metadata — source URLs, document names, page numbers — to preserve attribution across agents.
- Design coordinator prompts that specify research goals and quality criteria, not step-by-step procedural instructions. This preserves subagent adaptability.
Parallel Spawning and fork_session
Emitting multiple Task tool calls in a single coordinator response spawns subagents in parallel — significantly faster than sequential invocation. When subtasks are independent and latency matters, prefer parallel spawning.
fork_session creates independent branches from a shared analysis baseline. Use it when exploring divergent approaches from a common starting point. Each fork operates independently after the branching point.
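The latency argument can be illustrated with a thread pool standing in for the coordinator emitting multiple Task calls in one response. run_subagent is a stand-in for a subagent invocation; nothing here is a real SDK call.

```python
# Parallel subagent invocation (sketch). run_subagent stands in for a
# Task tool call; the pool mimics multiple Task calls in one response.
from concurrent.futures import ThreadPoolExecutor

def spawn_parallel(run_subagent, tasks):
    """Run independent subagent tasks concurrently; results keep task order."""
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        return list(pool.map(run_subagent, tasks))
```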
Practice Scenario 1.3
Missing Source Attribution
A synthesis agent produces a report with claims that have no source attribution. The web search and document analysis subagents are working correctly. What is the root cause?
Fix: Require subagents to output structured claim-source mappings — each claim paired with source URL, document name, and page number. The coordinator must preserve and forward these mappings to the synthesis agent.
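A minimal sketch of the structured claim-source mapping this fix describes. The class and function names are illustrative, and the example URL is invented.

```python
# Structured claim-source mapping (sketch): metadata travels with each
# claim so attribution survives the hop to the synthesis agent.
from dataclasses import dataclass, asdict

@dataclass
class SourcedClaim:
    claim: str
    source_url: str
    document_name: str
    page_number: int

def forward_to_synthesis(claims):
    """Coordinator serialises full mappings into the synthesis agent's input."""
    return [asdict(c) for c in claims]
```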
TASK STATEMENT 1.4
Workflow Enforcement and Handoff
The Enforcement Spectrum
| Approach | Reliability | Use When |
|---|---|---|
| Prompt-based guidance | Works most of the time. Non-zero failure rate. | Consequences are low-stakes: formatting, style preferences. |
| Programmatic enforcement | Works every time. Zero failure rate. | Consequences are financial, security-related, or compliance-related. |
The Exam Decision Rule
The exam will present prompt-based solutions as answer options for high-stakes scenarios. Reject them every time. Prompt instructions have a non-zero failure rate by definition. A single financial or security failure is unacceptable.
Structured Handoff Protocols
When escalating to a human agent, the handoff package must be self-contained. The human agent does not have access to the conversation transcript. Always compile: customer ID, conversation summary, root cause analysis, refund amount if applicable, and recommended action.
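The required fields above can be captured as a validated structure, so an incomplete handoff fails loudly instead of reaching the human half-empty. A sketch; the class is illustrative.

```python
# Self-contained handoff package (sketch). The human agent sees only this
# payload, never the transcript, so every field must be populated.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HandoffPackage:
    customer_id: str
    conversation_summary: str
    root_cause_analysis: str
    recommended_action: str
    refund_amount: Optional[float] = None  # only when a refund is involved

    def validate(self):
        for value in (self.customer_id, self.conversation_summary,
                      self.root_cause_analysis, self.recommended_action):
            if not value:
                raise ValueError("handoff package must be self-contained")
        return self
```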
Practice Scenario 1.4
Refunds Without Ownership Verification
Production data shows that in 8% of cases, a customer support agent processes refunds without verifying account ownership. Options: A) programmatic prerequisite gate, B) enhanced system prompt, C) few-shot examples, D) routing classifier.
Options B, C, and D all have non-zero failure rates. An 8% failure rate on a financial operation is exactly the scenario that mandates deterministic enforcement. A prerequisite gate physically blocks the refund tool until ownership verification completes successfully — no prompt can guarantee this.
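A sketch of what a prerequisite gate looks like in code. The class and the stubbed credential check are illustrative; the point is that the refund path is unreachable until verification succeeds, regardless of any prompt wording.

```python
# Programmatic prerequisite gate (sketch): the refund operation is
# physically blocked until ownership verification succeeds this session.
class RefundGate:
    def __init__(self):
        self.ownership_verified = False

    def verify_ownership(self, customer_id, credentials):
        """Record the result of an identity check (stubbed here)."""
        self.ownership_verified = self._check(customer_id, credentials)
        return self.ownership_verified

    def _check(self, customer_id, credentials):
        return credentials == "valid"  # illustrative stub, not a real check

    def process_refund(self, amount):
        if not self.ownership_verified:
            # Deterministic block -- no prompt can bypass this branch.
            raise PermissionError("ownership verification required before refund")
        return f"refunded {amount:.2f}"
```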
TASK STATEMENT 1.5
Agent SDK Hooks
PostToolUse Hooks
- Intercept tool results after execution, before the model processes them.
- Use case: normalise heterogeneous data formats — Unix timestamps to ISO 8601, numeric status codes to human-readable strings.
- The model receives clean, consistent data regardless of which tool produced it.
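A sketch of such a normalisation hook. The function signature is illustrative, not a specific SDK hook API; the status-code mapping is invented for the example.

```python
# PostToolUse-style normalisation (sketch, not an exact SDK signature):
# runs on every tool result before the model processes it.
from datetime import datetime, timezone

STATUS_NAMES = {0: "success", 1: "failure", 2: "timeout"}  # illustrative

def post_tool_use_hook(tool_name, result):
    """Normalise heterogeneous tool output to one consistent shape."""
    out = dict(result)
    if isinstance(out.get("timestamp"), (int, float)):
        # Unix epoch seconds -> ISO 8601
        out["timestamp"] = datetime.fromtimestamp(
            out["timestamp"], tz=timezone.utc).isoformat()
    if isinstance(out.get("status"), int):
        out["status"] = STATUS_NAMES.get(out["status"], "unknown")
    return out
```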
Tool Call Interception Hooks
- Intercept outgoing tool calls before execution.
- Use case: block refunds above a threshold and redirect to human escalation.
- Use case: enforce compliance rules such as requiring manager approval for certain operations.
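A sketch of the refund-threshold use case. The hook signature, tool name, and threshold value are all illustrative.

```python
# PreToolUse-style interception (sketch): inspects outgoing tool calls
# and blocks policy violations before execution.
REFUND_ESCALATION_THRESHOLD = 100.00  # illustrative policy value

def pre_tool_use_hook(tool_name, tool_input):
    """Return (allow, reason). A blocked call is redirected to a human."""
    if tool_name == "process_refund" and \
            tool_input.get("amount", 0) > REFUND_ESCALATION_THRESHOLD:
        return False, "refund exceeds threshold; escalate to human agent"
    return True, "ok"
```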
Hooks vs Prompts — The Decision Framework
Hooks = deterministic guarantees. Use for business rules that must be followed 100% of the time.
Prompts = probabilistic guidance. Use for preferences and soft rules.
If a single failure would cost the business money or create legal risk, use a hook.
TASK STATEMENT 1.6
Task Decomposition Strategies
Fixed Sequential Pipelines (Prompt Chaining)
Break work into predetermined sequential steps. Best for predictable, structured tasks such as code reviews and document processing. Consistent and reliable, but cannot adapt to unexpected findings.
Dynamic Adaptive Decomposition
Generate subtasks based on what is discovered at each step. Best for open-ended investigation tasks where the problem shape is not known in advance. Adapts to the problem, but less predictable in execution time.
The Attention Dilution Problem
Processing too many files in a single pass produces inconsistent depth — detailed feedback for some files, missed obvious bugs in others, and contradictory judgments across identical code patterns.
Fix: Split large reviews into per-file local analysis passes plus a separate cross-file integration pass. Per-file passes catch local issues consistently; the integration pass catches data flow issues that span multiple files.
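The two-phase fix can be sketched as a driver that bounds each pass's scope. review_file and review_integration stand in for subagent invocations; nothing here is a real SDK call.

```python
# Two-phase review decomposition (sketch): bounded per-file passes plus
# one cross-file integration pass for data-flow issues.
def review_codebase(files, review_file, review_integration):
    # Each per-file pass sees exactly one file, so depth stays consistent.
    per_file = {path: review_file(path, src) for path, src in files.items()}
    # The integration pass sees all files, but only for cross-file concerns.
    cross_file = review_integration(files)
    return {"per_file": per_file, "integration": cross_file}
```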
TASK STATEMENT 1.7
Session State and Resumption
| Option | When to Use |
|---|---|
| --resume <session-name> | Prior context is mostly still valid; files have not changed significantly. |
| fork_session | Need to explore divergent approaches from a shared analysis point. |
| Fresh start with summary injection | Tool results are stale, files have changed, or context has degraded over a long session. |
When resuming after code modifications, always inform the agent about the specific files that changed. Starting fresh with an injected summary is more reliable than resuming with stale tool results — the agent receives accurate context without re-exploring everything from scratch.
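A sketch of summary injection: seed a new session with an accurate summary plus the list of changed files, instead of resuming over stale tool results. The helper name and message shape are illustrative.

```python
# Fresh start with summary injection (sketch): the new session opens with
# accurate context and an explicit list of files changed since last time.
def fresh_start_messages(summary, changed_files):
    context = (
        "Context from a previous session:\n"
        f"{summary}\n\n"
        "Files changed since that session (re-read before relying on them):\n"
        + "\n".join(f"- {p}" for p in changed_files)
    )
    return [{"role": "user", "content": context}]
```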
Hands-On Build Exercise
Build: Coordinator Agent with Two Subagents
- A coordinator agent with two subagents: web search and document analysis.
- Proper context passing with structured metadata — claim, source URL, document name, page number.
- A programmatic prerequisite gate that blocks downstream operations until account verification completes successfully.
- A PostToolUse normalisation hook that converts heterogeneous tool output to a consistent format.
- A PreToolUse interception hook that blocks policy violations before execution.
- Test with a multi-concern request and verify the gate fires correctly on each invocation.