· claude code · sub-agents · april 2026 ·

The 10 Claude Code sub-agents we actually use to ship software in 2026

TL;DR
  • Claude Code's /agents command lets you define named sub-agents with isolated system prompts that the orchestrator can spin up mid-task — per the official docs, each agent "operates with full tool access in a separate context."
  • Every agent below has a single, narrow job. Narrow jobs mean cheaper runs, easier debugging, and lower token burn when one agent misbehaves.
  • The full 10-agent pack — prompts, CLAUDE.md wiring, and trigger rules — is available as a $49 download. Link mid-post.

Why bother with sub-agents at all

A single Claude Code session that handles architecture, code review, test generation, documentation, and security scanning in one context window is expensive, slow, and prone to context contamination — earlier decisions bleed into later ones in ways that are hard to trace.

Sub-agents fix this by giving each concern its own context. According to Anthropic's Claude Code documentation, sub-agents are "spawned by an orchestrator and run tools independently." That independence is the point: the Reviewer agent never sees the Architect's deliberation. The Tester never sees the Debugger's half-formed hypotheses. Each agent gets a clean slate, a tight prompt, and one job.

The /agents command in Claude Code manages this. You define agents in .claude/agents/ as Markdown files. The orchestrator picks which to fire based on context, or you call them explicitly. Scope creep from an agent that was "just going to check one thing" is the main failure mode — which is why every agent below has an explicit "do not" clause in its prompt.

The 10 agents

01 Architect · design decisions
System prompt (one line): You propose file structure and API contracts only — write no implementation code.

When to fire: Before any new feature that touches more than two files. Before a refactor that crosses module boundaries. Never mid-implementation — at that point it's too late to influence structure without rewriting.

02 Reviewer · code quality
System prompt (one line): Review the diff for correctness, readability, and edge cases — do not rewrite, only comment.

When to fire: After any commit-ready change. The Reviewer sees only the diff, not the broader codebase — this is intentional. Scope creep into unrelated files wastes its context and produces noise.

03 Tester · test generation
System prompt (one line): Write tests for the specified function or module — assume the implementation is complete.

When to fire: After implementation is done, before the PR is opened. Firing Tester before the code stabilizes produces tests that get thrown away. One function, one agent invocation — don't batch five modules into one run.

04 Debugger · failure diagnosis
System prompt (one line): Identify the root cause of the reported error — do not fix, only diagnose and explain.

When to fire: When a test fails or a production error surfaces. Separating diagnosis from fixing prevents the agent from reaching for the nearest plausible fix rather than the correct one. Read the diagnosis, then decide whether to fix it yourself or fire a second agent.

05 Docs · documentation
System prompt (one line): Write JSDoc or Markdown documentation for the provided code — do not alter logic.

When to fire: After a function or module is merged and stable. Writing docs before the API is settled means rewriting them. Docs is the cheapest agent to run — short context, deterministic output — so it's fine to fire it on small functions.

Want all 10 agent files, pre-wired?

The Septim Agents Pack is a $49 download: 10 .md agent definition files, a CLAUDE.md orchestrator config, trigger rules, and a token-budget guide for each agent. Drop the folder into .claude/agents/ and you're running in under five minutes.

Get the Agents Pack — $49

06 Migrator · schema & data migration
System prompt (one line): Write the migration script and its rollback — do not touch application code outside the migration file.

When to fire: Any time a database schema change is needed. Migrator's constraint — touch nothing outside the migration file — prevents it from "helpfully" updating ORM models or API handlers, which belong to separate review cycles.

07 Security · vulnerability scan
System prompt (one line): Audit the provided code for injection, auth, and exposure risks — report findings, do not patch.

When to fire: Before any PR that touches auth, user input handling, or external API calls. Security's value is in the report, not the fix — if it patches in the same pass, it can paper over the finding rather than surface it for review.

08 Researcher · library & API scouting
System prompt (one line): Find three candidate libraries or API approaches for the stated problem — compare trade-offs, pick none.

When to fire: When you're about to pick a dependency or third-party API and you're not certain which option fits your constraints. Researcher doesn't decide — it structures the decision so you or another agent can.

09 Planner · task breakdown
System prompt (one line): Break the stated feature into ordered, atomic tasks — output a numbered list, nothing else.

When to fire: At the start of any feature that will take more than one session. Planner's output becomes the orchestrator's task list. A feature that starts with a numbered list from Planner rarely gets scope-crept mid-session, because the boundary is visible.

10 Cost Guard · token budget enforcement
System prompt (one line): Monitor cumulative session spend; halt execution and report if the threshold in CLAUDE.md is exceeded.

When to fire: Always — run this as a PreToolUse hook on every session, not on demand. Cost Guard is the one agent that should be invisible when things are going well and loud when they aren't. Without it, a runaway multi-agent session compounds token spend before you notice the bill.
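Per the Claude Code hooks docs, a PreToolUse hook is an external command that runs before each tool call, and a non-zero "block" exit code (2) cancels the call, with stderr fed back to the model. What the hook actually measures is up to you: Claude Code does not maintain a spend ledger, so the sketch below assumes a hypothetical `.claude/spend-ledger.jsonl` file that your own tooling appends per-call costs to, and a hypothetical `COST_GUARD_BUDGET_USD` environment variable instead of parsing CLAUDE.md:

```python
"""Cost Guard sketch: a PreToolUse hook command that blocks further tool
calls once cumulative spend crosses a budget. The ledger file, its format,
and the env var are assumptions, not Claude Code built-ins."""
import json
import os
import sys
from pathlib import Path

LEDGER = Path(".claude/spend-ledger.jsonl")  # hypothetical per-call cost log


def total_spend(ledger: Path) -> float:
    """Sum the 'usd' field across JSON-lines entries; tolerate a missing file."""
    if not ledger.exists():
        return 0.0
    total = 0.0
    for line in ledger.read_text().splitlines():
        if line.strip():
            total += float(json.loads(line).get("usd", 0.0))
    return total


def main() -> int:
    # Assumed env var; default budget of $5 per session.
    threshold = float(os.environ.get("COST_GUARD_BUDGET_USD", "5.00"))
    spent = total_spend(LEDGER)
    if spent >= threshold:
        # Exit code 2 from a PreToolUse hook blocks the tool call;
        # whatever we print to stderr is shown to the model as the reason.
        print(f"Cost Guard: ${spent:.2f} spent >= ${threshold:.2f} budget",
              file=sys.stderr)
        return 2
    return 0

# As a hook script you would end with: sys.exit(main())
```

Register the script in `.claude/settings.json` under `hooks.PreToolUse` so it runs on every tool call rather than on demand.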

How to wire these in Claude Code

Each agent lives as a Markdown file at .claude/agents/<agent-name>.md. The file format is straightforward:

---
name: reviewer
description: Reviews code diffs for correctness and edge cases
---

Review the diff for correctness, readability, and edge cases — do not rewrite, only comment.

The orchestrator (your main CLAUDE.md session) references agents by name. Per Anthropic's documentation, the orchestrator "delegates tasks to sub-agents" automatically when the description matches the context, or you can invoke one explicitly by asking for it by name ("use the reviewer sub-agent to check this diff"); the /agents command itself opens the interface for creating and editing agent files.

A few wiring rules that save you from the common failure modes:
  • One job per agent, stated in a single sentence, plus an explicit "do not" clause. This is what keeps scope creep out.
  • Write each description field precisely; it is what the orchestrator matches against when deciding whether to delegate.
  • Give an agent only the tools its job needs. A report-only agent has no business holding write access.
  • Keep Cost Guard always on; every other agent fires on demand.

What this setup costs per working session

Running all 10 agents across a full feature build — architecture through security review — costs roughly 80,000–140,000 tokens on a medium-complexity feature with Claude Sonnet. At current API rates that's approximately $0.40–$0.70 per feature cycle. The variance comes almost entirely from how tightly the individual agent prompts are scoped. A Tester with a vague prompt that decides to also check adjacent modules can triple its token usage on a single run.
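The dollar range follows from the token range under an assumed rate card. The sketch below uses Sonnet rates of $3 per million input tokens and $15 per million output tokens, and an assumed 85/15 input/output split; all three numbers are assumptions you should replace with your own:

```python
# Back-of-envelope check of the per-feature cost range. The rates and the
# input/output split are assumptions, not quoted pricing.
INPUT_RATE = 3.00 / 1_000_000    # USD per input token (assumed)
OUTPUT_RATE = 15.00 / 1_000_000  # USD per output token (assumed)
INPUT_SHARE = 0.85               # assumed fraction of tokens that are input


def feature_cost(total_tokens: int) -> float:
    """Blended USD cost for a feature cycle of `total_tokens` tokens."""
    blended = INPUT_SHARE * INPUT_RATE + (1 - INPUT_SHARE) * OUTPUT_RATE
    return total_tokens * blended


low, high = feature_cost(80_000), feature_cost(140_000)
# low is roughly 0.38 and high roughly 0.67, in line with $0.40-$0.70
```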

The Agents Pack includes a per-agent token budget recommendation based on what we've seen in production. Cost Guard enforces it.

The one thing most sub-agent setups get wrong

Most developers build agents that can do anything the orchestrator can do, then wonder why the orchestrator isn't actually delegating. The fix is counterintuitive: make each agent less capable, not more. An agent that can only read files and report findings will be called for exactly that. An agent with full tool access gets treated as a backup orchestrator, which means it handles things it shouldn't, consumes more context, and produces noisier output.

Narrow tools get used precisely. Wide tools get used hopefully. Every agent above has one job, stated in a single sentence.
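The narrowing happens in the agent file itself. Claude Code's agent frontmatter supports an optional tools field listing the tools the agent may use; when it is omitted, the agent inherits everything the orchestrator has. A read-only Reviewer, for instance, might look like this (the exact "do not" lines are illustrative):

```markdown
---
name: reviewer
description: Reviews code diffs for correctness and edge cases
tools: Read, Grep, Glob
---

Review the diff for correctness, readability, and edge cases — do not
rewrite, only comment. Do not open files outside the diff. Do not
suggest refactors of untouched code.
```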

Need a custom team built for your codebase?

If your repo has constraints that don't fit a generic pack — specific frameworks, compliance requirements, a monorepo with unusual patterns — we'll build a custom agent team for your stack. One working session, we design and wire the agents together, and you get the files at the end.

Book a custom session — $149 →

Further reading