# Hierarchical
The Hierarchical pattern places a coordinator above a set of specialists. The coordinator delegates work, the researcher returns facts to the coordinator, and the writer is the terminal step that turns those facts into the final summary; its reply closes the workflow.
Classic (non-beta) primitives: `NestedChat` for sub-flows, `GroupManager` for delegation, top-level `DefaultPattern` for the coordinator graph.
## Key Characteristics
- **Coordinator owns dispatch.** The coordinator decides whether to call the researcher or the writer; it never replies as a specialist.
- **Researcher returns to coord.** The researcher's reply routes back via `FromSpeaker(researcher) → AgentTarget(coord)` so the coordinator can decide the next move.
- **Writer is the terminal speaker.** A `FromSpeaker(writer) → TerminateTarget("written")` rule closes the workflow as soon as the writer's summary lands. There is no separate "finish" tool.
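The three routing rules above can be sketched as a plain speaker-to-target map. This is a framework-independent simulation: `AgentTarget` and `TerminateTarget` are stand-in dataclasses named after the primitives in the text, not the real framework's classes.

```python
from dataclasses import dataclass

# Stand-ins for the routing primitives named above (sketch only; the
# real framework's classes will differ).
@dataclass(frozen=True)
class AgentTarget:
    name: str

@dataclass(frozen=True)
class TerminateTarget:
    reason: str

# Graph rules as a speaker -> target map: FromSpeaker(x) -> target.
RULES = {
    "intake": AgentTarget("coord"),        # kickoff goes to the coordinator
    "researcher": AgentTarget("coord"),    # facts flow back to the coordinator
    "writer": TerminateTarget("written"),  # writer's reply closes the workflow
}

def route(speaker: str):
    """Resolve the next hop for a finished speaker, or None if the
    speaker (here: coord) routes via a typed Handoff instead."""
    return RULES.get(speaker)

print(route("researcher"))  # AgentTarget(name='coord')
print(route("writer"))      # TerminateTarget(reason='written')
print(route("coord"))       # None: coord routes via Handoff from its tools
```

Note that `coord` deliberately has no entry: its next hop is decided at runtime by the `Handoff` its delegate tools return, which is the subject of the next section.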
## Routing Mechanics
- **Typed `Handoff` return.** Each `delegate_<spoke>` tool returns `Handoff(target="researcher", reason=...)`. The framework reads it from the agent's local `ToolResultEvent` stream after the round and stamps it onto the packet's `routing.target`. No graph rule is needed for the delegation edge; `Handoff.target` is authoritative.
- **No `finish_delegate` tool.** Earlier iterations of this demo had a `finish_delegate` tool that flipped a context flag to terminate. Real LLMs (Sonnet, GPT, Gemini) routinely emit several tool calls in parallel inside one round; a parallel call to `finish_delegate` alongside `delegate_researcher` would set the flag before the researcher had a chance to speak, terminating the workflow prematurely. Making the writer the terminal speaker sidesteps this hazard entirely: parallel calls to `delegate_researcher` and `delegate_writer` are safe because first-emitted-wins picks the researcher (the other tool runs, but its `Handoff` doesn't drive routing), and the writer step still runs in its own dedicated round once the researcher returns.
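The first-emitted-wins rule above can be illustrated with a few lines of standalone Python. The `Handoff` dataclass here is a hypothetical stand-in for the framework's type; the point is only the selection logic when a round produces multiple handoffs.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the framework's Handoff type.
@dataclass(frozen=True)
class Handoff:
    target: str
    reason: str = ""

def winning_handoff(tool_results):
    """First-emitted-wins: scan the round's tool results in emission
    order and let the first Handoff drive routing. Later tools still
    ran and produced results, but their targets are ignored."""
    for result in tool_results:
        if isinstance(result, Handoff):
            return result
    return None

# A round where the model called both delegate tools in parallel:
round_results = [
    Handoff(target="researcher", reason="gather facts first"),
    Handoff(target="writer", reason="emitted in the same round"),
]
print(winning_handoff(round_results).target)  # researcher
```

This is why the parallel-call hazard is benign for delegation but fatal for a flag-flipping `finish_delegate`: a `Handoff` that loses the race is simply ignored, whereas a context flag, once set, survives the round.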
## Agent Flow
```mermaid
sequenceDiagram
    participant Intake as intake
    participant Coord as coord
    participant Researcher as researcher
    participant Writer as writer
    Intake->>Coord: kickoff (FromSpeaker → AgentTarget)
    Coord->>Researcher: Handoff(target="researcher")
    Researcher->>Coord: facts (FromSpeaker → AgentTarget)
    Coord->>Writer: Handoff(target="writer")
    Writer-->>Writer: 1-line summary
    Note over Writer: FromSpeaker(writer) → TerminateTarget("written")
```

## Migration Notes
| Classic | Beta |
|---|---|
| `ReplyResult(target=AgentTarget(researcher))` from a delegate tool | Return `Handoff(target="researcher", reason=...)` |
| `NestedChat` for a specialist that runs its own sub-flow | Specialist tool opens a separate consulting session |
| Explicit "finish" tool flipping a context flag | Make the terminal specialist's reply itself close via `FromSpeaker(writer) → TerminateTarget` |
## Gaps & Workarounds

- **No `NestedChatTarget`.** A specialist that needs its own sub-flow can't open a "child workflow" inline. Workaround: the specialist's tool opens a separate session (e.g. a `consulting` session via `AgentClient.open(...)`), runs the sub-conversation there, and returns the result to the coordinator session via its reply. Two sessions, one per nesting level: a clean WAL per nesting level, but the affordance isn't built in.
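The two-session workaround can be sketched as follows. `Session` here is a hypothetical in-memory stand-in: the real `AgentClient.open(...)` API is assumed from the text above, not verified, and the `log` attribute merely plays the role of the per-session WAL.

```python
# Hypothetical in-memory stand-in for a session; the real
# AgentClient.open(...) API is assumed, not verified.
class Session:
    def __init__(self, name: str):
        self.name = name
        self.log = []  # plays the role of this session's WAL

    def send(self, msg: str) -> str:
        self.log.append(msg)
        return f"[{self.name}] handled: {msg}"

def consult_tool(question: str) -> str:
    """Specialist tool: open a *separate* consulting session, run the
    sub-conversation there, and return only the final answer. Each
    nesting level gets its own session, hence its own WAL."""
    consulting = Session("consulting")  # child session, child WAL
    return consulting.send(question)    # only the reply crosses back

coordinator = Session("coordinator")
coordinator.send(consult_tool("estimate Raft quorum sizes"))
print(len(coordinator.log))  # 1: the sub-conversation never touched it
```

The property being simulated is isolation: the parent session's log records one entry (the specialist's reply), while the entire sub-conversation lives in the child session.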
## Code

> **Tip:** Coord, researcher, and writer all use real Sonnet: the coordinator genuinely decides between researcher and writer, and the writer's summary is a real LLM output.
## Output

```text
session: 9b7c...
intake: Brief on distributed consensus: research, then write a 1-line summary.
[tool] delegate_researcher('Gather bullet facts about distributed consensus...')
coord: [Handed off via delegate_researcher] Gather bullet facts about distributed consensus...
researcher: Distributed consensus enables networked nodes to agree on a single value or state despite failures.
  - Key algorithms: Paxos, Raft, PBFT
  - Core trade-offs: CAP theorem, FLP impossibility
  - Real-world uses: ZooKeeper, etcd, blockchain protocols
[tool] delegate_writer('Write a 1-line summary covering algorithms, trade-offs, and uses.')
coord: [Handed off via delegate_writer] Write a 1-line summary covering algorithms, trade-offs, and uses.
writer: Distributed consensus algorithms — Paxos, Raft, PBFT — let networked nodes agree on shared state despite faults, trading consistency against availability in real systems like etcd and blockchains.
closed: reason='written'
```