# Triage with Tasks
The Triage with Tasks pattern breaks a complex request into typed tasks (research → writing → review). Each task type routes to a specialist; tasks process sequentially, respecting prerequisite ordering. A triage agent up front produces the plan that downstream specialists work from.
Classic (non-beta) primitives: `DefaultPattern`, `OnContextCondition` checking `current_task_type`, `ReplyResult` advancing the task index.
## Key Characteristics
- **Triage produces a plan.** The triage agent's only job is to write a 2-3 sentence plan naming the three tasks and what each will produce for *this* specific request. Downstream specialists read the plan as their brief.
- **Sequence then synthesises.** This demo uses `TransitionGraph.sequence`, a fixed pipeline of triage → researcher → writer → reviewer. Each specialist sees the full prior conversation via the windowed view. `knobs["context_vars"]` seeds state at session creation. Any tool or middleware can read it via `SessionStateInject`; transitions can route on it via `ContextEquals`. The fixed sequence here doesn't need to read it for routing; it's there to demonstrate that session-scoped state survives the entire run (see the sketch after this list).
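A minimal sketch of that seeding, using the primitives this section names; the exact call shapes around them (`Session.open`, the tool signature) are assumptions, not the demo's code:

```python
graph = TransitionGraph.sequence([intake, triage, researcher, writer, reviewer])

session = Session.open(  # assumed entry point: "knobs['context_vars'] on session.open(...)"
    graph=graph,
    knobs={"context_vars": {
        "pending_tasks": ["research", "writing", "review"],
        "completed_tasks": [],
        "request_kind": "brief",
    }},
)

# Any tool or middleware can read the same session-scoped state via injection:
def note_progress(completed=SessionStateInject("completed_tasks")):
    # Hypothetical tool; the injected parameter sees the live session value.
    return f"completed so far: {completed}"
```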
## Routing Mechanics
There is no routing tool in this demo; every step is a plain `FromSpeaker(a) → AgentTarget(b)` rule wired by `TransitionGraph.sequence([...])`. The plan is a single triage output that the windowed view propagates to every subsequent specialist.
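Spelled out, the sequence desugars to something like the following; the list-of-pairs container is illustrative, while the rule constructors are the ones this page uses:

```python
# What TransitionGraph.sequence([intake, triage, researcher, writer, reviewer]) wires up:
rules = [
    (FromSpeaker(intake),     AgentTarget(triage)),
    (FromSpeaker(triage),     AgentTarget(researcher)),
    (FromSpeaker(researcher), AgentTarget(writer)),
    (FromSpeaker(writer),     AgentTarget(reviewer)),
    (FromSpeaker(reviewer),   TerminateTarget("sequence_complete")),
]
```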
**Sequence variant vs. dynamic queue**
The runnable demo uses the simpler `TransitionGraph.sequence` variant: triage produces a real LLM-generated plan, then the sequence executes researcher → writer → reviewer deterministically.
The dynamic version (triage advances via an `advance_task` tool that pops the next task type from a queue, with `ContextEquals("current_task_type", ...)` per branch) has a sharp edge today: parallel-tool-calling LLMs can fire `advance_task` multiple times in one triage turn, each call mutating the queue before the previous handoff has locked the speaker. The first dispatch wins; the others corrupt state. The clean fix is either `disable_parallel_tool_use` at the model layer (not yet exposed via `AnthropicConfig`) or a compare-and-swap on the queue. In the meantime, a sequence graph trades the dynamic queue for determinism.
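A sketch of what the compare-and-swap guard could look like, assuming `get_context` as the read twin of `set_context`; a real state store would have to perform the compare and the writes atomically for this to be fully race-free:

```python
def pop_next_task(session, observed: list[str]) -> bool:
    # Commit the pop only if the queue is unchanged since `observed` was read,
    # so the second of two parallel advance_task calls becomes a no-op instead
    # of a double-pop.
    if get_context(session, "pending_tasks") != observed:
        return False  # another call already advanced the queue
    if not observed:
        set_context(session, "all_done", True)  # empty queue -> terminate branch
        return True
    set_context(session, "current_task_type", observed[0])
    set_context(session, "pending_tasks", observed[1:])
    return True
```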
## Agent Flow
```mermaid
sequenceDiagram
participant Intake as intake
participant Triage as triage
participant Researcher as researcher
participant Writer as writer
participant Reviewer as reviewer
Intake->>Triage: kickoff (FromSpeaker → AgentTarget)
Triage->>Researcher: 2-3 sentence plan
Researcher->>Writer: short paragraph of factual research
Writer->>Reviewer: deliverable (brief / summary / draft)
Reviewer->>Intake: review notes
Note over Intake,Reviewer: TerminateTarget("sequence_complete") fires after reviewer's reply
```

## Migration Notes
| Classic | Beta |
|---|---|
| `ReplyResult(context_variables={"current_task_type": ..., "pending_tasks": [...]})` | `set_context(session, key, value)` per field (dynamic variant) |
| `OnContextCondition` per task type | `ContextEquals("current_task_type", <type>)` per branch (dynamic variant) |
| Initial `ContextVariables(data=...)` passed to the pattern | `knobs["context_vars"]` on `session.open(...)` |
## Dynamic queue variant (production pattern)
In the dynamic variant, the triage agent owns a list of pending tasks in context. Its tool either pops the next task type into `current_task_type` or flips `all_done=True` when the list is empty, both via `set_context`.
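A minimal sketch of that tool, again assuming `get_context` as the read counterpart of `set_context`; the tool name `advance_task` is the one used above:

```python
def advance_task(session) -> str:
    # Triage calls this between tasks. A sketch, not the original listing.
    pending = get_context(session, "pending_tasks")  # assumed read accessor
    if not pending:
        set_context(session, "all_done", True)       # empty queue -> terminate branch
        return "all tasks complete"
    set_context(session, "current_task_type", pending[0])  # pop the head...
    set_context(session, "pending_tasks", pending[1:])     # ...and shrink the queue
    return f"next task: {pending[0]}"
```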
The matching graph routes per task type and terminates when the queue is empty.
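A sketch of that routing table, reusing the rule names from this page; how a `ContextEquals` condition attaches to a speaker/target pair, and the termination reason string, are assumptions:

```python
rules = [
    # Triage fans out on whatever advance_task just wrote into context.
    (FromSpeaker(triage), ContextEquals("current_task_type", "research"), AgentTarget(researcher)),
    (FromSpeaker(triage), ContextEquals("current_task_type", "writing"),  AgentTarget(writer)),
    (FromSpeaker(triage), ContextEquals("current_task_type", "review"),   AgentTarget(reviewer)),
    # Each specialist hands back to triage, which pops the next task.
    (FromSpeaker(researcher), AgentTarget(triage)),
    (FromSpeaker(writer),     AgentTarget(triage)),
    (FromSpeaker(reviewer),   AgentTarget(triage)),
    # When advance_task flips the flag, the run terminates.
    (FromSpeaker(triage), ContextEquals("all_done", True), TerminateTarget("queue_empty")),
]
```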
The sequence variant below is the runnable demo.
## Code
> **Tip:** Real Sonnet on every agent; triage produces an actual plan and each specialist does real domain work.
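In place of the full listing (about 173 lines), a condensed sketch of the demo's shape. The `Agent` constructor, `Session.open`, and `session.run` are assumptions; the graph order, the `knobs["context_vars"]` payload, and the termination reason match what this page shows:

```python
MODEL = "sonnet"  # placeholder id; the demo runs real Sonnet on every agent

# Agent(...) is an assumed constructor, standing in for the framework's own.
prompts = {
    "triage":     "Write a 2-3 sentence plan naming the three tasks for this request.",
    "researcher": "Produce a short paragraph of factual research for the plan.",
    "writer":     "Produce the deliverable (brief / summary / draft) from the research.",
    "reviewer":   "Review the deliverable for accuracy and readability.",
}
agents = {name: Agent(name=name, model=MODEL, system=text) for name, text in prompts.items()}
intake = Agent(name="intake", model=MODEL, system="Relay the user's request verbatim.")

graph = TransitionGraph.sequence(
    [intake, agents["triage"], agents["researcher"], agents["writer"], agents["reviewer"]]
)

session = Session.open(  # assumed entry point; knobs payload matches the output below
    graph=graph,
    knobs={"context_vars": {
        "pending_tasks": ["research", "writing", "review"],
        "completed_tasks": [],
        "request_kind": "brief",
    }},
)
result = session.run("Write a 3-sentence brief on distributed consensus.")
print(f"closed: reason={result.reason!r}")  # -> closed: reason='sequence_complete'
```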
## Output
```text
session: 6b1a...
initial context_vars: {'pending_tasks': ['research', 'writing', 'review'], 'completed_tasks': [], 'request_kind': 'brief'}
intake: Write a 3-sentence brief on distributed consensus.
triage: Research will gather the canonical protocols (Paxos, Raft) and the FLP impossibility result. Writing will produce a 3-sentence brief defining distributed consensus, naming the protocols, and stating the safety/liveness trade-off. Review will check that the brief is technically accurate and reads naturally.
researcher: Distributed consensus protocols allow a set of unreliable nodes to agree on a single value despite failures and message loss. Paxos and Raft are the dominant safety-first protocols; Raft is widely deployed (etcd, Consul) for its leader-based simplicity, while Paxos underpins systems like Google Chubby. The FLP impossibility theorem proves that no asynchronous protocol can guarantee both safety and liveness without timing assumptions, so all production protocols rely on partial synchrony or randomness.
writer: Distributed consensus is the problem of getting a fleet of unreliable nodes to agree on one value despite failures. Paxos and Raft are the canonical protocols — Raft is favoured for its leader-based simplicity and powers etcd and Consul; Paxos predates it and underpins systems like Chubby. The FLP impossibility theorem is the price of asynchrony: no protocol can guarantee both safety and liveness without timing assumptions, so production systems rely on partial synchrony.
reviewer: The brief is technically accurate and well-paced — naming Paxos / Raft and citing etcd / Consul / Chubby gives it the right level of detail for a 3-sentence summary. The closing FLP sentence is dense; consider splitting "the price of asynchrony" off as a short framing phrase so the impossibility result lands with one clear takeaway. Otherwise it's ship-ready.
closed: reason='sequence_complete'
final context_vars: {'pending_tasks': ['research', 'writing', 'review'], 'completed_tasks': [], 'request_kind': 'brief'}
```