# Feedback Loop
The Feedback Loop pattern alternates a drafter and a reviewer until the reviewer flips a `done` flag in context, or `max_turns` fires as the safety cap. The drafter writes, the reviewer either approves or gives concrete feedback for revision, the drafter incorporates it, and the reviewer re-evaluates, until satisfied.

Classic (non-beta) primitives: `DefaultPattern`, an `OnContextCondition` checking an `iteration_needed` flag, `ReplyResult` updating that flag, and `max_round`.
## Key Characteristics
- **Two termination paths.**
    - Happy path: `ContextEquals("done", True)` → `TerminateTarget("approved")`, fired when the reviewer's `approve` tool flips the flag.
    - Safety belt: `default_target=TerminateTarget("max_iterations")` paired with `max_turns=10`.
- **Intake agent owns kickoff.** A separate `intake` agent (using `TestConfig`) owns the initial topic message so `drafter` is the first agent the graph hands control to (rather than seeing its own kickoff and bouncing straight to the reviewer).
## Routing Mechanics
Under the packet execution model, `approve` uses `set_context` to flip `done=True`. The reviewer's reply body (containing the "APPROVED" note) flows naturally through the packet's `body` field on the same turn: when the packet folds, `ContextEquals("done", True)` matches the just-set value and the session terminates with reason `'approved'`.

If the reviewer instead just writes feedback as text (no `approve` call), the loop continues: `drafter` is the next speaker via `FromSpeaker(reviewer)` → `AgentTarget(drafter)`. `max_turns=10` is the safety cap.
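The routing decision above can be condensed into a toy routing table. The names here are hypothetical stand-ins for the framework's `FromSpeaker`/`AgentTarget` wiring, not its real API.

```python
# Toy routing table for the loop edge; illustrative only.
def next_target(last_speaker: str, context: dict) -> str:
    # Termination is checked before routing: the approve tool set done=True.
    if context.get("done") is True:
        return "terminate:approved"
    # FromSpeaker(reviewer) -> AgentTarget(drafter): feedback loops back.
    routes = {"intake": "drafter", "drafter": "reviewer", "reviewer": "drafter"}
    return routes[last_speaker]
```

With no `approve` call, a reviewer turn always routes back to the drafter, which is what makes the loop run until approval or the turn cap.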
## Agent Flow
```mermaid
sequenceDiagram
    participant Intake as intake
    participant Drafter as drafter
    participant Reviewer as reviewer
    Intake->>Drafter: topic (FromSpeaker → AgentTarget)
    loop until reviewer approves or max_turns
        Drafter->>Reviewer: paragraph
        alt good enough
            Reviewer->>Reviewer: approve(reason); set_context("done", True); writes approval body
            Note over Reviewer: ContextEquals("done", True) → TerminateTarget("approved")
        else needs work
            Reviewer->>Drafter: feedback (FromSpeaker → AgentTarget)
        end
    end
```

## Migration Notes
| Classic | Beta |
|---|---|
| `ReplyResult(context_variables={"iteration_needed": False})` | `await set_context(session, "done", True)` |
| `ExpressionContextCondition("not iteration_needed")` | `ContextEquals("done", True)` |
| `max_round=12` on the pattern | `max_turns=12` on the graph |
## Code
> **Tip**
> Drafter and reviewer both use real Sonnet: the reviewer genuinely decides when the draft is good enough. The reviewer's prompt enforces three objective structural rules (sentence count, last-token type, opening sentence length) so the feedback loop reliably iterates 1–2 times before approval rather than approving on round 1. The reviewer is instructed to approve by the fourth round at the latest to avoid burning budget.
## Output
A typical run iterates 1–2 times before approval — the structural rules (especially the last-token-must-be-a-number rule) reliably catch first-draft paragraphs and force a revision:
```
session: 5d3a...
intake: Topic: write the opening paragraph of a blog post explaining why distributed systems are hard.
drafter: Distributed systems are hard because they break the comfortable assumptions you can lean on. Networks fail in subtle ways, clocks drift, machines crash mid-operation, and there is no single source of truth. The result is a class of problems that look easy on a whiteboard but become surprisingly difficult in production.
reviewer: Rule (b) fails — the final sentence's last token is 'production', not a concrete number, percentage, or year. Please revise so the closing sentence ends on a concrete metric (e.g. an outage statistic or year of a famous incident).
drafter: Distributed systems are hard because they break single-process assumptions. Networks fail, clocks drift, machines crash, and no node is the single source of truth — only a tangle of messages spread across nodes that may or may not agree. Surveys consistently show that distributed-systems bugs cost the median engineering team upwards of 30%.
[tool] approve("All three rules satisfied: 3 sentences, opening ≤ 12 words, final token is '30%' (a concrete percentage).")
reviewer: APPROVED — all three structural rules now hold.
closed: reason='approved'
```