Feedback Loop

The Feedback Loop pattern alternates a drafter and a reviewer until the reviewer flips a done flag in context, or max_turns fires as the safety cap. The drafter writes; the reviewer either approves or returns concrete feedback; the drafter incorporates it; the reviewer re-evaluates, and so on until satisfied.

Classic (non-beta) primitives: DefaultPattern, OnContextCondition checking an iteration_needed flag, ReplyResult updating that flag, max_round.

Key Characteristics#

  • Two termination paths.
    • Happy path: ContextEquals("done", True) → TerminateTarget("approved"), fired when the reviewer's approve tool flips the flag.
    • Safety belt: default_target=TerminateTarget("max_iterations") paired with max_turns=10.
  • Intake agent owns kickoff. A separate intake agent (using TestConfig) posts the initial topic message, so the drafter is the first agent the graph hands control to (rather than seeing its own kickoff message and bouncing straight to the reviewer).

Routing Mechanics#

Under the packet execution model, approve uses set_context to flip done=True. The reviewer's reply body (containing the "APPROVED" note) flows naturally through the packet's body field on the same turn — when the packet folds, ContextEquals(done, True) matches the just-set value and the session terminates with reason 'approved'.

If the reviewer instead just writes feedback as text (no approve call), the loop continues — drafter is the next speaker via FromSpeaker(reviewer) → AgentTarget(drafter). max_turns=10 is the safety cap.
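Library-free, the rule order reduces to a first-match-wins lookup with a turn cap. `resolve_next` and `ROUTES` below are illustrative names, not autogen.beta API:

```python
# Illustrative sketch of the transition list: first-match wins,
# with max_turns as the safety cap.
ROUTES = {"intake": "drafter", "drafter": "reviewer", "reviewer": "drafter"}

def resolve_next(context: dict, speaker: str, turn: int, max_turns: int = 10) -> str:
    if context.get("done") is True:   # ContextEquals("done", True) rule
        return "terminate:approved"
    if turn >= max_turns:             # safety cap -> default_target
        return "terminate:max_iterations"
    # The FromSpeaker(x) -> AgentTarget(y) pairs, checked in order.
    return ROUTES.get(speaker, "terminate:max_iterations")
```

So a reviewer turn with no approve call routes straight back to the drafter, and the done flag short-circuits everything else.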

Agent Flow#

sequenceDiagram
    participant Intake as intake
    participant Drafter as drafter
    participant Reviewer as reviewer

    Intake->>Drafter: topic (FromSpeaker → AgentTarget)
    loop until reviewer approves or max_turns
        Drafter->>Reviewer: paragraph
        alt good enough
            Reviewer->>Reviewer: approve(reason); set_context("done", True); writes approval body
            Note over Reviewer: ContextEquals("done", True) → TerminateTarget("approved")
        else needs work
            Reviewer->>Drafter: feedback (FromSpeaker → AgentTarget)
        end
    end

Migration Notes#

| Classic | Beta |
| ------- | ---- |
| `ReplyResult(context_variables={"iteration_needed": False})` | `await set_context(session, "done", True)` |
| `ExpressionContextCondition("not iteration_needed")` | `ContextEquals("done", True)` |
| `max_round=12` on the pattern | `max_turns=12` on the graph |

Code#

Tip

Drafter and reviewer both use real Sonnet — the reviewer genuinely decides when the draft is good enough. The reviewer's prompt enforces three objective structural rules (sentence count, last-token type, opening-sentence length), so the feedback loop reliably iterates 1–2 times before approval rather than approving on round 1. The prompt also caps approval at the 4th round to avoid burning budget.
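The three rules are mechanical enough to sketch as a plain checker. This is illustrative only — the real reviewer is an LLM, and `check_rules` with its rough regexes is an assumption, not part of the cookbook (the prompt's "30 seconds" example, for instance, would need a looser pattern for rule (b)):

```python
import re

def check_rules(paragraph: str) -> list[str]:
    """Return the failing rules; an empty list means the paragraph passes."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", paragraph.strip()) if s]
    failures = []
    if len(sentences) != 3:                                # rule (a)
        failures.append(f"(a) {len(sentences)} sentences, expected 3")
    if sentences:
        last = sentences[-1].rstrip(".!?").split()[-1]
        if not re.fullmatch(r"\d[\d,.]*%?", last):         # rule (b), roughly
            failures.append(f"(b) final token {last!r} is not a number")
        if len(sentences[0].split()) > 12:                 # rule (c)
            failures.append("(c) opening sentence exceeds 12 words")
    return failures
```

A paragraph like "Clocks drift. Networks partition and machines crash mid-write. One study pegged the cost at 62%." passes all three; one ending in "production" trips rule (b), which is exactly the revision the sample run below forces.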

"""Cookbook 06 — Feedback Loop pattern.

Drafter and reviewer alternate until the reviewer flips ``done=True``
in context (via the ``approve`` tool), or ``max_turns`` fires as the
safety cap.
"""

import asyncio

from dotenv import load_dotenv

from autogen.beta import Agent
from autogen.beta.config import AnthropicConfig
from autogen.beta.knowledge import MemoryKnowledgeStore
from autogen.beta.network import (
    EV_PACKET,
    EV_SESSION_CLOSED,
    EV_TEXT,
    WORKFLOW_TYPE,
    AgentTarget,
    ContextEquals,
    FromSpeaker,
    Hub,
    HubClient,
    LocalLink,
    Passport,
    Resume,
    SessionInject,
    TerminateTarget,
    Transition,
    TransitionGraph,
)
from autogen.beta.network.workflow_helpers import set_context
from autogen.beta.testing import TestConfig

load_dotenv()

async def approve(reason: str, session: SessionInject) -> str:
    """Mark the draft approved by setting done=True in context.
    The graph's ContextEquals(done, True) rule terminates the
    workflow; the reviewer can also write a short approval note in
    its reply body on the same turn — when the packet folds, the
    just-set done flag triggers terminate."""
    if session is None:
        return "no session"
    print(f"  [tool] approve({reason!r})")
    await set_context(session, "done", True)
    return f"approved: {reason}"

async def main() -> None:
    config = AnthropicConfig(model="claude-sonnet-4-6")

    hub = await Hub.open(MemoryKnowledgeStore(), ttl_sweep_interval=0)
    link = LocalLink(hub)

    intake_hc = HubClient(link, hub=hub)
    drafter_hc = HubClient(link, hub=hub)
    reviewer_hc = HubClient(link, hub=hub)

    intake_agent = Agent("intake", config=TestConfig())

    drafter_agent = Agent(
        "drafter",
        prompt=(
            "You are the drafter. The user gives a topic; you write "
            "an opening paragraph. On subsequent turns, the reviewer "
            "will give feedback as their reply — incorporate it and "
            "post a revised draft.\n"
            "\n"
            "Reply with ONE paragraph. No preamble, no headers, no "
            "labels — just the paragraph."
        ),
        config=config,
    )

    reviewer_agent = Agent(
        "reviewer",
        prompt=(
            "You are the reviewer. The drafter posts a paragraph. The "
            "house style requires that an opening paragraph satisfies "
            "ALL of these objective rules:\n"
            "\n"
            "  (a) exactly 3 sentences (count the periods);\n"
            "  (b) the final sentence's LAST TOKEN is a concrete "
            "number, percentage, or year (e.g. '62%', '2019', "
            "'30 seconds');\n"
            "  (c) opens with a single short sentence (≤ 12 words).\n"
            "\n"
            "Each turn:\n"
            "\n"
            "1. If the paragraph satisfies all three rules, call the "
            "`approve(reason)` tool with a one-sentence reason citing "
            "the rules. This terminates the workflow.\n"
            "\n"
            "2. If even one rule fails, DO NOT call `approve`. Reply "
            "with feedback that names the specific failing rule(s) so "
            "the drafter can revise. Be precise — don't editorialise "
            "on style, only on the rules above.\n"
            "\n"
            "Approve no later than the 4th round to avoid burning "
            "budget."
        ),
        config=config,
    )
    reviewer_agent.tool(approve)

    intake = await intake_hc.register(intake_agent, Passport(name="intake"), Resume())
    drafter = await drafter_hc.register(drafter_agent, Passport(name="drafter"), Resume())
    reviewer = await reviewer_hc.register(reviewer_agent, Passport(name="reviewer"), Resume())

    graph = TransitionGraph(
        initial_speaker=intake.agent_id,
        transitions=[
            # done=True flips the loop into terminate.
            Transition(when=ContextEquals("done", value=True), then=TerminateTarget("approved")),
            # intake kicks off → drafter.
            Transition(when=FromSpeaker(intake.agent_id),   then=AgentTarget(drafter.agent_id)),
            # Otherwise alternate.
            Transition(when=FromSpeaker(drafter.agent_id),  then=AgentTarget(reviewer.agent_id)),
            Transition(when=FromSpeaker(reviewer.agent_id), then=AgentTarget(drafter.agent_id)),
        ],
        default_target=TerminateTarget("max_iterations"),
        max_turns=10,
    )

    session = await intake.open(
        type=WORKFLOW_TYPE,
        target=[drafter.agent_id, reviewer.agent_id],
        knobs={"graph": graph.to_dict()},
    )
    print(f"session: {session.session_id}\n")

    name_by_id = {
        intake.agent_id: "intake",
        drafter.agent_id: "drafter",
        reviewer.agent_id: "reviewer",
    }

    await session.send(
        "Topic: write the opening paragraph of a blog post explaining "
        "why distributed systems are hard."
    )

    # Wait for the workflow to terminate (any of the five close routes
    # documented in /docs/beta/network/termination — this demo uses
    # ContextEquals("done", True) → TerminateTarget("approved")).
    close_env = await intake_hc.wait_for_session_event(
        session_id=session.session_id,
        predicate=lambda e: e.event_type == EV_SESSION_CLOSED,
        timeout=360.0,
    )

    # Print the transcript from the WAL after close.
    for env in await hub.read_wal(session.session_id):
        speaker = name_by_id.get(env.sender_id, env.sender_id[:8])
        if env.event_type == EV_TEXT:
            print(f"{speaker:>14}: {env.event_data['text']}")
        elif env.event_type == EV_PACKET:
            routing = env.event_data.get("routing", {}) or {}
            if routing.get("kind") == "handoff":
                line = f"[Handed off via {routing.get('tool', '')}] {routing.get('reason', '')}"
                print(f"{speaker:>14}: {line.rstrip()}")
            body = env.event_data.get("body", "")
            if body:
                print(f"{speaker:>14}: {body}")

    print(f"\nclosed: reason={close_env.event_data.get('reason')!r}")

    await intake_hc.close()
    await drafter_hc.close()
    await reviewer_hc.close()
    await hub.close()

if __name__ == "__main__":
    asyncio.run(main())

Output#

A typical run iterates 1–2 times before approval — the structural rules (especially the last-token-must-be-a-number rule) reliably catch first-draft paragraphs and force a revision:

session: 5d3a...

         intake: Topic: write the opening paragraph of a blog post explaining why distributed systems are hard.
        drafter: Distributed systems are hard because they break the comfortable assumptions you can lean on. Networks fail in subtle ways, clocks drift, machines crash mid-operation, and there is no single source of truth. The result is a class of problems that look easy on a whiteboard but become surprisingly difficult in production.
       reviewer: Rule (b) fails — the final sentence's last token is 'production', not a concrete number, percentage, or year. Please revise so the closing sentence ends on a concrete metric (e.g. an outage statistic or year of a famous incident).
        drafter: Distributed systems are hard because they break single-process assumptions. Networks fail, clocks drift, machines crash, and no node is the single source of truth — only a tangle of messages spread across nodes that may or may not agree. Surveys consistently show that distributed-systems bugs cost the median engineering team upwards of 30%.
  [tool] approve("All three rules satisfied: 3 sentences, opening ≤ 12 words, final token is '30%' (a concrete percentage).")
       reviewer: APPROVED — all three structural rules now hold.

closed: reason='approved'