Discussion

discussion is an N-party round-robin session. Participants speak in a fixed order, cycling indefinitely until you close it. The adapter enforces "wait your turn" via validate_send; the hub's can_send probe lets the default handler skip wasted LLM calls when it isn't this agent's turn.

Shape#

  • Participants: 2+
  • Turn order: Round-robin (creator first, then participants in order)
  • Auto-close: No
  • Termination: Explicit session.close() or TTL
  • Default view: WindowedSummary(recent_n=N*2) (where N = participant count)
  • Default expectations: turn_within(120s, warn), turn_within(600s, hide)
  • Knob: {"ordering": "round_robin"} (the only ordering shipped today)

Lifecycle#

sequenceDiagram
    participant A as alice
    participant H as Hub + DiscussionAdapter
    participant B as bob
    participant C as carol

    A->>H: open(type="discussion", target=[bob, carol], knobs=round_robin)
    H->>B: EV_SESSION_INVITE
    H->>C: EV_SESSION_INVITE
    B->>H: EV_SESSION_INVITE_ACK
    C->>H: EV_SESSION_INVITE_ACK
    H->>A: EV_SESSION_OPENED
    Note over A,C: expected_next_speaker = alice

    A->>H: EV_TEXT (alice 1)
    Note over H: state.expected_next_speaker ← bob
    H->>B: deliver
    H->>C: deliver (probes can_send → false, no LLM)
    B->>H: EV_TEXT (bob 1)
    Note over H: state.expected_next_speaker ← carol
    C->>H: EV_TEXT (carol 1)
    Note over H: state.expected_next_speaker ← alice (cycle)

    Note over A,C: ...continues until close() or TTL
    A->>H: session.close()
    H-->>A: EV_SESSION_CLOSED
    H-->>B: EV_SESSION_CLOSED
    H-->>C: EV_SESSION_CLOSED

The can_send probe lets each default handler skip its LLM call when it's not that participant's turn — see "How Turn Skipping Works" below.

Smallest Example#

from autogen.beta import Agent
from autogen.beta.config import AnthropicConfig
from autogen.beta.knowledge import MemoryKnowledgeStore
from autogen.beta.network import (
    EV_TEXT,
    ORDERING_ROUND_ROBIN,
    Hub,
    HubClient,
    LocalLink,
    Passport,
    Resume,
)

config = AnthropicConfig(model="claude-sonnet-4-6")
hub = await Hub.open(MemoryKnowledgeStore(), ttl_sweep_interval=0)
link = LocalLink(hub)

alice_hc, bob_hc, carol_hc = (HubClient(link, hub=hub) for _ in range(3))

alice = await alice_hc.register(
    Agent("alice", prompt="The optimist. One short sentence.", config=config),
    Passport(name="alice"), Resume(),
)
bob = await bob_hc.register(
    Agent("bob", prompt="The realist. One short sentence.", config=config),
    Passport(name="bob"), Resume(),
)
carol = await carol_hc.register(
    Agent("carol", prompt="The skeptic. One short sentence.", config=config),
    Passport(name="carol"), Resume(),
)

session = await alice.open(
    type="discussion",
    target=[bob.agent_id, carol.agent_id],
    knobs={"ordering": ORDERING_ROUND_ROBIN},
)

await session.send("Topic: should every developer learn Rust?")
# After the kickoff, each agent's default handler responds when can_send
# returns true for them — bob, then carol, then alice again, and so on.

To stop the discussion, cap the message count and call session.close():

await wait_for_text_count(hub, session.session_id, expected=6)
await session.close()
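wait_for_text_count is an app-side helper, not part of the library. A minimal polling sketch, with the hub query abstracted behind a get_count callable (a stand-in for however you count the session's EV_TEXT events against the hub's store):

```python
import asyncio

async def wait_for_count(get_count, expected, poll_interval=0.05, timeout=30.0):
    # Poll get_count() until it reports at least `expected` messages.
    # get_count is an async callable standing in for a real hub/store query.
    loop = asyncio.get_running_loop()
    deadline = loop.time() + timeout
    while True:
        n = await get_count()
        if n >= expected:
            return n
        if loop.time() >= deadline:
            raise TimeoutError(f"saw {n}/{expected} messages before timeout")
        await asyncio.sleep(poll_interval)
```

Polling keeps the cap entirely app-side: the session and adapter stay untouched, and the loop exits as soon as the sixth message lands.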

How Turn Skipping Works#

When alice sends "alice 1", the hub fans out an EV_TEXT to bob and carol. Both default handlers fire in parallel:

  • bob's handler — calls hc.can_send(session_id, bob.agent_id). The adapter says "yes, bob is expected_next_speaker." Handler runs Agent.ask, sends bob's reply.
  • carol's handler — calls hc.can_send(session_id, carol.agent_id). The adapter says "no, expected_next_speaker is bob, not carol." Handler returns without engaging the LLM.

When bob's reply lands, the same fan-out happens. Now expected_next_speaker = carol, so carol's handler engages and bob's skips. No wasted LLM calls.
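The gating can be simulated without any LLM. FakeAdapter and default_handler below are illustrative stand-ins, not the real DiscussionAdapter or default-handler API:

```python
class FakeAdapter:
    """Toy stand-in for the adapter behind the hub's can_send probe."""
    def __init__(self, order):
        self.order = order
        self.expected_next_speaker = order[0]

    def can_send(self, agent_id):
        return agent_id == self.expected_next_speaker

    def advance(self):
        i = self.order.index(self.expected_next_speaker)
        self.expected_next_speaker = self.order[(i + 1) % len(self.order)]

def default_handler(adapter, agent_id, llm_calls):
    # Probe first: if it's not our turn, return without touching the LLM.
    if not adapter.can_send(agent_id):
        return None
    llm_calls.append(agent_id)   # stands in for Agent.ask(...)
    adapter.advance()
    return f"{agent_id}: reply"

adapter = FakeAdapter(["alice", "bob", "carol"])
llm_calls = []
adapter.advance()                              # alice spoke; bob is next
default_handler(adapter, "carol", llm_calls)   # skipped: no LLM call
default_handler(adapter, "bob", llm_calls)     # engages, advances to carol
# llm_calls == ["bob"]
```

Both handlers fire on the fan-out, but only bob's reaches the (simulated) LLM call, mirroring the sequence in the diagram above.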

When to Use#

  • Brainstorms with a fixed cast — three agents debating a topic in turn.
  • Panel discussions where each agent has a static viewpoint.
  • Round-robin reviewers — three reviewers each commenting once per cycle on a draft.

When NOT to Use#

  • Conditional handoffs ("if alice mentions security, hand to the security expert") — use workflow.
  • Two participants only with no order — use conversation.
  • A pipeline where each step happens once — use workflow with TransitionGraph.sequence(...).

Validation Rules#

DiscussionAdapter.validate_send rejects:

  • EV_TEXT from anyone other than state.expected_next_speaker.
  • Sends from non-participants.
  • Sends to a closed session.

Protocol envelopes (EV_SESSION_*, ag2.task.*) bypass the turn check.

State Object#

@dataclass(slots=True)
class DiscussionState:
    participant_order: list[str]
    expected_next_speaker: str
    turn_count: int = 0

Read via hub._adapter_states[session_id]. The order is fixed at create time by sorting participants on Participant.order; round-robin advances by (current_index + 1) % len(participant_order).

Customising the Ordering#

Today only ORDERING_ROUND_ROBIN ships. The knob is knobs={"ordering": "round_robin"}; passing anything else raises at create time. Future orderings (dynamic, weighted) will plug in here without breaking the round-robin contract.
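A sketch of what that create-time check might look like; the constant's string value, the default, and the error type are assumptions, not the library's actual code:

```python
ORDERING_ROUND_ROBIN = "round_robin"   # assumed value of the shipped constant
VALID_ORDERINGS = {ORDERING_ROUND_ROBIN}

def validate_knobs(knobs: dict) -> str:
    # Reject unknown orderings at create time rather than mid-session.
    ordering = knobs.get("ordering", ORDERING_ROUND_ROBIN)
    if ordering not in VALID_ORDERINGS:
        raise ValueError(f"unsupported ordering: {ordering!r}")
    return ordering
```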

Closing#

discussion never auto-closes. The example above caps at 6 messages and calls session.close(), but four other patterns work for discussion too:

  • App-side cap — count turns and call session.close() (canonical, simplest).
  • Agent-side tool — any participant calls a tool that closes the session. See Closing Sessions → Agent-side tool.
  • Custom adapter — subclass DiscussionAdapter to fold turn_count and emit CLOSING at a cap (or switch to workflow with TransitionGraph.round_robin(max_turns=N)).
  • TTL / expectations — safety nets only.

See Closing Sessions for the full picture.