# Iterating Over Agent Runs with AG2
AG2's `run()` and `run_group_chat()` methods execute agent conversations in background threads that run independently of event consumption. This is great for streaming UIs, but it makes it difficult to:

- **Debug step-by-step** - execution races ahead while you're inspecting events
- **Implement human-in-the-loop** - it's hard to pause for approval before each action
- **Build interactive tools** - you can't easily gate execution on user decisions
Run iteration solves this by synchronizing the producer (background thread) and consumer (your code) - the producer blocks after each event until you advance to the next iteration.
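To make the synchronization concrete, here is a minimal standalone sketch of that producer/consumer handshake, built from plain `queue` and `threading` primitives. It is an illustrative model, not AG2's internals: the producer thread hands over one event, then blocks until the consumer advances.

```python
import queue
import threading

def _producer(events_out, advance, stop):
    """Background 'run': emit one event at a time, then wait for the consumer."""
    for event in ["text", "tool_call", "termination"]:
        events_out.put(event)   # hand one event to the consumer
        advance.wait()          # block until the consumer advances
        advance.clear()
        if stop.is_set():       # consumer broke out early
            return
    events_out.put(None)        # sentinel: run finished

def run_iter_sketch():
    events_out = queue.Queue()
    advance = threading.Event()
    stop = threading.Event()
    worker = threading.Thread(target=_producer, args=(events_out, advance, stop), daemon=True)
    worker.start()
    try:
        while (event := events_out.get()) is not None:
            yield event         # consumer inspects the event here...
            advance.set()       # ...and only then lets the producer continue
    finally:
        stop.set()              # ask the producer to stop...
        advance.set()           # ...and unblock it if it is waiting
        worker.join()

print(list(run_iter_sketch()))  # ['text', 'tool_call', 'termination']
```

The `finally` block mirrors the cleanup guarantee described below: even if the consumer breaks out of the loop, the worker thread is unblocked and joined.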
## Setup
First, let’s set up our LLM configuration and create some agents.
```python
import os

from dotenv import load_dotenv

from autogen import ConversableAgent

load_dotenv("../.env.local")

llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.environ.get("OPENAI_API_KEY"),
        }
    ]
}

# Create two agents for a simple conversation
jack = ConversableAgent(
    "Jack",
    system_message="Your name is Jack and you are a comedian. Tell one short joke and then say FINISH.",
    is_termination_msg=lambda x: "FINISH" in x.get("content", ""),
    llm_config=llm_config,
)

emma = ConversableAgent(
    "Emma",
    system_message="Your name is Emma. Laugh at Jack's joke and say FINISH.",
    llm_config=llm_config,
)
```
## Basic Run Iteration
Use `run_iter()` instead of `run()` to iterate over events. It yields every event through Python's standard iteration protocol.

Cleanup is automatic: the generator's `finally` block ensures the background thread exits cleanly even if an exception occurs or you break out of the loop.
```python
from autogen.events.agent_events import InputRequestEvent

# Iterate over every event
event_count = 0
result = jack.run_iter(emma, message="Emma, tell me a joke!", max_turns=2)

for event in result:
    event_count += 1
    print(f"\n[Event {event_count}] Event type: {event.type}")

    # Handle input requests - prompt user for input
    if isinstance(event, InputRequestEvent):
        user_input = input("  Input requested: ")
        event.content.respond(user_input)
        continue

    # Access event content
    if hasattr(event, "content") and hasattr(event.content, "content"):
        content = str(event.content.content)
        preview = content[:100] + "..." if len(content) > 100 else content
        print(f"  Content: {preview}")

print(f"\n[Event {event_count}] Run completed!")
print(f"Total events: {event_count}")
print(f"Summary: {result.summary}")
```
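The automatic-cleanup behavior rests on an ordinary language mechanism, shown here with a plain generator rather than AG2 code. In CPython, breaking out of the loop drops the last reference to the generator, which closes it and runs its `finally` block - the place where an implementation like `run_iter()` can shut down its worker thread.

```python
cleaned_up = []

def gen():
    try:
        for i in range(10):
            yield i
    finally:
        cleaned_up.append(True)  # e.g. signal the background thread to exit

for i in gen():
    if i == 2:
        break  # the generator is closed here; its finally block still runs

print(cleaned_up)  # [True] - cleanup ran despite the early break
```

(The immediate close on `break` relies on CPython's reference counting; on other interpreters you would call `close()` explicitly or use `contextlib.closing`.)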
## Filtering Events with `yield_on`
In many cases you don't need every event. Pass `yield_on` to specify which event types to yield.
Common event types:

- `TextEvent` - Agent sends/receives a text message
- `ToolCallEvent` - Agent wants to call a tool
- `ToolResponseEvent` - Tool returns a result
- `TerminationEvent` - Conversation terminates
Special events are always yielded regardless of the filter:

- `InputRequestEvent` - User must respond to input requests
- `ErrorEvent` - Errors are raised as exceptions
- `RunCompletionEvent` - Signals completion (iteration ends)
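The filtering rule amounts to a simple predicate. This sketch is illustrative only - the names and structure are assumptions, not AG2's actual implementation - but it captures the behavior: pass-through event types are yielded no matter what filter the caller supplies.

```python
# Pass-through event types bypass the yield_on filter entirely
ALWAYS_YIELD = ("InputRequestEvent", "ErrorEvent", "RunCompletionEvent")

def should_yield(event_type, yield_on=None):
    """Return True if the event should reach the consumer."""
    if event_type in ALWAYS_YIELD:
        return True  # always delivered, regardless of the filter
    return yield_on is None or event_type in yield_on

print(should_yield("ToolCallEvent", yield_on=["TextEvent"]))      # False
print(should_yield("InputRequestEvent", yield_on=["TextEvent"]))  # True
print(should_yield("TextEvent", yield_on=None))                   # True (no filter)
```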
```python
from autogen.events.agent_events import InputRequestEvent, TerminationEvent, TextEvent

# Only yield TextEvent and TerminationEvent
# Note: InputRequestEvent is always yielded regardless of filter
event_count = 0
result = jack.run_iter(
    emma,
    message="Emma, tell me another joke!",
    max_turns=2,
    yield_on=[TextEvent, TerminationEvent],
)

for event in result:
    event_count += 1
    print(f"\n[Event {event_count}] Event type: {event.type}")

    # Handle input requests - always yielded regardless of filter
    if isinstance(event, InputRequestEvent):
        user_input = input("  Input requested: ")
        event.content.respond(user_input)
        continue

    if hasattr(event, "content"):
        if hasattr(event.content, "sender"):
            print(f"  Sender: {event.content.sender}")
        if hasattr(event.content, "content"):
            content = str(event.content.content)
            preview = content[:100] + "..." if len(content) > 100 else content
            print(f"  Content: {preview}")

print(f"\n[Event {event_count}] Run completed!")
print(f"Total events (with filter): {event_count}")
```
## Iterating Group Chats
Run iteration also works for group chats via `run_group_chat_iter()`. This is useful for monitoring multi-agent conversations.

Include `GroupChatRunChatEvent` in `yield_on` to receive an event each time an agent takes a turn.
```python
from autogen import ConversableAgent
from autogen.agentchat.group.multi_agent_chat import run_group_chat_iter
from autogen.agentchat.group.patterns import AutoPattern
from autogen.events.agent_events import GroupChatRunChatEvent, InputRequestEvent, TerminationEvent, TextEvent

# Create agents for group chat
coder = ConversableAgent(
    "Coder",
    system_message="You are a coder. Write a simple hello world function. Then say APPROVE to get approval.",
    llm_config=llm_config,
)

reviewer = ConversableAgent(
    "Reviewer",
    system_message="You are a code reviewer. If you see code, approve it by saying TERMINATE.",
    llm_config=llm_config,
)

user = ConversableAgent(
    "User",
    system_message="You are a user who wants code written.",
    llm_config=llm_config,
)

# Create pattern for group chat
pattern = AutoPattern(
    initial_agent=coder,
    agents=[coder, reviewer, user],
    group_manager_args={"llm_config": llm_config},
)

# Iterate over the group chat, yielding on agent turns
for event in run_group_chat_iter(
    pattern=pattern,
    messages="Write a hello world function",
    max_rounds=4,
    yield_on=[GroupChatRunChatEvent, TextEvent, TerminationEvent],
):
    # Handle input requests
    if isinstance(event, InputRequestEvent):
        user_input = input("  Input requested: ")
        event.content.respond(user_input)
        continue

    if isinstance(event, GroupChatRunChatEvent):
        print(f"\n=== {event.content.speaker}'s turn ===")
    elif isinstance(event, TextEvent):
        content = str(event.content.content)[:200]
        print(f"  {content}")

print("\n--- Run completed! ---")
```
## Aborting Execution Based on Events
A key use case for run iteration is inspecting events and aborting execution if something unexpected happens. Since you control when to advance to the next event, you can break out of the loop at any time to stop the agent.
This example shows how to:

1. Monitor tool calls before they execute
2. Abort if a tool call targets a blocked recipient
3. Let the generator's cleanup handle the background thread
```python
import json

from autogen.events.agent_events import InputRequestEvent, TerminationEvent, TextEvent, ToolCallEvent
from autogen.tools import tool

@tool(description="Send an email to a recipient")
def send_email(recipient: str, subject: str, body: str) -> str:
    """Send an email (mock implementation)."""
    print(f"  [EXECUTING] Sending email to {recipient}...")
    return f"Email sent to {recipient} with subject: {subject}"

# Create an agent with the email tool
assistant = ConversableAgent(
    "Assistant",
    system_message="You are a helpful assistant. Use tools when asked. Say DONE when finished.",
    is_termination_msg=lambda x: "DONE" in x.get("content", ""),
    llm_config=llm_config,
    functions=[send_email],
)

# List of blocked recipients
BLOCKED_RECIPIENTS = ["ceo@company.com", "legal@company.com", "hr@company.com"]

# Abort if the agent tries to email a blocked recipient
aborted = False
for event in assistant.run_iter(
    message="Send an email to ceo@company.com about the budget",
    max_turns=3,
    yield_on=[TextEvent, ToolCallEvent, TerminationEvent],
):
    # Handle input requests
    if isinstance(event, InputRequestEvent):
        user_input = input("  Input requested: ")
        event.content.respond(user_input)
        continue

    if isinstance(event, ToolCallEvent):
        # Inspect the tool call arguments
        for tool_call in event.content.tool_calls:
            print(f"\n[TOOL CALL] {tool_call.function.name}")
            print(f"  Arguments: {tool_call.function.arguments}")

            # Check whether the recipient is blocked
            args = json.loads(tool_call.function.arguments)
            recipient = args.get("recipient", "")
            if recipient in BLOCKED_RECIPIENTS:
                print(f"\n[BLOCKED] Cannot send to {recipient} - aborting execution!")
                aborted = True
                break  # Break out of the inner tool-call loop
        if aborted:
            break  # Break out of the event loop - execution stops here
    elif isinstance(event, TextEvent):
        content = str(event.content.content)[:100]
        print(f"\n[TEXT] {content}")

if aborted:
    print("\nExecution was aborted before the tool could run.")
    print("The email was NOT sent.")
else:
    print("\nRun completed normally.")
```
## Event Types Reference
| Event Type | When It Fires | Always Yielded? |
|---|---|---|
| `TextEvent` | Agent sends/receives a text message | No |
| `ToolCallEvent` | Agent wants to call a tool | No |
| `ToolResponseEvent` | Tool returns a result | No |
| `ExecutedFunctionEvent` | Function execution completed | No |
| `GroupChatRunChatEvent` | Agent selected to speak in group chat | No |
| `TerminationEvent` | Conversation terminates | No |
| `InputRequestEvent` | Human input requested | Yes |
| `ErrorEvent` | Error occurred | Yes (raises an exception) |
| `RunCompletionEvent` | Run completed (always fires last) | Yes (iteration ends) |
Note: Events marked "Always Yielded" bypass the `yield_on` filter because they require user action.
See `autogen/events/agent_events.py` for the full list of events.
## Summary
Run iteration provides fine-grained control over agent execution:
- `run_iter()` / `a_run_iter()` - Iterate over single-agent runs
- `run_group_chat_iter()` / `a_run_group_chat_iter()` - Iterate over group chats
- `yield_on=[...]` - Filter which events are yielded
- `for event in ...` - Plain Python iteration
- Automatic cleanup - The generator handles cleanup on break or exception
This is ideal for:
- Debugging agent conversations
- Monitoring and logging all events
- Aborting execution based on conditions (e.g., blocked recipients, cost limits)
- Building interactive agent applications