The `autogen.beta.Agent` is designed to be fully compatible with existing AG2 architectures, including group chats and sequential workflows. By calling the `as_conversable()` method, you can integrate beta agents seamlessly with traditional `ConversableAgent` instances.
This guide explains how to use beta agents across various chat topologies.
You can initiate a standard chat between a `ConversableAgent` and a beta agent by converting the beta agent into a conversable format. This enables direct, two-way communication.
```python
from autogen import ConversableAgent, LLMConfig
from autogen.beta import Agent, config

# Define the beta agent
beta_agent = Agent(
    "beta_agent",
    config=config.OpenAIConfig(model="gpt-4o"),
)

# Define a traditional local agent
local_agent = ConversableAgent(
    "local_agent",
    llm_config=LLMConfig({"model": "gpt-4o"}),
)

# Initiate one-to-one chat
result = await local_agent.a_run(
    recipient=beta_agent.as_conversable(),
    message="Hello beta agent!",
    max_turns=2,
)
await result.process()
```
You can chain multiple chats together sequentially using `a_initiate_chats` (see the Sequential Chat guide). The beta agents handle their respective tasks in order, acting as recipients in the chat sequence.
```python
from autogen import ConversableAgent, LLMConfig
from autogen.beta import Agent, config

model_config = config.OpenAIConfig(model="gpt-4o")

agent1 = Agent("agent1", config=model_config)
agent2 = Agent("agent2", config=model_config)

local_agent = ConversableAgent(
    "local_manager",
    llm_config=LLMConfig({"model": "gpt-4o"}),
)

chat_results = await local_agent.a_initiate_chats(
    [
        {
            "recipient": agent1.as_conversable(),
            "message": "Analyze this data.",
            "max_turns": 1,
            "chat_id": "analysis-chat",
        },
        {
            "recipient": agent2.as_conversable(),
            "message": "Summarize the analysis.",
            "max_turns": 1,
            "chat_id": "summary-chat",
        },
    ]
)
```
Beta agents fully support AG2's pattern-based handoff mechanisms. You can use `AgentTarget` to specify explicitly which agent should take over when the current agent completes its work.
```python
from autogen import ConversableAgent, LLMConfig
from autogen.agentchat.group import AgentTarget
from autogen.agentchat.group.multi_agent_chat import a_run_group_chat
from autogen.agentchat.group.patterns import DefaultPattern
from autogen.beta import Agent, config

original_agent = ConversableAgent("manager", llm_config=LLMConfig({"model": "gpt-4o"}))

model_config = config.OpenAIConfig(model="gpt-4o")
agent1 = Agent("researcher", config=model_config).as_conversable()
agent2 = Agent("reviewer", config=model_config).as_conversable()

# Define handoffs
original_agent.handoffs.set_after_work(AgentTarget(agent1))
agent1.handoffs.set_after_work(AgentTarget(agent2))
agent2.handoffs.set_after_work(AgentTarget(original_agent))

pattern = DefaultPattern(
    initial_agent=original_agent,
    agents=[original_agent, agent1, agent2],
)

result = await a_run_group_chat(
    pattern=pattern,
    messages="Start the research process.",
    max_rounds=5,
)
await result.process()
```
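The after-work targets above form a cycle: manager → researcher → reviewer → manager. As a small, self-contained sketch (plain Python modeling the handoff cycle, not AG2's actual scheduling code), the resulting speaker order for a given number of rounds looks like this:

```python
# Illustrative only: models the cycle created by set_after_work above,
# not AG2's internal group chat scheduler.
AFTER_WORK = {
    "manager": "researcher",
    "researcher": "reviewer",
    "reviewer": "manager",
}

def speaker_order(start: str, rounds: int) -> list[str]:
    """Return the sequence of speakers produced by following after-work handoffs."""
    order = [start]
    for _ in range(rounds - 1):
        order.append(AFTER_WORK[order[-1]])
    return order

# With max_rounds=5, starting at the manager:
print(speaker_order("manager", 5))
# ['manager', 'researcher', 'reviewer', 'manager', 'researcher']
```

Because every agent names exactly one after-work target, control keeps cycling until `max_rounds` is exhausted or a termination condition ends the chat.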
Beta agents also work with `AutoPattern`, where a group manager uses an LLM to select the next speaker automatically:

```python
from autogen.agentchat.group.multi_agent_chat import a_run_group_chat
from autogen.agentchat.group.patterns import AutoPattern
from autogen.llm_config.config import LLMConfig
from autogen.beta import Agent, config

# Create beta agents
model_config = config.OpenAIConfig(model="gpt-4o")
researcher = Agent("researcher", config=model_config).as_conversable()
writer = Agent("writer", config=model_config).as_conversable()

pattern = AutoPattern(
    initial_agent=researcher,
    agents=[researcher, writer],
    group_manager_args={"llm_config": LLMConfig({"model": "gpt-4o"})},
)

result = await a_run_group_chat(
    pattern=pattern,
    messages="Research quantum computing and write a summary.",
    max_rounds=10,
)
await result.process()
```
Beta agents integrate with AG2's `ContextVariables`, allowing state to be shared across group chats and accessed from inside beta agent tools.
You can inject variables into the group chat pattern, then read and modify them within any tool via the `Context` object or `Variable()` annotations.
```python
import asyncio
from typing import Annotated

from autogen import ConversableAgent, LLMConfig
from autogen.agentchat.group import ContextVariables
from autogen.agentchat.group.multi_agent_chat import a_run_group_chat
from autogen.agentchat.group.patterns import RoundRobinPattern
from autogen.beta import Agent, Context, Variable, config

beta_agent = Agent(
    "tracker_agent",
    config=config.OpenAIConfig(model="gpt-4o"),
)

# Define a tool that accesses and modifies ContextVariables
@beta_agent.tool
def issue_tracker(context: Context, issue_count: Annotated[int, Variable(default=0)]) -> str:
    # Update the shared context variable
    issue_count += 1
    context.variables["issue_count"] = issue_count
    return f"Issue tracked. Total issues: {issue_count}"

local_agent = ConversableAgent(
    "local_agent",
    llm_config=LLMConfig({"model": "gpt-4o"}),
)

# Initialize the pattern with ContextVariables
pattern = RoundRobinPattern(
    initial_agent=local_agent,
    agents=[local_agent, beta_agent.as_conversable()],
    context_variables=ContextVariables({"issue_count": 0}),
)

async def main():
    result = await a_run_group_chat(
        pattern=pattern,
        messages="Please track this new issue.",
        max_rounds=3,
    )
    await result.process()

    # context_variables["issue_count"] will now be updated globally!
    context_variables = await result.context_variables
    print("Final issue count:", context_variables.data["issue_count"])

if __name__ == "__main__":
    asyncio.run(main())
```
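To make the `Variable(default=0)` semantics concrete, here is a small, self-contained sketch of the fallback order: a value found in shared context wins; otherwise the declared default is used. A plain dict stands in for `ContextVariables`, and `resolve_variable` is a hypothetical helper, not AG2's internal implementation.

```python
from typing import Any

def resolve_variable(context: dict[str, Any], name: str, default: Any) -> Any:
    # Hypothetical helper: prefer the shared-context value, fall back to the default
    return context.get(name, default)

shared = {"issue_count": 2}
print(resolve_variable(shared, "issue_count", 0))  # 2: taken from shared context
print(resolve_variable({}, "issue_count", 0))      # 0: falls back to the declared default
```

This is why initializing the pattern with `ContextVariables({"issue_count": 0})` and letting the tool write back to `context.variables` keeps every agent in the group reading the same running count.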