Auto Generated Agent Chat: Group Chat with GPTAssistantAgent#
AG2 offers conversable agents powered by LLMs, tools, or humans, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation through multi-agent conversation. Please find documentation about this feature here.
In this notebook, we demonstrate how to get multiple GPTAssistantAgent instances to converse with each other through a group chat.
Requirements#
AG2 requires Python >= 3.9. To run this notebook example, please install AG2 with OpenAI support:
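A typical install command, assuming the autogen package on PyPI with the openai extra, is:
pip install autogen[openai]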
Set your API Endpoint#
The LLMConfig.from_json method loads a list of configurations from an environment variable or a JSON file.
import autogen
from autogen.agentchat import AssistantAgent
from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent
# Load configurations from the OAI_CONFIG_LIST file and keep only the GPT-4 family models
llm_config = autogen.LLMConfig.from_json(path="OAI_CONFIG_LIST", cache_seed=45).where(
model=["gpt-4", "gpt-4-1106-preview", "gpt-4-32k"]
)
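The OAI_CONFIG_LIST file referenced above is a JSON list of model configurations. A minimal sketch, assuming a single OpenAI endpoint and a placeholder API key, could look like:
[
    {
        "model": "gpt-4",
        "api_key": "<your OpenAI API key>"
    }
]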
Tip
Learn more about configuring LLMs for agents here.
Define GPTAssistantAgent and GroupChat#
# Define user proxy agent
user_proxy = autogen.UserProxyAgent(
name="User_proxy",
system_message="A human admin.",
code_execution_config={
"last_n_messages": 2,
"work_dir": "groupchat",
"use_docker": False,
}, # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
human_input_mode="TERMINATE",
)
# Define two GPTAssistantAgents
coder = GPTAssistantAgent(
name="Coder",
llm_config=llm_config,
instructions=AssistantAgent.DEFAULT_SYSTEM_MESSAGE,
)
analyst = GPTAssistantAgent(
name="Data_analyst",
instructions="You are a data analyst that offers insight into data.",
llm_config=llm_config,
)
# Define the group chat and its manager
groupchat = autogen.GroupChat(agents=[user_proxy, coder, analyst], messages=[], max_round=10)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)
Initiate Group Chat#
Now that everything is set up, we can initiate the group chat.
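As a sketch, the user proxy can kick off the conversation through the group chat manager; the task message below is only an illustrative placeholder:
user_proxy.initiate_chat(
    manager,
    message="Fetch the number of issues opened in a public GitHub repository over the past three weeks and analyze the weekly trend.",  # hypothetical task prompt
)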