Exploring GPT-5: Comparing Verbosity Levels in AG2 AgentChat#
This notebook demonstrates how to use different verbosity levels with GPT-5 in AG2’s AgentChat framework. It compares the outputs and behaviors of agents when the verbosity setting is changed.
Key Feature: Verbosity Parameter#
The verbosity parameter allows you to control how detailed or concise the model’s responses are, without needing to rewrite your prompts. This is especially useful for adapting the agent’s communication style to different use cases.
The verbosity parameter has three levels:

- Low: Produces short, minimal, and to-the-point responses. This is ideal for user experience scenarios, quick answers, or when you need concise summaries.
- Medium: Offers a balanced amount of detail and is the default setting. Use this for general purposes and most conversations.
- High: Generates very detailed, explanatory, and verbose replies. This level is best suited for audits, teaching, documentation, or when handing off information.
Tip: Keep your prompts consistent and simply use the verbosity parameter to adjust the response style as needed.
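The "same prompt, different verbosity" pattern can be sketched with a small helper. Note that make_config is our own illustrative function, not part of AG2; it just builds a plain dict mirroring the LLMConfig arguments used later in this notebook.

```python
# Hypothetical helper: build a config dict that differs only in verbosity.
# The keys mirror the LLMConfig arguments used in this notebook.
def make_config(verbosity: str, model: str = "gpt-5") -> dict:
    if verbosity not in ("low", "medium", "high"):
        raise ValueError(f"unknown verbosity level: {verbosity}")
    return {
        "model": model,
        "api_type": "openai",
        "verbosity": verbosity,
    }

# Same prompt, three response styles: only the verbosity field changes.
configs = [make_config(level) for level in ("low", "medium", "high")]
```

Keeping everything else fixed makes it easy to compare outputs across verbosity levels in isolation.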
import os
from dotenv import load_dotenv
from autogen import ConversableAgent, LLMConfig
from autogen.agentchat import initiate_group_chat
from autogen.agentchat.group.patterns import AutoPattern
load_dotenv()
Agent Setup Instructions#
- Install Required Packages: Make sure you have installed the required packages:

  pip install ag2

- Set Up Environment Variables: Create a .env file in your project directory and add your OpenAI API key:

  OPENAI_API_KEY=your_openai_api_key_here

- Configure LLM and Agents: The notebook demonstrates how to configure the LLMConfig for GPT-5 with different verbosity levels. It sets up three agents:

  - code_agent: Codes the given task.
  - reviewer_agent: Reviews the code.
  - user: Initiates the conversation.

- Run the Notebook: Execute the cells in order to see how agent behavior changes with different verbosity settings.
llm_config = LLMConfig(
model="gpt-5-nano",
api_key=os.getenv("OPENAI_API_KEY"),
api_type="openai",
verbosity="low",
)
with llm_config:
code_agent = ConversableAgent(
name="code_agent",
system_message="You are a coding agent. Write the code for the task you are given.",
max_consecutive_auto_reply=5,
human_input_mode="NEVER",
)
reviewer_agent = ConversableAgent(
name="reviewer_agent",
human_input_mode="NEVER",
system_message="You are a reviewer agent. Review the code you are given.",
)
user = ConversableAgent(
name="user",
human_input_mode="ALWAYS",
llm_config=llm_config,
)
pattern = AutoPattern(
initial_agent=code_agent,
agents=[code_agent, reviewer_agent],
user_agent=user,
group_manager_args={"llm_config": llm_config},
)
Solving the Two Sum problem using the verbosity='low' setting#
result, context_variables, last_agent = initiate_group_chat(
pattern=pattern,
messages="Write Python code to solve the Two Sum problem",
max_rounds=3,
)
print("LOW Code:")
print(result.chat_history[-2]["content"])
print("*" * 100)
print("Reviewer:")
print(f"Result: {result.summary}")
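For reference, a standard Two Sum solution, the kind of code the code_agent is expected to produce, uses a hash map for a single-pass O(n) approach. This is our own sketch, not the agent's actual output:

```python
def two_sum(nums: list[int], target: int) -> list[int]:
    """Return indices of the two numbers in nums that add up to target."""
    seen = {}  # maps each value to the index where it was seen
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:
            return [seen[complement], i]
        seen[value] = i
    return []  # no pair sums to target

print(two_sum([2, 7, 11, 15], 9))  # -> [0, 1]
```

With the 'low' verbosity setting, the agent's reply typically resembles this bare implementation, with little or no surrounding explanation.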
Example: Using ‘medium’ verbosity setting#
In this example, we set the verbosity parameter to 'medium' when configuring the LLM. This is the default level: it instructs the model to provide responses that are more detailed than the 'low' setting, but not as exhaustive as 'high'. The model will include some explanations and reasoning in its answers, offering a balance between conciseness and detail.
llm_config = LLMConfig(
model="gpt-5",
api_key=os.getenv("OPENAI_API_KEY"),
api_type="openai",
verbosity="medium",
)
with llm_config:
code_agent = ConversableAgent(
name="code_agent",
system_message="You are a coding agent. Write the code for the task you are given.",
max_consecutive_auto_reply=5,
human_input_mode="NEVER",
)
reviewer_agent = ConversableAgent(
name="reviewer_agent",
human_input_mode="NEVER",
system_message="You are a reviewer agent. Review the code you are given.",
)
pattern = AutoPattern(
initial_agent=code_agent,
agents=[code_agent, reviewer_agent],
user_agent=user,
group_manager_args={"llm_config": llm_config},
)
result, context_variables, last_agent = initiate_group_chat(
pattern=pattern,
messages="Write Python code to solve the Two Sum problem",
max_rounds=3,
)
print("Medium Code:")
print(result.chat_history[-2]["content"])
print("*" * 100)
print("Reviewer:")
print(f"Result: {result.summary}")
Example: Using ‘high’ verbosity setting#
In the following example, we demonstrate how to use the 'high' verbosity setting with GPT-5. This setting instructs the model to provide more detailed and comprehensive responses, including step-by-step reasoning, explanations, and additional context where applicable. Higher verbosity is especially useful when you want the model to elaborate on its answers, justify its choices, or provide more insight into its problem-solving process.
llm_config = LLMConfig(
model="gpt-5",
api_key=os.getenv("OPENAI_API_KEY"),
api_type="openai",
verbosity="high",
)
with llm_config:
code_agent = ConversableAgent(
name="code_agent",
system_message="You are a coding agent. Write the code for the task you are given.",
max_consecutive_auto_reply=5,
human_input_mode="NEVER",
)
reviewer_agent = ConversableAgent(
name="reviewer_agent",
human_input_mode="NEVER",
system_message="You are a reviewer agent. Review the code you are given.",
)
pattern = AutoPattern(
initial_agent=code_agent,
agents=[code_agent, reviewer_agent],
user_agent=user,
group_manager_args={"llm_config": llm_config},
)
result, context_variables, last_agent = initiate_group_chat(
pattern=pattern,
messages="Write Python code to solve the Two Sum problem",
max_rounds=3,
)