Exploring GPT-5: Comparing Verbosity Levels in AG2 AgentChat#
This notebook demonstrates how to use different verbosity levels with GPT-5 in AG2’s AgentChat framework. It compares the outputs and behaviors of agents when the verbosity setting is changed.
Key Feature: Verbosity Parameter#
The verbosity parameter allows you to control how detailed or concise the model’s responses are, without needing to rewrite your prompts. This is especially useful for adapting the agent’s communication style to different use cases.
Verbosity Levels Explained
The verbosity parameter controls how detailed the agent’s responses are. It supports three levels:
- Low: Short, minimal, and to-the-point responses.
  - Best for: quick answers, concise summaries, UX scenarios
- Medium (default): Balanced detail. Provides enough information for most conversations and general use.
  - Best for: everyday tasks, general conversations
- High: Very detailed, explanatory, and verbose replies.
  - Best for: audits, teaching, documentation, handoffs
Tip:
You don’t need to rewrite your prompts; just set the verbosity parameter to adjust the response style as needed!
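The idea can be sketched with plain dictionaries (no API calls; the field names mirror the LLMConfig entries built later in this notebook):

```python
# Sketch: reuse one base configuration and vary only the verbosity level.
# These are plain dicts for illustration; the notebook builds real
# LLMConfig objects from the same fields below.
base = {"model": "gpt-5", "api_type": "openai"}

configs = {
    level: {**base, "verbosity": level}
    for level in ("low", "medium", "high")
}

print(configs["low"]["verbosity"])   # -> low
print(configs["high"]["model"])      # -> gpt-5
```

Only the `verbosity` field changes between the three runs; everything else about the agents stays identical, which is what makes the comparison fair.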
import os
from dotenv import load_dotenv
from autogen import ConversableAgent, LLMConfig
from autogen.agentchat import initiate_group_chat
from autogen.agentchat.group.patterns import AutoPattern
load_dotenv()
Agent Setup Instructions#
1. Install Required Packages: Make sure you have installed the required packages:
   pip install ag2
2. Set Up Environment Variables: Create a .env file in your project directory and add your OpenAI API key:
   OPENAI_API_KEY=your_openai_api_key_here
3. Configure LLM and Agents:
   - The notebook demonstrates how to configure the LLMConfig for GPT-5 with different verbosity levels.
   - It sets up three agents:
     - code_agent: codes the given task.
     - reviewer_agent: reviews the code.
     - user: initiates the conversation.
4. Run the Notebook: Execute the cells in order to see how agent behavior changes with different verbosity settings.
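Before building any agents, a small helper (hypothetical, not part of AG2) can confirm the key was actually picked up from your .env file:

```python
import os

def check_api_key(env=None):
    """Return True if OPENAI_API_KEY is present and non-empty."""
    env = os.environ if env is None else env
    return bool(env.get("OPENAI_API_KEY"))

# After load_dotenv(), this should print True if your .env is set up:
print(check_api_key())
```

A missing key would otherwise only surface later, as an authentication error on the first model call.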
llm_config = LLMConfig(
config_list=[
{
"model": "gpt-5-nano",
"api_key": os.getenv("OPENAI_API_KEY"),
"api_type": "openai",
"verbosity": "low",
}
]
)
code_agent = ConversableAgent(
name="code_agent",
system_message="You are a coding agent. Write code for the task given to you.",
max_consecutive_auto_reply=5,
human_input_mode="NEVER",
llm_config=llm_config,
)
reviewer_agent = ConversableAgent(
name="reviewer_agent",
human_input_mode="NEVER",
system_message="You are a reviewer agent. Review the code given to you.",
llm_config=llm_config,
)
user = ConversableAgent(
name="user",
human_input_mode="ALWAYS",
)
pattern = AutoPattern(
initial_agent=code_agent,
agents=[code_agent, reviewer_agent],
user_agent=user,
group_manager_args={"llm_config": llm_config},
)
Example: Solving the Two Sum problem with the ‘low’ verbosity setting#
result, context_variables, last_agent = initiate_group_chat(
pattern=pattern,
messages="Write Python code to solve the Two Sum problem",
max_rounds=3,
)
print("Low Code:")
print(result.chat_history[-2]["content"])
print("*" * 100)
print("Reviewer:")
print(f"Result: {result.summary}")
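For reference, the kind of solution the coding agent is asked to produce is the standard hash-map approach to Two Sum (a sketch for comparison, not the agent's actual output):

```python
def two_sum(nums, target):
    """Return indices of the two numbers in nums that add up to target.

    Uses a single pass with a value->index map, so it runs in O(n) time.
    """
    seen = {}  # value -> index of where it was seen
    for i, n in enumerate(nums):
        complement = target - n
        if complement in seen:
            return [seen[complement], i]
        seen[n] = i
    return []  # no pair found

print(two_sum([2, 7, 11, 15], 9))  # -> [0, 1]
```

At low verbosity the agent typically returns code like this with little or no surrounding explanation; the higher settings below add commentary around essentially the same algorithm.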
Example: Using ‘medium’ verbosity setting#
In this example, we set the verbosity parameter to ‘medium’ when configuring the LLM. This is the default level: it instructs the model to provide responses that are more detailed than ‘low’ but not as exhaustive as ‘high’. The model will include some explanations and reasoning in its answers, offering a balance between conciseness and detail.
llm_config = LLMConfig(
config_list=[
{
"model": "gpt-5",
"api_key": os.getenv("OPENAI_API_KEY"),
"api_type": "openai",
"verbosity": "medium",
}
]
)
code_agent = ConversableAgent(
name="code_agent",
system_message="You are a coding agent. Write code for the task given to you.",
max_consecutive_auto_reply=5,
human_input_mode="NEVER",
llm_config=llm_config,
)
reviewer_agent = ConversableAgent(
name="reviewer_agent",
human_input_mode="NEVER",
system_message="You are a reviewer agent. Review the code given to you.",
llm_config=llm_config,
)
pattern = AutoPattern(
initial_agent=code_agent,
agents=[code_agent, reviewer_agent],
user_agent=user,
group_manager_args={"llm_config": llm_config},
)
result, context_variables, last_agent = initiate_group_chat(
pattern=pattern,
messages="Write Python code to solve the Two Sum problem",
max_rounds=3,
)
print("Medium Code:")
print(result.chat_history[-2]["content"])
print("*" * 100)
print("Reviewer:")
print(f"Result: {result.summary}")
Example: Using ‘high’ verbosity setting#
In the following example, we demonstrate how to use the ‘high’ verbosity setting with GPT-5. This setting instructs the model to provide more detailed and comprehensive responses, including step-by-step reasoning, explanations, and additional context where applicable. Higher verbosity is especially useful when you want the model to elaborate on its answers, justify its choices, or provide more insight into its problem-solving process.
llm_config = LLMConfig(
config_list=[
{
"model": "gpt-5",
"api_key": os.getenv("OPENAI_API_KEY"),
"api_type": "openai",
"verbosity": "high",
}
]
)
code_agent = ConversableAgent(
name="code_agent",
system_message="You are a coding agent. Write code for the task given to you.",
max_consecutive_auto_reply=5,
human_input_mode="NEVER",
llm_config=llm_config,
)
reviewer_agent = ConversableAgent(
name="reviewer_agent",
human_input_mode="NEVER",
system_message="You are a reviewer agent. Review the code given to you.",
llm_config=llm_config,
)
pattern = AutoPattern(
initial_agent=code_agent,
agents=[code_agent, reviewer_agent],
user_agent=user,
group_manager_args={"llm_config": llm_config},
)
result, context_variables, last_agent = initiate_group_chat(
pattern=pattern,
messages="Write Python code to solve the Two Sum problem",
max_rounds=3,
)