OpenAI offers functionality for defining the structure of messages generated by LLMs. AG2 enables this functionality by propagating the `response_format` key in your agents' LLM configuration to the underlying client. This is currently only supported for OpenAI models.

For more information on structured output, see OpenAI's [structured outputs guide](https://platform.openai.com/docs/guides/structured-outputs).

:::info Requirements
Install `ag2`:
```bash
pip install ag2
```

For more information, please refer to the [installation guide](/docs/installation/).
:::

## Set your API Endpoint

The `config_list_from_json` function loads a list of configurations from an environment variable or a JSON file. Structured output is supported by OpenAI models `gpt-4o-mini` and `gpt-4o-2024-08-06` and later.

```python
import autogen

config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={
        "model": ["gpt-4o", "gpt-4o-mini"],
    },
)
```
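
For reference, here is a minimal sketch of what the `OAI_CONFIG_LIST` environment variable or JSON file might contain (the API key values are placeholders):

```json
[
    {"model": "gpt-4o", "api_key": "<your-openai-api-key>"},
    {"model": "gpt-4o-mini", "api_key": "<your-openai-api-key>"}
]
```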
:::tip
Learn more about configuring LLMs for agents [here](/docs/topics/llm_configuration).
:::

## Example: math reasoning

Using structured output, we can enforce chain-of-thought reasoning in the model and have it return its answer in a structured, step-by-step format.

### Define the reasoning model

First, we define the math reasoning model. Requiring this structure indirectly forces the LLM to solve the posed math problem iteratively, through explicit reasoning steps.

```python
from pydantic import BaseModel


class Step(BaseModel):
    explanation: str
    output: str


class MathReasoning(BaseModel):
    steps: list[Step]
    final_answer: str
```
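
To see what this schema enforces, here is a small optional sanity check (not part of the agent flow) that parses a hand-written JSON payload into the model with Pydantic's model_validate_json; the payload is made up for illustration:

```python
# Optional sanity check: any structured response returned by the client
# must validate against this schema. The payload below is hand-written.
example = MathReasoning.model_validate_json(
    '{"steps": [{"explanation": "Subtract 7 from both sides.", "output": "8x = -30"}],'
    ' "final_answer": "x = -3.75"}'
)
print(example.final_answer)  # x = -3.75
```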

### Applying the Response Format

The `response_format` is added to the LLM configuration, and this configuration is then applied to the agent.

```python
for config in config_list:
    config["response_format"] = MathReasoning
```

### Define chat actors

Now we can define the agents that will solve the posed math problem. We will keep this example simple; we will use a UserProxyAgent to input the math problem and an AssistantAgent to solve it.

The AssistantAgent will be constrained to solving the math problem step-by-step by using the MathReasoning response format we defined above.

```python
llm_config = {"config_list": config_list, "cache_seed": 42}

user_proxy = autogen.UserProxyAgent(
    name="User_proxy",
    system_message="A human admin.",
    human_input_mode="NEVER",
)

assistant = autogen.AssistantAgent(
    name="Math_solver",
    llm_config=llm_config,  # Response Format is in the configuration
)
```

### Start the chat

Let’s now start the chat and prompt the assistant to solve a simple equation. The assistant agent should return a response solving the equation using a step-by-step MathReasoning model.

```python
user_proxy.initiate_chat(assistant, message="how can I solve 8x + 7 = -23", max_turns=1, summary_method="last_msg")
```
```
User_proxy (to Math_solver):

how can I solve 8x + 7 = -23

--------------------------------------------------------------------------------
Math_solver (to User_proxy):

{"steps":[{"explanation":"To isolate the term with x, we first subtract 7 from both sides of the equation.","output":"8x + 7 - 7 = -23 - 7 -> 8x = -30."},{"explanation":"Now that we have 8x = -30, we divide both sides by 8 to solve for x.","output":"x = -30 / 8 -> x = -3.75."}],"final_answer":"x = -3.75"}

--------------------------------------------------------------------------------
ChatResult(chat_id=None, chat_history=[{'content': 'how can I solve 8x + 7 = -23', 'role': 'assistant', 'name': 'User_proxy'}, {'content': '{"steps":[{"explanation":"To isolate the term with x, we first subtract 7 from both sides of the equation.","output":"8x + 7 - 7 = -23 - 7 -> 8x = -30."},{"explanation":"Now that we have 8x = -30, we divide both sides by 8 to solve for x.","output":"x = -30 / 8 -> x = -3.75."}],"final_answer":"x = -3.75"}', 'role': 'user', 'name': 'Math_solver'}], summary='{"steps":[{"explanation":"To isolate the term with x, we first subtract 7 from both sides of the equation.","output":"8x + 7 - 7 = -23 - 7 -> 8x = -30."},{"explanation":"Now that we have 8x = -30, we divide both sides by 8 to solve for x.","output":"x = -30 / 8 -> x = -3.75."}],"final_answer":"x = -3.75"}', cost={'usage_including_cached_inference': {'total_cost': 0.00015089999999999998, 'gpt-4o-mini-2024-07-18': {'cost': 0.00015089999999999998, 'prompt_tokens': 582, 'completion_tokens': 106, 'total_tokens': 688}}, 'usage_excluding_cached_inference': {'total_cost': 0}}, human_input=[])
```
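
Since the reply is the JSON representation of MathReasoning, you can also parse it back into a Pydantic object for programmatic access. A minimal sketch, assuming you capture the ChatResult returned by initiate_chat:

```python
# Capture the ChatResult so the structured reply can be recovered afterwards.
result = user_proxy.initiate_chat(
    assistant, message="how can I solve 8x + 7 = -23", max_turns=1, summary_method="last_msg"
)
parsed = MathReasoning.model_validate_json(result.summary)
print(parsed.final_answer)  # x = -3.75
```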

## Formatting a response

When defining a response_format, you have the flexibility to customize how the output is parsed and presented, making it more user-friendly. To demonstrate this, we’ll add a format method to our MathReasoning model. This method will define the logic for transforming the raw JSON response into a more human-readable and accessible format.

### Define the reasoning model

Let's redefine the MathReasoning model to include a format method. The underlying client will use this method to render the parsed LLM response in a more human-readable form. If the format method is not defined, the client defaults to returning the model's JSON representation, as demonstrated in the previous example.

```python
from pydantic import BaseModel


class Step(BaseModel):
    explanation: str
    output: str


class MathReasoning(BaseModel):
    steps: list[Step]
    final_answer: str

    def format(self) -> str:
        steps_output = "\n".join(
            f"Step {i + 1}: {step.explanation}\n  Output: {step.output}" for i, step in enumerate(self.steps)
        )
        return f"{steps_output}\n\nFinal Answer: {self.final_answer}"
```

### Define chat actors and start the chat

The rest of the process is the same as in the previous example: define the actors and start the chat.

Observe how the Math_solver agent now communicates using the format we have defined in our MathReasoning.format method.

```python
for config in config_list:
    config["response_format"] = MathReasoning

user_proxy = autogen.UserProxyAgent(
    name="User_proxy",
    system_message="A human admin.",
    human_input_mode="NEVER",
)

assistant = autogen.AssistantAgent(
    name="Math_solver",
    llm_config=llm_config,
)

user_proxy.initiate_chat(assistant, message="how can I solve 8x + 7 = -23", max_turns=1, summary_method="last_msg")
```
```
User_proxy (to Math_solver):

how can I solve 8x + 7 = -23

--------------------------------------------------------------------------------
Math_solver (to User_proxy):

Step 1: To isolate the term with x, we first subtract 7 from both sides of the equation.
  Output: 8x + 7 - 7 = -23 - 7 -> 8x = -30.
Step 2: Now that we have 8x = -30, we divide both sides by 8 to solve for x.
  Output: x = -30 / 8 -> x = -3.75.

Final Answer: x = -3.75

--------------------------------------------------------------------------------
ChatResult(chat_id=None, chat_history=[{'content': 'how can I solve 8x + 7 = -23', 'role': 'assistant', 'name': 'User_proxy'}, {'content': 'Step 1: To isolate the term with x, we first subtract 7 from both sides of the equation.\n  Output: 8x + 7 - 7 = -23 - 7 -> 8x = -30.\nStep 2: Now that we have 8x = -30, we divide both sides by 8 to solve for x.\n  Output: x = -30 / 8 -> x = -3.75.\n\nFinal Answer: x = -3.75', 'role': 'user', 'name': 'Math_solver'}], summary='Step 1: To isolate the term with x, we first subtract 7 from both sides of the equation.\n  Output: 8x + 7 - 7 = -23 - 7 -> 8x = -30.\nStep 2: Now that we have 8x = -30, we divide both sides by 8 to solve for x.\n  Output: x = -30 / 8 -> x = -3.75.\n\nFinal Answer: x = -3.75', cost={'usage_including_cached_inference': {'total_cost': 0.00015089999999999998, 'gpt-4o-mini-2024-07-18': {'cost': 0.00015089999999999998, 'prompt_tokens': 582, 'completion_tokens': 106, 'total_tokens': 688}}, 'usage_excluding_cached_inference': {'total_cost': 0}}, human_input=[])
```

Normal function calling still works alongside structured output, so your agent can have a response format while still calling tools.

```python
@assistant.register_for_execution()
@assistant.register_for_llm(description="You can use this function call to solve addition")
def add(x: int, y: int) -> int:
    return x + y


user_proxy.initiate_chat(
    assistant, message="solve 3 + 4 by calling appropriate function", max_turns=1, summary_method="last_msg"
)
```
```
User_proxy (to Math_solver):

solve 3 + 4 by calling appropriate function

--------------------------------------------------------------------------------
Math_solver (to User_proxy):

***** Suggested tool call (call_oTp96rVzs2kAOwGhBM5rJDcW): add *****
Arguments: 
{"x":3,"y":4}
********************************************************************

--------------------------------------------------------------------------------
ChatResult(chat_id=None, chat_history=[{'content': 'solve 3 + 4 by calling appropriate function', 'role': 'assistant', 'name': 'User_proxy'}, {'tool_calls': [{'id': 'call_oTp96rVzs2kAOwGhBM5rJDcW', 'function': {'arguments': '{"x":3,"y":4}', 'name': 'add'}, 'type': 'function'}], 'content': None, 'role': 'assistant'}], summary='', cost={'usage_including_cached_inference': {'total_cost': 0.0001029, 'gpt-4o-mini-2024-07-18': {'cost': 0.0001029, 'prompt_tokens': 618, 'completion_tokens': 17, 'total_tokens': 635}}, 'usage_excluding_cached_inference': {'total_cost': 0}}, human_input=[])
```