OpenAI Responses API with Structured Output#

This example demonstrates how to use Structured Outputs with OpenAI’s Responses API client.

Note: Current support for the OpenAI Responses API is limited to initiate_chat with a two-agent chat. Future releases will include expanded support for group chat and the run interfaces.

Install AG2 and dependencies#

To run this notebook, you will need to install AG2 with the openai extra.

Requirements

Install ag2 with the 'openai' extra:

pip install ag2[openai]
For more information, please refer to the installation guide.

import os

from pydantic import BaseModel

from autogen import AssistantAgent, UserProxyAgent

# ---------------------------------------------------------------------
# 1. Define the response format (a Pydantic model)
# ---------------------------------------------------------------------
class QA(BaseModel):
    question: str
    answer: str
    reasoning: str

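If you are curious what the model class above actually encodes, Pydantic can render it as JSON Schema. This is a quick standalone check, independent of AG2 and the OpenAI API; the field names match the QA class defined above:

```python
from pydantic import BaseModel

class QA(BaseModel):
    question: str
    answer: str
    reasoning: str

# Inspect the JSON Schema that the structured-output machinery works from
schema = QA.model_json_schema()
print(schema["required"])
print(schema["properties"]["answer"]["type"])
```

All three fields are required strings, so the model is constrained to emit exactly the {question, answer, reasoning} shape.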
# ---------------------------------------------------------------------
# 2. Build an llm_config that opts in to the Responses endpoint
#    and attaches the structured-output model
# ---------------------------------------------------------------------
llm_config = {
    "config_list": [
        {
            "api_type": "responses",  # <─ use /responses
            "model": "gpt-4o",  # any supported model
            "api_key": os.getenv("OPENAI_API_KEY"),
            "response_format": QA,  # <─ structured output!
        }
    ]
}

# ---------------------------------------------------------------------
# 3. Create two simple chat actors
# ---------------------------------------------------------------------
user = UserProxyAgent(
    name="User",
    system_message="Human admin",
    human_input_mode="NEVER",
)

assistant = AssistantAgent(
    name="StructuredBot",
    llm_config=llm_config,
    system_message=(
        "You are a Q&A bot. Always return a JSON object that matches the QA schema: {question, answer, reasoning}"
    ),
)

# ---------------------------------------------------------------------
# 4. Start the conversation
# ---------------------------------------------------------------------
result = user.initiate_chat(
    assistant,
    message="What causes seasons on Earth?",
    max_turns=1,
    summary_method="last_msg",
)

# ---------------------------------------------------------------------
# 5. Parse and print the result
# ---------------------------------------------------------------------
try:
    qa = QA.model_validate_json(result.summary)
    print(f"Question: {qa.question}")
    print(f"Answer: {qa.answer}")
    print(f"Reasoning: {qa.reasoning}")
except Exception as e:
    print(f"Error parsing result: {e}")