OpenAICompletionsClient
autogen.llm_clients.openai_completions_client.OpenAICompletionsClient
Bases: ModelClient
OpenAI Chat Completions API client implementing ModelClientV2 protocol.
This client works with OpenAI's Chat Completions API (client.chat.completions.create), which returns structured output with reasoning blocks (o1/o3 models), tool calls, and more.
Key Features:
- Preserves reasoning blocks as ReasoningContent (o1/o3 models)
- Handles tool calls and results
- Supports multimodal content
- Provides backward compatibility via create_v1_compatible()
Example

```python
client = OpenAICompletionsClient(api_key="...")

# Get rich response with reasoning
response = client.create({
    "model": "o1-preview",
    "messages": [{"role": "user", "content": "Explain quantum computing"}],
})

# Access reasoning blocks
for reasoning in response.reasoning:
    print(f"Reasoning: {reasoning.reasoning}")

# Get text response
print(f"Answer: {response.text}")
```
Initialize OpenAI Chat Completions API client.
| PARAMETER | DESCRIPTION |
|---|---|
| api_key | OpenAI API key (or set the OPENAI_API_KEY env var) |
| base_url | Custom base URL for the OpenAI API |
| timeout | Request timeout in seconds |
| `**kwargs` | Additional arguments passed to the OpenAI client |
| response_format | Optional response format (Pydantic model or JSON schema) |
Source code in autogen/llm_clients/openai_completions_client.py
RESPONSE_USAGE_KEYS class-attribute instance-attribute
client instance-attribute
ModelClientResponseProtocol
create
Create a completion and return UnifiedResponse with all features preserved.
This method implements ModelClient.create() but returns UnifiedResponse instead of ModelClientResponseProtocol. The rich UnifiedResponse structure is compatible via duck typing: it has a .model attribute and works with message_retrieval().
| PARAMETER | DESCRIPTION |
|---|---|
| params | Request parameters, including: model (e.g. "o1-preview"); messages (list of message dicts); temperature (optional; not supported by o1 models); max_tokens (optional max completion tokens); tools (optional tool definitions); response_format (optional Pydantic BaseModel or JSON schema dict); plus any other OpenAI parameters |

| RETURNS | DESCRIPTION |
|---|---|
| UnifiedResponse | UnifiedResponse with reasoning blocks, citations, and all content preserved |
Source code in autogen/llm_clients/openai_completions_client.py
create_v1_compatible
Create completion in backward-compatible ChatCompletionExtended format.
This method provides compatibility with existing AG2 code that expects ChatCompletionExtended format. Note that reasoning blocks and citations will be lost in this format.
| PARAMETER | DESCRIPTION |
|---|---|
| params | Same parameters as create() |

| RETURNS | DESCRIPTION |
|---|---|
| Any | ChatCompletionExtended-compatible dict (flattened response) |
Warning
This method loses information (reasoning blocks, citations) when converting to the legacy format. Prefer create() for new code.
Source code in autogen/llm_clients/openai_completions_client.py
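To make the Warning concrete, here is a minimal sketch of why flattening loses information. The helper, field names, and sample data below are hypothetical illustrations, not AG2's actual implementation:

```python
# Illustration only: a simplified flattening step with made-up data,
# not the real create_v1_compatible() implementation.
def flatten_to_legacy(rich: dict) -> dict:
    """Keep only the fields a legacy ChatCompletion-style dict carries."""
    return {
        "model": rich["model"],
        "choices": [
            {
                "message": {"role": "assistant", "content": rich["text"]},
                "finish_reason": "stop",
            }
        ],
        "usage": rich["usage"],
    }

rich_response = {
    "model": "o1-preview",
    "text": "Qubits exploit superposition.",
    "reasoning": ["First, recall that a qubit..."],  # dropped by flattening
    "usage": {"prompt_tokens": 12, "completion_tokens": 40},
}

legacy = flatten_to_legacy(rich_response)
print("reasoning" in legacy)  # False: reasoning blocks are lost
```

Once flattened, the reasoning blocks cannot be recovered from the legacy dict, which is why the docs recommend create() for new code.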
cost
Calculate cost from response usage.
Implements ModelClient.cost() but accepts UnifiedResponse via duck typing.
| PARAMETER | DESCRIPTION |
|---|---|
| response | UnifiedResponse with usage information |

| RETURNS | DESCRIPTION |
|---|---|
| float | Cost in USD for the API call |
Source code in autogen/llm_clients/openai_completions_client.py
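As a sketch of the usual usage-based accounting, the formula below uses hypothetical per-1K-token prices; the real prices and the exact formula live inside cost() and depend on the model:

```python
# Hypothetical (input, output) USD prices per 1K tokens; not official pricing.
PRICE_PER_1K = {"o1-preview": (0.015, 0.060)}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Token-count-weighted cost: input and output tokens are priced separately."""
    price_in, price_out = PRICE_PER_1K[model]
    return (prompt_tokens / 1000) * price_in + (completion_tokens / 1000) * price_out

print(estimate_cost("o1-preview", 1000, 500))  # approximately 0.045 USD
```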
get_usage staticmethod
Extract usage statistics from response.
Implements ModelClient.get_usage() but accepts UnifiedResponse via duck typing.
| PARAMETER | DESCRIPTION |
|---|---|
| response | UnifiedResponse from create() |

| RETURNS | DESCRIPTION |
|---|---|
| dict[str, Any] | Dict with keys from RESPONSE_USAGE_KEYS |
Source code in autogen/llm_clients/openai_completions_client.py
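The exact keys come from RESPONSE_USAGE_KEYS, which is not reproduced on this page. A plausible shape for the returned dict, assuming typical OpenAI-style usage fields, is:

```python
# Assumed key set based on typical OpenAI usage fields; the authoritative
# list is RESPONSE_USAGE_KEYS on the client class. Values are made up.
usage = {
    "prompt_tokens": 12,
    "completion_tokens": 40,
    "total_tokens": 52,
    "cost": 0.0026,        # hypothetical USD figure
    "model": "o1-preview",
}

# Token totals are additive across prompt and completion.
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```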
message_retrieval
Retrieve messages from response in OpenAI-compatible format.
Returns a list of strings for text-only messages, or a list of dicts when tool calls, function calls, or complex content are present.
This matches the behavior of the legacy OpenAIClient, which returns:
- Strings for simple text responses
- ChatCompletionMessage objects (as dicts) when tool_calls/function_call is present

The returned dicts follow OpenAI's ChatCompletion message format:

```python
{
    "role": "assistant",
    "content": "text content or None",
    "tool_calls": [
        {
            "id": "...",
            "type": "function",
            "function": {"name": "...", "arguments": "..."},
        }
    ],
    "name": "agent_name",  # optional
}
```
| PARAMETER | DESCRIPTION |
|---|---|
| response | UnifiedResponse from create() |

| RETURNS | DESCRIPTION |
|---|---|
| list[str] \| list[dict[str, Any]] | List of strings (for text-only) OR list of message dicts (for tool calls/complex content) |
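Because the return type varies, callers typically branch on the element type. The helper and sample data below are hypothetical; the dict layout follows the ChatCompletion message format shown above:

```python
# Illustrative consumer of message_retrieval() output: text-only responses
# come back as plain strings, tool-call responses as OpenAI-style dicts.
def describe(messages):
    out = []
    for msg in messages:
        if isinstance(msg, str):
            out.append(f"text: {msg}")
        else:  # dict in OpenAI ChatCompletion message format
            names = [tc["function"]["name"] for tc in msg.get("tool_calls", [])]
            out.append(f"tool calls: {', '.join(names)}")
    return out

text_only = ["The answer is 42."]
with_tools = [{
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
    }],
}]

print(describe(text_only))   # ['text: The answer is 42.']
print(describe(with_tools))  # ['tool calls: get_weather']
```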