UnifiedResponse
autogen.llm_clients.models.unified_response.UnifiedResponse #
Bases: BaseModel
Provider-agnostic response format.
This response format can represent responses from any LLM provider while preserving all provider-specific features (reasoning, citations, etc.).
Features:

- Provider agnostic (OpenAI, Anthropic, Gemini, etc.)
- Rich content blocks (text, images, reasoning, citations)
- Usage tracking and cost calculation
- Provider-specific metadata preservation
- Serializable (no attached functions)
- Extensible status field for provider-specific statuses
provider_metadata class-attribute instance-attribute #
provider_metadata = Field(default_factory=dict)
get_content_by_type #
Get all content blocks of a specific type across all messages.
This is especially useful for unknown types handled by GenericContent.
| PARAMETER | DESCRIPTION |
|---|---|
| `content_type` | The type string to filter by (e.g., `"text"`, `"reasoning"`, `"reflection"`)<br>**TYPE:** `str` |
| RETURNS | DESCRIPTION |
|---|---|
| `list[BaseContent]` | List of content blocks matching the type across all messages |
Source code in autogen/llm_clients/models/unified_response.py
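The filtering behavior can be illustrated with a minimal, self-contained sketch. The stand-in classes below are hypothetical simplifications (plain dataclasses instead of the real Pydantic models), assuming a response holds a list of messages whose content blocks each carry a `type` string:

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for the real Pydantic models, for illustration only.
@dataclass
class BaseContent:
    type: str
    text: str = ""

@dataclass
class Message:
    content: list[BaseContent] = field(default_factory=list)

@dataclass
class UnifiedResponse:
    messages: list[Message] = field(default_factory=list)

    def get_content_by_type(self, content_type: str) -> list[BaseContent]:
        # Flatten blocks across all messages, keeping only the requested type.
        return [
            block
            for message in self.messages
            for block in message.content
            if block.type == content_type
        ]

response = UnifiedResponse(messages=[
    Message(content=[
        BaseContent(type="reasoning", text="Let me think..."),
        BaseContent(type="text", text="The answer is 42."),
    ]),
    Message(content=[BaseContent(type="text", text="Anything else?")]),
])

texts = response.get_content_by_type("text")
print([b.text for b in texts])  # ['The answer is 42.', 'Anything else?']
```

Because matching is done on the raw type string, the same call works for unknown or provider-specific types represented by `GenericContent`.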
is_standard_status #
Check if this response uses a standard status value.
| RETURNS | DESCRIPTION |
|---|---|
| `bool` | `True` if status is one of the standard statuses (`completed`, `in_progress`, `failed`); `False` if it's a custom/future status or `None` |