The Agent-User Interaction (AG-UI) protocol standardizes how frontend applications talk to AI agents – including streaming, tools, shared state, and custom events. AG2 provides a lightweight integration via autogen.ag_ui.AGUIStream, letting you connect a ConversableAgent to any AG-UI-compatible frontend.

For background on the protocol, see the AG-UI Protocol introduction.

When to use AG-UI with AG2#

Use the AG-UI integration when:

  • You already have (or plan to build) a web UI based on the AG-UI protocol
  • You want streaming responses and tool events that the frontend can render in real-time
  • You need to mix backend tools (Python functions) with frontend tools (UI actions / Generative UI tools) in one coherent protocol
  • You want to reuse existing AG-UI debugging tooling (e.g. Dojo) while keeping your agent logic in AG2

Generally, we recommend using AG-UI whenever you need to build a rich interactive UI for your agent. In less than an hour you can create your own ChatGPT-like web application.

Installation#

To use the AG-UI integration, install AG2 with the ag-ui extra, pulling in the official ag-ui-protocol package:

pip install "ag2[ag-ui]"

Fast integration: Build an ASGI endpoint#

If you are using Starlette or a Starlette-based framework such as FastAPI, AGUIStream can build an ASGI endpoint class for you.

asgi_ag_ui_starlette.py
from starlette.applications import Starlette
from starlette.routing import Route

from autogen import ConversableAgent, LLMConfig
from autogen.ag_ui import AGUIStream

agent = ConversableAgent(
    name="support_bot",
    system_message="You answer product questions.",
    llm_config=LLMConfig({"model": "gpt-4o-mini"}),
)

stream = AGUIStream(agent)
app = Starlette(routes=[Route("/chat", stream.build_asgi())])

asgi_ag_ui_fastapi.py
from fastapi import FastAPI

from autogen import ConversableAgent, LLMConfig
from autogen.ag_ui import AGUIStream

agent = ConversableAgent(
    name="support_bot",
    system_message="You answer product questions.",
    llm_config=LLMConfig({"model": "gpt-4o-mini"}),
)

stream = AGUIStream(agent)

app = FastAPI()
app.mount("/chat", stream.build_asgi())

Because the class returned by build_asgi() is a standard Starlette HTTPEndpoint, you can plug it into (or mount it on) any ASGI application that accepts ASGI routes, for example FastAPI, Starlette itself, or other Starlette-based frameworks.

This gives you a ready-to-use endpoint compatible with AG-UI frontends:

uvicorn asgi_ag_ui_starlette:app

Advanced example: Manual dispatch#

The most flexible way to integrate AG-UI is to:

  1. Accept an HTTP request from an AG-UI frontend
  2. Parse the body into a RunAgentInput
  3. Call AGUIStream.dispatch
  4. Stream the encoded SSE (Server-Sent Events) events back to the client

run_ag_ui.py
from ag_ui.core import RunAgentInput
from fastapi import FastAPI, Header
from fastapi.responses import StreamingResponse

from autogen import ConversableAgent, LLMConfig
from autogen.ag_ui import AGUIStream

agent = ConversableAgent(
    name="support_bot",
    system_message="You help users with billing questions.",
    llm_config=LLMConfig({"model": "gpt-4o-mini"}),
)

stream = AGUIStream(agent)
app = FastAPI()

@app.post("/chat")
async def run_agent(
    message: RunAgentInput,
    accept: str | None = Header(None),
) -> StreamingResponse:
    event_stream = stream.dispatch(
        message,
        accept=accept,
    )

    return StreamingResponse(
        event_stream,
        media_type=accept or "text/event-stream",
    )

You can then run this using any ASGI server:

uvicorn run_ag_ui:app

Your AG-UI frontend can now send RunAgentInput payloads to /chat and consume the streamed events to render messages, tools, and state updates.
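
For a quick smoke test without a frontend, you can POST a payload yourself. The client below is our own sketch, not part of AG2; the field names follow the AG-UI wire format (camelCase), so verify them against the ag-ui-protocol version you have installed.

test_client.py
import httpx

# A minimal RunAgentInput-shaped payload (field names per the AG-UI spec).
payload = {
    "threadId": "thread-1",
    "runId": "run-1",
    "state": {},
    "messages": [{"id": "msg-1", "role": "user", "content": "What do you sell?"}],
    "tools": [],
    "context": [],
    "forwardedProps": {},
}

# Stream the SSE response and print each encoded AG-UI event line.
with httpx.stream("POST", "http://localhost:8000/chat", json=payload, timeout=None) as response:
    for line in response.iter_lines():
        if line.startswith("data:"):
            print(line)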

Manual dispatch also lets you add any additional logic to the endpoint: for example, a logging layer, a cache layer, or a rate-limiting layer.
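
For instance, a logging layer can be a thin wrapper around the event stream. Below is a minimal sketch, assuming dispatch yields already-encoded events; the log_events helper is ours, not part of AG2:

import logging
from collections.abc import AsyncIterator

logger = logging.getLogger("ag_ui")

async def log_events(events: AsyncIterator) -> AsyncIterator:
    # Pass each encoded event through unchanged, logging it on the way out.
    async for event in events:
        logger.debug("AG-UI event: %r", event)
        yield event

Inside the endpoint you would then return StreamingResponse(log_events(event_stream), media_type=accept or "text/event-stream").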

Authentication#

You can protect your endpoint with a simple authentication layer, such as the token-based check shown below, reusing whatever scheme you already use in your other endpoints.

run_ag_ui.py
from typing import Annotated

from ag_ui.core import RunAgentInput
from fastapi import FastAPI, Header, HTTPException
from fastapi.responses import StreamingResponse

app = FastAPI()

# agent and stream are defined as in the earlier examples

@app.post("/chat")
async def run_agent(
    message: RunAgentInput,
    token: Annotated[str, Header(..., description="Authentication token")],
) -> StreamingResponse:
    # In a real application, verify the token against your auth system
    # instead of comparing against a hard-coded literal.
    if token != "1234567890":
        raise HTTPException(status_code=401, detail="Invalid token")

    event_stream = stream.dispatch(message)
    return StreamingResponse(event_stream)

Tools Context#

In some cases you may want to provide additional context to your agent, such as the user's ID, a session ID, or a user profile.

You can pass this information into your agent's tools in the standard AG2 way, via ContextVariables: declare a ContextVariables parameter on the tool function.

from autogen import ConversableAgent, LLMConfig, ContextVariables

def get_user_profile(context: ContextVariables) -> str:
    user_id = context.get("user_id")
    return f"User profile for user {user_id}"

agent = ConversableAgent(
    "calculator",
    functions=[get_user_profile],
    llm_config=LLMConfig({"model": "gpt-4o-mini"}),
)
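
AG2 detects the ContextVariables-annotated parameter and injects the current context when the tool runs; to our understanding, the parameter is also hidden from the tool schema the LLM sees, so the model never has to supply it.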

To pass this context to your agent, use the context parameter of the dispatch method.

from typing import Annotated

from ag_ui.core import RunAgentInput
from fastapi import FastAPI, HTTPException, Header
from fastapi.responses import StreamingResponse

from autogen.ag_ui import AGUIStream

app = FastAPI()

# agent is the ConversableAgent defined in the previous snippet
stream = AGUIStream(agent)

@app.post("/chat")
async def run_agent(
    message: RunAgentInput,
    token: Annotated[str, Header(..., description="Authentication token")],
) -> StreamingResponse:
    if token != "1234567890":
        raise HTTPException(status_code=401, detail="Invalid token")

    event_stream = stream.dispatch(
        message,
        context={"user_id": "1234567890"},
    )
    return StreamingResponse(event_stream)
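
In a real application you would typically resolve the user ID from the verified token (or your session store) rather than hard-coding it, and forward it through context so your tools can read it.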

Backend vs Frontend tools#

The tools mechanism is one of the most powerful features of AG-UI. It allows you to mix backend tools (Python functions) and frontend tools (UI actions / GenUI tools) in one coherent protocol.

Backend tools (Python functions)#

Backend tools are AG2 tools registered on your ConversableAgent:

from typing import Annotated

from autogen import ConversableAgent, LLMConfig

def calculate_sum(
    a: Annotated[int, "First number"],
    b: Annotated[int, "Second number"],
) -> int:
    return a + b

agent = ConversableAgent(
    "calculator",
    functions=[calculate_sum],
    llm_config=LLMConfig({"model": "gpt-4o-mini"}),
)

AG-UI lets you capture these tool calls and render them in the UI, keeping the user informed about backend agent activity.

Frontend tools (UI-driven actions)#

Frontend tools are defined on the AG-UI side and let you build a flexible, appealing UI for interacting with your agent.

The most common cases of frontend tools are:

  • Generative UI - tools that render UI from LLM output: custom cards, lists, buttons, and more
  • HITL (Human in the Loop) - tools that allow you to ask the user for specific input from the UI: buttons, toggles, etc.

You can learn more about frontend tools in the CopilotKit documentation.
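
Concretely, frontend tools reach your backend inside the RunAgentInput payload rather than being registered on the agent. Below is a hedged sketch of a single entry in its tools field; the confirm_order tool is a hypothetical example, and the shape follows the AG-UI Tool type (name, description, JSON Schema parameters), so verify it against your protocol version:

# A hypothetical frontend tool as it would appear in RunAgentInput.tools.
confirm_order_tool = {
    "name": "confirm_order",
    "description": "Ask the user to confirm an order in the UI",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string", "description": "Order to confirm"},
        },
        "required": ["order_id"],
    },
}

When the agent calls such a tool, the call is streamed back as tool events for the frontend to intercept and render, rather than being executed in Python on the backend.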