RealtimeAgent with local websocket connection#
AG2 supports RealtimeAgent, a powerful agent type that connects seamlessly to OpenAI’s Realtime API. In this example we will start a local RealtimeAgent and register a mock get_weather function that the agent will be able to call.
Note: This notebook cannot be run in Google Colab because it depends on local JavaScript files and HTML templates. To execute the notebook successfully, run it locally within the cloned project so that the notebooks/agentchat_realtime_websocket/static and notebooks/agentchat_realtime_websocket/templates folders are available at the correct relative paths.
Install AG2 and dependencies#
To use the realtime agent, we will connect it to a local websocket through the browser. We have prepared a WebSocketAudioAdapter to enable you to connect your realtime agent to a websocket service.
To be able to run this notebook, you will need to install ag2, fastapi and uvicorn.
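A minimal install cell covering the packages listed above might look like the following (the exact extras you need may vary with your setup):

!pip install ag2 fastapi uvicorn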
Import the dependencies#
After installing the requirements, we can import the dependencies needed for the example.
import os
from logging import getLogger
from pathlib import Path
from typing import Annotated
import uvicorn
from fastapi import FastAPI, Request, WebSocket
from fastapi.responses import HTMLResponse, JSONResponse
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
import autogen
from autogen.agentchat.realtime.experimental import AudioObserver, RealtimeAgent, WebSocketAudioAdapter
Prepare your llm_config and realtime_llm_config#
The LLMConfig.from_json method loads a list of configurations from an environment variable or a JSON file.
realtime_llm_config = autogen.LLMConfig.from_json(
    path="OAI_CONFIG_LIST",
    temperature=0.8,
    timeout=600,
).where(tags=["gpt-4o-mini-realtime"])

assert realtime_llm_config.config_list, (
    "No LLM found for the given model, please add the following lines to the OAI_CONFIG_LIST file:"
    """
    {
        "model": "gpt-4o-mini-realtime-preview",
        "api_key": "sk-***********************...*",
        "tags": ["gpt-4o-mini-realtime", "realtime"]
    }"""
)
Before you start the server#
To run the uvicorn server inside the notebook, you will need to use nest_asyncio. Jupyter already runs an asyncio event loop, and uvicorn needs to start its own; nest_asyncio patches asyncio so that uvicorn can run inside Jupyter.
Please install nest_asyncio by running the following cell.
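A minimal sketch of such a cell, assuming nest_asyncio is not already installed in your environment:

!pip install nest_asyncio

import nest_asyncio

# Patch the already-running Jupyter event loop so uvicorn can start inside it
nest_asyncio.apply()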
Implementing and Running a Basic App#
Let us set up and execute a FastAPI application that integrates real-time agent interactions.
Define basic FastAPI app#
- Define Port: Sets the PORT variable to 5050, which will be used for the server.
- Initialize FastAPI App: Creates a FastAPI instance named app, which serves as the main application.
- Define Root Endpoint: Adds a GET endpoint at the root URL (/). When accessed, it returns a JSON response with the message "WebSocket Audio Stream Server is running!".
This sets up a basic FastAPI server and provides a simple health-check endpoint to confirm that the server is operational.
from contextlib import asynccontextmanager

PORT = 5050

@asynccontextmanager
async def lifespan(*args, **kwargs):
    print("Application started. Please visit http://localhost:5050/start-chat to start voice chat.")
    yield

app = FastAPI(lifespan=lifespan)

@app.get("/", response_class=JSONResponse)
async def index_page():
    return {"message": "WebSocket Audio Stream Server is running!"}
Prepare start-chat endpoint#
- Set the Working Directory: Define notebook_path as the current working directory using os.getcwd().
- Mount Static Files: Mount the static directory (inside agentchat_realtime_websocket) to serve JavaScript, CSS, and other static assets under the /static path.
- Set Up Templates: Configure Jinja2 to render HTML templates from the templates directory within agentchat_realtime_websocket.
- Create the /start-chat/ Endpoint: Define a GET route that serves the chat.html template. Pass the client’s request and the port variable to the template for rendering a dynamic page for the audio chat interface.
This code sets up static file handling, template rendering, and a dedicated endpoint to deliver the chat interface.
notebook_path = os.getcwd()

app.mount(
    "/static", StaticFiles(directory=Path(notebook_path) / "agentchat_realtime_websocket" / "static"), name="static"
)

# Templates for HTML responses
templates = Jinja2Templates(directory=Path(notebook_path) / "agentchat_realtime_websocket" / "templates")

@app.get("/start-chat/", response_class=HTMLResponse)
async def start_chat(request: Request):
    """Endpoint to return the HTML page for audio chat."""
    port = PORT  # Port passed to the template so the page can open the websocket
    return templates.TemplateResponse("chat.html", {"request": request, "port": port})
Prepare endpoint for conversation audio stream#
- Set Up the WebSocket Endpoint: Define the /media-stream WebSocket route to handle audio streaming.
- Accept WebSocket Connections: Accept incoming WebSocket connections from clients.
- Initialize Logger: Retrieve a logger instance for logging purposes.
- Configure Audio Adapter: Instantiate a WebSocketAudioAdapter, connecting the WebSocket to handle audio streaming with logging.
- Set Up Realtime Agent: Create a RealtimeAgent with the following:
  - Name: Weather_Bot.
  - System Message: Introduces the AI assistant and its capabilities.
  - LLM Configuration: Uses realtime_llm_config for language model settings.
  - Audio Adapter: Leverages the previously created audio_adapter.
  - Logger: Logs activities for debugging and monitoring.
  - Observers: Attaches an AudioObserver for audio logging.
- Register a Realtime Function: Add a get_weather function to the agent, allowing it to respond with basic weather information based on the provided location.
- Run the Agent: Start the realtime_agent to handle interactions in real time.
@app.websocket("/media-stream")
async def handle_media_stream(websocket: WebSocket):
    """Handle WebSocket connections providing audio stream and OpenAI."""
    await websocket.accept()

    logger = getLogger("uvicorn.error")

    audio_adapter = WebSocketAudioAdapter(websocket, logger=logger)
    realtime_agent = RealtimeAgent(
        name="Weather_Bot",
        system_message="You are an AI voice assistant powered by AG2 and the OpenAI Realtime API. You can answer questions about weather. Start by saying 'How can I help you'?",
        llm_config=realtime_llm_config,
        audio_adapter=audio_adapter,
        logger=logger,
        observers=[AudioObserver(logger=logger)],
    )

    @realtime_agent.register_realtime_function(name="get_weather", description="Get the current weather")
    def get_weather(location: Annotated[str, "city"]) -> str:
        return "The weather is cloudy." if location == "Seattle" else "The weather is sunny."

    await realtime_agent.run()
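Finally, the app needs to be served. With nest_asyncio applied, a minimal way to start the server on the port defined above is the sketch below; once it is running, open http://localhost:5050/start-chat in your browser to start the voice chat.

# Start the FastAPI app inside the notebook's event loop
uvicorn.run(app, host="0.0.0.0", port=PORT)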