Agent Observability with OpenLIT#

OpenLIT is an open-source product that helps developers build and manage AI agents in production, and helps them improve agent accuracy. As a self-hosted solution, it enables developers to experiment with LLMs, manage and version prompts, securely manage API keys, and add safeguards against prompt injection and jailbreak attempts. It also includes built-in OpenTelemetry-native observability and evaluation for the complete GenAI stack (LLMs, agents, vector databases, and GPUs).

For more info, check out the OpenLIT Repo.

Adding OpenLIT to an existing AG2 service#

To get started, you’ll need to install the OpenLIT library.

OpenLIT uses OpenTelemetry to automatically instrument the AI agent app when it’s initialized, meaning your agent observability data, such as execution traces and metrics, will be tracked in just one line of code.

! pip install -U ag2[openai] openlit

import openlit

from autogen import AssistantAgent, LLMConfig, UserProxyAgent

openlit.init()

OpenLIT will now start automatically tracking:

  • LLM prompts and completions
  • Token usage and costs
  • Agent names and actions
  • Tool usage
  • Errors

Let’s look at a simple chat example#

import os

import openlit

from autogen import AssistantAgent, LLMConfig, UserProxyAgent

# Initialize OpenLIT instrumentation before creating any agents
openlit.init()

llm_config = LLMConfig(model="gpt-4", api_key=os.environ["OPENAI_API_KEY"])
assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent("user_proxy", code_execution_config=False)

# Start the chat
user_proxy.initiate_chat(
    assistant,
    message="Tell me a joke about NVDA and TESLA stock prices.",
)

Sending Traces and Metrics to OpenLIT#

By default, OpenLIT generates OpenTelemetry traces and metrics that are logged to your console. To set up a detailed monitoring environment, this guide outlines how to deploy OpenLIT and direct all traces and metrics there. You also have the flexibility to send the telemetry data to any OpenTelemetry-compatible endpoint, such as Grafana, Jaeger, or DataDog.

Deploy OpenLIT Stack#

  1. Clone the OpenLIT Repository

    Open your terminal or command line and execute:

    git clone git@github.com:openlit/openlit.git
    
  2. Host it Yourself with Docker

    From the root of the cloned OpenLIT repository, deploy and start OpenLIT using the command:

    docker compose up -d
    

For instructions on installing in Kubernetes using Helm, refer to the Kubernetes Helm installation guide.

Configure the telemetry data destination as follows:

Purpose                              Parameter/Environment Variable                 For Sending to OpenLIT
Send data to an HTTP OTLP endpoint   otlp_endpoint or OTEL_EXPORTER_OTLP_ENDPOINT   "http://127.0.0.1:4318"
Authenticate telemetry backends      otlp_headers or OTEL_EXPORTER_OTLP_HEADERS     Not required by default

💡 Info: If the otlp_endpoint or OTEL_EXPORTER_OTLP_ENDPOINT is not provided, the OpenLIT SDK will output traces directly to your console, which is recommended during the development phase.
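
For example, once the OpenLIT stack deployed above is running locally, you can point the SDK at its OTLP endpoint. The snippet below is a minimal sketch assuming the default local endpoint on port 4318 from the table above; the otlp_headers value is only illustrative and is not needed for a default local deployment.

import openlit

# Send traces and metrics to the locally deployed OpenLIT stack
openlit.init(otlp_endpoint="http://127.0.0.1:4318")

# Alternatively, set OTEL_EXPORTER_OTLP_ENDPOINT in the environment and
# call openlit.init() with no arguments.

# If your telemetry backend requires authentication, headers can be passed too
# (illustrative value; not required for a default OpenLIT deployment):
# openlit.init(
#     otlp_endpoint="http://127.0.0.1:4318",
#     otlp_headers="Authorization=Bearer <your-token>",
# )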

Visualize and Optimize!#

With the observability data now being collected and sent to OpenLIT, the next step is to visualize and analyze it to gain insights into your AI application’s performance and behavior, and to identify areas for improvement.

Just head over to OpenLIT at 127.0.0.1:3000 in your browser to start exploring. You can log in using the default credentials:

  • Email: user@openlit.io
  • Password: openlituser