Agent Observability with OpenLIT#
OpenLIT is an open-source product that helps developers build and manage AI agents in production, and improve their accuracy. As a self-hosted solution, it enables developers to experiment with LLMs, manage and version prompts, securely manage API keys, and safeguard against prompt injection and jailbreak attempts. It also includes built-in, OpenTelemetry-native observability and evaluation for the complete GenAI stack (LLMs, agents, vector databases, and GPUs).
For more info, check out the OpenLIT Repo
Adding OpenLIT to an existing AG2 service#
To get started, you'll need to install the OpenLIT library:
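OpenLIT is published on PyPI, so a standard pip install is enough:

```shell
pip install openlit
```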
OpenLIT uses OpenTelemetry to automatically instrument the AI agent app when it's initialized, meaning your agent observability data, such as execution traces and metrics, will be tracked in just one line of code.
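A minimal sketch of that one-line initialization, called once at application startup:

```python
import openlit

# Instruments the app via OpenTelemetry; every subsequent agent and
# LLM call is traced automatically.
openlit.init()
```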
OpenLIT will now start automatically tracking:
- LLM prompts and completions
- Token usage and costs
- Agent names and actions
- Tool usage
- Errors
Let's look at a simple chat example#
```python
import os

import openlit
from autogen import AssistantAgent, LLMConfig, UserProxyAgent

# Instrument the app before any agents are created
openlit.init()

llm_config = LLMConfig(model="gpt-4", api_key=os.environ["OPENAI_API_KEY"])

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent("user_proxy", code_execution_config=False)

# Start the chat
user_proxy.initiate_chat(
    assistant,
    message="Tell me a joke about NVDA and TESLA stock prices.",
)
```
Sending Traces and Metrics to OpenLIT#
By default, OpenLIT generates OpenTelemetry traces and metrics that are logged to your console. To set up a detailed monitoring environment, this guide outlines how to deploy OpenLIT and direct all traces and metrics there. You also have the flexibility to send the telemetry data to any OpenTelemetry-compatible endpoint, such as Grafana, Jaeger, or Datadog.
Deploy OpenLIT Stack#
- Clone the OpenLIT Repository

  Open your terminal or command line and execute the clone command shown after this list.

- Host it Yourself with Docker

  Deploy and start OpenLIT using the Docker Compose command shown after this list.

For instructions on installing in Kubernetes using Helm, refer to the Kubernetes Helm installation guide.
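A likely sequence for both steps, assuming the standard Docker Compose setup from the OpenLIT repository:

```shell
# 1. Clone the OpenLIT repository
git clone https://github.com/openlit/openlit.git
cd openlit

# 2. Deploy and start the OpenLIT stack with Docker Compose
docker compose up -d
```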
Configure the telemetry data destination as follows:

| Purpose | Parameter / Environment Variable | For Sending to OpenLIT |
| --- | --- | --- |
| Send data to an HTTP OTLP endpoint | `otlp_endpoint` or `OTEL_EXPORTER_OTLP_ENDPOINT` | `"http://127.0.0.1:4318"` |
| Authenticate telemetry backends | `otlp_headers` or `OTEL_EXPORTER_OTLP_HEADERS` | Not required by default |
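For example, pointing the SDK at a locally deployed OpenLIT instance:

```python
import openlit

# Send traces and metrics to the local OpenLIT deployment over HTTP OTLP.
# Alternatively, set the OTEL_EXPORTER_OTLP_ENDPOINT environment variable
# instead of passing the parameter; otlp_headers is only needed when the
# backend requires authentication.
openlit.init(otlp_endpoint="http://127.0.0.1:4318")
```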
💡 Info: If the `otlp_endpoint` or `OTEL_EXPORTER_OTLP_ENDPOINT` is not provided, the OpenLIT SDK will output traces directly to your console, which is recommended during the development phase.
Visualize and Optimize!#
With the observability data now being collected and sent to OpenLIT, the next step is to visualize and analyze it to gain insights into your AI application's performance and behavior, and to identify areas for improvement.
Just head over to OpenLIT at `127.0.0.1:3000` in your browser to start exploring. You can log in using the default credentials:

- Email: `user@openlit.io`
- Password: `openlituser`