Agent Monitoring and Debugging with AgentOps
AgentOps provides session replays, metrics, and monitoring for AI agents.
At a high level, AgentOps gives you the ability to monitor LLM calls, costs, latency, agent failures, multi-agent interactions, tool usage, session-wide statistics, and more. For more info, check out the AgentOps Repo.
| Feature | Description |
| --- | --- |
| Replay Analytics and Debugging | Step-by-step agent execution graphs |
| LLM Cost Management | Track spend with LLM foundation model providers |
| Agent Benchmarking | Test your agents against 1,000+ evals |
| Compliance and Security | Detect common prompt injection and data exfiltration exploits |
| Framework Integrations | Native integrations with CrewAI, AutoGen, & LangChain |
Installation
AgentOps works seamlessly with applications built using AutoGen.
- Install AgentOps
- Create an API Key: Create a user API key in the AgentOps dashboard (Create API Key)
- Configure Your Environment: Add your API key to your environment variables (see the sketch after this list)
- Initialize AgentOps
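A minimal sketch of the environment configuration, assuming the SDK is installed with `pip install agentops` and that it reads the key from an `AGENTOPS_API_KEY` environment variable:

```python
import os

# Typically you would export AGENTOPS_API_KEY in your shell or a .env file;
# setting it in-process here just keeps the sketch self-contained.
os.environ["AGENTOPS_API_KEY"] = "<YOUR_AGENTOPS_API_KEY>"
```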
To start tracking all available data on AutoGen runs, add two lines of code before setting up your AutoGen agents:
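A minimal sketch of those two lines, assuming the `agentops` Python SDK, where `agentops.init()` picks the API key up from the environment (it also accepts an explicit `api_key=` argument):

```python
import agentops

# Starts an AgentOps session; reads AGENTOPS_API_KEY from the environment
# (or pass api_key="..." explicitly).
agentops.init()
```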
After initializing AgentOps, AutoGen will automatically start tracking your agent runs.
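Putting it together, a hypothetical end-to-end sketch using the classic two-agent `pyautogen` pattern; the agent names, the `llm_config`, and the explicit `agentops.end_session` call are illustrative assumptions rather than required steps:

```python
import agentops
from autogen import AssistantAgent, UserProxyAgent

agentops.init()  # start tracking before any AutoGen agents are created

# Hypothetical model config; replace with your own provider settings.
llm_config = {"model": "gpt-4o-mini"}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
    code_execution_config=False,
)

# Every LLM call made during this chat shows up in the AgentOps dashboard.
user_proxy.initiate_chat(assistant, message="Say hello, then reply TERMINATE.")

# Older agentops releases close the session explicitly; newer ones may not need this.
agentops.end_session("Success")
```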
Features
- LLM Costs: Track spend with foundation model providers
- Replay Analytics: Watch step-by-step agent execution graphs
- Recursive Thought Detection: Identify when agents fall into infinite loops
- Custom Reporting: Create custom analytics on agent performance
- Analytics Dashboard: Monitor high-level statistics about agents in development and production
- Public Model Testing: Test your agents against benchmarks and leaderboards
- Custom Tests: Run your agents against domain specific tests
- Time Travel Debugging: Save snapshots of session states to rewind and replay agent runs from chosen checkpoints
- Compliance and Security: Create audit logs and detect potential threats such as profanity and PII leaks
- Prompt Injection Detection: Identify potential code injection and secret leaks