Agent Tracking with AgentOps
Use AgentOps to simplify the development process and monitor your agents in production.
AgentOps provides session replays, metrics, and monitoring for AI agents.
At a high level, AgentOps gives you the ability to monitor LLM calls, costs, latency, agent failures, multi-agent interactions, tool usage, session-wide statistics, and more. For more info, check out the AgentOps Repo.
Overview Dashboard
Session Replays
Adding AgentOps to an existing Autogen service
To get started, you'll need to install the AgentOps package and set an API key.
AgentOps automatically configures itself when it's initialized, meaning your agent run data will be tracked and logged to your AgentOps account right away.
Some extra dependencies are needed for this notebook, which can be installed via pip:
pip install pyautogen agentops
For more information, please refer to the installation guide.
Set an API key
By default, the AgentOps init() function will look for an environment variable named AGENTOPS_API_KEY. Alternatively, you can pass one in as an optional parameter.
Create an account and obtain an API key at AgentOps.ai.
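As a rough sketch of the environment-variable route (the key value below is a placeholder, not a real key), you could set AGENTOPS_API_KEY before calling init() with no arguments:

import os
import agentops

# Assumption: init() picks up AGENTOPS_API_KEY when no api_key argument is passed,
# as described above. Replace the placeholder with your own key from AgentOps.ai.
os.environ["AGENTOPS_API_KEY"] = "your-api-key"
agentops.init()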
import agentops
from autogen import ConversableAgent, UserProxyAgent, config_list_from_json
agentops.init(api_key="...")
🖇 AgentOps: Session Replay: https://app.agentops.ai/drilldown?session_id=8bfaeed1-fd51-4c68-b3ec-276b1a3ce8a4
UUID('8bfaeed1-fd51-4c68-b3ec-276b1a3ce8a4')
Autogen will now start automatically tracking:
- LLM prompts and completions
- Token usage and costs
- Agent names and actions
- Correspondence between agents
- Tool usage
- Errors
Simple Chat Example
import agentops
# When initializing AgentOps, you can pass in optional tags to help filter sessions
agentops.init(tags=["simple-autogen-example"])
# Create the agent that uses the LLM.
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
assistant = ConversableAgent("agent", llm_config={"config_list": config_list})
# Create the agent that represents the user in the conversation.
user_proxy = UserProxyAgent("user", code_execution_config=False)
# Let the assistant start the conversation. It will end when the user types "exit".
assistant.initiate_chat(user_proxy, message="How can I help you today?")
# Close your AgentOps session to indicate that it completed.
agentops.end_session("Success")
agent (to user):
How can I help you today?
--------------------------------------------------------------------------------
user (to agent):
2+2
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
agent (to user):
2 + 2 equals 4.
--------------------------------------------------------------------------------
🖇 AgentOps: This run's cost $0.000960
🖇 AgentOps: Session Replay: https://app.agentops.ai/drilldown?session_id=8bfaeed1-fd51-4c68-b3ec-276b1a3ce8a4
You can view data on this run at app.agentops.ai.
The dashboard will display LLM events for each message sent by each agent, including those made by the human user.
Tool Example
AgentOps also tracks when Autogen agents use tools. You can find more information on this example in tool-use.ipynb.
from typing import Annotated, Literal
from autogen import ConversableAgent, config_list_from_json, register_function
agentops.start_session(tags=["autogen-tool-example"])
Operator = Literal["+", "-", "*", "/"]
def calculator(a: int, b: int, operator: Annotated[Operator, "operator"]) -> int:
    if operator == "+":
        return a + b
    elif operator == "-":
        return a - b
    elif operator == "*":
        return a * b
    elif operator == "/":
        return int(a / b)
    else:
        raise ValueError("Invalid operator")
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
# Create the agent that uses the LLM.
assistant = ConversableAgent(
    name="Assistant",
    system_message="You are a helpful AI assistant. "
    "You can help with simple calculations. "
    "Return 'TERMINATE' when the task is done.",
    llm_config={"config_list": config_list},
)
# The user proxy agent is used for interacting with the assistant agent
# and executes tool calls.
user_proxy = ConversableAgent(
    name="User",
    llm_config=False,
    is_termination_msg=lambda msg: msg.get("content") is not None and "TERMINATE" in msg["content"],
    human_input_mode="NEVER",
)
assistant.register_for_llm(name="calculator", description="A simple calculator")(calculator)
user_proxy.register_for_execution(name="calculator")(calculator)
# Register the calculator function to the two agents.
register_function(
    calculator,
    caller=assistant,  # The assistant agent can suggest calls to the calculator.
    executor=user_proxy,  # The user proxy agent can execute the calculator calls.
    name="calculator",  # By default, the function name is used as the tool name.
    description="A simple calculator",  # A description of the tool.
)
# Let the user proxy start the conversation. It ends when the assistant replies with "TERMINATE".
user_proxy.initiate_chat(assistant, message="What is (1423 - 123) / 3 + (32 + 23) * 5?")
agentops.end_session("Success")
🖇 AgentOps: Session Replay: https://app.agentops.ai/drilldown?session_id=880c206b-751e-4c23-9313-8684537fc04d
User (to Assistant):
What is (1423 - 123) / 3 + (32 + 23) * 5?
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
Assistant (to User):
***** Suggested tool call (call_aINcGyo0Xkrh9g7buRuhyCz0): calculator *****
Arguments:
{
"a": 1423,
"b": 123,
"operator": "-"
}
***************************************************************************
--------------------------------------------------------------------------------
>>>>>>>> EXECUTING FUNCTION calculator...
User (to Assistant):
User (to Assistant):
***** Response from calling tool (call_aINcGyo0Xkrh9g7buRuhyCz0) *****
1300
**********************************************************************
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
Assistant (to User):
***** Suggested tool call (call_prJGf8V0QVT7cbD91e0Fcxpb): calculator *****
Arguments:
{
"a": 1300,
"b": 3,
"operator": "/"
}
***************************************************************************
--------------------------------------------------------------------------------
>>>>>>>> EXECUTING FUNCTION calculator...
User (to Assistant):
User (to Assistant):
***** Response from calling tool (call_prJGf8V0QVT7cbD91e0Fcxpb) *****
433
**********************************************************************
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
/Users/braelynboynton/Developer/agentops/autogen/autogen/agentchat/conversable_agent.py:2489: UserWarning: Function 'calculator' is being overridden.
warnings.warn(f"Function '{tool_sig['function']['name']}' is being overridden.", UserWarning)
/Users/braelynboynton/Developer/agentops/autogen/autogen/agentchat/conversable_agent.py:2408: UserWarning: Function 'calculator' is being overridden.
warnings.warn(f"Function '{name}' is being overridden.", UserWarning)
Assistant (to User):
***** Suggested tool call (call_CUIgHRsySLjayDKuUphI1TGm): calculator *****
Arguments:
{
"a": 32,
"b": 23,
"operator": "+"
}
***************************************************************************
--------------------------------------------------------------------------------
>>>>>>>> EXECUTING FUNCTION calculator...
User (to Assistant):
User (to Assistant):
***** Response from calling tool (call_CUIgHRsySLjayDKuUphI1TGm) *****
55
**********************************************************************
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
Assistant (to User):
***** Suggested tool call (call_L7pGtBLUf9V0MPL90BASyesr): calculator *****
Arguments:
{
"a": 55,
"b": 5,
"operator": "*"
}
***************************************************************************
--------------------------------------------------------------------------------
>>>>>>>> EXECUTING FUNCTION calculator...
User (to Assistant):
User (to Assistant):
***** Response from calling tool (call_L7pGtBLUf9V0MPL90BASyesr) *****
275
**********************************************************************
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
Assistant (to User):
***** Suggested tool call (call_Ygo6p4XfcxRjkYBflhG3UVv6): calculator *****
Arguments:
{
"a": 433,
"b": 275,
"operator": "+"
}
***************************************************************************
--------------------------------------------------------------------------------
>>>>>>>> EXECUTING FUNCTION calculator...
User (to Assistant):
User (to Assistant):
***** Response from calling tool (call_Ygo6p4XfcxRjkYBflhG3UVv6) *****
708
**********************************************************************
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
Assistant (to User):
The result of the calculation is 708.
--------------------------------------------------------------------------------
User (to Assistant):
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
Assistant (to User):
TERMINATE
--------------------------------------------------------------------------------
🖇 AgentOps: This run's cost $0.001800
🖇 AgentOps: Session Replay: https://app.agentops.ai/drilldown?session_id=880c206b-751e-4c23-9313-8684537fc04d
You can see your run in action at app.agentops.ai. In this example, the AgentOps dashboard will show:
- Agents talking to each other
- Each use of the calculator tool
- Each call to OpenAI for LLM use
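The examples above always end the session with "Success". As a hedged sketch (assuming the agentops version used here also accepts a failure end state such as "Fail", which this notebook does not demonstrate), you could mark runs that raise an exception so they are easy to spot on the dashboard:

import agentops
from autogen import ConversableAgent, UserProxyAgent, config_list_from_json

agentops.init(tags=["autogen-error-handling-example"])

config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")
assistant = ConversableAgent("agent", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent("user", code_execution_config=False)

try:
    assistant.initiate_chat(user_proxy, message="How can I help you today?")
    agentops.end_session("Success")
except Exception:
    # Assumption: "Fail" is accepted by end_session alongside "Success".
    agentops.end_session("Fail")
    raise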