Auto Generated Agent Chat: Function Inception
Function Inception: Enable AutoGen agents to update/remove functions during conversations.
AutoGen offers conversable agents powered by LLMs, tools, or humans, which can be used to perform tasks collectively via automated chat. The framework supports both tool use and human participation through multi-agent conversation.
In this notebook, we demonstrate how to give an `AssistantAgent` and a `UserProxyAgent` the ability to auto-extend the list of functions the model may call. Functions must be registered with the `UserProxyAgent`, which is responsible for executing any function calls made by the `AssistantAgent`. The assistant also needs to know the signatures of the functions that may be called. A special `define_function` function is registered, which registers a new function in the `UserProxyAgent` and updates the assistant's configuration.

In the example scenario, the user first asks to define a function that fetches a URL and prints the response body. The user then asks to print a response body, and the assistant suggests calling the newly defined function.
Requirements
AutoGen requires `Python>=3.9`.
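Before running the notebook, a minimal environment check and install might look like the following (the package name `pyautogen` is an assumption based on how AutoGen was distributed when this notebook was written; adjust it to your setup):

```shell
# Verify the Python>=3.9 requirement stated above.
python3 -c "import sys; assert sys.version_info >= (3, 9), 'Python>=3.9 required'"

# Install AutoGen (package name assumed to be `pyautogen`).
pip install pyautogen
```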
```python
import json

from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
from autogen.code_utils import execute_code

config_list = config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={
        # Function calling with GPT 3.5
        "model": ["gpt-3.5-turbo-16k-0613"],
    },
)
```
```python
llm_config = {
    "functions": [
        {
            "name": "define_function",
            "description": "Define a function to add to the context of the conversation. Necessary Python packages must be declared. Once defined, the assistant may decide to use this function, respond with a normal message.",
            "parameters": {
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string",
                        "description": "The name of the function to define.",
                    },
                    "description": {
                        "type": "string",
                        "description": "A short description of the function.",
                    },
                    "arguments": {
                        "type": "string",
                        "description": 'JSON schema of arguments encoded as a string. For example: { "url": { "type": "string", "description": "The URL", }}',
                    },
                    "packages": {
                        "type": "string",
                        "description": "A list of package names imported by the function, and that need to be installed with pip prior to invoking the function. This solves ModuleNotFoundError.",
                    },
                    "code": {
                        "type": "string",
                        "description": "The implementation in Python. Do not include the function declaration.",
                    },
                },
                "required": ["name", "description", "arguments", "packages", "code"],
            },
        },
    ],
    "config_list": config_list,
    "request_timeout": 120,
}
```
```python
def define_function(name, description, arguments, packages, code):
    json_args = json.loads(arguments)
    function_config = {
        "name": name,
        "description": description,
        "parameters": {"type": "object", "properties": json_args},
        # TODO Make all arguments required
        "required": ["url"],
    }
    llm_config["functions"] = llm_config["functions"] + [function_config]
    user_proxy.register_function(function_map={name: lambda **args: execute_func(name, packages, code, **args)})
    assistant.update_function_signature(function_config, is_remove=False)
    return f"A function has been added to the context of this conversation.\nDescription: {description}"
```python
def execute_func(name, packages, code, **args):
    pip_install = (
        f"""print("Installing package: {packages}")\nsubprocess.run(["pip", "-qq", "install", "{packages}"])"""
        if packages
        else ""
    )
    # Use a descriptive name rather than shadowing the built-in `str`.
    script = f"""
import subprocess
{pip_install}
print("Result of {name} function execution:")
{code}
args={args}
result={name}(**args)
if result is not None: print(result)
"""
    print(f"execute_code:\n{script}")
    result = execute_code(script)[1]
    print(f"Result: {result}")
    return result
```python
def _is_termination_msg(message):
    """Check if a message is a termination message."""
    if isinstance(message, dict):
        message = message.get("content")
        if message is None:
            return False
    return message.rstrip().endswith("TERMINATE")
```
```python
assistant = AssistantAgent(
    name="chatbot",
    system_message="""You are an assistant.
The user will ask a question.
You may use the provided functions before providing a final answer.
Only use the functions you were provided.
When the answer has been provided, reply TERMINATE.""",
    llm_config=llm_config,
)

user_proxy = UserProxyAgent(
    "user_proxy",
    code_execution_config=False,
    is_termination_msg=_is_termination_msg,
    default_auto_reply="Reply TERMINATE when the initial request has been fulfilled.",
    human_input_mode="NEVER",
)

user_proxy.register_function(function_map={"define_function": define_function})
```
```python
# user_proxy.initiate_chat(
#     assistant, message="What functions do you know about?")
user_proxy.initiate_chat(
    assistant,
    message="Define a function that gets a URL, then prints the response body.\nReply TERMINATE when the function is defined.",
)
# user_proxy.initiate_chat(
#     assistant, message="List the functions you know about.")
user_proxy.initiate_chat(
    assistant, message="Print the response body of https://echo.free.beeceptor.com/\nUse the functions you know about."
)
```
user_proxy (to chatbot):
Define a function that gets a URL, then prints the response body.
Reply TERMINATE when the function is defined.
--------------------------------------------------------------------------------
chatbot (to user_proxy):
***** Suggested function Call: define_function *****
Arguments:
{
"name": "get_url_response_body",
"description": "Gets a URL, then prints the response body",
"arguments": "{\"url\": {\"type\": \"string\", \"description\": \"The URL\"}}",
"packages": "requests",
"code": "import requests\n\n\ndef get_url_response_body(url):\n response = requests.get(url)\n print(response.text)\n"
}
****************************************************
--------------------------------------------------------------------------------
>>>>>>>> EXECUTING FUNCTION define_function...
user_proxy (to chatbot):
***** Response from calling function "define_function" *****
A function has been added to the context of this conversation.
Description: Gets a URL, then prints the response body
************************************************************
--------------------------------------------------------------------------------
chatbot (to user_proxy):
TERMINATE
--------------------------------------------------------------------------------
user_proxy (to chatbot):
Print the response body of https://echo.free.beeceptor.com/
Use the functions you know about.
--------------------------------------------------------------------------------
chatbot (to user_proxy):
***** Suggested function Call: get_url_response_body *****
Arguments:
{
"url": "https://echo.free.beeceptor.com/"
}
**********************************************************
--------------------------------------------------------------------------------
>>>>>>>> EXECUTING FUNCTION get_url_response_body...
execute_code:
import subprocess
print("Installing package: requests")
subprocess.run(["pip", "-qq", "install", "requests"])
print("Result of get_url_response_body function execution:")
import requests
def get_url_response_body(url):
response = requests.get(url)
print(response.text)
args={'url': 'https://echo.free.beeceptor.com/'}
result=get_url_response_body(**args)
if result is not None: print(result)
Result: Installing package: requests
Result of get_url_response_body function execution:
{
"method": "GET",
"path": "/",
"ip": "104.28.208.116",
"headers": {
"host": "echo.free.beeceptor.com",
"user-agent": "python-requests/2.31.0",
"accept": "*/*",
"accept-encoding": "gzip, deflate",
"x-forwarded-for": "104.28.208.116",
"x-forwarded-host": "echo.free.beeceptor.com",
"x-forwarded-proto": "https",
"x-real-ip": "104.28.208.116"
},
"parsedQueryParams": {}
}
user_proxy (to chatbot):
***** Response from calling function "get_url_response_body" *****
Installing package: requests
Result of get_url_response_body function execution:
{
"method": "GET",
"path": "/",
"ip": "104.28.208.116",
"headers": {
"host": "echo.free.beeceptor.com",
"user-agent": "python-requests/2.31.0",
"accept": "*/*",
"accept-encoding": "gzip, deflate",
"x-forwarded-for": "104.28.208.116",
"x-forwarded-host": "echo.free.beeceptor.com",
"x-forwarded-proto": "https",
"x-real-ip": "104.28.208.116"
},
"parsedQueryParams": {}
}
******************************************************************
--------------------------------------------------------------------------------
chatbot (to user_proxy):
The response body of https://echo.free.beeceptor.com/ is:
{
"method": "GET",
"path": "/",
"ip": "104.28.208.116",
"headers": {
"host": "echo.free.beeceptor.com",
"user-agent": "python-requests/2.31.0",
"accept": "*/*",
"accept-encoding": "gzip, deflate",
"x-forwarded-for": "104.28.208.116",
"x-forwarded-host": "echo.free.beeceptor.com",
"x-forwarded-proto": "https",
"x-real-ip": "104.28.208.116"
},
"parsedQueryParams": {}
}
TERMINATE
--------------------------------------------------------------------------------