Auto Generated Agent Chat: GPTAssistant with Code Interpreter
This Jupyter Notebook showcases the integration of the Code Interpreter tool, which executes Python code dynamically within applications.
The recently released OpenAI Assistants API allows users to build AI assistants within their own applications. The Assistants API currently supports three types of tools: Code Interpreter, Retrieval, and Function Calling. In this notebook, we demonstrate how to enable GPTAssistantAgent to use the Code Interpreter.
Requirements
AutoGen requires Python >= 3.9. To run this notebook example, please install:
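A minimal install sketch, assuming the pyautogen package name used by the original AutoGen notebooks (adjust the package name to your setup if needed):
# Install AutoGen (the pyautogen package name is an assumption; adjust to your environment).
%pip install pyautogen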
Set your API Endpoint
The config_list_from_json function loads a list of configurations from an environment variable or a JSON file.
import io
from IPython.display import display
from PIL import Image
import autogen
from autogen.agentchat import UserProxyAgent
from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent
config_list = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    file_location=".",
    filter_dict={
        "model": ["gpt-3.5-turbo", "gpt-35-turbo", "gpt-4", "gpt4", "gpt-4-32k", "gpt-4-turbo"],
    },
)
Learn more about configuring LLMs for agents here.
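For reference, here is a minimal sketch of what a configuration entry might look like; the model name and api_key below are placeholders, and the same structure can live in the OAI_CONFIG_LIST environment variable or JSON file.
# Hypothetical example of a configuration entry; the api_key value is a placeholder.
example_config_list = [
    {
        "model": "gpt-4-turbo",
        "api_key": "<your OpenAI API key>",
    },
]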
Perform Tasks Using Code Interpreter
We demonstrate task solving using GPTAssistantAgent with the Code Interpreter. Pass code_interpreter in the tools parameter to enable GPTAssistantAgent with the Code Interpreter. It will write code and automatically execute it in a sandbox. The agent will receive the results from the sandbox environment and act accordingly.
Example 1: Math Problem Solving
In this example, we demonstrate how to use the Code Interpreter to solve math problems.
# Initiate an agent equipped with code interpreter
gpt_assistant = GPTAssistantAgent(
    name="Coder Assistant",
    llm_config={
        "config_list": config_list,
    },
    assistant_config={
        "tools": [{"type": "code_interpreter"}],
    },
    instructions="You are an expert at solving math questions. Write code and run it to solve math problems. Reply TERMINATE when the task is solved and there is no problem.",
)
user_proxy = UserProxyAgent(
    name="user_proxy",
    is_termination_msg=lambda msg: "TERMINATE" in msg["content"],
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    },
    human_input_mode="NEVER",
)
# When all is set, initiate the chat.
user_proxy.initiate_chat(
    gpt_assistant, message="If $725x + 727y = 1500$ and $729x+ 731y = 1508$, what is the value of $x - y$ ?"
)
gpt_assistant.delete_assistant()
OpenAI client config of GPTAssistantAgent(Coder Assistant) - model: gpt-4-turbo
Matching assistant found, using the first matching assistant: {'id': 'asst_xBMxObFj0TzDex04NAKbBCmP', 'created_at': 1710321320, 'description': None, 'file_ids': [], 'instructions': 'You are an expert at solving math questions. Write code and run it to solve math problems. Reply TERMINATE when the task is solved and there is no problem.', 'metadata': {}, 'model': 'gpt-4-turbo', 'name': 'Coder Assistant', 'object': 'assistant', 'tools': [ToolCodeInterpreter(type='code_interpreter')]}
user_proxy (to Coder Assistant):
If $725x + 727y = 1500$ and $729x+ 731y = 1508$, what is the value of $x - y$ ?
--------------------------------------------------------------------------------
Coder Assistant (to user_proxy):
The value of \( x - y \) is \(-48\).
--------------------------------------------------------------------------------
user_proxy (to Coder Assistant):
--------------------------------------------------------------------------------
Coder Assistant (to user_proxy):
It seems you have no further inquiries. If you have more questions in the future, feel free to ask. Goodbye!
TERMINATE
--------------------------------------------------------------------------------
Permanently deleting assistant...
Example 2: Plotting with Code Interpreter
The Code Interpreter can output files, such as generated image diagrams. In this example, we demonstrate how to draw a figure and download it.
gpt_assistant = GPTAssistantAgent(
    name="Coder Assistant",
    llm_config={
        "config_list": config_list,
    },
    assistant_config={
        "tools": [{"type": "code_interpreter"}],
    },
    instructions="You are an expert at writing python code to solve problems. Reply TERMINATE when the task is solved and there is no problem.",
)
user_proxy.initiate_chat(
    gpt_assistant,
    message="Draw a line chart to show the population trend in US. Show how you solved it with code.",
    is_termination_msg=lambda msg: "TERMINATE" in msg["content"],
    human_input_mode="NEVER",
    clear_history=True,
    max_consecutive_auto_reply=1,
)
OpenAI client config of GPTAssistantAgent(Coder Assistant) - model: gpt-4-turbo
No matching assistant found, creating a new assistant
user_proxy (to Coder Assistant):
Draw a line chart to show the population trend in US. Show how you solved it with code.
--------------------------------------------------------------------------------
Coder Assistant (to user_proxy):
To draw a line chart showing the population trend in the US, we first need to obtain the data that contains the population figures over a range of years. As I don't have access to the internet in this environment, I cannot download the data directly. However, if you can provide the data, I can proceed to create a line chart for you.
For the purpose of this demonstration, let's assume we have some hypothetical US population data for a few years. I'll generate some sample data and create a line chart using the `matplotlib` library in Python.
Here's how we can do it:
Received file id=assistant-tvLtfOn6uAJ9kxmnxgK2OXID
Here is a line chart that illustrates the hypothetical US population trend from 2010 to 2020. The data used here is for demonstration purposes only. If you have actual population data, you can provide it, and I will update the chart accordingly.
TERMINATE
--------------------------------------------------------------------------------
ChatResult(chat_id=None, chat_history=[{'content': 'Draw a line chart to show the population trend in US. Show how you solved it with code.', 'role': 'assistant'}, {'content': "To draw a line chart showing the population trend in the US, we first need to obtain the data that contains the population figures over a range of years. As I don't have access to the internet in this environment, I cannot download the data directly. However, if you can provide the data, I can proceed to create a line chart for you.\n\nFor the purpose of this demonstration, let's assume we have some hypothetical US population data for a few years. I'll generate some sample data and create a line chart using the `matplotlib` library in Python.\n\nHere's how we can do it:\n\n\nReceived file id=assistant-tvLtfOn6uAJ9kxmnxgK2OXID\n\nHere is a line chart that illustrates the hypothetical US population trend from 2010 to 2020. The data used here is for demonstration purposes only. If you have actual population data, you can provide it, and I will update the chart accordingly.\n\nTERMINATE\n", 'role': 'user'}], summary="To draw a line chart showing the population trend in the US, we first need to obtain the data that contains the population figures over a range of years. As I don't have access to the internet in this environment, I cannot download the data directly. However, if you can provide the data, I can proceed to create a line chart for you.\n\nFor the purpose of this demonstration, let's assume we have some hypothetical US population data for a few years. I'll generate some sample data and create a line chart using the `matplotlib` library in Python.\n\nHere's how we can do it:\n\n\nReceived file id=assistant-tvLtfOn6uAJ9kxmnxgK2OXID\n\nHere is a line chart that illustrates the hypothetical US population trend from 2010 to 2020. The data used here is for demonstration purposes only. If you have actual population data, you can provide it, and I will update the chart accordingly.\n\n\n", cost=({'total_cost': 0}, {'total_cost': 0}), human_input=[])
Now we have the file id. We can download and display it.
api_response = gpt_assistant.openai_client.files.with_raw_response.retrieve_content(
    "assistant-tvLtfOn6uAJ9kxmnxgK2OXID"
)

if api_response.status_code == 200:
    content = api_response.content
    image_data_bytes = io.BytesIO(content)
    image = Image.open(image_data_bytes)
    display(image)
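If you prefer not to copy the file id by hand, it can also be pulled out of the assistant's last reply. A minimal sketch, assuming the "Received file id=..." pattern shown above appears in the message content:
import re

# Hypothetical helper logic: parse the file id from the assistant's last message,
# assuming the reply contains the "Received file id=..." pattern.
last_content = user_proxy.last_message(gpt_assistant)["content"]
match = re.search(r"file id=([\w-]+)", last_content)
if match:
    file_id = match.group(1)
    api_response = gpt_assistant.openai_client.files.with_raw_response.retrieve_content(file_id)
    if api_response.status_code == 200:
        image = Image.open(io.BytesIO(api_response.content))
        display(image)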
gpt_assistant.delete_assistant()
Permanently deleting assistant...