OpenAI Assistants in AutoGen
Two-agent chat with OpenAI assistants.

This notebook shows a very basic example of the `GPTAssistantAgent`, an experimental AutoGen agent class that leverages the OpenAI Assistant API for conversational capabilities, working with `UserProxyAgent` in AutoGen.
```python
import logging
import os

from autogen import AssistantAgent, UserProxyAgent, config_list_from_json
from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent

logger = logging.getLogger(__name__)
logger.setLevel(logging.WARNING)

assistant_id = os.environ.get("ASSISTANT_ID", None)
config_list = config_list_from_json("OAI_CONFIG_LIST")
llm_config = {"config_list": config_list}
assistant_config = {"assistant_id": assistant_id}

gpt_assistant = GPTAssistantAgent(
    name="assistant",
    instructions=AssistantAgent.DEFAULT_SYSTEM_MESSAGE,
    llm_config=llm_config,
    assistant_config=assistant_config,
)

user_proxy = UserProxyAgent(
    name="user_proxy",
    code_execution_config={
        "work_dir": "coding",
        "use_docker": False,  # Please set use_docker=True if Docker is available; it is safer than running the generated code directly.
    },
    is_termination_msg=lambda msg: "TERMINATE" in msg["content"],
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
)
```
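One caveat with the `is_termination_msg` lambda above: it raises a `TypeError` if a message ever arrives with `content` set to `None` (which can happen, for example, with pure tool-call messages). If you hit that, a slightly more defensive check is an option. This is an optional sketch, not part of the original notebook:

```python
def is_termination_msg(msg):
    # Treat the chat as finished when the message ends with TERMINATE;
    # fall back to "" when content is missing or None (e.g. tool-call messages).
    content = msg.get("content") or ""
    return content.rstrip().endswith("TERMINATE")
```

It can be passed to `UserProxyAgent` in place of the lambda.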
```python
user_proxy.initiate_chat(gpt_assistant, message="Print hello world")
```
OpenAI client config of GPTAssistantAgent(assistant) - model: gpt-4-turbo-preview
GPT Assistant only supports one OpenAI client. Using the first client in the list.
No matching assistant found, creating a new assistant
user_proxy (to assistant):
Print hello world
--------------------------------------------------------------------------------
assistant (to user_proxy):
```python
print("Hello, world!")
```
--------------------------------------------------------------------------------
>>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...
user_proxy (to assistant):
exitcode: 0 (execution succeeded)
Code output:
Hello, world!
--------------------------------------------------------------------------------
assistant (to user_proxy):
TERMINATE
--------------------------------------------------------------------------------
ChatResult(chat_id=None, chat_history=[{'content': 'Print hello world', 'role': 'assistant'}, {'content': '```python\nprint("Hello, world!")\n```\n', 'role': 'user'}, {'content': 'exitcode: 0 (execution succeeded)\nCode output: \nHello, world!\n', 'role': 'assistant'}, {'content': 'TERMINATE\n', 'role': 'user'}], summary='\n', cost=({'total_cost': 0}, {'total_cost': 0}), human_input=[])
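The returned `ChatResult` carries the full transcript in `chat_history`, plus `summary` and `cost`. As a quick sketch of how you might post-process it, here is a small helper that pulls the fenced Python blocks out of a history shaped like the one above (shown with a plain list of dicts so it runs standalone; the helper name is ours, not part of the AutoGen API):

```python
import re

# Entries mirror the ChatResult.chat_history printed above: role + content dicts.
chat_history = [
    {"content": "Print hello world", "role": "assistant"},
    {"content": '```python\nprint("Hello, world!")\n```\n', "role": "user"},
    {"content": "exitcode: 0 (execution succeeded)\nCode output: \nHello, world!\n", "role": "assistant"},
    {"content": "TERMINATE\n", "role": "user"},
]

def extract_code_blocks(history):
    # Collect the bodies of fenced ```python blocks from every message.
    blocks = []
    for msg in history:
        blocks += re.findall(r"```python\n(.*?)```", msg.get("content") or "", re.DOTALL)
    return blocks
```

On the transcript above, `extract_code_blocks(chat_history)` yields the single `print("Hello, world!")` snippet the assistant produced.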
```python
user_proxy.initiate_chat(gpt_assistant, message="Write py code to eval 2 + 2", clear_history=True)
```
user_proxy (to assistant):
Write py code to eval 2 + 2
--------------------------------------------------------------------------------
assistant (to user_proxy):
```python
# Calculate 2+2 and print the result
result = 2 + 2
print(result)
```
--------------------------------------------------------------------------------
>>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...
user_proxy (to assistant):
exitcode: 0 (execution succeeded)
Code output:
4
--------------------------------------------------------------------------------
assistant (to user_proxy):
The Python code successfully calculated \(2 + 2\) and printed the result, which is \(4\).
TERMINATE
--------------------------------------------------------------------------------
ChatResult(chat_id=None, chat_history=[{'content': 'Write py code to eval 2 + 2', 'role': 'assistant'}, {'content': '```python\n# Calculate 2+2 and print the result\nresult = 2 + 2\nprint(result)\n```\n', 'role': 'user'}, {'content': 'exitcode: 0 (execution succeeded)\nCode output: \n4\n', 'role': 'assistant'}, {'content': 'The Python code successfully calculated \\(2 + 2\\) and printed the result, which is \\(4\\).\n\nTERMINATE\n', 'role': 'user'}], summary='The Python code successfully calculated \\(2 + 2\\) and printed the result, which is \\(4\\).\n\n\n', cost=({'total_cost': 0}, {'total_cost': 0}), human_input=[])
```python
gpt_assistant.delete_assistant()
```
Permanently deleting assistant...