Examples by Notebook
RAG OpenAI Assistants in AutoGen
This notebook shows an example of the GPTAssistantAgent with retrieval augmented generation. GPTAssistantAgent is an experimental AutoGen agent class that leverages the OpenAI Assistant API for conversational capabilities, working with UserProxyAgent in AutoGen.
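The code below loads model configurations with `config_list_from_json("OAI_CONFIG_LIST")`, which reads a JSON list of model entries from an environment variable or file of that name. As a minimal sketch (not part of the original notebook; the model name is a placeholder and the API key is assumed to be set in `OPENAI_API_KEY`), an equivalent configuration can also be built directly in Python:

```python
import os

# Hypothetical inline alternative to an OAI_CONFIG_LIST file: a list of model configs.
# The model name below is a placeholder; the API key is read from the environment.
config_list = [
    {
        "model": "gpt-4-turbo-preview",
        "api_key": os.environ["OPENAI_API_KEY"],
    }
]
llm_config = {"config_list": config_list}
```

Either form yields the `config_list` that is passed to the agent through `llm_config`.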
import logging
import os

from autogen import UserProxyAgent, config_list_from_json
from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent

logger = logging.getLogger(__name__)
logger.setLevel(logging.WARNING)

# Reuse an existing assistant if ASSISTANT_ID is set; otherwise a new one is created.
assistant_id = os.environ.get("ASSISTANT_ID", None)

config_list = config_list_from_json("OAI_CONFIG_LIST")
llm_config = {
    "config_list": config_list,
}
assistant_config = {
    "assistant_id": assistant_id,
    "tools": [{"type": "retrieval"}],  # enable the Assistants API retrieval tool
    "file_ids": ["file-AcnBk5PCwAjJMCVO0zLSbzKP"],
    # Add the ID of an existing file in your OpenAI account;
    # in this case the implementation of conversable_agent.py was uploaded.
}

gpt_assistant = GPTAssistantAgent(
    name="assistant",
    instructions="You are adept at question answering",
    llm_config=llm_config,
    assistant_config=assistant_config,
)

user_proxy = UserProxyAgent(
    name="user_proxy",
    code_execution_config=False,
    # End the chat once a message containing "TERMINATE" is received.
    is_termination_msg=lambda msg: "TERMINATE" in msg["content"],
    human_input_mode="ALWAYS",
)

user_proxy.initiate_chat(gpt_assistant, message="Please explain the code in this file!")

# Clean up: permanently delete the assistant from the OpenAI account.
gpt_assistant.delete_assistant()
OpenAI client config of GPTAssistantAgent(assistant) - model: gpt-4-turbo-preview
GPT Assistant only supports one OpenAI client. Using the first client in the list.
Matching assistant found, using the first matching assistant: {'id': 'asst_sKUCUXkaXyTidtlyovbqppH3', 'created_at': 1710320924, 'description': None, 'file_ids': ['file-AcnBk5PCwAjJMCVO0zLSbzKP'], 'instructions': 'You are adept at question answering', 'metadata': {}, 'model': 'gpt-4-turbo-preview', 'name': 'assistant', 'object': 'assistant', 'tools': [ToolRetrieval(type='retrieval')]}
user_proxy (to assistant):
Please explain the code in this file!
--------------------------------------------------------------------------------
assistant (to user_proxy):
The code in the file appears to define tests for various functionalities related to a GPT-based assistant agent. Here is a summary of the main components and functionalities described in the visible portion of the code:
1. **Imports and Setup**: The script imports necessary libraries and modules, such as `pytest` for testing, `uuid` for generating unique identifiers, and `openai` along with custom modules like `autogen` and `OpenAIWrapper`. It sets up the system path to include specific directories for importing test configurations and dependencies.
2. **Conditional Test Skipping**: The script includes logic to conditionally skip tests based on the execution environment or missing dependencies. This is done through the `@pytest.mark.skipif` decorator, which conditionally skips test functions based on the `skip` flag, determined by whether certain imports are successful or other conditions are met.
3. **Test Configurations**: The code loads configurations for interacting with the OpenAI API and a hypothetical Azure API (as indicated by the placeholder `azure`), filtering these configurations by certain criteria such as API type and version.
4. **Test Cases**:
- **Configuration List Test (`test_config_list`)**: Ensures that the configurations for both OpenAI and the hypothetical Azure API are loaded correctly by asserting the presence of at least one configuration for each.
- **GPT Assistant Chat Test (`test_gpt_assistant_chat`)**: Tests the functionality of a GPT Assistant Agent by simulating a chat interaction. It uses a mock function to simulate an external API call and checks if the GPT Assistant properly processes the chat input, invokes the external function, and provides an expected response.
- **Assistant Instructions Test (`test_get_assistant_instructions` and related functions)**: These tests verify that instructions can be set and retrieved correctly for a GPTAssistantAgent. It covers creating an agent, setting instructions, and ensuring the set instructions can be retrieved as expected.
- **Instructions Overwrite Test (`test_gpt_assistant_instructions_overwrite`)**: Examines whether the instructions for a GPTAssistantAgent can be successfully overwritten by creating a new agent with the same ID but different instructions, with an explicit indication to overwrite the previous instructions.
Each test case aims to cover different aspects of the GPTAssistantAgent's functionality, such as configuration loading, interactive chat behavior, function registration and invocation, and instruction management. The mocks and assertions within the tests are designed to ensure that each component of the GPTAssistantAgent behaves as expected under controlled conditions.
--------------------------------------------------------------------------------
user_proxy (to assistant):
TERMINATE
--------------------------------------------------------------------------------
Permanently deleting assistant...
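The `file_ids` entry in `assistant_config` references a file that already exists in the OpenAI account (here, an upload of `conversable_agent.py`). A minimal sketch of how such an ID can be obtained, assuming the `openai` Python client and an `OPENAI_API_KEY` in the environment (the local file name is illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a local file for use by the Assistants API; the returned object's `id`
# (of the form "file-...") is what goes into assistant_config["file_ids"].
uploaded = client.files.create(
    file=open("conversable_agent.py", "rb"),
    purpose="assistants",
)
print(uploaded.id)
```

Pass the printed ID into `assistant_config["file_ids"]` before creating the `GPTAssistantAgent` so the retrieval tool can search the uploaded file.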