Agent Chat with Async Human Inputs

This notebook shows how to extend `UserProxyAgent` and `AssistantAgent` with asynchronous overrides of `a_get_human_input` and `a_receive`, so that human input can be gathered asynchronously while the agents chat.
%pip install "autogen" chromadb sentence_transformers tiktoken pypdf nest-asyncio
import asyncio
from typing import Any, Callable, Dict, List, Optional, Tuple, Union
import nest_asyncio
from autogen import AssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent
from autogen.agentchat.user_proxy_agent import UserProxyAgent
# Define an asynchronous function that simulates some asynchronous task (e.g., I/O operation)
async def my_asynchronous_function():
    print("Start asynchronous function")
    await asyncio.sleep(2)  # Simulate some asynchronous task (e.g., I/O operation)
    print("End asynchronous function")
    return "input"
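`my_asynchronous_function` only simulates a delayed reply and always returns the string "input". If you want real, non-blocking console input instead, one minimal sketch (not part of the original notebook, and assuming Python 3.9+ for `asyncio.to_thread`) is to run the blocking `input()` call in a worker thread:

async def get_console_input(prompt: str) -> str:
    # Illustrative alternative: run the blocking input() call in a worker thread
    # so the event loop stays responsive while waiting for the human to type.
    return await asyncio.to_thread(input, prompt)

You could then `return await get_console_input(prompt)` inside `a_get_human_input` below instead of calling `my_asynchronous_function()`.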
# Define a custom class `CustomisedUserProxyAgent` that extends `UserProxyAgent`
class CustomisedUserProxyAgent(UserProxyAgent):
    # Asynchronous function to get human input
    async def a_get_human_input(self, prompt: str) -> str:
        # Call the asynchronous function to get user input asynchronously
        user_input = await my_asynchronous_function()
        return user_input

    # Asynchronous function to receive a message
    async def a_receive(
        self,
        message: Union[Dict, str],
        sender,
        request_reply: Optional[bool] = None,
        silent: Optional[bool] = False,
    ):
        # Call the superclass method to handle message reception asynchronously
        await super().a_receive(message, sender, request_reply, silent)
class CustomisedAssistantAgent(AssistantAgent):
    # Asynchronous function to get human input
    async def a_get_human_input(self, prompt: str) -> str:
        # Call the asynchronous function to get user input asynchronously
        user_input = await my_asynchronous_function()
        return user_input

    # Asynchronous function to receive a message
    async def a_receive(
        self,
        message: Union[Dict, str],
        sender,
        request_reply: Optional[bool] = None,
        silent: Optional[bool] = False,
    ):
        # Call the superclass method to handle message reception asynchronously
        await super().a_receive(message, sender, request_reply, silent)
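Both customised agents above delegate to the simulated function. In an application where the human reply arrives from elsewhere (for example a web UI or a websocket handler), a queue-backed variant is a common pattern. The class below is a hypothetical sketch, not part of the original notebook:

class QueueBackedUserProxyAgent(UserProxyAgent):
    # Hypothetical example: human replies are pushed into an asyncio.Queue by some
    # other task (e.g. a web request handler) and awaited here without blocking.
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._input_queue: asyncio.Queue = asyncio.Queue()

    async def submit_human_input(self, text: str) -> None:
        # Call this from the producer side to deliver the human's reply.
        await self._input_queue.put(text)

    async def a_get_human_input(self, prompt: str) -> str:
        # Suspend until a reply is available instead of blocking the event loop.
        print(prompt)
        return await self._input_queue.get()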
def create_llm_config(model, temperature, seed):
    config_list = [
        {
            "model": model,  # e.g. "gpt-4"
            "api_key": "<api_key>",  # replace with your API key
        },
    ]

    llm_config = {
        "seed": int(seed),
        "config_list": config_list,
        "temperature": float(temperature),
    }

    return llm_config
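For reference, `create_llm_config("gpt-4", "0.4", "23")` produces a dictionary along the following lines (assuming you substitute a real API key):

llm_config = {
    "seed": 23,
    "config_list": [{"model": "gpt-4", "api_key": "<api_key>"}],
    "temperature": 0.4,
}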
nest_asyncio.apply()
async def main():
    boss = CustomisedUserProxyAgent(
        name="boss",
        human_input_mode="ALWAYS",
        max_consecutive_auto_reply=0,
        code_execution_config=False,
    )

    assistant = CustomisedAssistantAgent(
        name="assistant",
        system_message="The user will provide an agenda, and you will create questions for an interview meeting. Every time you generate questions, ask the user for feedback. If the user provides feedback, incorporate it and generate a new set of questions; if the user does not want any changes, terminate the process and exit.",
        llm_config=create_llm_config("gpt-4", "0.4", "23"),
    )

    await boss.a_initiate_chat(
        assistant,
        message="Resume Review, Technical Skills Assessment, Project Discussion, Job Role Expectations, Closing Remarks.",
        n_results=3,
    )
await main() # noqa: F704
boss (to assistant):
Resume Review, Technical Skills Assessment, Project Discussion, Job Role Expectations, Closing Remarks.
--------------------------------------------------------------------------------
assistant (to boss):
1. Can you walk me through your resume, highlighting the most significant parts?
2. What technical skills do you possess that make you a strong candidate for this position?
3. Can you discuss a project you've worked on that you believe showcases your abilities?
4. What are your expectations for this job role?
5. Do you have any questions or concerns before we conclude this interview?
Can you please provide your feedback on these questions?
--------------------------------------------------------------------------------
Provide feedback to assistant. Press enter to skip and use auto-reply, or type 'exit' to end the conversation: exit
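The top-level `await main()` (and the `nest_asyncio.apply()` call) work here because Jupyter already runs an event loop. In a standalone Python script you would drive the coroutine yourself, for example:

if __name__ == "__main__":
    # Outside a notebook there is no running event loop, so asyncio.run() suffices
    # and nest_asyncio is not needed.
    asyncio.run(main())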