SQL Agent for Spider text-to-SQL benchmark
This notebook demonstrates a basic SQL agent that translates natural language questions into SQL queries.
Environment
For this demo, we use a SQLite database environment based on Spider, a standard text-to-SQL benchmark. The environment provides a gym-like interface and can be used as follows.
# %pip install spider-env
import json
import os
from typing import Annotated, Dict
from spider_env import SpiderEnv
from autogen import ConversableAgent, UserProxyAgent, config_list_from_json
gym = SpiderEnv()
# Randomly select a question from Spider
observation, info = gym.reset()
Loading cached Spider dataset from /home/wangdazhang/.cache/spider
Schema file not found for /home/wangdazhang/.cache/spider/spider/database/flight_4
Schema file not found for /home/wangdazhang/.cache/spider/spider/database/small_bank_1
Schema file not found for /home/wangdazhang/.cache/spider/spider/database/icfp_1
Schema file not found for /home/wangdazhang/.cache/spider/spider/database/twitter_1
Schema file not found for /home/wangdazhang/.cache/spider/spider/database/epinions_1
Schema file not found for /home/wangdazhang/.cache/spider/spider/database/chinook_1
Schema file not found for /home/wangdazhang/.cache/spider/spider/database/company_1
# The natural language question
question = observation["instruction"]
print(question)
Find the famous titles of artists that do not have any volume.
# The schema of the corresponding database
schema = info["schema"]
print(schema)
CREATE TABLE "artist" (
"Artist_ID" int,
"Artist" text,
"Age" int,
"Famous_Title" text,
"Famous_Release_date" text,
PRIMARY KEY ("Artist_ID")
);
CREATE TABLE "volume" (
"Volume_ID" int,
"Volume_Issue" text,
"Issue_Date" text,
"Weeks_on_Top" real,
"Song" text,
"Artist_ID" int,
PRIMARY KEY ("Volume_ID"),
FOREIGN KEY ("Artist_ID") REFERENCES "artist"("Artist_ID")
);
CREATE TABLE "music_festival" (
"ID" int,
"Music_Festival" text,
"Date_of_ceremony" text,
"Category" text,
"Volume" int,
"Result" text,
PRIMARY KEY ("ID"),
FOREIGN KEY ("Volume") REFERENCES "volume"("Volume_ID")
);
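Besides reset(), the environment's step() method executes a candidate SQL query against the current database and returns feedback and a reward; this is what the agent's tool will call later. The snippet below is an illustrative sketch only (the query is made up and will typically earn a reward of 0):

# Manually execute one candidate query against the current episode (sketch)
observation, reward, _, _, info = gym.step('SELECT "Famous_Title" FROM "artist"')
print(observation["feedback"])  # execution result or error message
print(reward)  # 1 if the result matches the gold answer, 0 otherwise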
Agent Implementation
Using AutoGen, a SQL agent can be implemented with a ConversableAgent. The gym environment executes the generated SQL query, and the agent uses the execution results as feedback to improve its generation over multiple rounds of conversation.
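The implementation loads its model settings with config_list_from_json from a file or environment variable named OAI_CONFIG_LIST. A minimal such file is a JSON list of model entries; the values below are placeholders:

[
    {
        "model": "gpt-4",
        "api_key": "<your API key>"
    }
]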
# Do not use Docker for code execution in this demo
os.environ["AUTOGEN_USE_DOCKER"] = "False"
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")


def check_termination(msg: Dict):
    # End the conversation once a tool response reports a correct query
    if "tool_responses" not in msg:
        return False
    json_str = msg["tool_responses"][0]["content"]
    obj = json.loads(json_str)
    return "error" not in obj or (obj["error"] is None and obj["reward"] == 1)
sql_writer = ConversableAgent(
"sql_writer",
llm_config={"config_list": config_list},
system_message="You are good at writing SQL queries. Always respond with a function call to execute_sql().",
is_termination_msg=check_termination,
)
user_proxy = UserProxyAgent("user_proxy", human_input_mode="NEVER", max_consecutive_auto_reply=5)
@sql_writer.register_for_llm(description="Function for executing SQL query and returning a response")
@user_proxy.register_for_execution()
def execute_sql(
    reflection: Annotated[str, "Think about what to do"], sql: Annotated[str, "SQL query"]
) -> Annotated[Dict[str, str], "Dictionary with key 'result' on success, or keys 'error', 'wrong_result' and 'correct_result' on failure"]:
observation, reward, _, _, info = gym.step(sql)
error = observation["feedback"]["error"]
    if not error and reward == 0:
        # The query executed without an error, but its result does not match the gold answer
        error = "The SQL query returned an incorrect result"
if error:
return {
"error": error,
"wrong_result": observation["feedback"]["result"],
"correct_result": info["gold_result"],
}
else:
return {
"result": observation["feedback"]["result"],
}
The agent then takes the schema and the natural language question as input and generates the SQL query.
message = f"""Below is the schema for a SQL database:
{schema}
Generate a SQL query to answer the following question:
{question}
"""
user_proxy.initiate_chat(sql_writer, message=message)
user_proxy (to sql_writer):
Below is the schema for a SQL database:
CREATE TABLE "artist" (
"Artist_ID" int,
"Artist" text,
"Age" int,
"Famous_Title" text,
"Famous_Release_date" text,
PRIMARY KEY ("Artist_ID")
);
CREATE TABLE "volume" (
"Volume_ID" int,
"Volume_Issue" text,
"Issue_Date" text,
"Weeks_on_Top" real,
"Song" text,
"Artist_ID" int,
PRIMARY KEY ("Volume_ID"),
FOREIGN KEY ("Artist_ID") REFERENCES "artist"("Artist_ID")
);
CREATE TABLE "music_festival" (
"ID" int,
"Music_Festival" text,
"Date_of_ceremony" text,
"Category" text,
"Volume" int,
"Result" text,
PRIMARY KEY ("ID"),
FOREIGN KEY ("Volume") REFERENCES "volume"("Volume_ID")
);
Generate a SQL query to answer the following question:
Find the famous titles of artists that do not have any volume.
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
sql_writer (to user_proxy):
***** Suggested tool Call (call_eAu0OEzS8l3QvN3jQSn4w0hJ): execute_sql *****
Arguments:
{"reflection":"Generating SQL to find famous titles of artists without any volume","sql":"SELECT a.Artist, a.Famous_Title FROM artist a WHERE NOT EXISTS (SELECT 1 FROM volume v WHERE v.Artist_ID = a.Artist_ID)"}
****************************************************************************
--------------------------------------------------------------------------------
>>>>>>>> EXECUTING FUNCTION execute_sql...
user_proxy (to sql_writer):
user_proxy (to sql_writer):
***** Response from calling tool "call_eAu0OEzS8l3QvN3jQSn4w0hJ" *****
{"error": "The SQL query returned an incorrect result", "wrong_result": [["Ophiolatry", "Antievangelistical Process (re-release)"], ["Triumfall", "Antithesis of All Flesh"]], "correct_result": [["Antievangelistical Process (re-release)"], ["Antithesis of All Flesh"]]}
**********************************************************************
--------------------------------------------------------------------------------
>>>>>>>> USING AUTO REPLY...
sql_writer (to user_proxy):
***** Suggested tool Call (call_5LXoKqdZ17kPCOHJbbpSz2yk): execute_sql *****
Arguments:
{"reflection":"Adjusting SQL to only select famous titles and exclude artist names for artists without any volume.","sql":"SELECT a.Famous_Title FROM artist a WHERE NOT EXISTS (SELECT 1 FROM volume v WHERE v.Artist_ID = a.Artist_ID)"}
****************************************************************************
--------------------------------------------------------------------------------
>>>>>>>> EXECUTING FUNCTION execute_sql...
user_proxy (to sql_writer):
user_proxy (to sql_writer):
***** Response from calling tool "call_5LXoKqdZ17kPCOHJbbpSz2yk" *****
{"result": [["Antievangelistical Process (re-release)"], ["Antithesis of All Flesh"]]}
**********************************************************************
--------------------------------------------------------------------------------
>>>>>>>> NO HUMAN INPUT RECEIVED.
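The same environment and agents can be reused to estimate accuracy over several Spider questions. The loop below is a minimal sketch rather than a full benchmark harness: it assumes the ChatResult returned by initiate_chat keeps the serialized tool responses in chat_history (so check_termination can be reused to detect a solved episode), and the number of sampled questions is arbitrary.

# Minimal evaluation sketch: sample a few Spider questions and count how many the agent solves
n_episodes = 5  # arbitrary number of sampled questions
n_solved = 0
for _ in range(n_episodes):
    observation, info = gym.reset()  # sample a new question and its database
    schema = info["schema"]
    question = observation["instruction"]
    message = f"""Below is the schema for a SQL database:
{schema}
Generate a SQL query to answer the following question:
{question}
"""
    chat_result = user_proxy.initiate_chat(sql_writer, message=message)
    # Count the episode as solved if any recorded tool response passes the termination check
    if any(check_termination(m) for m in chat_result.chat_history):
        n_solved += 1
print(f"Solved {n_solved}/{n_episodes} sampled questions")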