QuickResearchTool

autogen.tools.experimental.QuickResearchTool #

QuickResearchTool(*, llm_config, tavily_api_key=None, num_results_per_query=3)

Bases: Tool

Performs parallel web research across multiple queries.

For each query, the tool:

1. Searches the web using Tavily to get top results
2. Crawls each result URL using crawl4ai
3. Summarizes page content via LLM (any provider supported by AG2)

This is a lightweight alternative to DeepResearchTool for quick fact-finding across multiple topics in parallel.

ATTRIBUTE DESCRIPTION
tavily_api_key

The Tavily API key for web search.

llm_config

LLM configuration for summarization.

Initialize the QuickResearchTool.

PARAMETER DESCRIPTION
llm_config

LLM configuration for summarization. Supports any AG2-compatible provider.

TYPE: LLMConfig

tavily_api_key

Tavily API key. Falls back to TAVILY_API_KEY env var.

TYPE: str | None DEFAULT: None

num_results_per_query

Number of search results to crawl per query. Defaults to 3.

TYPE: int DEFAULT: 3

RAISES DESCRIPTION
ValueError

If tavily_api_key is not provided or set in env vars.

Source code in autogen/tools/experimental/quick_research/quick_research.py
def __init__(
    self,
    *,
    llm_config: LLMConfig,
    tavily_api_key: str | None = None,
    num_results_per_query: int = 3,
) -> None:
    """Initialize the QuickResearchTool.

    Args:
        llm_config: LLM configuration for summarization. Supports any AG2-compatible provider.
        tavily_api_key: Tavily API key. Falls back to TAVILY_API_KEY env var.
        num_results_per_query: Number of search results to crawl per query. Defaults to 3.

    Raises:
        ValueError: If tavily_api_key is not provided or set in env vars.
    """
    self.tavily_api_key = tavily_api_key or os.getenv("TAVILY_API_KEY")
    self.llm_config = llm_config
    self.num_results_per_query = num_results_per_query

    # Extract config_list as list of dicts for OpenAIWrapper
    self._config_list = [c.model_dump() for c in llm_config.config_list]

    if self.tavily_api_key is None:
        raise ValueError("tavily_api_key must be provided either as an argument or via TAVILY_API_KEY env var")

    async def quick_research(
        queries: Annotated[list[str], "List of search queries to research (max 5)."],
        tavily_api_key: Annotated[str, Depends(on(self.tavily_api_key))],
        config_list: Annotated[list[dict[str, Any]], Depends(on(self._config_list))],
        num_results_per_query: Annotated[int, Depends(on(self.num_results_per_query))],
        chunk_prompt: Annotated[
            str, "Prompt for summarizing individual text chunks."
        ] = "Summarise the chunk, preserving all facts.",
        merger_prompt: Annotated[
            str, "Prompt for merging chunk summaries."
        ] = "Merge the partial summaries into one coherent overview.",
        full_prompt: Annotated[
            str, "Prompt for summarizing short texts in one pass."
        ] = "Provide a concise but complete summary.",
    ) -> str:
        """Research multiple queries in parallel and return summarized sources.

        For each query:
        1. Performs Tavily search to get top N results
        2. Crawls each result URL using crawl4ai
        3. Summarizes page content via LLM

        Args:
            queries: List of search queries (max 5).
            tavily_api_key: Tavily API key (injected dependency).
            config_list: LLM config list (injected dependency).
            num_results_per_query: Number of results per query (injected dependency).
            chunk_prompt: Prompt for summarizing individual text chunks.
            merger_prompt: Prompt for merging chunk summaries.
            full_prompt: Prompt for summarizing short texts in one pass.

        Returns:
            JSON string: list of objects with query and sources (title, url, summary).
        """
        if not queries:
            return "[]"

        queries = queries[:MAX_QUERIES]

        enc = tiktoken.get_encoding("cl100k_base")

        research_tasks = [
            _research_single_query(
                query,
                tavily_api_key=tavily_api_key,
                config_list=config_list,
                enc=enc,
                num_results=num_results_per_query,
                chunk_prompt=chunk_prompt,
                merger_prompt=merger_prompt,
                full_prompt=full_prompt,
            )
            for query in queries
        ]

        results = await asyncio.gather(*research_tasks, return_exceptions=True)

        output = []
        for result in results:
            if isinstance(result, Exception):
                continue
            output.append(result)

        return json.dumps(output)

    super().__init__(
        name="quick_research",
        description="Research multiple queries in parallel using web search, crawling, and LLM summarization.",
        func_or_tool=quick_research,
    )

name property #

name

description property #

description

func property #

func

tool_schema property #

tool_schema

Get the schema for the tool.

This is the preferred way of handling function calls with OpenAI and compatible frameworks.

function_schema property #

function_schema

Get the schema for the function.

This is the old way of handling function calls with OpenAI and compatible frameworks. It is provided for backward compatibility.

realtime_tool_schema property #

realtime_tool_schema

Get the schema for the tool.

This is the preferred way of handling function calls with OpenAI and compatible frameworks.

tavily_api_key instance-attribute #

tavily_api_key = tavily_api_key or getenv('TAVILY_API_KEY')

llm_config instance-attribute #

llm_config = llm_config

num_results_per_query instance-attribute #

num_results_per_query = num_results_per_query

register_for_llm #

register_for_llm(agent)

Registers the tool for use with a ConversableAgent's language model (LLM).

This method registers the tool so that it can be invoked by the agent during interactions with the language model.

PARAMETER DESCRIPTION
agent

The agent to which the tool will be registered.

TYPE: ConversableAgent

Source code in autogen/tools/tool.py
def register_for_llm(self, agent: "ConversableAgent") -> None:
    """Registers the tool for use with a ConversableAgent's language model (LLM).

    This method registers the tool so that it can be invoked by the agent during
    interactions with the language model.

    Args:
        agent (ConversableAgent): The agent to which the tool will be registered.
    """
    if self._func_schema:
        agent.update_tool_signature(self._func_schema, is_remove=False)
    else:
        agent.register_for_llm()(self)

register_for_execution #

register_for_execution(agent)

Registers the tool for direct execution by a ConversableAgent.

This method registers the tool so that it can be executed by the agent, typically outside of the context of an LLM interaction.

PARAMETER DESCRIPTION
agent

The agent to which the tool will be registered.

TYPE: ConversableAgent

Source code in autogen/tools/tool.py
def register_for_execution(self, agent: "ConversableAgent") -> None:
    """Registers the tool for direct execution by a ConversableAgent.

    This method registers the tool so that it can be executed by the agent,
    typically outside of the context of an LLM interaction.

    Args:
        agent (ConversableAgent): The agent to which the tool will be registered.
    """
    agent.register_for_execution()(self)

register_tool #

register_tool(agent)

Register a tool to be both proposed and executed by an agent.

Equivalent to calling both register_for_llm and register_for_execution with the same agent.

Note: This will not make the agent both recommend and execute the call in a single step. If the agent recommends the tool, it must be the next agent to speak in order to execute it.

PARAMETER DESCRIPTION
agent

The agent to which the tool will be registered.

TYPE: ConversableAgent

Source code in autogen/tools/tool.py
def register_tool(self, agent: "ConversableAgent") -> None:
    """Register a tool to be both proposed and executed by an agent.

    Equivalent to calling both `register_for_llm` and `register_for_execution` with the same agent.

    Note: This will not make the agent recommend and execute the call in the one step. If the agent
    recommends the tool, it will need to be the next agent to speak in order to execute the tool.

    Args:
        agent (ConversableAgent): The agent to which the tool will be registered.
    """
    self.register_for_llm(agent)
    self.register_for_execution(agent)