
Introduction

The ReasoningAgent is designed to enhance language models’ reasoning capabilities through systematic exploration of thought processes. By implementing the Tree of Thoughts (ToT) framework, it enables LLMs like GPT-4 and Llama to break down complex problems into manageable steps and explore multiple solution paths simultaneously.

This notebook demonstrates the key features and capabilities of the ReasoningAgent, showing how it can effectively reason about problems even when using smaller models like gpt-4o-mini.

Search Strategies

The ReasoningAgent supports multiple search strategies for exploring the reasoning space:

1. Beam Search (Default)

  • Maintains the top k most promising paths at each step
  • Efficient for problems with clear evaluation criteria
  • Configurable beam width to balance exploration vs computation
  • Special case: DFS mode (beam size = 1) for linear reasoning similar to Chain-of-Thought

2. Monte Carlo Tree Search (MCTS)

  • Balances exploration and exploitation using the UCT formula (sketched below)
  • Particularly effective for problems with delayed rewards
  • Stochastic exploration helps avoid local optima
  • Configurable number of simulations and exploration constant

3. Language Agent Tree Search (LATS)

  • Provides immediate reflection feedback before the next simulation
  • Helps identify poor reasoning paths early for future improvement
  • Especially useful for complex multi-step reasoning
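
For reference, the MCTS selection step (also used by LATS) typically scores child nodes with the standard UCT rule; this is a sketch of the general form, and the exact weighting used inside ReasoningAgent may differ:

UCT(child) = Q(child) / N(child) + c * sqrt( ln N(parent) / N(child) )

where Q is a node's accumulated reward, N its visit count, and c the configurable exploration constant.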

Core Components

  1. Thinker Agent: Generates potential next steps in the reasoning process
  2. Grader Agent: Evaluates the quality of each reasoning step
  3. Tree Structure: Organizes thoughts hierarchically for systematic exploration
  4. Visualization Tools: Built-in Graphviz support for analyzing reasoning paths
  5. Logging Features: Log and save thinking trajectories to fine-tune the language model

Configuration Options

The agent is highly configurable through a single reason_config dictionary:

import os
import random

from autogen import AssistantAgent, ReasoningAgent, ThinkNode, UserProxyAgent

api_key = os.environ.get("OPENAI_API_KEY")

config_list = [{"model": "gpt-4o-mini", "api_key": api_key}]
verbose = False

question = "What is the expected maximum dice value if you can roll a 6-sided dice three times?"
random.seed(1)  # setup seed for reproducibility


def last_meaningful_msg(sender, recipient, summary_args):
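    """Summary method: return the last non-empty message content, with any TERMINATE marker removed."""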
    import warnings

    if sender == recipient:
        return "TERMINATE"

    summary = ""
    chat_messages = recipient.chat_messages[sender]

    for msg in reversed(chat_messages):
        try:
            content = msg["content"]
            if isinstance(content, str):
                summary = content.replace("TERMINATE", "")
            elif isinstance(content, list):
                # Remove the `TERMINATE` word in the content list.
                summary = "\n".join(
                    x["text"].replace("TERMINATE", "") for x in content if isinstance(x, dict) and "text" in x
                )
            if summary.strip():
                return summary
        except (IndexError, AttributeError) as e:
            warnings.warn(f"Cannot extract summary using last_msg: {e}. Using an empty str as summary.", UserWarning)
    return summary
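
As a quick sanity check on the dice question itself (it mirrors the ground-truth derivation used later in this notebook), the exact expected maximum can be computed directly with a short, self-contained snippet:

from fractions import Fraction

# E[max of three d6 rolls] = sum over m=1..6 of P(max >= m) = sum_m [1 - ((m-1)/6)^3]
expected_max = sum(1 - Fraction(m - 1, 6) ** 3 for m in range(1, 7))
print(expected_max, float(expected_max))  # 119/24 ≈ 4.9583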

Chain-of-Thought Reasoning with DFS

The simplest form of tree-based reasoning uses depth-first search (DFS) to explore a single path, similar to OpenAI’s O1-style reasoning. By setting method="dfs" in the reason_config, the agent will:

  1. Generate one reasoning step at a time
  2. Follow that single path until reaching a conclusion
  3. Never explore alternative branches

Note: The effectiveness depends on the underlying model’s training. Models not specifically trained for step-by-step reasoning may show limited improvement with this approach.

reason_agent = ReasoningAgent(
    name="reason_agent",
    system_message="answer math questions",
    llm_config={"config_list": config_list},
    verbose=verbose,
    reason_config={"method": "dfs", "max_depth": 3},  # Using DFS
    # NOTE: using beam search with beam_size=1 is equivalent for O1-style reasoning
    # reason_config={"method": "beam_search", "beam_size": 1, "max_depth": 3},
)
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
    max_consecutive_auto_reply=10,
)
ans = user_proxy.initiate_chat(reason_agent, message=question, summary_method=last_meaningful_msg)
print(ans.summary)

Beam Search in Tree of Thought

Beam Search is a powerful technique used in tree-based reasoning that allows the agent to explore multiple paths simultaneously. By setting beam_size greater than 1, the agent can maintain several candidate solutions at each step, evaluating them based on their potential to lead to the best final answer. This method is particularly effective when the solution space is large and complex, as it balances exploration and exploitation, ensuring that promising paths are prioritized while still considering alternative options.

In this approach, the agent generates multiple reasoning steps in parallel, allowing it to compare different trajectories and select the most promising ones for further exploration. This can lead to more robust and accurate conclusions, especially in scenarios where intermediate evaluations are critical to the final outcome.

reason_agent = ReasoningAgent(
    name="reason_agent",
    llm_config={"config_list": config_list},
    verbose=verbose,
    reason_config={"method": "beam_search", "beam_size": 3, "max_depth": 3},
)
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"use_docker": False},
    max_consecutive_auto_reply=10,
)
ans = user_proxy.initiate_chat(
    reason_agent,
    message="Design a mixed integer linear program for a coffee roasting supply chain",
    summary_method=last_meaningful_msg,
)
print(ans.summary)

MCTS

This section demonstrates how to use Monte Carlo Tree Search (MCTS) with ReasoningAgent for complex reasoning tasks. MCTS provides several advantages over beam search when:

  1. Ground truth evaluation is available
  2. LLM-based evaluation is expensive
  3. You want to generate diverse, high-quality training data

mcts_agent = ReasoningAgent(
    name="mcts_agent",
    system_message="answer math questions",
    llm_config={"config_list": config_list},
    verbose=True,
    # setup small depth and simulations for conciseness.
    reason_config={"method": "mcts", "nsim": 5, "max_depth": 4},
)


user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
    max_consecutive_auto_reply=10,
)
ans = user_proxy.initiate_chat(mcts_agent, message=question, summary_method=last_meaningful_msg)
print(ans.summary)

LATS

It is important to note that our reasoning agent operates purely on the reasoning process and lacks direct access to the environment, whereas the LATS approach relies on feedback from the environment. To address this, we use our existing grader agent to generate pseudo-rewards and provide feedback. The major difference between our LATS and MCTS implementations is that LATS incorporates the reflection into the prompt context before the next round of simulation. You can define an agent that uses the LATS approach as follows.

lats_agent = ReasoningAgent(
    name="mcts_agent",
    system_message="answer math questions",
    llm_config={"config_list": config_list},
    verbose=True,
    # setup small depth and simulations for conciseness.
    reason_config={"method": "lats", "nsim": 5, "max_depth": 4},
)


user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
    max_consecutive_auto_reply=10,
)
ans = user_proxy.initiate_chat(lats_agent, message=question, summary_method=last_meaningful_msg)
print(ans.summary)

Visualizing the Reasoning Tree

Installation of Graphviz

To visualize the reasoning tree, you need to install Graphviz. Please note that using pip install may not be sufficient for all operating systems. In some cases, you might need to manually download and install Graphviz.

pip install graphviz
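
If pip alone is not sufficient on your platform, install the system-level Graphviz binaries with your OS package manager, for example sudo apt-get install graphviz on Debian/Ubuntu or brew install graphviz on macOS (Homebrew); the exact command depends on your system.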

To save the visualization as “tree_of_thoughts.png”, run the following command:

from autogen.agentchat.contrib.reasoning_agent import visualize_tree  # import path assumed; adjust for your autogen version

visualize_tree(mcts_agent._root)

Utilizing ReasoningAgent for Nested Chat Interactions

In this example, we will explore how the ReasoningAgent can be employed to facilitate nested chat interactions, specifically for writing a blog post about NVIDIA. The agent will engage in a structured dialogue to enhance the quality of the content through iterative feedback and reasoning.

Task: Writing a Blog Post on NVIDIA

The goal is to generate a concise yet engaging blog post about NVIDIA. For simplicity, the process involves a single turn of conversation in which the agent reflects on the content, reasons about improvements, and incorporates user feedback. You can increase the max_turns parameter to run additional iterations.

WARNING: It may take a long time to run this example (up to 10 minutes).

writer = AssistantAgent(
    name="Writer",
    llm_config={"config_list": config_list},
    system_message="""
    You are a professional writer, known for your insightful and engaging articles.
    You transform complex concepts into compelling narratives.
    You should improve the quality of the content based on the feedback from the user.
    """,
)
reason_agent_for_writer = ReasoningAgent(
    name="reason_agent",
    llm_config={"config_list": config_list},
    verbose=verbose,
    reason_config={"method": "lats", "nsim": 2, "max_depth": 3},
)


def reflection_message(recipient, messages, sender, config):
    print("Reflecting...")
    return f"Reflect, Reason and provide critique on the following writing. \n\n {recipient.chat_messages_for_summary(sender)[-1]['content']}"


user_proxy.register_nested_chats(
    [
        {
            "recipient": reason_agent_for_writer,
            "message": reflection_message,
            "summary_method": "last_msg",
            "max_turns": 1,
        }
    ],
    trigger=writer,
)
task = """Write a concise but engaging blogpost about Nvidia."""
res = user_proxy.initiate_chat(recipient=writer, message=task, max_turns=2, summary_method="last_msg")
print(res.summary)

Use a Different Model for Grading

To use a different model for grading than the one used for reasoning, pass the grader_llm_config argument when initializing the ReasoningAgent. Grading of trajectories is then performed with that configuration, separate from the main llm_config.

grader_config_list = [{"model": "gpt-4o-mini", "api_key": api_key}]

grader_llm_config = {"config_list": grader_config_list}

writer = AssistantAgent(
    name="Writer",
    llm_config={"config_list": config_list},
    system_message="""
    You are a professional writer, known for your insightful and engaging articles.
    You transform complex concepts into compelling narratives.
    You should improve the quality of the content based on the feedback from the user.
    """,
)
reason_agent_for_writer = ReasoningAgent(
    name="reason_agent",
    llm_config={"config_list": config_list},
    grader_llm_config=grader_llm_config,
    verbose=verbose,
    reason_config={"method": "lats", "nsim": 2, "max_depth": 3},
)

Save Data for Future Training

In this section, we focus on saving the reasoning agent’s decision-making data to support future training. By capturing the structure and content of the reasoning tree, we create a valuable dataset that can be used to enhance the agent’s learning process. This data lets us analyze the agent’s reasoning patterns, improve its performance, and refine its ability to generate high-quality responses. The saved data can be used with various training methodologies, including supervised fine-tuning and reinforcement learning, ultimately contributing to a more robust and effective reasoning agent.

import json
data = reason_agent._root.to_dict()
with open("reasoning_tree.json", "w") as f:
    json.dump(data, f)

# recover the node
with open("reasoning_tree.json") as f:
    new_node = ThinkNode.from_dict(json.load(f))
from autogen.agentchat.contrib.reasoning_agent import extract_rlhf_preference_dataset, extract_sft_dataset

sft_data = extract_sft_dataset(reason_agent._root)
rlhf_data = extract_rlhf_preference_dataset(reason_agent._root)
print(rlhf_data)
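
If you plan to feed these datasets into a fine-tuning pipeline, a common follow-up step is to persist them as JSONL. The sketch below assumes extract_sft_dataset and extract_rlhf_preference_dataset return lists of JSON-serializable dictionaries; check the structure returned by your autogen version (json is imported earlier in this section):

# Persist one JSON object per line (JSONL), a format accepted by many fine-tuning tools.
with open("sft_data.jsonl", "w") as f:
    for record in sft_data:
        f.write(json.dumps(record) + "\n")

with open("rlhf_preference_data.jsonl", "w") as f:
    for record in rlhf_data:
        f.write(json.dumps(record) + "\n")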

Utilizing Ground Truth to Enhance Training Data Generation

Access to ground truth answers allows us to improve the evaluation of reasoning paths. In this section, we will explore:

  • The process of incorporating ground truth into prompts
  • The methods by which the agent leverages ground truth for evaluation

prompt = """What is the expected maximum dice value if you can roll a 6-sided dice three times?

GROUND_TRUTH:
We define X as the highest outcome among the three rolls.
The probability that X is at least m is 1 - \\left(\frac{m-1}{6}\right)^3 for each m from 1 to 6.
Summing these probabilities gives the expectation E(X) = \\sum_{m=1}^{6} [1 - (\frac{m-1}{6})^3].
Calculating this sum results in E(X) = 6 - \frac{225}{216} = \frac{119}{24}, which approximates to 4.9583.
Therefore, the expected maximum value when rolling a six-sided die three times is \frac{119}{24} or approximately 4.9583.
"""
random.seed(1)  # setup seed for reproducibility

mcts_agent2 = ReasoningAgent(
    name="mcts_agent",
    system_message="answer math questions",
    llm_config={"config_list": config_list},
    verbose=True,
    # setup small depth and simulations for conciseness.
    reason_config={"method": "mcts", "nsim": 5, "max_depth": 4},
)


user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
    max_consecutive_auto_reply=10,
)


ans = user_proxy.initiate_chat(mcts_agent2, message=prompt, summary_method=last_meaningful_msg)
print(ans.summary)

Forest of Thoughts

The concept of a “Forest of Thoughts” allows us to leverage bootstrapping techniques to execute the tree of thoughts multiple times, creating a diverse set of answers. After running these independent reasoning processes, we can aggregate them to form our final answer.

forest_agent = ReasoningAgent(
    name="mcts_agent",
    system_message="answer math questions",
    llm_config={"config_list": config_list},
    verbose=True,
    # setup small depth and simulations for conciseness.
    reason_config={"method": "dfs", "max_depth": 4, "forest_size": 3},
)


user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
    max_consecutive_auto_reply=10,
)
ans = user_proxy.initiate_chat(forest_agent, message=question, summary_method=last_meaningful_msg)
print(ans.summary)