
ReasoningAgent - Advanced LLM Reasoning with Multiple Search Strategies#


Introduction#

The ReasoningAgent is designed to enhance language models’ reasoning capabilities through systematic exploration of thought processes. By implementing the Tree of Thoughts (ToT) framework, it enables LLMs like GPT-4 and Llama to break down complex problems into manageable steps and explore multiple solution paths simultaneously.

This notebook demonstrates the key features and capabilities of the ReasoningAgent, showing how it can effectively reason about problems.

Search Strategies#

The ReasoningAgent supports multiple search strategies for exploring the reasoning space:

1. Beam Search (Default)#

  • Maintains the top k most promising paths at each step
  • Efficient for problems with clear evaluation criteria
  • Configurable beam width to balance exploration vs computation
  • Special case: DFS mode (beam size = 1) for linear reasoning similar to Chain-of-Thought

2. Monte Carlo Tree Search (MCTS)#

  • Balances exploration and exploitation using the UCT formula (a common form is sketched after this list)
  • Particularly effective for problems with delayed rewards
  • Stochastic exploration helps avoid local optima
  • Configurable number of simulations and exploration constant
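
For reference, the UCT score commonly takes the form below; the exact formulation and exploration constant used inside the ReasoningAgent may differ, so treat this as an illustrative sketch rather than the agent's internal implementation.

import math

def uct_score(total_reward: float, visits: int, parent_visits: int, c: float = 1.41) -> float:
    """Illustrative UCT score: average reward plus an exploration bonus."""
    if visits == 0:
        return float("inf")  # unvisited children are explored first
    exploitation = total_reward / visits  # average reward of this child so far
    exploration = c * math.sqrt(math.log(parent_visits) / visits)  # bonus for rarely visited children
    return exploitation + exploration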

3. Language Agent Tree Search (LATS)#

  • Provides immediate reflection feedback before the next simulation
  • Helps identify poor reasoning paths early for future improvement
  • Especially useful for complex multi-step reasoning

Core Components#

  1. Thinker Agent: Generates potential next steps in the reasoning process
  2. Grader Agent: Evaluates the quality of each reasoning step
  3. Interim Execution: Option to execute the selected steps, enabling stepwise reasoning
  4. Code Execution: A child user agent executes code automatically during reasoning
  5. Tree Structure: Organizes thoughts hierarchically for systematic exploration
  6. Visualization Tools: Built-in Graphviz support for analyzing reasoning paths
  7. Logging Features: Log and save thinking trajectories to finetune the language model
  8. Configuration Options: The agent is highly configurable through a single reason_config dictionary
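
For orientation, the reason_config keys exercised in this notebook are collected below. This is an illustrative combination, not an exhaustive API reference; each later section uses only the keys relevant to its method.

# Illustrative reason_config combining options demonstrated later in this notebook.
example_reason_config = {
    "method": "mcts",           # one of "dfs", "beam_search", "mcts", "lats"
    "max_depth": 3,             # maximum depth of the reasoning tree
    "beam_size": 3,             # beam_search: number of candidate paths kept per step
    "batch_grading": False,     # beam_search: grade each beam expansion as one batch
    "nsim": 3,                  # mcts/lats: number of simulations
    "forest_size": 3,           # number of independent trees ("Forest of Thoughts")
    "interim_execution": True,  # execute each selected step before proposing the next
}

With that overview in place, set up the imports and the LLM configuration: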
import json
import random

from autogen import AssistantAgent, LLMConfig, UserProxyAgent
from autogen.agents.experimental import ReasoningAgent, ThinkNode

# Put your key in the OPENAI_API_KEY environment variable
llm_config = LLMConfig(api_type="openai", model="gpt-4o")

question = "What is the expected maximum dice value if you can roll a 6-sided dice three times?"
random.seed(1)  # setup seed for reproducibility

Define the last_meaningful_msg summary function, which returns the last non-empty message (with any TERMINATE marker stripped) as the chat summary.

def last_meaningful_msg(sender, recipient, summary_args):
    import warnings

    if sender == recipient:
        return "TERMINATE"

    summary = ""
    chat_messages = recipient.chat_messages[sender]

    for msg in reversed(chat_messages):
        try:
            content = msg["content"]
            if isinstance(content, str):
                summary = content.replace("TERMINATE", "")
            elif isinstance(content, list):
                # Remove the `TERMINATE` word in the content list.
                summary = "\n".join(
                    x["text"].replace("TERMINATE", "") for x in content if isinstance(x, dict) and "text" in x
                )
            if summary.strip():
                return summary
        except (IndexError, AttributeError) as e:
            warnings.warn(f"Cannot extract summary using last_msg: {e}. Using an empty str as summary.", UserWarning)
    return summary

Initialize a user_proxy agent.

user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
    is_termination_msg=lambda x: True,  # terminate when reasoning agent responds
)

Chain-of-Thought Reasoning with DFS#

The simplest form of tree-based reasoning uses depth-first search (DFS) to explore a single path, similar to OpenAI’s O1 feature. By setting method="dfs" in the reason_config, the agent will:

  1. Generate one reasoning step at a time
  2. Follow that single path until reaching a conclusion
  3. Never explore alternative branches

Note: The effectiveness depends on the underlying model’s training. Models not specifically trained for step-by-step reasoning may show limited improvement with this approach.

Note 2: To enable the execution of each selected step before generating the next step suggestions, pass "interim_execution": True in reason_config.

with llm_config:
    reason_agent = ReasoningAgent(
        name="reason_agent",
        system_message="answer math questions",
        reason_config={"method": "dfs", "max_depth": 3},  # Using DFS
        silent=False,
        # NOTE: it is equivalent to use beam size 1 for O1-style reasoning
        # reason_config={"method": "beam_search", "beam_size": 1, "max_depth": 3},
    )
ans = user_proxy.initiate_chat(reason_agent, message=question, summary_method=last_meaningful_msg)
print(ans.summary)

Beam Search in Tree of Thought#

Beam Search is a powerful technique used in tree-based reasoning that allows the agent to explore multiple paths simultaneously. By setting beam_size greater than 1, the agent can maintain several candidate solutions at each step, evaluating them based on their potential to lead to the best final answer. This method is particularly effective when the solution space is large and complex, as it balances exploration and exploitation, ensuring that promising paths are prioritized while still considering alternative options.

In this approach, the agent generates multiple reasoning steps in parallel, allowing it to compare different trajectories and select the most promising ones for further exploration. This can lead to more robust and accurate conclusions, especially in scenarios where intermediate evaluations are critical to the final outcome.

with llm_config:
    reason_agent = ReasoningAgent(name="reason_agent", reason_config={"method": "beam_search", "beam_size": 3})
ans = user_proxy.initiate_chat(reason_agent, message=question, summary_method=last_meaningful_msg)
print(ans.summary)

We can see that in this case the agent suggests executing a script. Later, we will see how it can do this internally.

Beam Search with Batch Grading#

By default, node grading is performed one at a time. While this approach is often sufficient, certain applications benefit from batched grading on each beam expansion. In other words, instead of grading each newly expanded node individually, we group a beam iteration’s newly expanded nodes into a single batch for grading. This yields:

  1. Context-aware evaluation: Within a single beam iteration, the grader can compare and contrast multiple node expansions at once.
  2. Improved efficiency: Combining multiple evaluations into one request per beam iteration can reduce the total number of LLM calls.

To enable batch grading, set "batch_grading": True in the reason_config. By default, batch_grading is set to False, meaning individual node grading is performed without batching.

with llm_config:
    reason_agent = ReasoningAgent(
        name="reason_agent", reason_config={"method": "beam_search", "beam_size": 3, "batch_grading": True}
    )
ans = user_proxy.initiate_chat(reason_agent, message=question, summary_method=last_meaningful_msg)
print(ans.summary)

MCTS#

This section demonstrates how to use Monte Carlo Tree Search (MCTS) with ReasoningAgent for complex reasoning tasks. MCTS provides several advantages over beam search when:

  1. Ground truth evaluation is available
  2. LLM-based evaluation is expensive
  3. You want to generate diverse, high-quality training data

with llm_config:
    mcts_agent = ReasoningAgent(
        name="mcts_agent",
        system_message="answer math questions",
        # setup small depth and simulations for conciseness.
        reason_config={"method": "mcts", "nsim": 3, "max_depth": 4},
    )
ans = user_proxy.initiate_chat(mcts_agent, message=question, summary_method=last_meaningful_msg)
print(ans.summary)

LATS#

It is important to note that our reasoning agent operates purely on the reasoning “process” and lacks direct access to the environment. In contrast, the LATS approach relies on feedback from the environment. To address this, we utilize our existing grader agent to generate pseudo-rewards and provide feedback. The major difference between our LATS implementation and our MCTS implementation is that LATS incorporates the reflection into the prompt context before the next round of simulation. You can define the agent using the LATS approach as follows.

with llm_config:
    lats_agent = ReasoningAgent(
        name="mcts_agent",
        system_message="answer math questions",
        # setup small depth and simulations for conciseness.
        reason_config={"method": "lats", "nsim": 3, "max_depth": 4},
    )
lats_res = user_proxy.initiate_chat(recipient=lats_agent, message=question, summary_method=last_meaningful_msg)
print(lats_res.summary)

Interim Execution During Reasoning#

You can enable interim_execution by setting it to True in reason_config. This allows intermediate steps to be executed during the reasoning process, promoting more effective step-by-step thinking and enabling future steps to be informed by the outputs of earlier ones. By default, interim_execution is False, which means the selected steps won’t be executed during reasoning.

with llm_config:
    lats_agent = ReasoningAgent(
        name="mcts_agent",
        system_message="answer math questions",
        reason_config={"method": "lats", "nsim": 3, "max_depth": 4, "interim_execution": True},
    )

ans = user_proxy.initiate_chat(lats_agent, message=question, summary_method=last_meaningful_msg)
print(ans.summary)

Code Execution During Reasoning#

You can set the code_execution_config parameter on the ReasoningAgent to enable code execution during reasoning. By default, code_execution_config=False, which means no code is executed during reasoning. Note that to allow code execution, interim_execution must be set to True in reason_config.

with llm_config:
    lats_agent = ReasoningAgent(
        name="mcts_agent",
        system_message="answer math questions",
        reason_config={"method": "lats", "nsim": 3, "max_depth": 4, "interim_execution": True},
        code_execution_config={"use_docker": False, "work_dir": "mypy_cache"},
        # Enable Code execution. We skip docker here for simplicity
    )

ans = user_proxy.initiate_chat(
    lats_agent, message=question + " Run a python simulation to get the result", summary_method=last_meaningful_msg
)
print(ans.summary)

Visualizing the Reasoning Tree#

Installation of Graphviz#

To visualize the reasoning tree, you need to install Graphviz. Please note that pip install alone may not be sufficient on all operating systems; in some cases, you might need to download and install the Graphviz system package manually.

pip install graphviz

To save the visualization as “tree_of_thoughts.png”, run the following command:#

mcts_agent.visualize_tree()
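
If Graphviz is not available, you can still inspect the tree as plain text. The sketch below assumes the dictionary returned by the agent’s _root.to_dict() (used again later in this notebook) exposes "content" and "children" keys; adjust the key names if your version serializes nodes differently.

def print_tree(node_dict, indent=0):
    """Print a serialized ThinkNode as an indented outline (assumes 'content'/'children' keys)."""
    content = str(node_dict.get("content", ""))[:80]  # truncate long thoughts for readability
    print("  " * indent + "- " + content)
    for child in node_dict.get("children", []):
        print_tree(child, indent + 1)

print_tree(mcts_agent._root.to_dict())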

Utilizing ReasoningAgent for Nested Chat Interactions#

In this example, we will explore how the ReasoningAgent can be employed to facilitate nested chat interactions, specifically for writing a blog post about NVIDIA. The agent will engage in a structured dialogue to enhance the quality of the content through iterative feedback and reasoning.

Task: Writing a Blog Post on NVIDIA#

The goal is to generate a concise yet engaging blog post about NVIDIA. The process involves one turn (for simplicity) of conversation where the agent reflects on the content, reasons about improvements, and incorporates user feedback. You can update the max_turns parameter to execute multiple times.

with llm_config:
    writer = AssistantAgent(
        name="Writer",
        system_message="""You are a professional writer, known for your insightful and engaging articles.
You transform complex concepts into compelling narratives.
You should improve the quality of the content based on the feedback from the user.
    """,
    )
    reason_agent_for_writer = ReasoningAgent(
        name="reason_agent",
        reason_config={"method": "lats", "nsim": 2, "max_depth": 3},
    )

def reflection_message(recipient, messages, sender, config):
    print("Reflecting...")  # log that the nested reasoning round is starting
    return f"Reflect, Reason and provide critique on the following writing. \n\n {recipient.chat_messages_for_summary(sender)[-1]['content']}"


user_proxy.register_nested_chats(
    [
        {
            "recipient": reason_agent_for_writer,
            "message": reflection_message,
            "summary_method": "last_msg",
            "max_turns": 1,
        }
    ],
    trigger=writer,
)
task = """Write a concise but engaging blogpost about Nvidia."""
res = user_proxy.initiate_chat(recipient=writer, message=task, max_turns=2, summary_method="last_msg")
print(res.summary)

Use a different Model for Grading#

To use a different model for grading instead of gpt-4o, pass the grader_llm_config argument when initializing the ReasoningAgent. This ensures that the grading of trajectories is performed using the specified configuration from the config_list, separate from the main llm_config.

# Put your key in the OPENAI_API_KEY environment variable
grader_llm_config = LLMConfig(api_type="openai", model="gpt-4o-mini")

with llm_config:
    writer = AssistantAgent(
        name="Writer",
        system_message="""You are a professional writer, known for your insightful and engaging articles.
You transform complex concepts into compelling narratives.
You should improve the quality of the content based on the feedback from the user.
        """,
    )
    reason_agent_for_writer = ReasoningAgent(
        name="reason_agent",
        grader_llm_config=grader_llm_config,
        reason_config={"method": "lats", "nsim": 2, "max_depth": 3},
    )

Save data for future training#

In this section, we will focus on saving the reasoning agent’s decision-making data to help future training. By capturing the structure and content of the reasoning tree, we can create a valuable dataset that can be used to enhance the agent’s learning process. This data will allow us to analyze the agent’s reasoning patterns, improve its performance, and refine its ability to generate high-quality responses. The saved data can be utilized for various training methodologies, including supervised fine-tuning and reinforcement learning, ultimately contributing to the development of a more robust and effective reasoning agent.

data = reason_agent._root.to_dict()
with open("reasoning_tree.json", "w") as f:
    json.dump(data, f)

# recover the node
with open("reasoning_tree.json", "r") as f:
    new_node = ThinkNode.from_dict(json.load(f))
sft_data = reason_agent.extract_sft_dataset()
rlhf_data = reason_agent.extract_rlhf_preference_dataset()
print(rlhf_data)
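
If you plan to feed these datasets into a fine-tuning pipeline, a natural next step is to write them to JSON Lines files. This is a minimal sketch that assumes extract_sft_dataset() and extract_rlhf_preference_dataset() return lists of JSON-serializable dictionaries; the file names are arbitrary.

# Persist the extracted datasets as JSON Lines, one record per line.
def dump_jsonl(records, path):
    with open(path, "w") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")

dump_jsonl(sft_data, "sft_data.jsonl")
dump_jsonl(rlhf_data, "rlhf_preference_data.jsonl")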

Utilizing Ground Truth to Enhance Training Data Generation#

Access to ground truth answers allows us to improve the evaluation of reasoning paths. In this section, we will explore: - The process of incorporating ground truth into prompts - The methods by which the agent leverages ground truth for evaluation

prompt = """What is the expected maximum dice value if you can roll a 6-sided dice three times?

GROUND_TRUTH:
We define X as the highest outcome among the three rolls.
The probability that X is at least m is 1 - \\left(\frac{m-1}{6}\right)^3 for each m from 1 to 6.
Summing these probabilities gives the expectation E(X) = \\sum_{m=1}^{6} [1 - (\frac{m-1}{6})^3].
Calculating this sum results in E(X) = 6 - \frac{225}{216} = \frac{119}{24}, which approximates to 4.9583.
Therefore, the expected maximum value when rolling a six-sided die three times is \frac{119}{24} or approximately 4.9583.
"""
random.seed(1)  # setup seed for reproducibility

with llm_config:
    mcts_agent2 = ReasoningAgent(
        name="mcts_agent",
        system_message="answer math questions",
        # setup small depth and simulations for conciseness.
        reason_config={"method": "mcts", "nsim": 3, "max_depth": 4},
    )

ans = user_proxy.initiate_chat(mcts_agent2, message=prompt, summary_method=last_meaningful_msg)
print(ans.summary)
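
As an independent sanity check on the ground truth above (not part of the agent workflow), the expectation can be computed exactly in a few lines:

from fractions import Fraction

# E[max of three d6 rolls] = sum over m of P(max >= m) = sum_{m=1}^{6} (1 - ((m-1)/6)^3)
expected_max = sum(1 - Fraction(m - 1, 6) ** 3 for m in range(1, 7))
print(expected_max, float(expected_max))  # 119/24 ≈ 4.9583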

Forest of Thoughts#

The concept of a “Forest of Thoughts” allows us to leverage bootstrapping techniques to execute the tree of thoughts multiple times, creating a diverse set of answers. After running these independent reasoning processes, we can aggregate them to form our final answer.

with llm_config:
    forest_agent = ReasoningAgent(
        name="mcts_agent",
        system_message="answer math questions",
        # setup small depth and simulations for conciseness.
        reason_config={"method": "dfs", "max_depth": 4, "forest_size": 3},
    )
ans = user_proxy.initiate_chat(forest_agent, message=question, summary_method=last_meaningful_msg)
print(ans.summary)
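
Conceptually, the forest acts like bootstrap aggregation over independent reasoning runs. The agent handles this internally via forest_size; the sketch below is only a conceptual illustration with a hypothetical list of final answers, showing majority voting as one simple aggregation rule.

from collections import Counter

# Hypothetical final answers collected from three independent reasoning runs.
independent_answers = ["119/24", "119/24", "4.96"]

# Majority vote as a simple aggregation rule.
final_answer, votes = Counter(independent_answers).most_common(1)[0]
print(f"Aggregated answer: {final_answer} ({votes}/{len(independent_answers)} runs)")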

Scope#

The effectiveness of an LLM agent on a given task can be significantly enhanced through prompt optimization. To support this for the ReasoningAgent, a scope parameter can be specified during initialization. This parameter provides valuable context about the agent’s intended use, the reasoning process it should follow, and any constraints or pitfalls to avoid. This information is incorporated into the agent’s thought process to guide its behavior more effectively.

Note: The scope differs from the system_message in that it informs the agent’s reasoning throughout the entire thinking process, whereas the system_message is used solely when generating the final response.

scope = """You assess ethical risks of AI systems used in services.
Begin by identifying stakeholders and their interests.
Then, evaluate potential ethical risks (bias, transparency, impact).
Finally, suggest mitigation strategies and ethical safeguards"""

with llm_config:
    reason_agent = ReasoningAgent(
        name="reason_agent",
        reason_config={"method": "dfs", "max_depth": 3},  # Using DFS
        silent=False,
        scope=scope,
    )
question = "What are the ethical risks of using AI in healthcare?"
ans = user_proxy.initiate_chat(reason_agent, message=question, summary_method=last_meaningful_msg)
print(ans.summary)