autogen.ReasoningAgent
(In preview) Assistant agent, designed to solve a task with an LLM.
AssistantAgent is a subclass of ConversableAgent configured with a default system message. The default system message is designed for solving tasks with an LLM, including suggesting Python code blocks and debugging. human_input_mode defaults to "NEVER" and code_execution_config defaults to False, so this agent does not execute code by default and expects the user to execute it.
Initialize a ReasoningAgent that uses tree-of-thought reasoning.
Parameters:

Name | Description |
---|---|
name | Name of the agent. Type: str |
llm_config | Configuration for the language model. Type: dict |
grader_llm_config | Optional separate configuration for the grader model; if not provided, llm_config is used. Type: dict or None. Default: None |
max_depth | Maximum depth of the reasoning tree. Type: int. Default: 4 |
beam_size | DEPRECATED. Number of parallel reasoning paths to maintain. Type: int. Default: 3 |
answer_approach | DEPRECATED. Either "pool" or "best"; how to generate the final answer. Type: str. Default: "pool" |
verbose | Whether to show intermediate steps. Type: bool. Default: True |
reason_config | Configuration for the reasoning method; supported parameters and example configs are listed below. Type: dict. Default: {} |
**kwargs |  |

Supported reason_config parameters:

- method (str): The search strategy to use. Options:
  - "beam_search" (default): uses beam search with parallel paths
  - "mcts": uses Monte Carlo Tree Search for exploration
  - "lats": uses Language Agent Tree Search with per-step rewards
  - "dfs": uses depth-first search (equivalent to beam_search with beam_size=1)
- Common parameters:
  - max_depth (int): maximum depth of the reasoning tree (default: 3)
  - forest_size (int): number of independent trees to maintain (default: 1)
  - rating_scale (int): scale for grading responses, e.g. 1-10 (default: 10)
- Beam search specific:
  - beam_size (int): number of parallel paths to maintain (default: 3)
  - answer_approach (str): how to select the final answer, "pool" or "best" (default: "pool")
- MCTS/LATS specific:
  - nsim (int): number of simulations to run (default: 3)
  - exploration_constant (float): UCT exploration parameter (default: 1.41)

Example configs:

- {"method": "beam_search", "beam_size": 5, "max_depth": 4}
- {"method": "mcts", "nsim": 10, "exploration_constant": 2.0}
- {"method": "lats", "nsim": 5, "forest_size": 3}
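As a minimal construction sketch (the llm_config contents below, including the model name and the use of the OPENAI_API_KEY environment variable, are placeholder assumptions rather than values prescribed by this API), a ReasoningAgent running MCTS could be set up as follows:

```python
import os

from autogen import ReasoningAgent

# Placeholder LLM configuration; model name and key handling are assumptions.
llm_config = {"config_list": [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]}

# Tree-of-thought agent using Monte Carlo Tree Search with 10 simulations per reply.
reasoner = ReasoningAgent(
    name="reasoner",
    llm_config=llm_config,
    reason_config={"method": "mcts", "nsim": 10, "exploration_constant": 2.0},
    verbose=False,
)
```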
Instance Attributes
method
Instance Methods
generate_forest_response
Generate a response using tree-of-thought reasoning.
Parameters:

Name | Description |
---|---|
messages | Input messages to respond to. Type: list[dict] |
sender | Agent sending the messages. Type: autogen.Agent |
config | Optional configuration. Type: dict or None. Default: None |

Returns:

Type | Description |
---|---|
tuple[bool, str] | Success flag and generated response |
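A minimal usage sketch: the user proxy, message content, and arithmetic prompt below are illustrative assumptions, and in a normal chat flow this method is typically triggered through the agent's reply generation, but it can also be called directly:

```python
import os

from autogen import ReasoningAgent, UserProxyAgent

# Placeholder LLM configuration; adjust to your environment.
llm_config = {"config_list": [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]}

reasoner = ReasoningAgent(
    name="reasoner",
    llm_config=llm_config,
    reason_config={"method": "beam_search", "beam_size": 3},
)
user = UserProxyAgent(name="user", human_input_mode="NEVER", code_execution_config=False)

# Direct call: returns (success_flag, final_answer).
success, answer = reasoner.generate_forest_response(
    messages=[{"role": "user", "content": "What is 123 * 45? Think step by step."}],
    sender=user,
)
print(success, answer)
```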
rate_node
Rate the quality of a reasoning path using the grader agent.
Parameters:

Name | Description |
---|---|
node | Node containing the reasoning trajectory to evaluate. Type: autogen.ThinkNode |
ground_truth | Optional ground truth to provide to the grader. Type: str. Default: None |
is_outcome | Whether the rating is for an outcome (final answer) or a process (thinking trajectory). Type: bool. Default: False |

Returns:

Type | Description |
---|---|
float | Normalized score between 0 and 1 indicating trajectory quality |
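A sketch of rating a trajectory directly (the ThinkNode constructor arguments shown are assumptions inferred from the parameter types above, and the search strategies normally call rate_node internally):

```python
import os

from autogen import ReasoningAgent, ThinkNode

# Placeholder LLM configuration; adjust to your environment.
llm_config = {"config_list": [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]}
reasoner = ReasoningAgent(name="reasoner", llm_config=llm_config)

# Assumed ThinkNode(content, parent) usage: build a tiny two-node trajectory.
root = ThinkNode(content="Question: What is the capital of France?", parent=None)
step = ThinkNode(content="Recall geography: Paris is the capital of France.", parent=root)

# Rate the intermediate step (process), then the final answer (outcome) against ground truth.
process_score = reasoner.rate_node(step, is_outcome=False)
outcome_score = reasoner.rate_node(step, ground_truth="Paris", is_outcome=True)
print(process_score, outcome_score)  # normalized floats between 0 and 1
```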