web_surfer
autogen.agentchat.contrib.web_surfer.WebSurferAgent
WebSurferAgent
(In preview) An agent that acts as a basic web surfer, capable of searching the web and visiting web pages.
Parameters:

Name | Description |
---|---|
name | name of the agent. Type: str |
system_message | system message for the ChatCompletion inference. Type: str | list[str] | None Default: “You are a helpful AI assistant with access to a web browser (via the provided functions). In fact, YOU ARE THE ONLY MEMBER OF YOUR PARTY WITH ACCESS TO A WEB BROWSER, so please help out where you can by performing web searches, navigating pages, and reporting what you find. Today’s date is 2025-02-11” |
description | a short description of the agent. This description is used by other agents (e.g. the GroupChatManager) to decide when to call upon this agent. (Default: system_message) Type: str | None Default: 'A helpful assistant with access to a web browser. Ask them to perform web searches, open pages, navigate to Wikipedia, answer questions from pages, and/or generate summaries.' |
is_termination_msg | a function that takes a message in the form of a dictionary and returns a boolean value indicating if this received message is a termination message. The dict can contain the following keys: “content”, “role”, “name”, “function_call”. Type: Callable[[dict[str, Any]], bool] | None Default: None |
max_consecutive_auto_reply | the maximum number of consecutive auto replies. Defaults to None (no limit provided; the class attribute MAX_CONSECUTIVE_AUTO_REPLY will be used as the limit in this case). When set to 0, no auto reply will be generated. Type: int | None Default: None |
human_input_mode | whether to ask for human inputs every time a message is received. Possible values are "ALWAYS", "TERMINATE", "NEVER". (1) When "ALWAYS", the agent prompts for human input every time a message is received. Under this mode, the conversation stops when the human input is "exit", or when is_termination_msg is True and there is no human input. (2) When "TERMINATE", the agent prompts for human input only when a termination message is received or the number of auto replies reaches max_consecutive_auto_reply. (3) When "NEVER", the agent will never prompt for human input. Under this mode, the conversation stops when the number of auto replies reaches max_consecutive_auto_reply or when is_termination_msg is True. Type: Literal['ALWAYS', 'NEVER', 'TERMINATE'] Default: 'TERMINATE' |
function_map | Mapping function names (passed to openai) to callable functions, also used for tool calls. Type: dict[str, typing.Callable] | None Default: None |
code_execution_config | config for the code execution. To disable code execution, set to False. Otherwise, set to a dictionary with the following keys: - work_dir (Optional, str): The working directory for the code execution. If None, a default working directory will be used. The default working directory is the “extensions” directory under “path_to_autogen”. - use_docker (Optional, list, str or bool): The docker image to use for code execution. Default is True, which means the code will be executed in a docker container. A default list of images will be used. If a list or a str of image name(s) is provided, the code will be executed in a docker container with the first image successfully pulled. If False, the code will be executed in the current environment. We strongly recommend using docker for code execution. - timeout (Optional, int): The maximum execution time in seconds. - last_n_messages (Experimental, int or str): The number of messages to look back for code execution. If set to ‘auto’, it will scan backwards through all messages arriving since the agent last spoke, which is typically the last time execution was attempted. (Default: auto) Type: dict | Literal[False] Default: False |
llm_config | llm inference configuration. Please refer to OpenAIWrapper.create for available options. When using OpenAI or Azure OpenAI endpoints, please specify a non-empty 'model' either in llm_config or in each config of 'config_list' in llm_config. To disable llm-based auto reply, set to False. When set to None, will use self.DEFAULT_CONFIG, which defaults to False. Type: dict | Literal[False] | None Default: None |
summarizer_llm_config | llm inference configuration used when summarizing page content. Type: dict | Literal[False] | None Default: None |
default_auto_reply | default auto reply when no code execution or llm-based reply is generated. Type: dict | str | None Default: '' |
browser_config | configuration for the text browser used by the agent. Type: dict | None Default: None |
**kwargs |
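As a rough illustration of how these parameters fit together, the sketch below instantiates the agent with an LLM config, a summarizer config, and a browser_config dict. The model name, API keys, and the specific browser_config keys shown (viewport_size, bing_api_key) are placeholders and assumptions; adjust them to your environment and library version.

```python
from autogen.agentchat.contrib.web_surfer import WebSurferAgent

# Assumed LLM configuration; replace the model and api_key with your own.
llm_config = {
    "config_list": [{"model": "gpt-4", "api_key": "YOUR_OPENAI_API_KEY"}],
    "timeout": 120,
}

web_surfer = WebSurferAgent(
    name="web_surfer",
    llm_config=llm_config,
    # Used when summarizing long pages; may point to a cheaper model than llm_config.
    summarizer_llm_config=llm_config,
    # Passed through to the underlying text browser; these keys are assumptions
    # based on common usage and may differ across versions.
    browser_config={"viewport_size": 4096, "bing_api_key": "YOUR_BING_API_KEY"},
    human_input_mode="NEVER",
)
```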
Class Attributes
DEFAULT_DESCRIPTION
DEFAULT_PROMPT
Instance Methods
generate_surfer_reply
Generate a reply using autogen.oai.
Parameters:

Name | Description |
---|---|
messages | Type: list[dict[str, str]] | None Default: None |
sender | Type: autogen.agentchat.agent.Agent | None Default: None |
config | Type: autogen.oai.client.OpenAIWrapper | None Default: None |
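generate_surfer_reply is normally not called directly; it is registered as an auto-reply function and runs whenever the agent receives a message in a chat. A minimal sketch of driving it through a two-agent conversation (the model name, API key, and example task are placeholders) might look like:

```python
from autogen import UserProxyAgent
from autogen.agentchat.contrib.web_surfer import WebSurferAgent

# Hypothetical config; replace with your own model and credentials.
llm_config = {"config_list": [{"model": "gpt-4", "api_key": "YOUR_OPENAI_API_KEY"}]}

web_surfer = WebSurferAgent(name="web_surfer", llm_config=llm_config)

# The user proxy simply relays the task; generate_surfer_reply is triggered
# automatically when the surfer receives its messages.
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)

user_proxy.initiate_chat(
    web_surfer,
    message="Visit https://en.wikipedia.org/wiki/Microsoft and summarize the introduction.",
)
```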