SocietyOfMindAgent

SocietyOfMindAgent(
    name: str,
    chat_manager: autogen.agentchat.groupchat.GroupChatManager,
    response_preparer: str | Callable | None = None,
    is_termination_msg: Callable[[dict], bool] | None = None,
    max_consecutive_auto_reply: int | None = None,
    human_input_mode: Literal['ALWAYS', 'NEVER', 'TERMINATE'] = 'TERMINATE',
    function_map: dict[str, typing.Callable] | None = None,
    code_execution_config: dict | Literal[False] = False,
    llm_config: dict | Literal[False] | None = False,
    default_auto_reply: dict | str | None = '',
    **kwargs
)

(In preview) A single agent that runs a Group Chat as an inner monologue. At the end of the conversation (termination for any reason), the SocietyOfMindAgent applies the response_preparer method to the entire inner-monologue message history to extract a final answer for the reply.

Most arguments are inherited from ConversableAgent. The new arguments are:

chat_manager (GroupChatManager): the group chat manager that will be running the inner monologue.

response_preparer (Optional, Callable or str): if response_preparer is a callable, it should have the signature f(self: SocietyOfMindAgent, messages: List[Dict]), where self is this SocietyOfMindAgent and messages is the list of inner-monologue messages. The function should return a string representing the final response (extracted or prepared) from that history. If response_preparer is a string, it is used as the LLM prompt for extracting the final message from the inner-chat transcript. The default response_preparer depends on whether an llm_config is provided: if llm_config is False, the response_preparer deterministically returns the last message of the inner monologue; otherwise, a default LLM prompt is used.
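
For illustration, here is a minimal construction sketch. The inner agent names, model name, round limit, and the prepare_response helper are assumptions for illustration, not part of this API:

import autogen
from autogen.agentchat.contrib.society_of_mind_agent import SocietyOfMindAgent

llm_config = {"config_list": [{"model": "gpt-4", "api_key": "sk-..."}]}  # placeholder credentials

# Inner agents whose conversation forms the "inner monologue".
writer = autogen.AssistantAgent("inner_writer", llm_config=llm_config)
critic = autogen.AssistantAgent("inner_critic", llm_config=llm_config)

groupchat = autogen.GroupChat(
    agents=[writer, critic],
    messages=[],
    max_round=6,
    speaker_selection_method="round_robin",
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

# A callable response_preparer with the documented f(self, messages) signature:
# deterministically return the last non-empty inner-monologue message.
def prepare_response(self, messages):
    for message in reversed(messages):
        if message.get("content"):
            return message["content"]
    return ""

society = SocietyOfMindAgent(
    "society_of_mind",
    chat_manager=manager,
    response_preparer=prepare_response,  # or a prompt string for LLM-based extraction
    llm_config=llm_config,
)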

Parameters:
name: name of the agent.

Type: str
chat_manager: the group chat manager that will be running the inner monologue.

Type: autogen.agentchat.groupchat.GroupChatManager
response_preparer: a callable or LLM prompt string used to prepare the final response from the inner-monologue transcript (see above).

Type: str | Callable | None

Default: None
is_termination_msg: a function that takes a message in the form of a dictionary and returns a boolean value indicating whether this received message is a termination message.

The dict can contain the following keys: “content”, “role”, “name”, “function_call”.

Type: Callable[[dict], bool] | None

Default: None
max_consecutive_auto_reply: the maximum number of consecutive auto replies.

Defaults to None (no limit provided; the class attribute MAX_CONSECUTIVE_AUTO_REPLY will be used as the limit in this case).

When set to 0, no auto reply will be generated.

Type: int | None

Default: None
human_input_mode: whether to ask for human input every time a message is received.

Possible values are “ALWAYS”, “TERMINATE”, “NEVER”.

(1) When “ALWAYS”, the agent prompts for human input every time a message is received.

Under this mode, the conversation stops when the human input is “exit”, or when is_termination_msg is True and there is no human input.

(2) When “TERMINATE”, the agent prompts for human input only when a termination message is received or the number of auto replies reaches max_consecutive_auto_reply.

(3) When “NEVER”, the agent will never prompt for human input.

Under this mode, the conversation stops when the number of auto replies reaches max_consecutive_auto_reply or when is_termination_msg is True.

Type: Literal['ALWAYS', 'NEVER', 'TERMINATE']

Default: ‘TERMINATE’
function_map: mapping of function names (passed to OpenAI) to callable functions; also used for tool calls.

Type: dict[str, typing.Callable] | None

Default: None
code_execution_config: config for code execution. To disable code execution, set to False. Otherwise, set to a dictionary with the following keys (a sample dictionary appears in the sketch after this parameter list):

- work_dir (Optional, str): the working directory for the code execution. If None, a default working directory will be used; the default is the “extensions” directory under “path_to_autogen”.

- use_docker (Optional, list, str, or bool): the docker image to use for code execution. Default is True, which means the code will be executed in a docker container, with a default list of images. If a list or a str of image name(s) is provided, the code will be executed in a docker container with the first image successfully pulled. If False, the code will be executed in the current environment. We strongly recommend using docker for code execution.

- timeout (Optional, int): the maximum execution time in seconds.

- last_n_messages (Experimental, int or str): the number of messages to look back for code execution. If set to ‘auto’, it will scan backwards through all messages arriving since the agent last spoke, which is typically the last time execution was attempted. Default: ‘auto’.

Type: dict | Literal[False]

Default: False
llm_config: LLM inference configuration.

Please refer to OpenAIWrapper.create for available options.

When using OpenAI or Azure OpenAI endpoints, please specify a non-empty ‘model’ either in llm_config or in each config of ‘config_list’ in llm_config.

To disable llm-based auto reply, set to False.

When set to None, will use self.DEFAULT_CONFIG, which defaults to False.

Type: dict | Literal[False] | None

Default: False
default_auto_reply: default auto reply when no code execution or llm-based reply is generated.

Type: dict | str | None

Default: ‘’ (the empty string)

**kwargs: additional keyword arguments (see ConversableAgent).
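
To show how several of these inherited parameters combine, here is a hedged sketch reusing manager and llm_config from the construction example above; the working directory, timeout, and TERMINATE convention are assumptions for illustration:

society = SocietyOfMindAgent(
    "society_of_mind",
    chat_manager=manager,
    llm_config=llm_config,
    human_input_mode="NEVER",  # never pause the inner monologue for human input
    is_termination_msg=lambda msg: (msg.get("content") or "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "coding",       # hypothetical working directory
        "use_docker": True,         # recommended: run generated code in a container
        "timeout": 60,              # abort execution after 60 seconds
        "last_n_messages": "auto",  # scan messages since the agent last spoke
    },
    default_auto_reply="",
)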

Instance Attributes

chat_manager

Return the group chat manager.

Instance Methods

generate_inner_monologue_reply

generate_inner_monologue_reply(
    self,
    messages: list[dict] | None = None,
    sender: autogen.agentchat.agent.Agent | None = None,
    config: autogen.oai.client.OpenAIWrapper | None = None
) -> tuple[bool, str | dict | None]

Generate a reply by running the inner group chat.

Parameters:
messages: the list of messages in the conversation so far.

Type: list[dict] | None

Default: None

sender: the agent that sent the message being replied to.

Type: autogen.agentchat.agent.Agent | None

Default: None

config

Type: autogen.oai.client.OpenAIWrapper | None

Default: None
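
In typical use this method is not called directly; it fires as one of the agent's registered reply functions when another agent addresses the SocietyOfMindAgent. A short usage sketch, reusing society from the construction example above (the user proxy and its settings are assumptions for illustration):

user_proxy = autogen.UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
    default_auto_reply="",
    is_termination_msg=lambda x: True,  # end the outer chat after one reply
)

# The inner group chat runs to completion; response_preparer then distills
# the transcript into the single reply that the user proxy receives.
user_proxy.initiate_chat(society, message="Summarize the pros and cons of unit tests.")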

update_chat_manager

update_chat_manager(self, chat_manager: autogen.agentchat.groupchat.GroupChatManager | None) -> None

Update the chat manager.

Parameters:
chat_manager: the group chat manager.

Type: autogen.agentchat.groupchat.GroupChatManager | None
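
This can be used to swap in a fresh inner chat (for example, one with a clean message history) between outer conversations. A short sketch, reusing names from the construction example above:

fresh_groupchat = autogen.GroupChat(
    agents=[writer, critic],
    messages=[],
    max_round=6,
    speaker_selection_method="round_robin",
)
fresh_manager = autogen.GroupChatManager(groupchat=fresh_groupchat, llm_config=llm_config)

society.update_chat_manager(fresh_manager)
# society.chat_manager now returns fresh_manager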