TextMessageCompressor

TextMessageCompressor(
    text_compressor: autogen.agentchat.contrib.capabilities.text_compressors.TextCompressor | None = None,
    min_tokens: int | None = None,
    compression_params: dict = {},
    cache: autogen.cache.abstract_cache_base.AbstractCache | None = None,
    filter_dict: dict | None = None,
    exclude_filter: bool = True
)

A transform for compressing text messages in a conversation history.
It uses a specified text compression method to reduce the token count of messages, which can lead to more efficient processing and response generation by downstream models.

Parameters:

text_compressor
    Type: autogen.agentchat.contrib.capabilities.text_compressors.TextCompressor | None
    Default: None

min_tokens
    Type: int | None
    Default: None

compression_params
    Type: dict
    Default: {}

cache
    Type: autogen.cache.abstract_cache_base.AbstractCache | None
    Default: None

filter_dict
    Type: dict | None
    Default: None

exclude_filter
    Type: bool
    Default: True
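The text_compressor argument accepts any object satisfying the TextCompressor protocol. As a hedged sketch, a custom compressor might look like the following; TruncatingCompressor is a hypothetical stand-in (not part of autogen), and the protocol shape assumed here is a compress_text method returning a dict with a "compressed_prompt" key:

```python
class TruncatingCompressor:
    """Hypothetical stand-in compressor (NOT part of autogen): keeps only
    the first `max_words` words of the text. Assumes the TextCompressor
    protocol is `compress_text(text, **kwargs) -> dict` with a
    "compressed_prompt" key in the returned dict."""

    def __init__(self, max_words: int = 50):
        self.max_words = max_words

    def compress_text(self, text: str, **compression_params) -> dict:
        words = text.split()
        return {"compressed_prompt": " ".join(words[: self.max_words])}


# Such an object could then be supplied as the text_compressor argument:
# transform = TextMessageCompressor(text_compressor=TruncatingCompressor())
```

If text_compressor is left as None, the transform falls back to the library's default compressor.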

Instance Methods

apply_transform

apply_transform(self, messages: list[dict]) -> list[dict]

Applies compression to messages in a conversation history based on the specified configuration.
Each message is processed according to the compression_params and min_tokens settings, and a new list of messages is returned with token counts reduced where possible.

Parameters:

messages
    A list of message dictionaries to be compressed.
    Type: list[dict]

Returns:

list[dict]
    A list of dictionaries with the message content compressed according to the configured method and scope.

get_logs

get_logs(
    self,
    pre_transform_messages: list[dict],
    post_transform_messages: list[dict]
) -> tuple[str, bool]

Compares the message histories from before and after the transformation and produces a log of its effect, returning the log string together with a flag indicating whether the transformation changed the messages.

Parameters:

pre_transform_messages
    Type: list[dict]

post_transform_messages
    Type: list[dict]

Returns:

tuple[str, bool]
    A log message and a flag indicating whether the transformation took effect.
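A hedged sketch of the kind of before/after comparison get_logs performs; the exact log wording and token counting in autogen are assumptions, and this stand-in compares character counts instead of tokens:

```python
def get_logs_sketch(
    pre_transform_messages: list[dict],
    post_transform_messages: list[dict],
) -> tuple[str, bool]:
    """Stand-in for get_logs: report savings by comparing total content
    length before and after compression (the real method counts tokens,
    and its log text may differ)."""
    before = sum(len(m.get("content", "")) for m in pre_transform_messages)
    after = sum(len(m.get("content", "")) for m in post_transform_messages)
    if after < before:
        return f"{before - after} characters saved with text compression.", True
    return "No characters saved with text compression.", False
```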