TextMessageCompressor

TextMessageCompressor(
    text_compressor: TextCompressor | None = None,
    min_tokens: int | None = None,
    compression_params: dict = {},
    cache: AbstractCache | None = None,
    filter_dict: dict[str, Any] | None = None,
    exclude_filter: bool = True
)

A transform for compressing text messages in a conversation history.
It uses a specified text compression method to reduce the token count of messages, which can lead to more efficient processing and response generation by downstream models.
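To make the shape of the pluggable compressor concrete, here is a minimal sketch. The `TextCompressor` protocol and the LLMLingua-style `compressed_prompt` result key are assumptions about the interface, and `HeadCompressor` is a hypothetical stand-in compressor for illustration, not part of the library.

```python
from typing import Any, Protocol


class TextCompressor(Protocol):
    """Assumed shape of the compressor interface: a single compress_text
    method returning a dict that carries the compressed text (here under
    an LLMLingua-style 'compressed_prompt' key -- an assumption)."""

    def compress_text(self, text: str, **params: Any) -> dict[str, Any]: ...


class HeadCompressor:
    """Hypothetical compressor for illustration: keeps the first max_words words."""

    def compress_text(self, text: str, max_words: int = 8, **params: Any) -> dict[str, Any]:
        return {"compressed_prompt": " ".join(text.split()[:max_words])}
```

Any object satisfying this protocol can be passed as `text_compressor`; `compression_params` is forwarded to its `compress_text` call.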

Parameters:

Name                Type                   Default
text_compressor     TextCompressor | None  None
min_tokens          int | None             None
compression_params  dict                   {}
cache               AbstractCache | None   None
filter_dict         dict[str, Any] | None  None
exclude_filter      bool                   True

Instance Methods

apply_transform

apply_transform(self, messages: list[dict[str, Any]]) -> list[dict[str, Any]]

Applies compression to messages in a conversation history based on the specified configuration.
Each message is processed according to the compression_params and min_tokens settings, and a new list of messages is returned with token counts reduced where possible.

Parameters:

Name      Type                  Description
messages  list[dict[str, Any]]  A list of message dictionaries to be compressed.

Returns:

Type                  Description
list[dict[str, Any]]  A list of dictionaries with the message content compressed according to the configured method and scope.

get_logs

get_logs(
    self,
    pre_transform_messages: list[dict[str, Any]],
    post_transform_messages: list[dict[str, Any]]
) -> tuple[str, bool]
Parameters:

Name                     Type
pre_transform_messages   list[dict[str, Any]]
post_transform_messages  list[dict[str, Any]]
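The return value pairs a log message with a flag indicating whether there is anything worth logging. A minimal sketch of that behavior, assuming a whitespace token count as a stand-in for a real tokenizer; the exact log wording here is an assumption, not the library's actual string:

```python
from typing import Any


def get_logs_sketch(
    pre_transform_messages: list[dict[str, Any]],
    post_transform_messages: list[dict[str, Any]],
) -> tuple[str, bool]:
    def count(messages: list[dict[str, Any]]) -> int:
        # Crude whitespace token count over all message contents.
        return sum(len(str(m.get("content", "")).split()) for m in messages)

    saved = count(pre_transform_messages) - count(post_transform_messages)
    if saved > 0:
        return f"{saved} tokens saved with text compression.", True
    return "No tokens saved with text compression.", False
```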