autogen.oai.cohere.CohereLLMConfigEntry #
Bases: LLMConfigEntry
frequency_penalty class-attribute instance-attribute #
presence_penalty class-attribute instance-attribute #
max_tokens class-attribute instance-attribute #
max_tokens = Field(default=None, ge=0, description='The maximum number of tokens to generate before stopping.')
top_p class-attribute instance-attribute #
top_p = Field(default=None, ge=0, le=1, description='An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. You should either alter `temperature` or `top_p`, but not both.')
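To illustrate what "top_p probability mass" means in the description above, here is a self-contained sketch of the nucleus-sampling filter. This is not autogen's or Cohere's code (the provider applies `top_p` server-side); the token names and probabilities are made up for illustration.

```python
def top_p_filter(probs: dict[str, float], top_p: float) -> list[str]:
    """Keep the smallest set of tokens, ranked by probability,
    whose cumulative probability mass reaches top_p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, mass = [], 0.0
    for token, p in ranked:
        kept.append(token)
        mass += p
        if mass >= top_p:
            break
    return kept

# Hypothetical next-token distribution:
probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "dog": 0.05}
print(top_p_filter(probs, 0.1))  # → ['the'] — the top 10% of mass fits in one token
print(top_p_filter(probs, 0.9))  # → ['the', 'a', 'cat']
```

With `top_p=0.1`, only the single most likely token survives; with `top_p=0.9`, the tail token `dog` is still excluded. Sampling then proceeds only over the kept set, which is why the docstring advises tuning either `top_p` or `temperature`, not both.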
temperature class-attribute instance-attribute #
temperature = Field(default=None, ge=0, le=1, description="Amount of randomness injected into the response. Use `temperature` closer to `0.0` for analytical / multiple choice, and closer to a model's maximum `temperature` for creative and generative tasks. Note that even with `temperature` of `0.0`, the results will not be fully deterministic.")
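The temperature description above can be made concrete with a small sketch of temperature-scaled sampling. Again, this is not the library's or the API's implementation (sampling happens on Cohere's side); it is a minimal illustration of why values near `0.0` behave almost greedily and higher values add randomness.

```python
import math
import random

def sample_with_temperature(logits: dict[str, float], temperature: float,
                            rng: random.Random) -> str:
    """Sample a token from softmax(logits / temperature).
    temperature near 0 concentrates mass on the argmax token."""
    if temperature == 0.0:
        # Degenerate limit: deterministic argmax.
        return max(logits, key=logits.get)
    scaled = {t: l / temperature for t, l in logits.items()}
    z = max(scaled.values())  # subtract max for numerical stability
    exps = {t: math.exp(s - z) for t, s in scaled.items()}
    total = sum(exps.values())
    r = rng.random() * total
    for token, e in exps.items():
        r -= e
        if r <= 0:
            return token
    return token  # fallback for floating-point edge cases

rng = random.Random(0)
logits = {"yes": 2.0, "no": 0.0}
print(sample_with_temperature(logits, 0.0, rng))  # → 'yes' (argmax)
```

At `temperature=0.1` the gap between the logits is magnified twentyfold, so `"yes"` is sampled almost surely; as `temperature` rises toward the model's maximum, the two options approach equal probability. As the field description notes, even `0.0` is not guaranteed to be fully deterministic in the real API, since serving-side factors can still vary.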
model_config class-attribute instance-attribute #
create_client #
apply_application_config #
Apply application-level configurations.