autogen.Completion
(openai<1)
A class for the OpenAI completion API. It also supports ChatCompletion and the Azure OpenAI API.
Class Attributes
cache_path
cache_seed
chat_models
default_search_space
logged_history
max_retry_period
openai_completion_class
optimization_budget
price1K
request_timeout
retry_wait_time
tune
Static Methods
clear_cache
Clear cache.
Parameters:

Name | Description
---|---
seed | The integer identifier for the pseudo seed. If omitted, all caches under cache_path_root will be cleared. Type: int | None. Default: None
cache_path_root | Type: str | None. Default: ".cache"
cost
Compute the cost of an API call.
Parameters:

Name | Description
---|---
response | The response from the OpenAI API. Type: dict
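The cost computation pairs the token usage reported in the response with a per-1K-token price table (the class attribute price1K). A minimal sketch of the idea, using an illustrative price table rather than autogen's actual price1K values:

```python
# Toy per-1K-token price table; the numbers are illustrative, not autogen's price1K.
PRICE1K = {"gpt-3.5-turbo": (0.0015, 0.002)}  # (prompt, completion) USD per 1K tokens

def cost(response: dict) -> float:
    # Mirrors the idea of Completion.cost: usage tokens times the per-1K price.
    model = response["model"]
    usage = response["usage"]
    price_in, price_out = PRICE1K[model]
    return (usage["prompt_tokens"] * price_in
            + usage["completion_tokens"] * price_out) / 1000

resp = {"model": "gpt-3.5-turbo",
        "usage": {"prompt_tokens": 1000, "completion_tokens": 500}}
print(cost(resp))  # ≈ 0.0025
```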
create
Make a completion for a given context.
Parameters:

Name | Description
---|---
context | The context to instantiate the prompt. It needs to contain keys that are used by the prompt template or the filter function. E.g., `prompt="Complete the following sentence: {prefix}"` with `context={"prefix": "Today I feel"}` yields the actual prompt "Complete the following sentence: Today I feel". More examples can be found at templating. Type: dict | None. Default: None
use_cache | Whether to use cached responses. Type: bool | None. Default: True
config_list | List of configurations for the completion to try. The first one that does not raise an error will be used. Only the differences from the default config need to be provided. See the example below. Type: list[dict] | None. Default: None
filter_func | A function that takes in the context and the response and returns a boolean to indicate whether the response is valid. See the example below. Type: Callable[[dict, dict], bool] | None. Default: None
raise_on_ratelimit_or_timeout | Whether to raise RateLimitError or Timeout when all configs fail. When set to False, -1 will be returned when all configs fail. Type: bool | None. Default: True
allow_format_str_template | Whether to allow format string templates in the config. Type: bool | None. Default: False
**config | Configuration for the OpenAI API call, used as parameters of the call. The "prompt" or "messages" parameter can contain a template (str or Callable) which will be instantiated with the context. Besides the parameters for the OpenAI API call, it can also contain: max_retry_period (int), the total time (in seconds) allowed for retrying failed requests; retry_wait_time (int), the time interval to wait (in seconds) before retrying a failed request; cache_seed (int) for the cache, which is useful when implementing "controlled randomness" for the completion.

Example config_list:

```python
response = oai.Completion.create(
    config_list=[
        {
            "model": "gpt-4",
            "api_key": os.environ.get("AZURE_OPENAI_API_KEY"),
            "api_type": "azure",
            "base_url": os.environ.get("AZURE_OPENAI_API_BASE"),
            "api_version": "2024-02-01",
        },
        {
            "model": "gpt-3.5-turbo",
            "api_key": os.environ.get("OPENAI_API_KEY"),
            "api_type": "openai",
            "base_url": "https://api.openai.com/v1",
        },
        {
            "model": "llama-7B",
            "base_url": "http://127.0.0.1:8080",
            "api_type": "openai",
        },
    ],
    prompt="Hi",
)
```

Example filter_func:

```python
def yes_or_no_filter(context, config, response):
    return context.get("yes_or_no_choice", False) is False or any(
        text in ["Yes.", "No."] for text in oai.Completion.extract_text(response)
    )
```
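The config_list fallback semantics ("the first one that does not raise an error will be used") can be sketched as a try-in-order loop. This is a toy illustration, not autogen's implementation; create_with_fallback and flaky_call are made-up stand-ins for the API call:

```python
# Toy sketch of config_list fallback: try each config in order and return the
# first result that doesn't raise; re-raise the last error if all fail.
def create_with_fallback(config_list, call):
    last_error = None
    for config in config_list:
        try:
            return call(config)
        except Exception as e:  # the real API retries on rate-limit/timeout errors
            last_error = e
    raise last_error

def flaky_call(config):
    # Stand-in for an API call: fails for one "endpoint", succeeds for another.
    if config["model"] == "bad":
        raise RuntimeError("unavailable")
    return f"ok:{config['model']}"

print(create_with_fallback([{"model": "bad"}, {"model": "gpt-3.5-turbo"}], flaky_call))
# ok:gpt-3.5-turbo
```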
extract_text
Extract the text from a completion or chat response.
Parameters:

Name | Description
---|---
response | The response from the OpenAI API. Type: dict

Returns:

Type | Description
---|---
list[str] | A list of text in the responses.
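Conceptually, extract_text returns one string per choice in the response dict: the "text" field for completion responses, or the message content for chat responses. A minimal stand-in (not the library's code) following the openai<1 response shape:

```python
def extract_text(response: dict) -> list:
    # Completion responses carry a "text" field per choice;
    # chat responses carry a "message" object with "content".
    choices = response["choices"]
    if "text" in choices[0]:
        return [choice["text"] for choice in choices]
    return [choice["message"].get("content", "") for choice in choices]

completion_response = {"choices": [{"text": "Today I feel great."}]}
print(extract_text(completion_response))  # ['Today I feel great.']
```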
extract_text_or_function_call
Extract the text or function calls from a completion or chat response.
Parameters:

Name | Description
---|---
response | The response from the OpenAI API. Type: dict

Returns:

Type | Description
---|---
list[str] | A list of text or function calls in the responses.
instantiate
Parameters:

Name | Description
---|---
template | Type: str | None
context | Type: dict | None. Default: None
allow_format_str_template | Type: bool | None. Default: False
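A sketch of the templating idea: a str template is filled with the context via format-string semantics, while a Callable template receives the context directly. This is an illustration of the behavior described under create, not autogen's implementation; instantiate_prompt is a made-up name:

```python
# Toy sketch of template instantiation, assuming str.format-style semantics
# (as enabled by allow_format_str_template) for string templates.
def instantiate_prompt(template, context):
    if callable(template):            # a Callable template receives the context dict
        return template(context)
    return template.format(**context) # a str template is format-string filled

print(instantiate_prompt("Complete the following sentence: {prefix}",
                         {"prefix": "Today I feel"}))
# Complete the following sentence: Today I feel
```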
print_usage_summary
Return the usage summary.
set_cache
Set cache path.
Parameters:

Name | Description
---|---
seed | The integer identifier for the pseudo seed. Results corresponding to different seeds will be cached in different places. Type: int | None. Default: 41
cache_path_root | Type: str | None. Default: ".cache"
start_logging
Start bookkeeping.
Parameters:

Name | Description
---|---
history_dict | A dictionary for bookkeeping. If not provided, a new one will be created. Type: dict | None. Default: None
compact | Whether to keep the history dictionary compact. Compact history contains one key per conversation, and the value is a dictionary. Type: bool | None. Default: True
reset_counter | Whether to reset the counter of the number of API calls. Type: bool | None. Default: True
stop_logging
End bookkeeping.
test
Evaluate the responses created with the config for the OpenAI API call.
Parameters:

Name | Description
---|---
data | The list of test data points.
eval_func | Default: None
use_cache | Default: True
agg_method | Default: 'avg'
return_responses_and_per_instance_result | Default: False
logging_level | Default: 30
**config |
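The agg_method='avg' default means per-instance metrics are averaged across the test data points. A minimal sketch of that aggregation step, not autogen's implementation; aggregate is a made-up name:

```python
# Toy sketch of agg_method="avg": average each metric across per-instance results.
def aggregate(results, agg_method="avg"):
    assert agg_method == "avg"  # only the default aggregation is sketched here
    keys = results[0].keys()
    return {k: sum(r[k] for r in results) / len(results) for k in keys}

# Two test instances, one successful and one not:
print(aggregate([{"success": 1.0}, {"success": 0.0}]))  # {'success': 0.5}
```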