Completion

Completion()

(openai<1) A class for the OpenAI completion API.

It also supports the ChatCompletion and Azure OpenAI APIs.

Class Attributes

cache_path
cache_seed
chat_models
default_search_space
logged_history
max_retry_period
openai_completion_class
optimization_budget
price1K
request_timeout
retry_wait_time
tune

Static Methods

clear_cache

clear_cache(seed: int | None = None, cache_path_root: str | None = '.cache') -> 

Clear cache.

Parameters:

- seed (int | None, default None): The integer identifier for the pseudo seed. If omitted, all caches under cache_path_root will be cleared.
- cache_path_root (str | None, default '.cache')

cost

cost(response: dict) -> 

Compute the cost of an API call.

Parameters:

- response (dict): The response from the OpenAI API.
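As an illustration of what computing the cost of a call involves: a per-1K-token price table is applied to the prompt and completion token counts in the response's usage field. The `PRICE1K` table, model name, and `estimate_cost` helper below are hypothetical stand-ins, not the library's actual `price1K` values or implementation.

```python
# Hypothetical per-1K-token prices: (prompt price, completion price).
# The real Completion.cost reads prices from the class's price1K attribute.
PRICE1K = {
    "example-model": (0.0015, 0.002),
}

def estimate_cost(response: dict) -> float:
    """Compute a dollar cost from the usage field of an API response dict."""
    usage = response["usage"]
    prompt_price, completion_price = PRICE1K[response["model"]]
    return (
        usage["prompt_tokens"] / 1000 * prompt_price
        + usage["completion_tokens"] / 1000 * completion_price
    )

response = {
    "model": "example-model",
    "usage": {"prompt_tokens": 1000, "completion_tokens": 500},
}
print(estimate_cost(response))
```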

create

create(
    context: dict | None = None,
    use_cache: bool | None = True,
    config_list: list[dict] | None = None,
    filter_func: Callable[[dict, dict], bool] | None = None,
    raise_on_ratelimit_or_timeout: bool | None = True,
    allow_format_str_template: bool | None = False,
    **config
) -> 

Make a completion for a given context.

Parameters:

- context (dict | None, default None): The context to instantiate the prompt. It needs to contain keys that are used by the prompt template or the filter function. E.g., prompt="Complete the following sentence: {prefix}", context={"prefix": "Today I feel"}. The actual prompt will be: "Complete the following sentence: Today I feel". More examples can be found at templating.
- use_cache (bool | None, default True): Whether to use cached responses.
- config_list (list[dict] | None, default None): List of configurations for the completion to try. The first one that does not raise an error will be used. Only the differences from the default config need to be provided. E.g.,

  ```python
  response = oai.Completion.create(
      config_list=[
          {
              "model": "gpt-4",
              "api_key": os.environ.get("AZURE_OPENAI_API_KEY"),
              "api_type": "azure",
              "base_url": os.environ.get("AZURE_OPENAI_API_BASE"),
              "api_version": "2024-02-01",
          },
          {
              "model": "gpt-3.5-turbo",
              "api_key": os.environ.get("OPENAI_API_KEY"),
              "api_type": "openai",
              "base_url": "https://api.openai.com/v1",
          },
          {
              "model": "llama-7B",
              "base_url": "http://127.0.0.1:8080",
              "api_type": "openai",
          },
      ],
      prompt="Hi",
  )
  ```

- filter_func (Callable[[dict, dict], bool] | None, default None): A function that takes in the context and the response and returns a boolean to indicate whether the response is valid. E.g.,

  ```python
  def yes_or_no_filter(context, config, response):
      return context.get("yes_or_no_choice", False) is False or any(
          text in ["Yes.", "No."] for text in oai.Completion.extract_text(response)
      )
  ```

- raise_on_ratelimit_or_timeout (bool | None, default True): Whether to raise RateLimitError or Timeout when all configs fail. When set to False, -1 will be returned when all configs fail.
- allow_format_str_template (bool | None, default False): Whether to allow format string template in the config.
- **config: Configuration for the openai API call. This is used as parameters for calling the openai API. The "prompt" or "messages" parameter can contain a template (str or Callable) which will be instantiated with the context. Besides the parameters for the openai API call, it can also contain:
  - max_retry_period (int): the total time (in seconds) allowed for retrying failed requests.
  - retry_wait_time (int): the time interval to wait (in seconds) before retrying a failed request.
  - cache_seed (int): the seed for the cache. This is useful when implementing "controlled randomness" for the completion.
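The config_list fallback behavior described above can be sketched as follows. This is a minimal illustration, not the library's implementation; `create_with_fallback` and `fake_call` are hypothetical names, and real code would narrow the caught exceptions to rate-limit and timeout errors.

```python
# Try each configuration in order; return the first result that does not raise.
def create_with_fallback(config_list, call):
    """call(config) simulates one API attempt; it raises on failure."""
    last_err = None
    for config in config_list:
        try:
            return call(config)
        except Exception as err:
            last_err = err
    raise last_err

# Hypothetical stand-in for an API call: only the second config "works".
def fake_call(config):
    if config["model"] != "good-model":
        raise RuntimeError("rate limited")
    return {"model": config["model"], "text": "Hi!"}

result = create_with_fallback(
    [{"model": "bad-model"}, {"model": "good-model"}], fake_call
)
print(result["model"])  # good-model
```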


extract_text

extract_text(response: dict) -> list[str]

Extract the text from a completion or chat response.

Parameters:

- response (dict): The response from the OpenAI API.

Returns:

- list[str]: A list of text in the responses.
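The two response shapes this handles can be illustrated with a sketch: legacy completion responses carry the text directly on each choice, while chat responses nest it under a message. `extract_text_sketch` is a simplified stand-in; the real method handles more edge cases.

```python
def extract_text_sketch(response: dict) -> list:
    choices = response["choices"]
    if "text" in choices[0]:  # legacy completion response
        return [c["text"] for c in choices]
    # chat response: text lives in the message's content field
    return [c["message"].get("content", "") for c in choices]

completion = {"choices": [{"text": "Today I feel great."}]}
chat = {"choices": [{"message": {"role": "assistant", "content": "Hello!"}}]}
print(extract_text_sketch(completion))  # ['Today I feel great.']
print(extract_text_sketch(chat))        # ['Hello!']
```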

extract_text_or_function_call

extract_text_or_function_call(response: dict) -> list[str]

Extract the text or function calls from a completion or chat response.

Parameters:

- response (dict): The response from the OpenAI API.

Returns:

- list[str]: A list of text or function calls in the responses.
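A sketch of the "text or function call" choice: when the model chose to call a function, the function_call payload is returned for that choice instead of the message text. `extract_text_or_function_call_sketch` is an illustrative stand-in, not the library's implementation.

```python
def extract_text_or_function_call_sketch(response: dict) -> list:
    results = []
    for choice in response["choices"]:
        message = choice["message"]
        if "function_call" in message:
            results.append(message["function_call"])
        else:
            results.append(message.get("content", ""))
    return results

chat = {
    "choices": [
        {"message": {"role": "assistant",
                     "function_call": {"name": "get_weather", "arguments": "{}"}}}
    ]
}
print(extract_text_or_function_call_sketch(chat)[0]["name"])  # get_weather
```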

instantiate

instantiate(
    template: str | None,
    context: dict | None = None,
    allow_format_str_template: bool | None = False
) -> 
Parameters:

- template (str | None)
- context (dict | None, default None)
- allow_format_str_template (bool | None, default False)
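Template instantiation can be sketched as follows, under the assumption that str templates are filled with str.format only when format-string templates are allowed, and callable templates are called with the context. `instantiate_sketch` is an illustrative name, not the library's code.

```python
def instantiate_sketch(template, context=None, allow_format_str_template=False):
    # No context or no template: nothing to instantiate.
    if not context or template is None:
        return template
    if isinstance(template, str):
        # Fill {key} placeholders from the context only when allowed.
        return template.format(**context) if allow_format_str_template else template
    return template(context)  # assume a callable template

prompt = instantiate_sketch(
    "Complete the following sentence: {prefix}",
    {"prefix": "Today I feel"},
    allow_format_str_template=True,
)
print(prompt)  # Complete the following sentence: Today I feel
```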

print_usage_summary

print_usage_summary() -> dict

Return the usage summary.


set_cache

set_cache(seed: int | None = 41, cache_path_root: str | None = '.cache') -> 

Set cache path.

Parameters:

- seed (int | None, default 41): The integer identifier for the pseudo seed. Results corresponding to different seeds will be cached in different places.
- cache_path_root (str | None, default '.cache')
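The seed-to-cache-location relationship can be pictured with a sketch, assuming each seed gets its own directory under cache_path_root; the actual on-disk layout is an implementation detail of the library.

```python
# Hypothetical layout: one cache directory per seed under the root.
def cache_dir(seed=41, cache_path_root=".cache"):
    return f"{cache_path_root}/{seed}"

print(cache_dir())        # .cache/41
print(cache_dir(seed=7))  # .cache/7
```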

start_logging

start_logging(
    history_dict: dict | None = None,
    compact: bool | None = True,
    reset_counter: bool | None = True
) -> 

Start bookkeeping.

Parameters:

- history_dict (dict | None, default None): A dictionary for bookkeeping. If not provided, a new one will be created.
- compact (bool | None, default True): Whether to keep the history dictionary compact. Compact history contains one key per conversation, and the value is a dictionary.
- reset_counter (bool | None, default True): Whether to reset the counter of the number of API calls.

stop_logging

stop_logging() -> 

End bookkeeping.


test

test(
    data,
    eval_func=None,
    use_cache=True,
    agg_method='avg',
    return_responses_and_per_instance_result=False,
    logging_level=30,
    **config
) -> 

Evaluate the responses created with the config for the OpenAI API call.

Parameters:

- data: The list of test data points.
- eval_func (default None)
- use_cache (default True)
- agg_method (default 'avg')
- return_responses_and_per_instance_result (default False)
- logging_level (default 30)
- **config
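As an illustration of what agg_method='avg' means: per-instance metric dictionaries are aggregated by taking the mean of each metric. `aggregate_avg` below is a hypothetical sketch; the real test() also supports other aggregation methods.

```python
def aggregate_avg(per_instance_results):
    """Average each metric across a list of per-instance metric dicts."""
    keys = per_instance_results[0].keys()
    n = len(per_instance_results)
    return {k: sum(r[k] for r in per_instance_results) / n for k in keys}

results = [{"success": 1, "tokens": 10}, {"success": 0, "tokens": 30}]
print(aggregate_avg(results))  # {'success': 0.5, 'tokens': 20.0}
```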