A uniform interface to call different LLMs

AG2 provides a uniform interface for calling different LLMs and for creating LLM agents from them. By setting up a configuration file, you can easily switch between LLMs just by changing the model name, while still enjoying enhanced features such as caching and cost calculation!

In this notebook, we will show you how to use AG2 to call different LLMs and create LLM agents from them.

Currently, we support the following model families:

- OpenAI
- Azure OpenAI
- Anthropic Claude
- Google Gemini
- Mistral (API to open and closed-source models)
- DeepInfra (API to open-source models)
- TogetherAI (API to open-source models)

… and more to come!

You can also plug your locally deployed LLM into AG2 if needed.
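For example, a local server that exposes an OpenAI-compatible endpoint (such as one run with vLLM or FastChat) can be called by pointing base_url at it. The sketch below uses the OpenAIWrapper class demonstrated later in this notebook; the model name, URL, and api_key are placeholders to be replaced with your own deployment's values.

from autogen import OpenAIWrapper

# Placeholder values - replace with your own local deployment.
local_config_list = [
    {
        "model": "my-local-model",               # the name your server expects
        "base_url": "http://localhost:8000/v1",  # OpenAI-compatible endpoint
        "api_key": "NotRequired",                # many local servers ignore the key
    }
]

client = OpenAIWrapper(config_list=local_config_list)
response = client.create(messages=[{"role": "user", "content": "Hello!"}])
print(response.choices[0].message.content)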

Install required packages

You may want to install AG2 with extras for the different LLM providers. Here we install AG2 with extras for all the providers used in this notebook (DeepInfra needs no extra because it exposes an OpenAI-compatible API). By default, AG2 is installed with OpenAI support.

pip install autogen[openai,gemini,anthropic,mistral,together]
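
If you only need a subset of providers, install just the corresponding extras, for example:

pip install autogen[openai,anthropic]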

Config list setup

First, create an OAI_CONFIG_LIST file to specify the API keys for the LLMs you want to use. Generally, you just need to specify the model, api_key, and api_type for each provider.

[
    {
        # using OpenAI
        "model": "gpt-3.5-turbo-1106",
        "api_key": "YOUR_API_KEY"
        # default api_type is openai
    },
    {
        # using Azure OpenAI
        "model": "gpt-4-turbo-1106",
        "api_key": "YOUR_API_KEY",
        "api_type": "azure",
        "base_url": "YOUR_BASE_URL",
        "api_version": "YOUR_API_VERSION"
    },
    {
        # using Google gemini
        "model": "gemini-1.5-pro-latest",
        "api_key": "YOUR_API_KEY",
        "api_type": "google"
    },
    {
        # using DeepInfra
        "model": "meta-llama/Meta-Llama-3-70B-Instruct",
        "api_key": "YOUR_API_KEY",
        "base_url": "https://api.deepinfra.com/v1/openai" # need to specify the base_url
    },
    {
        # using Anthropic Claude
        "model": "claude-1.0",
        "api_type": "anthropic",
        "api_key": "YOUR_API_KEY"
    },
    {
        # using Mistral
        "model": "mistral-large-latest",
        "api_type": "mistral",
        "api_key": "YOUR_API_KEY"
    },
    {
        # using TogetherAI
        "model": "google/gemma-7b-it",
        "api_key": "YOUR_API_KEY",
        "api_type": "together"
    }
    ...
]
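
Once the file is in place, you can load it with autogen.config_list_from_json, using filter_dict to keep only the entries you want; any field in the config can be filtered on. A minimal sketch, filtering by api_type instead of model:

import autogen

# Load every entry from the file...
config_list = autogen.config_list_from_json("OAI_CONFIG_LIST")

# ...or keep only the Anthropic and Mistral entries.
anthropic_or_mistral = autogen.config_list_from_json(
    "OAI_CONFIG_LIST",
    filter_dict={"api_type": ["anthropic", "mistral"]},
)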

Uniform interface to call different LLMs

We first demonstrate how to use AG2 to call different LLMs with the same wrapper class.

After you install the relevant packages and set up your config list, you only need three steps to call different LLMs:

1. Extract the config with the model name you want to use.
2. Create a client from that config.
3. Call the client's create method to get the response.

Below, we define a helper function model_call_example_function to implement the above steps.

import autogen
from autogen import OpenAIWrapper

def model_call_example_function(model: str, message: str, cache_seed: int = 41, print_cost: bool = False):
    """A helper function that demonstrates how to call different models using the OpenAIWrapper class.
    Note that the name `OpenAIWrapper` is no longer accurate: it now wraps
    multiple model providers, not just OpenAI, and may be renamed in the future.
    """
    config_list = autogen.config_list_from_json(
        "OAI_CONFIG_LIST",
        filter_dict={
            "model": [model],
        },
    )
    client = OpenAIWrapper(config_list=config_list)
    response = client.create(messages=[{"role": "user", "content": message}], cache_seed=cache_seed)

    print(f"Response from model {model}: {response.choices[0].message.content}")

    # Print the cost of the API call
    if print_cost:
        client.print_usage_summary()
model_call_example_function(model="gpt-35-turbo-1106", message="Tell me a joke.")
model_call_example_function(model="gemini-1.5-pro-latest", message="Tell me a joke.")
model_call_example_function(model="meta-llama/Meta-Llama-3-70B-Instruct", message="Tell me a joke. ")
model_call_example_function(model="mistral-large-latest", message="Tell me a joke. ", print_cost=True)

Using different LLMs in agents

Below we give a quick demo of using agents backed by different LLMs in a group chat.

We mock a debate scenario where each LLM agent is a debater on either the affirmative or the negative side. We use a round-robin strategy to let the debaters from the two teams speak in turn.

def get_llm_config(model_name):
    return {
        "config_list": autogen.config_list_from_json("OAI_CONFIG_LIST", filter_dict={"model": [model_name]}),
        "cache_seed": 41,
    }

affirmative_system_message = "You are in the Affirmative team of a debate. When it is your turn, please give at least one reason why you are for the topic. Keep it short."
negative_system_message = "You are in the Negative team of a debate. The affirmative team has given their reason, please counter their argument. Keep it short."

gpt35_agent = autogen.AssistantAgent(
    name="GPT35", system_message=affirmative_system_message, llm_config=get_llm_config("gpt-35-turbo-1106")
)

llama_agent = autogen.AssistantAgent(
    name="Llama3",
    system_message=negative_system_message,
    llm_config=get_llm_config("meta-llama/Meta-Llama-3-70B-Instruct"),
)

mistral_agent = autogen.AssistantAgent(
    name="Mistral", system_message=affirmative_system_message, llm_config=get_llm_config("mistral-large-latest")
)

gemini_agent = autogen.AssistantAgent(
    name="Gemini", system_message=negative_system_message, llm_config=get_llm_config("gemini-1.5-pro-latest")
)

claude_agent = autogen.AssistantAgent(
    name="Claude", system_message=affirmative_system_message, llm_config=get_llm_config("claude-3-opus-20240229")
)

user_proxy = autogen.UserProxyAgent(
    name="User",
    code_execution_config=False,
)

# initialize the groupchat with round robin speaker selection method
groupchat = autogen.GroupChat(
    agents=[claude_agent, gemini_agent, mistral_agent, llama_agent, gpt35_agent, user_proxy],
    messages=[],
    max_round=8,
    speaker_selection_method="round_robin",
)
manager = autogen.GroupChatManager(groupchat=groupchat)
chat_history = user_proxy.initiate_chat(recipient=manager, message="Debate Topic: Should vaccination be mandatory?")
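
initiate_chat returns a ChatResult; as a quick sketch, you can replay the transcript from its chat_history attribute and check the aggregate cost from its cost attribute:

# Print each turn of the debate, then the aggregate cost of the chat.
for message in chat_history.chat_history:
    print(f"{message.get('name', message['role'])}: {message['content']}")
print(chat_history.cost)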