LLMs
Your AG2 agents will likely need an LLM, and you can configure one or more for each agent.
AG2’s agents can use LLMs through OpenAI, Anthropic, Google, Amazon, Mistral AI, Cerebras, Together AI, and Groq. Locally hosted models can also be used through Ollama, LiteLLM, and LM Studio.
From version 0.8: the OpenAI package, `openai`, is not installed by default.
Install AG2 with your preferred model provider(s), for example:
```bash
pip install ag2[openai]
pip install ag2[gemini]
pip install ag2[anthropic,cohere,mistral]
```
First, we create the LLM configuration object with the API type, model, and, if necessary, the key. There are two ways to create an LLM configuration object:
- By passing the `config_list` parameter with a list of dictionaries containing the API type, model, and key.
- By passing `api_type`, `model`, and `api_key` as parameters.
It is important never to hard-code secrets into your code, so we read the OpenAI API key from an environment variable. Both approaches are shown in the sketch below.
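A minimal sketch of both options, assuming the `autogen` import path and the GPT-4o mini model used throughout these guides:

```python
import os

from autogen import LLMConfig

# Option 1: a config_list with one dictionary per model
llm_config = LLMConfig(
    config_list=[
        {
            "api_type": "openai",
            "model": "gpt-4o-mini",
            # Read from the environment rather than hard-coding the secret
            "api_key": os.environ["OPENAI_API_KEY"],
        }
    ]
)

# Option 2: api_type, model, and api_key as direct parameters
llm_config = LLMConfig(
    api_type="openai",
    model="gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],
)
```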
Then, when you create your agents, you can pass the LLM configuration object to the agent in two ways:
- By passing the `llm_config` parameter to the agent as a keyword argument.
- By using a `with` statement to create a context manager for the LLM configuration. This sets the LLM configuration for any agent created inside the block, as sketched below.
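A sketch of both options, assuming a `ConversableAgent` and the `llm_config` object created above:

```python
from autogen import ConversableAgent

# Option 1: pass the configuration as a keyword argument
assistant = ConversableAgent(name="assistant", llm_config=llm_config)

# Option 2: use the configuration as a context manager;
# agents created inside the with block pick it up automatically
with llm_config:
    assistant = ConversableAgent(name="assistant")
```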
The default LLM provider is OpenAI, but if you would like to use a different provider, see the available providers.
AG2’s LLM configuration also lets you specify multiple LLMs for fallback support and filter them per agent; see the LLM Configuration deep-dive.
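As a brief illustration of fallback, a `config_list` can hold more than one entry; if a request to the first model fails, AG2 can fall back to the next (a sketch, assuming an `ANTHROPIC_API_KEY` environment variable alongside the OpenAI one):

```python
import os

from autogen import LLMConfig

# Two entries: the second acts as a fallback for the first
llm_config = LLMConfig(
    config_list=[
        {
            "api_type": "openai",
            "model": "gpt-4o-mini",
            "api_key": os.environ["OPENAI_API_KEY"],
        },
        {
            "api_type": "anthropic",
            "model": "claude-3-5-haiku-20241022",
            "api_key": os.environ["ANTHROPIC_API_KEY"],
        },
    ]
)
```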
Environment variables
The examples in these guides include an LLM configuration for OpenAI’s GPT-4o mini model and will need the `OPENAI_API_KEY` environment variable set with your OpenAI API key.
Set it in your terminal/command prompt:
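For example, on Linux or macOS (the key value is a placeholder, shown for illustration):

```bash
export OPENAI_API_KEY="your_openai_api_key_here"
```

On Windows, use `setx OPENAI_API_KEY "your_openai_api_key_here"` in the command prompt.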