Using Non-OpenAI Models
LM Studio
This notebook shows how to use AutoGen with multiple local models via LM Studio's multi-model serving feature, which was introduced in LM Studio 0.2.17.
To use multi-model serving in LM Studio, start a "Multi Model Session" in the "Playground" tab, then select the models you want to load. Once the models are loaded, click "Start Server" to begin serving them. The models become available at a locally hosted OpenAI-compatible endpoint.
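Before wiring up agents, it can be useful to confirm the server is reachable. The sketch below queries the endpoint's model list; it assumes LM Studio's default port 1234 and requires the server started above to be running.

```python
import json
from urllib.request import urlopen

# Query LM Studio's OpenAI-compatible /v1/models endpoint.
# Assumes the default port 1234 -- adjust the URL if you changed it.
with urlopen("http://localhost:1234/v1/models") as resp:
    models = json.load(resp)

# Each loaded model appears as an entry under "data".
for m in models["data"]:
    print(m["id"])
```

If this prints the identifiers of the models you loaded, the endpoint is ready for the agents below.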
Two Agent Chats
In this example, we create a comedy chat between two agents, each backed by a different local model: Phi-2 and Gemma-it.
We first create configurations for the models.
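A configuration for an AutoGen agent is a dictionary holding a `config_list` of model endpoints. The sketch below is one plausible setup: the model identifiers (`phi-2`, `gemma-2b-it`), the port, and the variable names are assumptions that should match what your LM Studio session actually serves.

```python
# Hypothetical configurations -- model names and the port (1234) are LM Studio
# defaults in this sketch and may differ in your setup.
phi2_config = {
    "config_list": [
        {
            "model": "phi-2",                        # assumed identifier of the loaded model
            "base_url": "http://localhost:1234/v1",  # LM Studio's OpenAI-compatible endpoint
            "api_key": "lm-studio",                  # any non-empty string; LM Studio ignores it
        }
    ],
    "cache_seed": None,  # disable caching so each run produces a fresh reply
}

gemma_config = {
    "config_list": [
        {
            "model": "gemma-2b-it",
            "base_url": "http://localhost:1234/v1",
            "api_key": "lm-studio",
        }
    ],
    "cache_seed": None,
}
```

Both configurations point at the same local endpoint; the `model` field tells LM Studio which of the loaded models should answer each request.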
Now we create two agents, one for each model.
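A minimal sketch of the two agents, assuming configuration dictionaries named `phi2_config` and `gemma_config` from the previous step; the agent names and system messages are illustrative.

```python
from autogen import ConversableAgent

# One ConversableAgent per local model; each uses its own llm_config so
# requests are routed to the right model on the shared endpoint.
phi2_agent = ConversableAgent(
    "phi2",
    llm_config=phi2_config,
    system_message="Your name is Phi and you are a comedian in a two-person comedy show.",
)

gemma_agent = ConversableAgent(
    "gemma",
    llm_config=gemma_config,
    system_message="Your name is Gemma and you are a comedian in a two-person comedy show.",
)
```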
Now we start the conversation.
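The conversation can be kicked off from either agent. The sketch below assumes the `phi2_agent` and `gemma_agent` objects created above and a running LM Studio server; the opening message is illustrative.

```python
# Start the two-agent chat; requires the LM Studio server to be running
# with both models loaded. max_turns bounds the back-and-forth so the
# comedians don't riff forever.
chat_result = phi2_agent.initiate_chat(
    gemma_agent,
    message="Gemma, tell me a joke.",
    max_turns=2,
)
```

The exchanged messages are printed as the chat runs and are also available afterwards on the returned `chat_result`.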