# Mem0

Mem0 provides a smart, self-improving memory layer for Large Language Models (LLMs), enabling developers to create personalized AI experiences that evolve with each user interaction. This page focuses on Mem0 Open Source as the primary integration for AG2; a hosted Mem0 Platform alternative is included below.
At a high level, Mem0 Platform offers comprehensive memory management, self-improving memory capabilities, cross-platform consistency, and centralized memory control for AI applications. For more info, check out the Mem0 Platform Documentation.
| Feature | Description |
| --- | --- |
| 🧠 Comprehensive Memory Management | Manage long-term, short-term, semantic, and episodic memories |
| 🔄 Self-Improving Memory | Adaptive system that learns from user interactions |
| 🌐 Cross-Platform Consistency | Unified user experience across various AI platforms |
| 🎛️ Centralized Memory Control | Effortless storage, updating, and deletion of memories |
| 🚀 Simplified Development | API-first approach for streamlined integration |
## Installation
Mem0 Open Source installs as a Python library and integrates with a wide range of AI applications.
1. Install the Mem0 SDK.
2. Configure your environment: Mem0 Open Source uses your LLM provider by default, so add your OpenAI API key (or configure another provider).
3. Initialize Mem0 (Open Source).
After initializing Mem0, you can start using its memory management features in your AI application.
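Assuming the open-source SDK is published on PyPI as `mem0ai` (the package name here is an assumption worth checking against Mem0's install instructions), the first two steps look like:

```shell
# Install the Mem0 SDK (PyPI package name assumed: mem0ai)
pip install mem0ai

# Mem0 Open Source defaults to OpenAI, so export your key
export OPENAI_API_KEY="your_openai_api_key"
```

Initialization then happens in Python with `from mem0 import Memory; memory = Memory()`, as used in the example below.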
Note: `Memory` is the open-source client. The hosted Mem0 Platform uses `MemoryClient` (see below).
## Features
- Long-term Memory: Store and retrieve information persistently across sessions
- Short-term Memory: Manage temporary information within a single interaction
- Semantic Memory: Organize and retrieve conceptual knowledge
- Episodic Memory: Store and recall specific events or experiences
- Self-Improving System: Continuously refine understanding based on user interactions
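The long-term memory feature hinges on scoping: memories are stored and retrieved per `user_id`, so they persist across sessions for the same user. The toy class below is an illustrative stand-in for that scoping idea only, not Mem0's implementation, which uses embeddings and a vector store rather than keyword matching:

```python
# Toy stand-in for per-user long-term memory (NOT Mem0's implementation).
class ToyMemoryStore:
    def __init__(self):
        self._store = {}  # user_id -> list of memory strings

    def add(self, memory: str, user_id: str) -> None:
        self._store.setdefault(user_id, []).append(memory)

    def search(self, query: str, user_id: str) -> list:
        # Naive whole-word match; real Mem0 ranks by embedding similarity.
        terms = set(query.lower().split())
        return [m for m in self._store.get(user_id, [])
                if terms & set(m.lower().split())]

store = ToyMemoryStore()
store.add("Customer owns a Sony BRAVIA XR A80K OLED TV", user_id="alice")
store.add("Customer prefers email support", user_id="alice")

# Only memories for this user_id that share a word with the query come back.
matches = store.search("Which TV am I using?", user_id="alice")
```

Mem0's `Memory.add` and `Memory.search` follow the same per-`user_id` scoping, as the example further down shows.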
## Common Use Cases
- Personalized Learning Assistants
- Customer Support AI Agents
- Healthcare Assistants
- Virtual Companions
## Examples
### AG2 with Mem0 (Open Source) Example
This example demonstrates how to use Mem0 with AG2 to create a conversational AI system with memory capabilities.
```python
import os

from autogen import ConversableAgent, LLMConfig
from mem0 import Memory

# Set up environment variables
os.environ["OPENAI_API_KEY"] = "your_openai_api_key"

# Initialize Agent and Memory
agent = ConversableAgent(
    "chatbot",
    llm_config=LLMConfig({"api_type": "openai", "model": "gpt-5-nano", "api_key": os.environ.get("OPENAI_API_KEY")}),
    code_execution_config=False,
    function_map=None,
    human_input_mode="NEVER",
)

memory = Memory()

# Insert a conversation into memory
conversation = [
    {
        "role": "assistant",
        "content": "Hi, I'm Best Buy's chatbot!\n\nThanks for being a My Best Buy Total™ member.\n\nWhat can I help you with?",
    },
    {
        "role": "user",
        "content": "Seeing horizontal lines on our tv. TV model: Sony - 77\" Class BRAVIA XR A80K OLED 4K UHD Smart Google TV",
    },
]
memory.add(messages=conversation, user_id="customer_service_bot")

# Agent Inference
data = "Which TV am I using?"
relevant_memories = memory.search(data, user_id="customer_service_bot")
flatten_relevant_memories = "\n".join([m["memory"] for m in relevant_memories["results"]])

prompt = f"""Answer the user question considering the memories.
Memories:
{flatten_relevant_memories}
\n\n
Question: {data}
"""

reply = agent.generate_reply(messages=[{"content": prompt, "role": "user"}])
print("Reply :", reply)

# Multi Agent Conversation
manager = ConversableAgent(
    "manager",
    system_message="You are a manager who helps in resolving customer issues.",
    llm_config=LLMConfig({"api_type": "openai", "model": "gpt-5-nano", "temperature": 0, "api_key": os.environ.get("OPENAI_API_KEY")}),
    human_input_mode="NEVER",
)

customer_bot = ConversableAgent(
    "customer_bot",
    system_message="You are a customer service bot who gathers information on issues customers are facing.",
    llm_config=LLMConfig({"api_type": "openai", "model": "gpt-5-nano", "temperature": 0, "api_key": os.environ.get("OPENAI_API_KEY")}),
    human_input_mode="NEVER",
)

data = "What can the customer do to resolve the issue?"
relevant_memories = memory.search(data, user_id="customer_service_bot")
flatten_relevant_memories = "\n".join([m["memory"] for m in relevant_memories["results"]])

prompt = f"""
Context:
{flatten_relevant_memories}
\n\n
Question: {data}
"""

result = manager.send(prompt, customer_bot, request_reply=True)

# Release the local vector store used by Mem0
memory.vector_store.client.close()
memory._telemetry_vector_store.client.close()
```
The complete code is available in the notebook: Mem0 with AG2.
This example showcases:

1. Setting up AG2 agents and Mem0 memory
2. Adding a conversation to Mem0 memory
3. Using Mem0 to retrieve relevant memories for agent inference
4. Implementing a multi-agent conversation with memory-augmented context
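The retrieval step above follows a simple pattern: `Memory.search` returns a dict with a `results` list of memory records, which the example flattens into the prompt. As a sketch, that formatting step can be pulled into a standalone helper (`build_prompt` is an illustrative name, not part of the Mem0 or AG2 APIs):

```python
def build_prompt(question: str, search_results: dict) -> str:
    """Flatten Mem0-style search results into a memory-augmented prompt.

    `search_results` is expected to have the shape returned by
    `Memory.search`: {"results": [{"memory": "..."}, ...]}.
    """
    memories = "\n".join(m["memory"] for m in search_results.get("results", []))
    return (
        "Answer the user question considering the memories.\n"
        f"Memories:\n{memories}\n\n"
        f"Question: {question}\n"
    )

# Exercise the helper with a hand-built result set (no Mem0 call involved):
fake_results = {"results": [{"memory": "Customer owns a Sony BRAVIA XR A80K OLED TV"}]}
prompt = build_prompt("Which TV am I using?", fake_results)
```

In the example itself, you would pass the dict returned by `memory.search(...)` instead of `fake_results`, then hand the resulting prompt to `agent.generate_reply`.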
## Mem0 Platform (Hosted) Alternative
If you prefer the hosted Mem0 Platform, use the `MemoryClient` with an API key:

```python
import os

from mem0 import MemoryClient

client = MemoryClient(api_key=os.getenv("MEM0_API_KEY"))
results = client.search("Which TV am I using?", filters={"user_id": "customer_service_bot"})
```
For more Mem0 examples, visit our documentation.
