AG2 provides a built-in TestConfig utility in the autogen.beta.testing module to help you write unit tests for your agents. It allows you to mock LLM responses and simulate tool execution scenarios without making actual API calls.
To mock LLM answers, use TestConfig in place of a standard model configuration and pass the expected responses as arguments. Each argument represents the mocked response for one turn in the conversation, consumed in order.
```python
import pytest

from autogen.beta import Agent
from autogen.beta.testing import TestConfig


@pytest.mark.asyncio
async def test_mock_llm_answer():
    agent = Agent("test_agent")

    # Ask the agent, passing a TestConfig with the mocked string response
    res = await agent.ask(
        "Hi!",
        config=TestConfig("This is a mocked response."),
    )

    # The agent returns the mocked response
    assert res.content == "This is a mocked response."
```
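To make the sequential-turn semantics concrete, here is a minimal, self-contained sketch of how a TestConfig-like stub could hand out one canned response per LLM turn. The class and method names here are illustrative only, not AG2's actual implementation:

```python
from collections import deque


class FakeTestConfig:
    """Hypothetical stand-in: yields one canned response per LLM turn."""

    def __init__(self, *responses):
        # Arguments are queued in the order they were passed,
        # mirroring how TestConfig maps its arguments onto turns.
        self._responses = deque(responses)

    def next_response(self):
        # Each call consumes the next mocked turn.
        return self._responses.popleft()


config = FakeTestConfig("first turn", "second turn")
print(config.next_response())  # first turn
print(config.next_response())  # second turn
```

The key point is that the first argument answers the first model call, the second argument answers the second, and so on.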
You can also use TestConfig to yield tool calls. This allows you to test both successful tool execution and error handling. By providing a ToolCall as the first response and a string as the final response, you can simulate a complete agent-tool interaction loop.
```python
import pytest

from autogen.beta import Agent
from autogen.beta.events import ToolCall
from autogen.beta.testing import TestConfig


@pytest.mark.asyncio
async def test_tool_success():
    # Define a tool
    def my_tool() -> str:
        return "tool execution result"

    agent = Agent("test_agent", tools=[my_tool])

    # Configure TestConfig to first return a ToolCall, then a final string answer
    test_config = TestConfig(
        ToolCall(name="my_tool"),
        "final result",
    )

    res = await agent.ask("Please use my_tool", config=test_config)

    # After the tool is called and succeeds, the agent returns the second mocked event
    assert res.content == "final result"
```
If the mocked ToolCall triggers a tool that raises an exception, the error propagates to your test, so you can assert on it with pytest.raises:

```python
import pytest

from autogen.beta import Agent
from autogen.beta.events import ToolCall
from autogen.beta.testing import TestConfig


@pytest.mark.asyncio
async def test_tool_raise_exc():
    # Define a tool that raises an error
    def failing_tool() -> str:
        raise ValueError("Something went wrong")

    test_config = TestConfig(
        ToolCall(name="failing_tool"),
        "result",
    )

    agent = Agent(
        "test_agent",
        config=test_config,
        tools=[failing_tool],
    )

    with pytest.raises(ValueError, match="Something went wrong"):
        await agent.ask("Hi!")
```
If the mocked ToolCall references a tool that is not registered on the agent, a ToolNotFoundError is raised:

```python
import pytest

from autogen.beta import Agent
from autogen.beta.events import ToolCall
from autogen.beta.exceptions import ToolNotFoundError
from autogen.beta.testing import TestConfig


@pytest.mark.asyncio
async def test_tool_not_found():
    # Mock the LLM returning a tool call for "unregistered_tool"
    test_config = TestConfig(ToolCall(name="unregistered_tool"))

    # Agent is created WITHOUT any tools
    agent = Agent("test_agent", config=test_config)

    with pytest.raises(ToolNotFoundError, match="Tool `unregistered_tool` not found"):
        await agent.ask("Hi!")
```
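The three tool scenarios above (success, tool exception, missing tool) all follow the same conceptual loop: drain mocked responses, execute any ToolCall against the registered tools, and stop at the first plain-string answer. A minimal, self-contained sketch of that loop, with purely illustrative names that are not AG2's internals:

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class FakeToolCall:
    name: str


class FakeToolNotFoundError(Exception):
    pass


def run_agent(responses, tools):
    """Consume mocked responses; run tool calls until a string answer appears."""
    queue = deque(responses)
    registry = {fn.__name__: fn for fn in tools}
    while queue:
        event = queue.popleft()
        if isinstance(event, FakeToolCall):
            if event.name not in registry:
                raise FakeToolNotFoundError(f"Tool `{event.name}` not found")
            registry[event.name]()  # a tool exception propagates to the caller
        else:
            return event  # a plain string ends the loop as the final answer
    return None


def my_tool() -> str:
    return "tool execution result"


print(run_agent([FakeToolCall(name="my_tool"), "final result"], [my_tool]))  # final result
```

This is why ordering matters when you build a TestConfig: tool-call events must come before the string that ends the turn, and the string is what your test asserts on.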