# Built-in Provider Tools
AG2 includes built-in tools that map to server-side capabilities offered by LLM providers. These tools are executed by the provider's API — not locally — and require no function implementation on your side.
| Tool | Anthropic | OpenAI | Gemini |
|---|---|---|---|
| `CodeExecutionTool` | ✓ | ✓ | ✓ |
| `WebSearchTool` | ✓ | ✓ | ✓ |
| `WebFetchTool` | ✓ | ✗ | ✓ |
| `ShellTool` | ✓ | ✓ | ✗ |
| `MCPServerTool` | ✓ | ✓ | ✗ |
| `ImageGenerationTool` | ✗ | ✓ | ✗ |
| `MemoryTool` | ✓ | ✗ | ✗ |
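To make "server-side" concrete: these tools serialize to provider-native tool entries in the request payload rather than to local function schemas. The sketch below shows roughly what those entries look like on the wire for a web search tool; the exact type strings vary by provider API revision, so treat them as illustrative.

```python
# Rough sketch of the provider-side tool entries these classes map to.
# Type strings are paraphrased from the provider APIs, not AG2 internals.
openai_request = {
    "model": "gpt-4.1",
    "input": "What changed in Python 3.13?",
    # Server-side tool: executed by OpenAI, no local implementation needed.
    "tools": [{"type": "web_search"}],
}

anthropic_request = {
    "model": "claude-sonnet-4-5",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "What changed in Python 3.13?"}],
    # Anthropic server-side tools carry a dated, versioned type string.
    "tools": [{"type": "web_search_20250305", "name": "web_search"}],
}
```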
## Web Search
Gives the model access to real-time web search results.
Not all parameters are supported by every provider. Unsupported parameters are silently ignored.
| Parameter | Anthropic | OpenAI | Gemini |
|---|---|---|---|
| `max_uses` | ✓ | ✓ | ✗ |
| `user_location` | ✓ | ✓ | ✗ |
| `search_context_size` | ✗ | ✓ | ✗ |
| `allowed_domains` | ✓ | ✓ | ✗ |
| `blocked_domains` | ✓ | ✗ | ✓ |
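As an illustration of how these parameters combine, here is an Anthropic-style web search tool entry built from the table above. The dated `type` string follows Anthropic's tool-versioning convention at the time of writing; check the provider docs for the current revision.

```python
# Illustrative Anthropic web search tool entry using the parameters above.
web_search_tool = {
    "type": "web_search_20250305",
    "name": "web_search",
    "max_uses": 5,
    "user_location": {
        "type": "approximate",
        "city": "San Francisco",
        "country": "US",
        "timezone": "America/Los_Angeles",
    },
    # Anthropic accepts allowed_domains OR blocked_domains, not both at once.
    "allowed_domains": ["docs.python.org", "peps.python.org"],
}
```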
## Web Fetch
Fetches full content from specific URLs. Useful for reading documentation, articles, or PDFs.
| Parameter | Anthropic | Gemini |
|---|---|---|
| `max_uses` | ✓ | ✗ |
| `allowed_domains` | ✓ | ✗ |
| `blocked_domains` | ✓ | ✗ |
| `citations` | ✓ | ✗ |
| `max_content_tokens` | ✓ | ✗ |
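Combining the Anthropic-supported parameters, a web fetch tool entry looks roughly like the following. The dated `type` string is an assumption based on Anthropic's versioning convention; verify it against the current docs.

```python
# Illustrative Anthropic web fetch tool entry built from the table above.
web_fetch_tool = {
    "type": "web_fetch_20250910",
    "name": "web_fetch",
    "max_uses": 3,
    "allowed_domains": ["docs.python.org"],  # mutually exclusive with blocked_domains
    "citations": {"enabled": True},
    "max_content_tokens": 50_000,  # cap on fetched content loaded into context
}
```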
> **Note**
> OpenAI does not support web fetch. Using `WebFetchTool` with an OpenAI config will raise an error.
## Code Execution
Lets the model write and run code inline during a conversation.
The tool accepts a `version` parameter for pinning a specific provider tool revision.
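As a hypothetical sketch of what version pinning selects (the mapping below is illustrative, not copied from the AG2 source; the dated strings follow Anthropic's tool-revision convention):

```python
# Anthropic dates its code execution tool revisions; pinning a version
# selects which revision string goes into the request's tool entry.
ANTHROPIC_CODE_EXECUTION_VERSIONS = {
    # Older revision, broadly compatible across Claude models.
    "legacy": "code_execution_20250522",
    # Assumed newer revision string; check the Anthropic docs.
    "current": "code_execution_20250825",
}

tool_entry = {
    "type": ANTHROPIC_CODE_EXECUTION_VERSIONS["legacy"],
    "name": "code_execution",
}
```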
## Memory
Enables Claude to store and retrieve information across conversations. Claude can create, read, update, and delete files in a `/memories` directory.
> **Note**
> `MemoryTool` is currently only supported by Anthropic.
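To sketch the shapes involved (the dated type string and the command fields are paraphrased from Anthropic's memory tool convention and are not verified against AG2):

```python
# Illustrative Anthropic memory tool entry.
memory_tool = {"type": "memory_20250818", "name": "memory"}

# Claude then issues memory commands against the /memories directory;
# a tool-use input is shaped roughly like this (hypothetical example):
example_command = {
    "command": "create",
    "path": "/memories/user_preferences.md",
    "file_text": "Prefers concise answers.\n",
}
```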
## Shell
Gives the model the ability to run shell commands. The execution environment depends on the provider.
OpenAI supports configuring the execution environment:
| Environment | Description |
|---|---|
| `ContainerAutoEnvironment` | Provider-managed container with optional network policy |
| `ContainerReferenceEnvironment` | Reference an existing container by ID |
| `LocalEnvironment` | Client-side execution |
> **Warning**
> `ShellTool` gives the model direct shell access. Use it only with trusted prompts and consider restricting the environment.
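The stand-ins below mirror the environment table to show how the three options differ structurally; the real AG2 classes and their constructor arguments may differ, and the `network_policy` and `container_id` argument names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Minimal stand-ins mirroring the environment table above (not the real
# AG2 classes; names of constructor arguments are assumptions).
@dataclass
class ContainerAutoEnvironment:
    # Optional network policy for the provider-managed container.
    network_policy: Optional[str] = None

@dataclass
class ContainerReferenceEnvironment:
    # Reuse an existing provider-side container by its ID.
    container_id: str = ""

@dataclass
class LocalEnvironment:
    # Client-side execution: commands run on your machine.
    pass

env = ContainerReferenceEnvironment(container_id="cntr_abc123")
```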
## MCP Server
Integrates external MCP (Model Context Protocol) servers, giving the model access to remote tools.
| Parameter | Anthropic | OpenAI |
|---|---|---|
| `server_url` | ✓ | ✓ |
| `server_label` | ✓ | ✓ |
| `authorization_token` | ✓ | ✗ |
| `description` | ✓ | ✗ |
| `allowed_tools` | ✓ | ✓ |
| `blocked_tools` | ✓ | ✗ |
| `headers` | ✗ | ✓ |
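The parameter split in the table reflects how each provider shapes its MCP configuration. The payloads below are paraphrased from the Anthropic and OpenAI docs and may lag the current APIs; the URL, label, and tool names are placeholders.

```python
# Illustrative provider payloads the MCP parameters map to.
anthropic_mcp = {
    "type": "url",
    "url": "https://mcp.example.com/sse",
    "name": "example-server",
    "authorization_token": "TOKEN",  # Anthropic-only auth field
    "tool_configuration": {"allowed_tools": ["search", "fetch"]},
}

openai_mcp = {
    "type": "mcp",
    "server_label": "example-server",
    "server_url": "https://mcp.example.com/sse",
    "headers": {"Authorization": "Bearer TOKEN"},  # OpenAI-only headers field
    "allowed_tools": ["search", "fetch"],
}
```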
## Image Generation
Instructs the model to generate images inline during a conversation. Generated images are returned via `reply.images`.
| Parameter | Description |
|---|---|
| `quality` | `"low"`, `"medium"`, `"high"`, or `"auto"` |
| `size` | e.g. `"1024x1024"`, `"1536x1024"`, or `"auto"` |
| `background` | `"transparent"`, `"opaque"`, or `"auto"` |
| `output_format` | `"png"`, `"jpeg"`, or `"webp"` |
| `output_compression` | 0–100, for jpeg/webp only |
| `partial_images` | 1–3, number of partial images to stream |
> **Note**
> `ImageGenerationTool` is only supported by OpenAI (Responses API). Using it with other providers will raise an error.
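Putting the parameters together, an OpenAI-style image generation tool entry looks roughly like the following sketch (shape paraphrased from the OpenAI Responses API; `output_compression` is omitted because it only applies to jpeg/webp):

```python
# Illustrative OpenAI image generation tool entry built from the table.
image_tool = {
    "type": "image_generation",
    "quality": "high",
    "size": "1024x1024",
    "background": "transparent",
    "output_format": "png",
    "partial_images": 2,  # stream 2 intermediate renders
}
```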
## Anthropic Tool Versions
Anthropic versions its server-side tools. Newer versions support dynamic filtering (Claude writes code to filter results before loading them into context) but require Opus 4.6 or Sonnet 4.6.
Set the `version` on each built-in tool; the defaults match the older Anthropic tool revisions.
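As a hypothetical sketch of pinning defaults across the tools on this page (the mapping is illustrative, not copied from AG2 source; the dated strings follow Anthropic's revision convention and should be checked against the current docs):

```python
# Assumed default revision strings, one per built-in tool.
DEFAULT_ANTHROPIC_VERSIONS = {
    "WebSearchTool": "web_search_20250305",
    "WebFetchTool": "web_fetch_20250910",
    "CodeExecutionTool": "code_execution_20250522",
    "MemoryTool": "memory_20250818",
}

# e.g. (hypothetical constructor call):
# WebSearchTool(version=DEFAULT_ANTHROPIC_VERSIONS["WebSearchTool"])
```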
The default versions are compatible with all Claude models including Haiku.