
A2A (Agent-to-Agent) Communication#

The A2A (Agent-to-Agent) protocol support in AG2 enables agents to communicate across different processes, machines, applications, and even different frameworks and languages. This lets you build distributed agent systems in which agents are deployed as separate services and interact with each other seamlessly.

Consider using A2A when:

  • The agent you need to talk to is a separate, standalone service (e.g., a specialized financial modeling agent)
  • The agent is maintained by a different team or organization
  • You need to connect agents written in different programming languages or agent frameworks
  • You want to enforce a strong, formal contract (the A2A protocol) between your system's components

Note

We use the official A2A Python SDK to implement A2A support in AG2.

You can check its documentation for more details on how to use A2A in your own projects.

Creating an A2A Server#

We've built a convenience wrapper that exposes regular AG2 ConversableAgent agents as A2A servers. Let's take a quick look at how to use it:

server.py
from autogen import ConversableAgent, LLMConfig
from autogen.a2a import A2aAgentServer

# Create your regular agent
llm_config = LLMConfig({ "model": "gpt-4o-mini" })

agent = ConversableAgent(
    name="python_coder",
    system_message="You are an expert Python developer...",
    llm_config=llm_config,
    # set human_input_mode to "NEVER" to avoid prompting for human input on the server side
    human_input_mode="NEVER",
)

# Create A2A server
server = A2aAgentServer(agent).build()

Now you can start it using any ASGI server, for example:

uvicorn server:server

To read more about how to configure an A2A server, please refer to the A2A Server Setup documentation.

Creating an A2A Client#

client.py
from autogen.a2a import A2aRemoteAgent

remote_coding_agent = A2aRemoteAgent(
    url="http://localhost:8000",  # your server URL
    name="python_coder",
)

Now you can use it just like any other ConversableAgent:

from autogen import ConversableAgent, LLMConfig
from autogen.a2a import A2aRemoteAgent

llm_config = LLMConfig({ "model": "gpt-4o-mini" })

review_agent = ConversableAgent(
    name="reviewer",
    system_message="You are a code reviewer...",
    llm_config=llm_config,
)

remote_coding_agent = A2aRemoteAgent(
    url="http://localhost:8000",  # the A2A server started earlier
    name="python_coder",
)

# a_initiate_chat is a coroutine, so it must run inside an event loop
await review_agent.a_initiate_chat(
    recipient=remote_coding_agent,
    message={"role": "user", "content": "Write a function that reverses a string."},
)

Warning

A2aRemoteAgent supports only asynchronous methods; this is a limitation of the underlying A2A client.

To read more about how to use A2aRemoteAgent, please refer to the A2A Client Usage documentation.

Interoperability with other frameworks#

A2A is supported by other frameworks, so you can use A2aRemoteAgent with them, too.

As an example, you can easily connect your AG2 agents with Pydantic AI agents:

server.py
from pydantic_ai import Agent

agent = Agent(
    "openai:gpt-4.1",
    instructions="You are an expert Python developer...",
)
app = agent.to_a2a()

Now, without any changes, your A2aRemoteAgent clients can interact with it:

client.py
from autogen.a2a import A2aRemoteAgent

# works with A2A servers from other frameworks as well
remote_agent = A2aRemoteAgent(
    url="http://localhost:8000",
    name="python_coder",
)