Lesson 12 | Agent Collaboration: Multi-Agent Communication & Task Delegation

Updated on 4/15/2026

Subtitle: Explore Hermes's Agent-to-Agent communication protocol and learn how to build an "AI team" composed of multiple specialized Agents

Learning Objectives

In this lesson, you will delve into one of the most exciting features of the Hermes Agent framework: multi-agent collaboration. Upon completing this chapter, you will be able to:

  1. Understand the necessity of multi-agent collaboration: Recognize why breaking down complex tasks and delegating them to multiple specialized Agents is key to building advanced AI applications.
  2. Master the core collaboration architecture: Gain a deep understanding of the three core components of Hermes Agent collaboration: the Agent Registry, the HIACP protocol, and the internal message bus.
  3. Configure and launch multiple Agents: Learn how to configure a unique identity and communication endpoint for each Agent, enabling them to discover each other on the network.
  4. Implement task delegation and response: Practice how to invoke another Agent's capabilities from within one Agent and handle the returned results by writing a new Skill.
  5. Build and run an "AI team": Get hands-on experience building a team of three Agents—a "Project Manager," a "Researcher," and a "Coder"—to collaboratively complete a complex task.

Core Concepts Explained

As the Agent applications we build become more complex, we quickly encounter the bottleneck of the "Monolithic Agent." An Agent that tries to be an expert in all domains often fails to achieve expert-level performance in any single one. Just as we rely on teamwork in the human world, the future of AI Agents is inevitably moving towards collective intelligence and collaboration.

The Hermes Agent framework provides a complete set of Agent-to-Agent (A2A) communication and collaboration mechanisms for this purpose. Its design philosophy draws from mature software engineering concepts like Microservices Architecture and the Actor Model.

The "AI Team" Paradigm

Imagine you need to complete the following task: "Research the latest AI technology 'Transformer,' write a simple code example, and summarize it into a technical blog post."

A single Agent's workflow for this task might be:

  1. Invoke a search tool to browse numerous web pages.
  2. Digest and understand the information internally (in its context).
  3. Switch to code generation mode to write the code.
  4. Switch again to writing mode to format the text.

This process is inefficient and prone to errors. The cost of context switching is high, and the Agent might experience "cognitive dissonance" when shifting between different task modes.

In contrast, an "AI team" operates completely differently:

  • Project Manager Agent: Acts as the main entry point, receiving high-level instructions from the user. It doesn't execute specific tasks itself but is responsible for task decomposition and coordination.
  • Researcher Agent: Specializes in information retrieval. It possesses powerful Skills like web_search and read_document and is configured with a dedicated search engine API key. Its core mission is to provide information quickly and accurately.
  • Coder Agent: Proficient in multiple programming languages, with Skills like write_code, debug_code, and run_test. It receives clear programming requirements and produces high-quality code.
  • Writer Agent: Excels at text polishing, formatting, and style conversion. It is responsible for integrating the raw information and code into a coherent, professional article.

The advantages of this model are obvious:

  • Specialization: Each Agent can use models, Prompts, and Skills optimized for its specific task.
  • Scalability: New expert Agents, such as a "Data Analyst Agent" or a "Designer Agent," can be easily added to the team.
  • Robustness: The failure of a single Agent does not bring down the entire system. The Project Manager Agent can re-delegate the task to a backup Agent.
  • Concurrency: Multiple tasks can be executed in parallel. For example, the Coder Agent can start building the code framework while the Researcher Agent is gathering information.

To realize this paradigm, Hermes provides the following three core components.

1. Agent Registry

How does an Agent know which other Agents exist on the network and what they are good at? The answer is the Agent Registry.

The Agent Registry is a service discovery component. When a Hermes Agent starts up with collaboration mode enabled, it registers its information with the Registry. This typically includes:

  • agent_id: A globally unique Agent name, such as researcher-01.
  • address: The Agent's network address and port, like 127.0.0.1:8001.
  • capabilities: A list of skills the Agent possesses, such as ["web_search", "summarize_text"]. This allows other Agents to find services based on capability.

When an Agent (like the PM Agent) needs to delegate a task, it first queries the Registry: "Who has the web_search Skill?" The Registry returns a list of all matching Agents, and the PM Agent can then choose one to communicate with.

In Hermes, any Agent can be configured to also serve as the Registry to simplify deployment. In a production environment, however, deploying a standalone Registry service is recommended.
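To make the discovery flow concrete, here is a minimal in-memory sketch of what a registry does: agents register their `agent_id`, `address`, and `capabilities`, and other agents look them up by skill. This is an illustration only, not the Hermes implementation; the class and method names here are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str
    address: str
    capabilities: list[str] = field(default_factory=list)

class InMemoryRegistry:
    """Toy registry: agents register themselves, others query by capability."""

    def __init__(self):
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        # Re-registering under the same agent_id overwrites the old record.
        self._agents[record.agent_id] = record

    def find_by_capability(self, skill: str) -> list[AgentRecord]:
        # Return every registered agent that advertises the given skill.
        return [r for r in self._agents.values() if skill in r.capabilities]

# The PM Agent's question "Who has the web_search Skill?" then becomes:
registry = InMemoryRegistry()
registry.register(AgentRecord("researcher-01", "127.0.0.1:8001",
                              ["web_search", "summarize_text"]))
registry.register(AgentRecord("coder-01", "127.0.0.1:8002", ["write_code"]))

matches = registry.find_by_capability("web_search")
print([r.agent_id for r in matches])  # → ['researcher-01']
```

The real registry adds networking, heartbeats, and deregistration on shutdown, but the lookup contract is the same: capability in, list of candidate agents out.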

2. Hermes Inter-Agent Communication Protocol (HIACP)

Once Agents have discovered each other, they need a common language to communicate. This is the role of the HIACP protocol. HIACP is a standardized JSON message format that defines the structure and semantics of interactions between Agents. Its design is inspired by the classic FIPA-ACL (Agent Communication Language).

A typical HIACP message structure is as follows:

{
  "protocol_version": "1.0",
  "message_id": "uuid-1234-abcd-5678",
  "conversation_id": "conv-xyz-987",
  "sender_id": "pm-agent-main",
  "receiver_id": "researcher-01",
  "performative": "DELEGATE_TASK",
  "content": {
    "task_name": "research_topic",
    "parameters": {
      "topic": "What is the Transformer architecture in AI?",
      "depth": "detailed"
    }
  },
  "timestamp": "2023-10-27T10:00:00Z"
}

Key field breakdown:

  • message_id: A unique identifier for each message.
  • conversation_id: Used to track a complete multi-step dialogue or task flow.
  • sender_id / receiver_id: The agent_id of the sender and receiver.
  • performative: The intent of the message, which is the core of HIACP. Common performative values include:
    • REQUEST: Asks the recipient to perform an action (usually invoking a Skill).
    • INFORM: Sends a declarative fact or result.
    • QUERY_REF: Queries for a piece of information.
    • DELEGATE_TASK: Delegates a complete sub-task to the recipient.
    • ACCEPT / REFUSE: Agrees to or rejects a request.
    • FAILURE: Notifies that a task has failed.
  • content: The specific payload of the message, with its structure determined by the task itself.

By adhering to HIACP, Hermes ensures predictable and reliable communication between Agents developed by different people, or even different versions of Agents.
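In code, an HIACP envelope is easy to model. The sketch below mirrors the field names from the JSON example above; the `Message` dataclass is a stand-in for illustration, not the framework's actual class (the timestamp field is omitted for brevity).

```python
import json
import uuid
from dataclasses import dataclass, field, asdict

@dataclass
class Message:
    """Minimal HIACP-style envelope mirroring the JSON structure above."""
    sender_id: str
    receiver_id: str
    performative: str
    content: dict
    protocol_version: str = "1.0"
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    conversation_id: str = ""

    def to_json(self) -> str:
        # Serialize the whole envelope for transport over the wire.
        return json.dumps(asdict(self))

# A REQUEST asking researcher-01 to run its web_search skill:
msg = Message(
    sender_id="pm-agent-main",
    receiver_id="researcher-01",
    performative="REQUEST",
    content={"skill": "web_search", "parameters": {"topic": "Transformer"}},
    conversation_id="conv-xyz-987",
)

decoded = json.loads(msg.to_json())
print(decoded["performative"], decoded["receiver_id"])  # → REQUEST researcher-01
```

Because every message carries a `conversation_id`, a multi-step exchange (REQUEST → ACCEPT → INFORM) can be threaded back together even when messages arrive interleaved with other conversations.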

3. Internal Message Bus

HIACP defines "what to say," while the message bus addresses "how to say it." It is responsible for the reliable transmission of messages. Hermes includes a lightweight, asynchronous message bus built on ZeroMQ.

When you send an HIACP message via the Agent's collaboration API (e.g., agent.collaborate.send(...)), you are actually pushing it onto this message bus. The bus handles low-level details like network connections, message serialization, timeouts, and retries, allowing Skill developers to focus on business logic.

This decoupled design means that even if the target Agent is temporarily offline, the message bus can cache the message for a period and deliver it once the Agent comes back online, greatly enhancing system resilience.
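The buffering behavior can be illustrated with a toy asyncio bus: messages for an offline agent simply queue up, and are delivered in order once it connects. This is a conceptual sketch, not the ZeroMQ-based implementation Hermes ships.

```python
import asyncio
from collections import defaultdict

class ToyBus:
    """Queues messages per receiver; an offline receiver's messages wait."""

    def __init__(self):
        self._queues = defaultdict(asyncio.Queue)
        self._online = set()

    def send(self, receiver_id: str, message: dict) -> None:
        # Enqueue regardless of receiver state; delivery happens on receive.
        self._queues[receiver_id].put_nowait(message)

    def connect(self, agent_id: str) -> None:
        self._online.add(agent_id)

    async def receive(self, agent_id: str) -> dict:
        assert agent_id in self._online, "agent must connect before receiving"
        return await self._queues[agent_id].get()

async def main() -> list[dict]:
    bus = ToyBus()
    # researcher_01 is offline when these messages are sent...
    bus.send("researcher_01", {"performative": "REQUEST", "n": 1})
    bus.send("researcher_01", {"performative": "REQUEST", "n": 2})
    # ...but once it connects, the cached messages are delivered in order.
    bus.connect("researcher_01")
    return [await bus.receive("researcher_01") for _ in range(2)]

delivered = asyncio.run(main())
print([m["n"] for m in delivered])  # → [1, 2]
```

A production bus adds bounded queues, time-to-live on cached messages, and acknowledgements, but the core decoupling shown here is what lets the sender fire a message without caring whether the receiver is up at that instant.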


💻 Practical Demonstration

Now, let's get our hands dirty and build an "AI team" of three Agents to complete a task in the same spirit as the earlier example: "Please research what LangChain is, write a simple Python code example, and compile it all into a report."

Our team members:

  1. pm_agent: The Project Manager, user entry point, responsible for task decomposition and coordination. It will also act as the Agent Registry.
  2. research_agent: The Researcher, solely responsible for invoking the web_search Skill.
  3. coder_agent: The Coder, solely responsible for writing Python code based on requirements.

Step 1: Environment Setup and Directory Structure

First, let's create our project root directory and subdirectories for the three Agents.

mkdir hermes_ai_team
cd hermes_ai_team

mkdir pm_agent
mkdir pm_agent/skills

mkdir research_agent
mkdir research_agent/skills

mkdir coder_agent
mkdir coder_agent/skills

Step 2: Agent Configuration

We need to create a config.yml file for each Agent.

1. pm_agent/config.yml

This Agent is our core and will serve as the Registry.

# pm_agent/config.yml
agent:
  name: ProjectManagerAgent
  description: The main agent that interacts with the user and delegates tasks.

# Use a local Ollama model as the brain
llm_provider:
  default: ollama
  ollama:
    model: llama3:8b
    api_base: http://localhost:11434

# Enable collaboration mode
collaboration:
  enabled: true
  agent_id: pm_agent_main  # Globally unique ID
  host: 0.0.0.0
  port: 8000
  # Set this Agent to act as the Registry service
  registry:
    enabled: true
    host: 0.0.0.0
    port: 8080 # Port for the Registry service

2. research_agent/config.yml

This Agent is a client; it needs to know where the Registry is.

# research_agent/config.yml
agent:
  name: ResearcherAgent
  description: A specialized agent for searching the web and summarizing information.

llm_provider:
  default: ollama
  ollama:
    model: llama3:8b
    api_base: http://localhost:11434

collaboration:
  enabled: true
  agent_id: researcher_01
  host: 0.0.0.0
  port: 8001 # Its own service port
  # Points to the Registry service provided by the PM Agent
  registry_url: http://127.0.0.1:8080 

3. coder_agent/config.yml

Similar to research_agent, this also points to the same Registry.

# coder_agent/config.yml
agent:
  name: CoderAgent
  description: A specialized agent for writing and explaining code snippets.

llm_provider:
  default: ollama
  ollama:
    model: llama3:8b
    api_base: http://localhost:11434

collaboration:
  enabled: true
  agent_id: coder_01
  host: 0.0.0.0
  port: 8002 # Its own service port
  registry_url: http://127.0.0.1:8080

Step 3: Create Specialized Skills

Now, let's add the specialized skills for our research_agent and coder_agent.

1. research_agent/skills/web_research.py

For simplicity, we'll use pseudo-code to simulate a web search. In a real scenario, you could integrate search APIs like Tavily or Serper.

# research_agent/skills/web_research.py
from hermes_agent.skills import Skill

class WebResearchSkill(Skill):
    name = "web_research"
    description = "Performs web research on a given topic and returns a summary."

    def __init__(self, agent):
        super().__init__(agent)

    async def execute(self, topic: str) -> str:
        """
        Simulates web research.
        In a real-world scenario, this would use a search API.
        """
        self.agent.logger.info(f"Received research request for topic: {topic}")
        if "langchain" in topic.lower():
            return (
                "LangChain is a framework for developing applications powered by "
                "language models. It provides a standard interface for chains, "
                "a plethora of integrations with other tools, and end-to-end chains "
                "for common applications. Key components include Models, Prompts, "
                "Indexes, Chains, and Agents."
            )
        return f"Sorry, I couldn't find information on {topic}."

2. coder_agent/skills/code_generator.py

This Skill receives a requirement and generates code.

# coder_agent/skills/code_generator.py
from hermes_agent.skills import Skill

class CodeGeneratorSkill(Skill):
    name = "generate_python_example"
    description = "Generates a simple Python code example based on a description."

    def __init__(self, agent):
        super().__init__(agent)

    async def execute(self, requirement: str) -> str:
        """
        Simulates code generation.
        In a real-world scenario, this would involve a call to an LLM with a specific prompt.
        """
        self.agent.logger.info(f"Received code generation request for: {requirement}")
        if "langchain" in requirement.lower() and "hello world" in requirement.lower():
            code = """
# main.py
from langchain_community.llms import Ollama
from langchain_core.prompts import ChatPromptTemplate

# Initialize the LLM
llm = Ollama(model="llama3:8b")

# Create a prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])

# Create a chain
chain = prompt | llm 

# Invoke the chain
response = chain.invoke({"input": "Hello, LangChain!"})
print(response)
"""
            return f"Here is a simple 'Hello World' example for LangChain:\n```python\n{code.strip()}\n```"
        return "Sorry, I can't generate code for that requirement."

Step 4: Write the Core Delegation Skill

This is the core of our practical exercise. We'll create a Skill in the pm_agent that will be responsible for calling the other two Agents.

pm_agent/skills/research_and_code_coordinator.py

# pm_agent/skills/research_and_code_coordinator.py
import asyncio
from hermes_agent.skills import Skill
from hermes_agent.collaboration import HIACPMessage, Performative

class ResearchAndCodeCoordinatorSkill(Skill):
    name = "coordinate_research_and_code"
    description = (
        "Coordinates research and coding tasks by delegating to specialized agents. "
        "Use this for complex queries involving both information retrieval and code generation."
    )

    def __init__(self, agent):
        super().__init__(agent)

    async def execute(self, topic: str) -> str:
        self.agent.logger.info(f"Starting coordination for topic: {topic}")

        # --- Step 1: Delegate research task ---
        self.agent.logger.info("Finding a research agent...")
        # Discover an agent that has the 'web_research' capability
        # The collaboration module handles querying the registry
        researcher_id = await self.agent.collaboration.discover_agent_by_skill("web_research")
        
        if not researcher_id:
            return "Error: Could not find any agent with 'web_research' capability."
        
        self.agent.logger.info(f"Found researcher: {researcher_id}. Delegating research task...")
        
        # Construct the HIACP message
        research_task = HIACPMessage(
            receiver_id=researcher_id,
            performative=Performative.REQUEST,
            content={"skill": "web_research", "parameters": {"topic": f"What is {topic}?"}}
        )
        
        # Send the message and wait for the response. The timeout is in seconds.
        research_response = await self.agent.collaboration.send_and_wait(research_task, timeout=30)
        
        if not research_response or research_response.performative == Performative.FAILURE:
            self.agent.logger.error("Research task failed or timed out.")
            return "Error: The research task failed."
        
        research_summary = research_response.content.get("result", "")
        self.agent.logger.info(f"Received research summary: {research_summary[:100]}...")

        # --- Step 2: Delegate coding task ---
        self.agent.logger.info("Finding a coder agent...")
        coder_id = await self.agent.collaboration.discover_agent_by_skill("generate_python_example")

        if not coder_id:
            return "Error: Could not find any agent with 'generate_python_example' capability."
            
        self.agent.logger.info(f"Found coder: {coder_id}. Delegating coding task...")

        coding_requirement = f"A simple 'hello world' style example for {topic}, based on this info: {research_summary}"
        
        code_task = HIACPMessage(
            receiver_id=coder_id,
            performative=Performative.REQUEST,
            content={"skill": "generate_python_example", "parameters": {"requirement": coding_requirement}}
        )
        
        code_response = await self.agent.collaboration.send_and_wait(code_task, timeout=30)
        
        if not code_response or code_response.performative == Performative.FAILURE:
            self.agent.logger.error("Coding task failed or timed out.")
            return "Error: The coding task failed."
            
        code_example = code_response.content.get("result", "")
        self.agent.logger.info("Received code example.")

        # --- Step 3: Synthesize the final report ---
        self.agent.logger.info("Synthesizing the final report...")
        final_prompt = f"""
        You are a tech writer. Based on the following information, please generate a final report.
        The report should include an introduction based on the research summary and a code section with the provided example.

        Research Summary:
        {research_summary}

        Code Example:
        {code_example}
        """

        # Use the agent's own LLM to generate the final output
        final_report = await self.agent.llm.invoke(final_prompt)
        
        return final_report
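Note that the coordinator above delegates the research and coding tasks sequentially, because the coding requirement depends on the research summary. When sub-tasks are independent of each other, they can be dispatched in parallel with asyncio.gather, which is where the Concurrency advantage pays off. Here is a generic sketch; the delegate coroutine below is a stand-in for send_and_wait, not a real framework call.

```python
import asyncio

async def delegate(agent_id: str, task: dict) -> dict:
    """Stand-in for self.agent.collaboration.send_and_wait(...)."""
    await asyncio.sleep(0.1)  # simulate the network round-trip to the agent
    return {"agent": agent_id, "result": f"done: {task['skill']}"}

async def coordinate() -> list[dict]:
    # Two independent sub-tasks run concurrently; the total wait is roughly
    # one round-trip (~0.1 s) instead of two, because both calls overlap.
    return await asyncio.gather(
        delegate("researcher_01", {"skill": "web_research"}),
        delegate("coder_01", {"skill": "generate_python_example"}),
    )

results = asyncio.run(coordinate())
print([r["agent"] for r in results])  # → ['researcher_01', 'coder_01']
```

`asyncio.gather` returns the results in the order the coroutines were passed in, so the coordinator can still match each response to the sub-task that produced it.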

Step 5: Launch and Test the "AI Team"

Now for the exciting part. We need to open three terminal windows.

Terminal 1: Launch the PM Agent (and Registry)

hermes start --config hermes_ai_team/pm_agent/config.yml

You will see logs indicating that ProjectManagerAgent has started and the Registry service is listening on 0.0.0.0:8080.

Terminal 2: Launch the Researcher Agent

hermes start --config hermes_ai_team/research_agent/config.yml

In the pm_agent's logs, you should see a message like [Registry] Agent researcher_01 registered.

Terminal 3: Launch the Coder Agent

hermes start --config hermes_ai_team/coder_agent/config.yml

Similarly, the pm_agent's logs will show [Registry] Agent coder_01 registered.

Terminal 4: Interact with the PM Agent

Our AI team is now ready. Open a fourth terminal and use the Hermes CLI to chat with the pm_agent.

# The PM Agent is running on port 8000
hermes chat --port 8000

Once in the chat interface, enter our task instruction:

> Please research what LangChain is, write a simple Python code example, and compile it all into a report.

Now, observe the terminal logs of all three Agents. You will see a fascinating collaboration flow:

  1. PM Agent Logs:

    • INFO: Starting coordination for topic: LangChain
    • INFO: Finding a research agent...
    • INFO: Found researcher: researcher_01. Delegating research task...
    • INFO: Received research summary: LangChain is a framework...
    • INFO: Finding a coder agent...
    • INFO: Found coder: coder_01. Delegating coding task...
    • INFO: Received code example.
    • INFO: Synthesizing the final report...
  2. Researcher Agent Logs:

    • INFO: Received research request for topic: What is LangChain?
    • (Logs show task completion and result return)
  3. Coder Agent Logs:

    • INFO: Received code generation request for: A simple 'hello world' style example for LangChain...
    • (Logs show task completion and result return)

Finally, the pm_agent will return the fully integrated report to you, which will look something like this:

Of course, here is the research report and code example for LangChain.

### Introduction

LangChain is a framework for developing applications powered by language models. It provides a standard interface for chains, a plethora of integrations with other tools, and end-to-end chains for common applications. Its core components include Models, Prompts, Indexes, Chains, and Agents.

### Python Code Example

Here is a simple "Hello World" example using LangChain and Ollama:

```python
# main.py
from langchain_community.llms import Ollama
from langchain_core.prompts import ChatPromptTemplate

# Initialize the LLM
llm = Ollama(model="llama3:8b")

# Create a prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])

# Create a chain
chain = prompt | llm 

# Invoke the chain
response = chain.invoke({"input": "Hello, LangChain!"})
print(response)

```

Congratulations! You have successfully built and commanded an AI team composed of multiple specialized Agents!

---

## Commands Used

*   `mkdir <directory_name>`: Creates a new directory.
*   `hermes start --config <path_to_config.yml>`: Starts a Hermes Agent instance based on the specified configuration file.
*   `hermes chat --port <port_number>`: Interacts with a Hermes Agent running on the specified port via the command line.

---