Issue 20 | Group Seminar (Network of Agents) Practical Simulation

Updated on 4/15/2026

Subtitle: The "tripartite game" cyclic flow: the Writer finishes a draft, hands it to the Editor for nitpicking, and gets it sent back for revision.

Hello students! I am your AI technical mentor. Today, we are not playing solo; we are doing a group fight! Oh no, I mean group collaboration! In the real world, no complex work can be handled by a single person, and the same goes for agents. The AI content agency we built previously had a Planner, Researcher, Writer, and Editor performing their respective duties, but the flow between them was often unidirectional. Today, we are going to break this linear thinking and introduce a more realistic and intelligent collaboration model—Network of Agents, especially the kind of cyclic flow with feedback and iteration.

Imagine your Writer works hard on a draft and confidently hands it to the Editor. The Editor takes one look, frowns, and says, "This won't do: the logic doesn't flow and the tone is off. Revise it!" And bam, the draft is back in the Writer's hands. The Writer, feeling wronged but armed with the Editor's annotations, keeps revising. Doesn't this back-and-forth look exactly like our daily work? This is the "tripartite game" cyclic flow we are going to simulate today with LangGraph: the Writer (writing party), the Editor (reviewing party), and the Content State running through it all.

This cyclic flow is an indispensable capability for building any complex AI system with iteration, optimization, and negotiation mechanisms. Once you master it, your AI Agent will no longer be a rigid assembly line worker, but an intelligent team member who can truly "think," "provide feedback," and "iterate." Are you ready? Let's dive into the core of LangGraph and unlock this advanced skill!

🎯 Learning Objectives for this Issue

After completing this issue, you will be able to:

  1. Understand and build LangGraph cyclic flows based on conditional judgments: Master how to use add_conditional_edges to implement dynamic feedback and iteration mechanisms among multiple agents.
  2. Refine the management of complex Graph State: Learn how to design and update shared states in multi-agent collaboration to carry the input, output, and decision-making information of each agent.
  3. Simulate real collaboration scenarios of an AI content agency: Deeply integrate the roles of Writer and Editor to achieve an automatic iterative optimization process from the first draft to the final draft.
  4. Master strategies to avoid loop traps and optimize iteration processes: Understand the problems that may be encountered when building cyclic flows, and learn how to design robust exit conditions and optimization mechanisms.

📖 Principle Analysis

In LangGraph, the core of building a multi-agent collaborative "Network of Agents" lies in the Graph State and Conditional Edges. Imagine the Graph State as a shared whiteboard where all agents write, draw, and update information. Conditional Edges are like the host of a meeting, deciding who should speak next based on the latest discussion results (state) on the whiteboard, or whether the meeting can be adjourned.

Our "tripartite game" today—the Writer writing, the Editor nitpicking, and the Writer revising—is essentially an iterative optimization loop.

  1. Writer (Writing Party): Receives a topic or content to be revised, and outputs a first draft or a revised draft.
  2. Editor (Reviewing Party): Receives the Writer's draft and evaluates it. It outputs two things:
    • Feedback: Tells the Writer what needs improvement.
    • Decision: Judges whether the draft meets the requirements (approved) or still needs revision (needs_revision).
  3. Content State: This is our "whiteboard," which records:
    • The current draft (current_article).
    • The Editor's latest feedback (editor_feedback).
    • The current status of the draft (status: drafting, revising, approved).
    • It can even record the number of revisions (revision_count) to prevent infinite loops.

The essence of this process lies in the Editor's decision. If the Editor decides needs_revision, the flow will return to the Writer node via a conditional edge. If the Editor decides approved, the flow terminates (or enters the next stage, such as publishing).
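
In code, the Editor's decision boils down to a small routing function over the shared state. Here is a minimal sketch of that idea (the function name and the cap of 3 revisions are illustrative assumptions; the practical section below builds the full version with `add_conditional_edges`):

```python
# A minimal sketch (illustrative names) of the Editor's decision expressed as a
# LangGraph-style routing function over the shared "whiteboard" state.
def route_after_editor(state: dict) -> str:
    """Decide where the flow goes after the Editor node."""
    if state["status"] == "approved":
        return "end"                      # draft accepted: leave the loop
    if state["revision_count"] >= 3:      # safety valve against infinite loops
        return "end"
    return "writer"                       # otherwise, back to the Writer
```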

Mermaid Diagram of the Core Architecture

Let's use a Mermaid diagram to visually display this cyclic flow:

graph TD
    A[Start] --> B("Writer: Write/Revise Article");
    B --> C{"Editor: Review Article"};
    C -- "Draft needs revision (needs_revision)" --> B;
    C -- "Draft approved (approved)" --> D["End: Article Finalized"];

    style A fill:#f9f,stroke:#333,stroke-width:2px;
    style B fill:#bbf,stroke:#333,stroke-width:2px;
    style C fill:#bfb,stroke:#333,stroke-width:2px;
    style D fill:#f9f,stroke:#333,stroke-width:2px;

Diagram Explanation:

  • Start: The starting point of the entire process.
  • Writer (Write/Revise Article): This is a node representing the operation of the Writer Agent. It will generate or update the article based on the current state (whether it is the initial writing or a revision based on feedback).
  • Editor (Review Article): This is another node representing the operation of the Editor Agent. It will receive the Writer's output, review it, and make a decision.
  • Conditional Edges:
    • Draft needs revision (needs_revision): If the Editor determines that the draft needs modification, this conditional edge will route the flow back to the Writer node, forming a loop.
    • Draft approved (approved): If the Editor determines that the draft meets the requirements, this conditional edge will route the flow to the end, finalizing the article.
  • End: The endpoint of the process, marking the final completion of the article.

This diagram clearly shows how, through LangGraph's conditional edge mechanism, we connect two independent agents (Writer and Editor) into a collaborative network with feedback and iteration capabilities. This is the core charm of the "Network of Agents"!

💻 Practical Code Drill (Specific Application in the Agency Project)

Alright, theory is beautiful, but code is more practical. Now, we will seamlessly integrate this "tripartite game" mechanism into our AI content creation agency. We will create an ArticleRevisionGraph, which will contain two core Agents, Writer and Editor, and achieve iterative article modification through LangGraph's StateGraph and add_conditional_edges features.

We will use a simplified AgentState to simulate article content, editorial feedback, and revision status.

import operator
import os
from typing import TypedDict, Annotated

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

# Ensure your OpenAI API Key is set in the environment variables
# os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

# Define the state of the graph
class AgentState(TypedDict):
    """
    Represent the state of our content agency's article revision process.
    """
    topic: str  # The topic of the article
    current_article: str  # The current content of the article
    editor_feedback: str  # Editor's feedback
    revision_count: Annotated[int, operator.add]  # Revision count, accumulates using operator.add
    status: str  # The current status of the article (e.g., "drafting", "revising", "approved")

# Simulated Writer Agent
class WriterAgent:
    def __init__(self, llm_model: str = "gpt-4o-mini"):
        self.llm = ChatOpenAI(model=llm_model, temperature=0.7)

    def write_or_revise(self, state: AgentState) -> AgentState:
        """
        Writer Agent's node function: Writes the initial draft or revises the article based on the current state.
        """
        topic = state["topic"]
        current_article = state["current_article"]
        editor_feedback = state["editor_feedback"]
        revision_count = state["revision_count"]

        # Construct the prompt for the LLM
        if revision_count == 0:
            prompt = HumanMessage(f"You are a professional article writer. Please write a high-quality article based on the following topic:\nTopic: {topic}\n\nPlease ensure the article has a clear structure, rich content, and fluent language.")
            print(f"\n--- Writer: Drafting initial version (Revision {revision_count}) ---")
        else:
            prompt = HumanMessage(f"You are a professional article writer. Below is the previous article content and the editor's feedback. Please revise and polish it based on the feedback:\n\nOld Article Content:\n{current_article}\n\nEditor Feedback:\n{editor_feedback}\n\nPlease provide the complete revised article.")
            print(f"\n--- Writer: Revising article (Revision {revision_count}) ---")
            print(f"Received editor feedback: {editor_feedback}")

        # Call LLM to generate content
        response = self.llm.invoke([prompt])
        new_article = response.content

        # Update the state
        print(f"Writer generated a new article or revised draft:\n{new_article[:200]}...") # Print the first 200 characters
        # Note: revision_count has an operator.add reducer, so we return the
        # increment (1), not state["revision_count"] + 1 — the reducer adds it
        # onto the existing value for us.
        return {"current_article": new_article, "status": "revising", "revision_count": 1}

# Simulated Editor Agent
class EditorAgent:
    def __init__(self, llm_model: str = "gpt-4o-mini", max_revisions: int = 3):
        self.llm = ChatOpenAI(model=llm_model, temperature=0.7)
        self.max_revisions = max_revisions # Set the maximum number of revisions

    def review_article(self, state: AgentState) -> AgentState:
        """
        Editor Agent's node function: Reviews the article, provides feedback, and decides whether to approve or require revision.
        """
        current_article = state["current_article"]
        revision_count = state["revision_count"]

        # Construct the prompt for the LLM
        prompt = HumanMessage(f"You are a senior content editor. Please review the following article and provide detailed revision suggestions. If the article has reached the publishing standard, please explicitly state 'Article approved' in your feedback. Otherwise, please point out in detail what needs improvement.\n\nArticle Content:\n{current_article}")
        print(f"\n--- Editor: Reviewing article (draft #{revision_count}) ---")

        # Call LLM to generate feedback
        response = self.llm.invoke([prompt])
        feedback = response.content

        # Determine if the article is approved
        status = "needs_revision"
        # Match only the explicit approval phrase the prompt asked for; a looser
        # check like "reached the publishing standard" would also match negations
        # such as "has not reached the publishing standard".
        if "Article approved" in feedback:
            status = "approved"
            print("Editor Decision: Article approved!")
        elif revision_count >= self.max_revisions:
            status = "approved" # Reached max revisions, force approval (may require manual intervention in real projects)
            feedback += "\n[System Prompt]: Maximum revision count reached. Article forcefully approved, manual review may be required."
            print(f"Editor Decision: Reached maximum revision count of {self.max_revisions}, forcefully approved.")
        else:
            print("Editor Decision: Article needs revision.")

        # Update the state
        print(f"Editor provided feedback:\n{feedback[:200]}...") # Print the first 200 characters
        return {"editor_feedback": feedback, "status": status}

# Define a routing function to decide the next flow
def route_article(state: AgentState) -> str:
    """
    Routes the article based on its 'status'.
    """
    if state["status"] == "approved":
        print("\n--- Routing: Article approved, process ends ---")
        return "end"
    elif state["status"] == "needs_revision":
        print("\n--- Routing: Article needs revision, returning to Writer ---")
        return "writer"
    else:
        # Default case, e.g., initial state, or unknown state, usually goes back to Writer
        print("\n--- Routing: Initial or unknown state, returning to Writer ---")
        return "writer"

# Build the LangGraph
def build_article_revision_graph(llm_model: str = "gpt-4o-mini", max_revisions: int = 3):
    writer_agent = WriterAgent(llm_model=llm_model)
    editor_agent = EditorAgent(llm_model=llm_model, max_revisions=max_revisions)

    # Initialize StateGraph
    workflow = StateGraph(AgentState)

    # Add nodes
    workflow.add_node("writer", writer_agent.write_or_revise)
    workflow.add_node("editor", editor_agent.review_article)

    # Set the entry point
    workflow.set_entry_point("writer")

    # Add edges
    # After the Writer node completes, it always goes to the Editor for review
    workflow.add_edge("writer", "editor")

    # After the Editor node completes, a routing function decides whether to return to Writer or end
    workflow.add_conditional_edges(
        "editor",      # From node
        route_article, # Routing function
        {              # Mapping table
            "writer": "writer", # If routing function returns "writer", flow to "writer" node
            "end": END          # If routing function returns "end", the process ends
        }
    )

    # Compile the graph
    app = workflow.compile()
    return app

# Run the simulation
if __name__ == "__main__":
    # Ensure OpenAI API Key is set
    if not os.getenv("OPENAI_API_KEY"):
        print("Please set the OPENAI_API_KEY environment variable.")
        exit()

    # Build and run the graph
    app = build_article_revision_graph(llm_model="gpt-4o-mini", max_revisions=3) # You can try gpt-3.5-turbo or gpt-4o-mini

    initial_state = {
        "topic": "Exploring the Applications and Challenges of Artificial Intelligence in Education",
        "current_article": "",
        "editor_feedback": "",
        "revision_count": 0,
        "status": "drafting"
    }

    print("--- Starting Article Creation and Revision Process ---")
    # app.stream() yields {node_name: partial_state_update} for each step, so we
    # merge the updates into a running view to keep the full final state.
    final_state = dict(initial_state)
    for step in app.stream(initial_state):
        print(step)  # Print each node's state update as it happens
        for node_name, update in step.items():
            if node_name == "__end__":
                # Some LangGraph versions emit the complete final state here
                final_state.update(update)
                continue
            for key, value in update.items():
                if key == "revision_count":
                    final_state[key] += value  # operator.add reducer semantics
                else:
                    final_state[key] = value

    print("\n--- Process Ended ---")
    print("\nFinal Article State:")
    print(f"Topic: {final_state['topic']}")
    print(f"Total Writer passes: {final_state['revision_count']} (the initial draft counts as pass 1)")
    print(f"Final Status: {final_state['status']}")
    print("\n--- Final Article Content ---")
    print(final_state['current_article'])
    print("\n--- Final Editor Feedback ---")
    print(final_state['editor_feedback'])

Code Analysis:

  1. AgentState Definition:
    • We used TypedDict to define AgentState, which contains topic, current_article, editor_feedback, revision_count, and status.
    • revision_count: Annotated[int, operator.add] is a LangGraph-specific pattern. It tells LangGraph that when merging a node's returned update into the state, the revision_count field should be combined with operator.add rather than simply overwritten. A node should therefore return the increment (here, 1), and the reducer adds it onto the existing value — returning state["revision_count"] + 1 would double-count. This makes the field a reliable counter.
  2. WriterAgent:
    • The write_or_revise method is the core logic of the Writer node.
    • It determines whether it is the initial draft or a revision based on revision_count.
    • If it is a revision, it passes current_article and editor_feedback as context to the LLM, allowing the LLM to make targeted modifications.
    • It returns a new current_article, an updated status, and revision_count.
  3. EditorAgent:
    • The review_article method is the core logic of the Editor node.
    • It receives the current_article and asks the LLM to provide feedback.
    • The key is that it scans the LLM's feedback for an explicit approval phrase ("Article approved") to decide whether the article can be finalized.
    • To prevent infinite loops, we introduced the max_revisions parameter. If the maximum number of revisions is reached, even if the Editor hasn't approved it, it will be forcefully approved. This is an important mechanism to prevent infinite loops in real projects (of course, it is usually accompanied by an alert for manual intervention).
    • It returns editor_feedback and the updated status.
  4. route_article Routing Function:
    • This is the core of add_conditional_edges. It receives the current AgentState and returns a string. This string will be used as a key to match the mapping table of add_conditional_edges, thereby determining the direction of the flow.
    • If status is approved, it returns "end", and the process terminates.
    • If status is needs_revision (the Editor's verdict), it returns "writer", and the process loops back to the Writer node.
  5. Graph Construction (build_article_revision_graph):
    • workflow = StateGraph(AgentState): Creates a state graph based on AgentState.
    • workflow.add_node("writer", writer_agent.write_or_revise) and workflow.add_node("editor", editor_agent.review_article): Register the Writer and Editor methods as nodes in the graph.
    • workflow.set_entry_point("writer"): Specifies that the process starts from the Writer.
    • workflow.add_edge("writer", "editor"): After the Writer finishes, it unconditionally passes the result to the Editor.
    • workflow.add_conditional_edges("editor", route_article, {"writer": "writer", "end": END}): This is the core of this issue! After the Editor finishes, based on the judgment result of the route_article function, it decides whether to return to the writer node or proceed to END.
  6. Running the Simulation (if __name__ == "__main__":):
    • Sets the initial_state, including the article topic, empty content, empty feedback, and initial revision count.
    • app.stream(initial_state) iteratively executes the graph. Each iteration prints the current state changes, allowing you to clearly see how the process advances step by step.
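
To see concretely why a node should return the increment rather than base + 1, here is a plain-Python simulation of the reducer merge. This only mimics the behavior for illustration — it is not LangGraph's internal code, and CounterState / merge_update are made-up names:

```python
import operator
from typing import Annotated, TypedDict, get_type_hints

class CounterState(TypedDict):
    revision_count: Annotated[int, operator.add]  # merged with the reducer
    status: str                                   # plain field: overwritten

def merge_update(state: dict, update: dict) -> dict:
    """Merge a node's partial update the way a reducer-annotated state would:
    annotated fields are combined with their reducer, others are overwritten."""
    hints = get_type_hints(CounterState, include_extras=True)
    merged = dict(state)
    for key, value in update.items():
        reducers = getattr(hints.get(key), "__metadata__", ())
        merged[key] = reducers[0](merged[key], value) if reducers else value
    return merged

state = {"revision_count": 0, "status": "drafting"}
state = merge_update(state, {"revision_count": 1, "status": "revising"})  # writer pass 1
state = merge_update(state, {"revision_count": 1, "status": "revising"})  # writer pass 2
print(state["revision_count"])  # → 2: the two deltas accumulated
```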

Through this code, you have personally built a content creation process capable of self-iteration and self-optimization. This is not just a simple connection of two Agents; it is a true Network of Agents that can simulate the collaboration and feedback loops of human teams.

Pitfalls and How to Avoid Them

This kind of looped, feedback-driven workflow is powerful, but it is also full of traps. As your mentor, let me give you some preventive advice up front.

  1. Infinite Loop Trap:
    • Pitfall: The Editor is never satisfied, and the Writer never revises it perfectly, causing the process to loop endlessly between Writer and Editor. Your API costs will flow away like water.
    • Guide:
      • Set a maximum revision count (max_revisions): This is the most direct and effective method. Add a counter in the Editor Agent. When the preset maximum number of revisions is reached, forcefully end the loop (e.g., mark it as "requires manual review" or directly "approved"). We have already implemented this in the code.
      • Clear approval standards: In the prompt to the LLM (Editor), explicitly state the criteria for "approval," such as "The article meets all the following conditions: complete structure, clear arguments, fluent language, and no grammatical errors." Give the LLM a clear basis for judgment.
      • Gradually converging feedback: Design the Editor's feedback so that each time it tries to bring the Writer closer to the goal, rather than raising new, irrelevant issues every time.
  2. Chaotic State Management:
    • Pitfall: The AgentState design is unreasonable, and key information is not correctly passed or updated between nodes, resulting in an Agent not getting the data it needs, or getting old data.
    • Guide:
      • TypedDict for clear structure: Always use TypedDict to define AgentState. It provides type checking and effectively avoids spelling errors and data structure inconsistencies.
      • Annotated for merging strategies: For fields that need to be accumulated or specially merged (like revision_count), use Annotated in conjunction with the operator module to ensure the state is updated correctly.
      • Single node responsibility: Each node should only be responsible for updating its relevant state fields. Do not try to modify all states in one node to avoid side effects.
      • Log printing: Printing key state information before and after the execution of each node can help you track data flow and state changes.
  3. LLM Output Volatility:
    • Pitfall: LLMs sometimes "talk nonsense." The Editor LLM might not give a clear signal of "approved" or "needs revision" in the format you expect, causing the routing function to misjudge.
    • Guide:
      • Robust routing logic: The route_article function should tolerate variation in the LLM's output, but beware of loose substring checks: "reached the publishing standard" in feedback also matches the negation "has not reached the publishing standard". Prefer a single explicit, unambiguous marker phrase.
      • System-level prompt engineering: In the Agent's LLM prompt, explicitly require the LLM to output key information in a specific format, such as "Please explicitly write 'STATUS: APPROVED' or 'STATUS: REVISE' on the last line of the feedback." This allows the routing function to parse it more accurately.
      • Temperature parameter adjustment: At key decision points, you can appropriately lower the LLM's temperature parameter to make its output more stable and deterministic.
  4. Debugging Complex Graphs:
    • Pitfall: As the nodes and edges of the graph increase, tracking problems becomes very difficult.
    • Guide:
      • Use app.stream(): As shown in the code, the stream() method allows you to observe the state changes step by step after each node executes. This is a sharp tool for debugging complex graphs.
      • Detailed print logs: Add detailed print statements inside each Agent to output the currently received input, generated output, and decisions made.
      • Visualization tools: LangGraph provides methods like get_graph().draw_mermaid_png() to generate visualizations of the graph, which is very helpful for understanding the graph's structure and potential issues.
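
As a sketch of the "system-level prompt engineering" advice above: if you instruct the Editor LLM to end its feedback with STATUS: APPROVED or STATUS: REVISE, a small parser can replace fragile substring checks. parse_editor_status is an illustrative helper (not a LangGraph API), and the marker format is an assumption:

```python
def parse_editor_status(feedback: str, default: str = "needs_revision") -> str:
    """Look for an explicit STATUS marker on the last lines of the feedback;
    fall back to requesting another revision if no marker is found."""
    for line in reversed(feedback.strip().splitlines()):
        line = line.strip().upper()
        if line.startswith("STATUS:"):
            verdict = line.split(":", 1)[1].strip()
            return "approved" if verdict == "APPROVED" else "needs_revision"
    return default  # no marker: fail safe toward another revision

print(parse_editor_status("Solid draft.\nSTATUS: APPROVED"))  # → approved
print(parse_editor_status("Weak intro.\nSTATUS: REVISE"))     # → needs_revision
print(parse_editor_status("No marker in this feedback"))      # → needs_revision
```

In route_article you would then branch on the parsed value instead of scanning the feedback prose directly.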

Remember, building complex AI systems is like playing with Lego bricks; every piece must be put in the right place, and you must anticipate the chain reactions it might bring. Think more, practice more, and you will become a LangGraph master!

📝 Summary of this Issue

Congratulations! In this issue of the "LangGraph Multi-Agent Expert Course," we have conquered an advanced and crucial pattern—Network of Agents. We are no longer satisfied with linear task flows; instead, we delved into the core of LangGraph, utilizing StateGraph, add_conditional_edges, and clever AgentState management to build an article creation and revision loop capable of self-iteration and self-feedback.

We simulated the real "tripartite game" between a Writer and an Editor in an AI content agency: the Writer submits a first draft, the Editor strictly reviews it and provides feedback, and if the draft does not meet the standard, it is sent back to the Writer for revision until the Editor is satisfied. This cyclic flow not only improves the quality of content output but also equips your AI Agents with true "learning" and "optimization" capabilities.

You have learned how to:

  • Cleverly design AgentState, especially using Annotated for state accumulation and merging.
  • Build Writer and Editor agents so they can write, review, and make decisions based on context.
  • Use add_conditional_edges and routing functions (route_article) to dynamically adjust the workflow based on agent decisions.
  • And, most importantly, master advanced techniques for preventing infinite loops and effectively debugging complex graphs.

This network of agents with feedback loops is the cornerstone of building any complex, highly adaptable AI system. Whether it's content creation, code review, product design, or medical diagnosis, as long as there is a need for iteration and optimization, this pattern will shine brightly.

In the next issue, we will continue to explore more advanced collaboration patterns and optimization techniques based on our current foundation. Keep up the enthusiasm, and see you next time!