Issue 30 | Capstone Project Roadshow: Comprehensive AI Creation and Distribution Agency (Capstone)

Updated on 4/17/2026

From receiving instructions through network-wide research, drafting, editorial review, and graphic-and-text layout, thirty issues' worth of effort gets wrapped up in these 500 lines of Graph code.

Class, welcome to the final installment of "LangGraph Multi-Agent Expert Course"! I am your old friend, your instructor who has accompanied you for thirty full issues.

Looking back at these thirty days and nights, we started from the most basic single-node LLM calls, battled our way up, and tackled Tool Calling, Memory mechanisms, Human-in-the-loop (human intervention), and various complex Routing logic. Do you remember the goal we set in Issue 1? We were going to personally build an "AI Universal Content Creation Agency (AI Content Agency)".

Today is the moment of truth.

In real enterprise-grade AI architectures, a lone Agent can no longer survive. What you need is a highly collaborative "virtual company." Today, we will integrate all the scattered modules we wrote in the previous 29 issues—the strategic Planner, the tireless Researcher, the sharp-witted Writer, the meticulous Editor, and the layout-savvy Publisher—into one massive StateGraph.

Take a deep breath, open your IDE, and let's complete this final piece of the puzzle.


🎯 This Issue's Learning Objectives

In this capstone project roadshow, you will gain the following advanced benefits:

  1. Master Global State Management: Define a "data bus" that can carry the entire content production lifecycle, allowing 5 independent Agents to share and safely modify the context.
  2. Build Complex Conditional Routing & Circuit Breaker: Implement the "love-hate relationship" (sending back for rewrite) between the Editor and Writer, and set a maximum revision count to prevent "infinite loops."
  3. Complete End-to-End Orchestration: Turn thirty issues of theory into a directly runnable, industrial-grade Capstone prototype.

📖 Principle Analysis

Before writing code, as usual, let's look at the architecture diagram. Don't just roll up your sleeves and start typing immediately; 70% of a senior architect's time is spent drawing diagrams and designing State.

Our AI Content Agency workflow is as follows:

  1. User throws out a broad topic (e.g., "Write an in-depth analysis of the dismal sales of Apple Vision Pro").
  2. Planner receives instructions, breaks them down into a detailed writing outline and core questions requiring research.
  3. Researcher, based on the outline, calls search-engine tools (simulated) to obtain the latest data from across the network.
  4. Writer takes the outline and research data, and sweats it out to write the first draft.
  5. Editor steps in, reviewing the first draft with extremely strict standards. If it's not good enough, they provide revision comments (Review Comments), and send it back to the Writer for a rewrite.
  6. Publisher: Once the Editor approves (or the maximum retry count is reached, forcing a compromise), the Publisher takes over for final Markdown formatting and image placement (simulated), and outputs the final draft.

Let's visualize this grand workflow using Mermaid:

graph TD
    %% Define styles
    classDef human fill:#f9f,stroke:#333,stroke-width:2px;
    classDef agent fill:#bbf,stroke:#333,stroke-width:2px;
    classDef router fill:#fbf,stroke:#333,stroke-width:2px;
    classDef endnode fill:#bfb,stroke:#333,stroke-width:2px;

    START((Start)) --> UserInput["User Input Topic"]:::human
    UserInput --> Planner["Planner: Deconstruct Outline & Research Direction"]:::agent
    Planner --> Researcher["Researcher: Conduct Network-wide Research"]:::agent
    Researcher --> Writer["Writer: Write First Draft / Revision"]:::agent
    Writer --> Editor["Editor: Quality Review"]:::agent

    Editor --> EditorRouter{"Approved?"}:::router

    EditorRouter -- "No (send back for revision)" --> Writer
    EditorRouter -- "Yes (or max revisions reached)" --> Publisher["Publisher: Graphic/Text Layout & Finalization"]:::agent

    Publisher --> END((End)):::endnode

    %% State description box
    subgraph State ["Global State (Data Bus)"]
        direction LR
        topic[Topic]
        outline[Outline]
        research[Research Data]
        draft[Current Draft]
        comments[Revision Comments]
        revision_count[Revision Count]
    end

Instructor's Sharp Comment: Looking at this diagram, many students might ask: "Instructor, why doesn't the Researcher search again when the Writer is revising?" This is an excellent question! In the industry, this depends on your cost and latency budget. To ensure the clarity of today's 500 lines of code, we set the Researcher to conduct deep research only once initially, and subsequent Writer revisions are based solely on Editor feedback. Knowing how to simplify architecture is the mark of a true master.
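If your budget does allow re-research during revisions, the conditional edge can simply route back to the Researcher. Here is a minimal routing sketch in plain Python; the node names (back_to_researcher and friends) and the crude keyword check are hypothetical illustrations, not part of this issue's graph:

```python
def post_editor_router(state: dict) -> str:
    """Hypothetical variant of the Editor's router: if the revision
    comments complain about missing evidence, loop back through the
    Researcher (paying the extra cost and latency); otherwise re-draft."""
    comments = state.get("review_comments", "")
    if comments == "APPROVED":
        return "to_publisher"
    # Crude keyword check for illustration; a real system might have the
    # LLM classify the comments instead
    if "more data" in comments.lower() or "source" in comments.lower():
        return "back_to_researcher"
    return "back_to_writer"
```

In production you would register this with add_conditional_edges exactly like the editor_router below, just with one extra branch in the mapping.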


💻 Practical Code Walkthrough

Without further ado, let's get to the code. This code is the culmination of thirty issues of hard work. To allow everyone to run it directly, I've used langchain_openai and mocked (simulated) some time-consuming Tools.

Please read the bilingual comments in the code carefully; they contain a wealth of practical details.

import operator
from typing import TypedDict, Annotated, List
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END

# ==========================================
# 1. Define the Global State
# ==========================================
# Instructor's Note: The State is our Agency's central database.
# Note the Annotated reducer on revision_count: operator.add sums every update,
# so the Editor bumps the count by returning 1 on each rejection.
class AgencyState(TypedDict):
    topic: str                      # Initial topic entered by the user
    outline: str                    # Outline produced by the Planner
    research_data: str              # Research material produced by the Researcher
    draft: str                      # Article draft produced by the Writer
    review_comments: str            # Revision comments provided by the Editor
    revision_count: Annotated[int, operator.add] # Records the number of times rejected by the Editor
    final_article: str              # Final draft produced by the Publisher

# Initialize the LLM (a strong model such as GPT-4o is recommended for
# multi-agent work; to use Claude-3.5-Sonnet, swap ChatOpenAI for ChatAnthropic)
llm = ChatOpenAI(model="gpt-4o", temperature=0.7)

# ==========================================
# 2. Define Agent Nodes
# ==========================================

def planner_node(state: AgencyState):
    """Planner: Responsible for breaking down a broad topic into a structured outline"""
    print("👨‍💼 [Planner] Deconstructing the topic and formulating the outline...")
    sys_msg = SystemMessage(content="You are a senior media editor-in-chief. Based on the user's topic, output a detailed writing outline containing 3-4 core paragraphs.")
    user_msg = HumanMessage(content=f"Topic: {state['topic']}")
    
    response = llm.invoke([sys_msg, user_msg])
    # State update: Write the outline to State, and initialize revision count to 0
    return {"outline": response.content, "revision_count": 0}

def researcher_node(state: AgencyState):
    """Researcher: Gathers research material based on the outline (simulated data replaces real Search Tool here)"""
    print("🕵️‍♂️ [Researcher] Gathering in-depth material from across the network...")
    # In a real project, this would bind_tools(SearchTool) and execute a loop.
    # To ensure smooth Capstone execution, we let the LLM directly generate pseudo-research data based on the outline.
    sys_msg = SystemMessage(content="You are an ace researcher. Based on the editor-in-chief's outline, provide rich data, cases, and facts as writing material.")
    user_msg = HumanMessage(content=f"Outline:\n{state['outline']}")
    
    response = llm.invoke([sys_msg, user_msg])
    return {"research_data": response.content}

def writer_node(state: AgencyState):
    """Writer: Writes combining the outline, material, and potential revision comments"""
    print(f"✍️ [Writer] Writing diligently (current revision count: {state.get('revision_count', 0)})...")
    
    sys_prompt = "You are a top-tier writer. Please write a captivating article based on the outline and research data."
    # If there are Editor's revision comments, the Writer must follow them
    if state.get("review_comments"):
        sys_prompt += f"\n\nAttention! These are the editor-in-chief's revision comments; please adhere to them strictly:\n{state['review_comments']}"
        
    sys_msg = SystemMessage(content=sys_prompt)
    user_msg = HumanMessage(content=f"Outline:\n{state['outline']}\n\nMaterial:\n{state['research_data']}")
    
    response = llm.invoke([sys_msg, user_msg])
    return {"draft": response.content}

def editor_node(state: AgencyState):
    """Editor: Strict quality controller. Decides whether the article proceeds to the next stage or is sent back for rewrite"""
    print("🧐 [Editor] Reviewing the draft with a magnifying glass...")
    sys_msg = SystemMessage(content="""
    You are an extremely strict editorial director. Please review the draft.
    If you deem it perfect, reply only with "APPROVED".
    If you think it needs improvement, list specific revision comments (do not make changes yourself, just point out issues).
    """)
    user_msg = HumanMessage(content=f"Current draft:\n{state['draft']}")
    
    response = llm.invoke([sys_msg, user_msg])
    comments = response.content
    
    # Guard against false positives: "NOT APPROVED" also contains the
    # substring "APPROVED", so check that the verdict starts with the keyword
    if comments.strip().upper().startswith("APPROVED"):
        print("   ✅ [Editor] Approved!")
        return {"review_comments": "APPROVED"}
    else:
        print("   ❌ [Editor] Not approved, sending back for rewrite!")
        # Key point: when sending back, record the comments and also add 1 to
        # revision_count (the operator.add reducer accumulates this update)
        return {"review_comments": comments, "revision_count": 1}

def publisher_node(state: AgencyState):
    """Publisher: Final formatting and beautification"""
    print("🖨️ [Publisher] Performing exquisite Markdown formatting and generating the final draft...")
    sys_msg = SystemMessage(content="You are a formatting master. Please add appropriate Markdown headings, bold key points, and insert [Image Placeholder] where suitable.")
    user_msg = HumanMessage(content=f"Final content:\n{state['draft']}")
    
    response = llm.invoke([sys_msg, user_msg])
    return {"final_article": response.content}

# ==========================================
# 3. Core Routing Logic (Conditional Routing)
# ==========================================
def editor_router(state: AgencyState) -> str:
    """
    Decides the Editor's subsequent path.
    Circuit Breaker: once the revision count reaches 2, force publication to prevent infinite loops.
    """
    if state["review_comments"] == "APPROVED":
        return "to_publisher"
    elif state["revision_count"] >= 2:
        print("   ⚠️ [System] Maximum revision count reached, circuit breaker triggered, forcing entry into publishing stage!")
        return "to_publisher"
    else:
        return "back_to_writer"

# ==========================================
# 4. Build the Capstone Graph
# ==========================================
workflow = StateGraph(AgencyState)

# Add all nodes
workflow.add_node("Planner", planner_node)
workflow.add_node("Researcher", researcher_node)
workflow.add_node("Writer", writer_node)
workflow.add_node("Editor", editor_node)
workflow.add_node("Publisher", publisher_node)

# Define edges (workflow order)
workflow.add_edge(START, "Planner")
workflow.add_edge("Planner", "Researcher")
workflow.add_edge("Researcher", "Writer")
workflow.add_edge("Writer", "Editor")

# Add conditional edges (Editor's decision)
workflow.add_conditional_edges(
    "Editor",
    editor_router,
    {
        "to_publisher": "Publisher",
        "back_to_writer": "Writer"
    }
)

workflow.add_edge("Publisher", END)

# Compile Graph
agency_app = workflow.compile()

# ==========================================
# 5. The Moment of Truth: Run the Demo
# ==========================================
if __name__ == "__main__":
    print("\n🚀 Welcome to AI Content Agency! System starting...\n" + "="*50)
    
    initial_state = {
        "topic": "Analyzing Real-world Implementation Cases and Challenges of Large AI Models in Healthcare in 2024"
    }
    
    # Run in stream mode so we can watch the execution step by step, and
    # capture the final article from the Publisher's output as we go
    final_article = None
    for output in agency_app.stream(initial_state):
        # Each chunk maps the node that just ran to its state update
        for key, value in output.items():
            print(f"--- Node [{key}] execution complete ---")
            if "final_article" in value:
                final_article = value["final_article"]

    print("\n" + "="*50 + "\n🎉 Final draft generated!\n")
    # Note: app.get_state() only works when the graph is compiled with a
    # checkpointer, so we read the final draft off the stream output instead.
    print(final_article or "Generation failed")

💣 Pitfalls and Avoidance Guide (Troubleshooting from an Advanced Perspective)

Class, just because the code runs doesn't mean everything is perfect. In the past, when leading teams to implement such multi-agent architectures, I've stepped into countless pitfalls. Today, as a graduation gift, I'm giving you my ultimate guide to avoiding pitfalls:

💣 Pitfall One: The Editor and Writer's "Infinite Death Loop"

Phenomenon: The Editor thinks the Writer's draft is poor and never says "APPROVED"; meanwhile the Writer stubbornly sticks to its own style, or the LLM simply hits its capability ceiling and cannot revise the draft into what the Editor wants. The Graph then loops frantically between these two nodes until your OpenAI API balance is exhausted.

Avoidance: You must implement a Circuit Breaker mechanism. In the code above, we introduced revision_count and forcefully check for >= 2 in editor_router to break out of the loop. In enterprise-grade projects, you can even introduce a Human_Intervention node that, when revisions exceed 3, sends a Feishu/DingTalk message so a real editor-in-chief can step in.
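Here is a sketch of that escalating circuit breaker in plain Python. The node names (to_human_intervention and so on) and the threshold are hypothetical; the messaging call would live inside the human-intervention node itself:

```python
MAX_AUTO_REVISIONS = 3

def editor_router_with_human(state: dict) -> str:
    """Escalating circuit breaker (hypothetical node names):
    approved -> publish; too many rejections -> hand off to a human
    (that node could fire a Feishu/DingTalk webhook); else re-draft."""
    if state.get("review_comments") == "APPROVED":
        return "to_publisher"
    if state.get("revision_count", 0) >= MAX_AUTO_REVISIONS:
        return "to_human_intervention"
    return "back_to_writer"
```

The hard numeric cap is the point: no matter how stubborn the two LLM nodes are, the loop terminates in bounded cost.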

💣 Pitfall Two: State Expanding Infinitely Leading to Context Window Overflow

Phenomenon: If you store every draft in the State as an array using Annotated[list, operator.add], the Prompt length balloons after just a few revisions and quickly trips the LLM's Token limit.

Avoidance: Distinguish between "states that need to be appended" and "states that need to be overwritten." In our AgencyState, draft and review_comments are both plain str fields, so each update overwrites the old value. The Writer only needs the "current latest draft" and the "latest revision comments," not a pile of discarded ancient drafts.
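To see the two update styles side by side, here is a tiny plain-Python sketch. The apply_update helper is a hypothetical stand-in for LangGraph's internal state merge, written only to illustrate reducer semantics, not the library's real code:

```python
import operator
from typing import Annotated, List, TypedDict

class DemoState(TypedDict):
    drafts: Annotated[List[str], operator.add]  # appended on every update
    draft: str                                   # overwritten on every update

# Hypothetical stand-in for the framework's merge step: keys with a
# reducer are combined; keys without one are simply replaced
def apply_update(state: dict, update: dict, reducers: dict) -> dict:
    merged = dict(state)
    for key, value in update.items():
        if key in reducers and key in merged:
            merged[key] = reducers[key](merged[key], value)
        else:
            merged[key] = value
    return merged

reducers = {"drafts": operator.add}
s = {"drafts": ["v1"], "draft": "v1"}
s = apply_update(s, {"drafts": ["v2"], "draft": "v2"}, reducers)
# "drafts" keeps growing (context-bloat risk); "draft" stays a single value
```

After one revision, s["drafts"] already holds both versions while s["draft"] holds only the latest, which is exactly why AgencyState uses plain str for the draft.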

💣 Pitfall Three: Prompt Drift

Phenomenon: After receiving harsh criticism from the Editor, the Writer node not only revises the article but also prepends something like: "Okay, Editor-in-Chief, I am very sorry, I have revised the article according to your requirements, here is the main text...". This conversational filler then leaks into the text handed to the Publisher.

Avoidance: This is the LLM's "people-pleasing personality" at work. In the Writer's System Prompt, you must add strict constraint instructions: "Output only the body of the article. Absolutely no explanatory statements, apologies, or conversation unrelated to the main text!" You can even use LangChain's Structured Output (Pydantic) to forcefully constrain the output format.
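As a last line of defense behind the prompt constraints, you can also sanitize the Writer's output before it reaches the Publisher. A heuristic sketch; the helper name and marker list are my own inventions, so tune them to the filler your model actually produces:

```python
def strip_chatty_preamble(draft: str) -> str:
    """Drop leading conversational filler ("Okay, Editor-in-Chief...")
    that an over-apologetic Writer sometimes prepends to the real text."""
    chatty_markers = ("okay", "sure", "sorry", "i have revised", "here is")
    lines = draft.splitlines()
    while lines and lines[0].strip().lower().startswith(chatty_markers):
        lines.pop(0)
    return "\n".join(lines).lstrip()
```

You would call this on state["draft"] at the top of publisher_node; structured output remains the more robust fix, but a cheap sanitizer catches the common cases.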


📝 This Issue's Summary

Class, accompanied by the line 🎉 Final draft generated! printed in the console, our 30-issue "LangGraph Multi-Agent Expert Course" officially comes to an end.

We went from a hazy understanding of the Graph concept in Issue 1 to building, today, a complete AI agency in mere hundreds of lines of code, covering instruction distribution, network-wide retrieval, content creation, quality review, iterative revision, and graphic/text layout. You have not only mastered LangGraph's underlying logic but also developed an advanced "multi-agent architectural mindset."

Please remember: LangGraph is just a framework; it gives Agents their skeleton. But your designed State transitions and carefully tuned Prompts are the key to giving this system its soul.

Graduation is not an end, but a new beginning. You now possess the ability to refactor enterprise-grade complex workflows. Whether it's building a financial research report Agent, a code review Agent, or the Content Agency we built today, the underlying "Tao" (principles) are all interconnected.

Take this arsenal of 30 issues, and go forth and conquer the AI world! Don't forget, when you encounter a Bug you can't fix, come back and take a look at the instructor's tutorial. Until we meet again in the AI world! 🚀