Part 05 | Initial Success: Running the "Research-Write" Closed Loop

⏱ Est. reading time: 20 min · Updated 5/7/2026

🎯 Learning Objectives for This Session

Hey there, future AI architects! Welcome back to our LangGraph Masterclass. Last time, we discussed the philosophy of LangGraph. Now it's time to turn those theories into actual code. Don't rush into building complex decision trees just yet; let's start with the most fundamental but crucial step: getting the MVP of our AI Content Agency up and running!

In this session, we will:

  • Master the core mechanism of LangGraph's sequential execution flow: Understand how add_node and add_edge weave together a fixed workflow.
  • Successfully integrate the Researcher and Writer Agents into the LangGraph workflow: They are no longer standalone scripts, but integral parts of our agency's system.
  • Learn to define and manage Graph State: Ensure information flows smoothly and accurately between different Agents.
  • Successfully run the "Research-Write" MVP closed loop of the AI Content Agency: Witness the birth of your first content creation pipeline firsthand!

📖 Core Concepts

Why Start with a Fixed Sequence? — The Triumph of MVP Thinking

In software engineering, we constantly emphasize the concept of MVP (Minimum Viable Product). What is the MVP for our AI Content Agency? It's the ability to complete a minimal closed loop "from topic to content". The most straightforward approach is to have a researcher investigate first, and then hand the findings over to a writer to draft the content. In between, there are no complex decisions, no retries, and no branching—just a straight line.

Why do it this way?

  1. Reduce Complexity: The most common mistake beginners make is trying to cram all the logic in right from the start. LangGraph is powerful, but if you can't even manage a simple sequential flow, how can you handle complex conditional branches? Getting it running first to validate core functionality is the golden rule.
  2. Rapid Validation: With a fixed sequence, we can quickly verify whether each Agent functions correctly and whether data passes smoothly between them. If an issue arises, troubleshooting is much simpler.
  3. Lay the Foundation: Sequential flow is the bedrock of all complex graphs. Once you grasp the essence of add_node and add_edge, introducing add_conditional_edges later will feel completely natural. It's like building a house: without a solid foundation, even the most magnificent superstructure is just a castle in the air.

The Basic Building Blocks of LangGraph: Node, Edge, State

Remember the core of LangGraph we mentioned last time? It's a directed graph composed of Nodes and Edges.

  • Node: In our AI Content Agency, each Agent (like the Researcher or Writer) is a node. A node receives input, executes a task, and outputs a result. This "task execution" could be calling an LLM, executing a tool, or even just running a simple Python function.
  • Edge: Edges define the direction of flow between nodes. An edge from Node A to Node B means that after Node A completes its task, its output will serve as Node B's input (or rather, the state updates, and then B reads that state).
  • State: This is the soul of LangGraph! All nodes in the Graph share a mutable state. When a node finishes its task, it updates this shared state. The next node then reads the information it needs from this updated state. This state-passing mechanism is the foundation of multi-agent collaboration.

In this session, we will primarily use add_node to register our Agents and add_edge to define the fixed sequential flow between them.
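To make the Node/Edge/State contract concrete before the full build, here is a tiny pure-Python sketch (no LangGraph required, and a deliberate simplification of its actual merge machinery): a node reads what it needs from the shared state and returns only the fields it changed, and the runtime folds that partial update back into the state. The names `DemoState`, `research_node`, and `apply_update` are illustrative, not LangGraph APIs.

```python
from typing import TypedDict


class DemoState(TypedDict, total=False):
    topic: str
    research_data: str


def research_node(state: DemoState) -> DemoState:
    # A node reads what it needs from the shared state...
    topic = state["topic"]
    # ...and returns ONLY the fields it updated, not the whole state.
    return {"research_data": f"Findings about {topic}"}


def apply_update(state: DemoState, update: DemoState) -> DemoState:
    # Simplified stand-in for the default merge: each returned key
    # overwrites the existing value in the shared state.
    return {**state, **update}


state: DemoState = {"topic": "LangGraph"}
state = apply_update(state, research_node(state))
print(state)
# → {'topic': 'LangGraph', 'research_data': 'Findings about LangGraph'}
```

This "return a delta, let the runtime merge it" pattern is exactly what our Agent node functions will do below.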

Mermaid Diagram: The "Research-Write" Workflow in Action

Here is the core workflow we are building in this session. It's highly concise and intuitive:

graph TD
    A[Start] --> B(Researcher);
    B --> C(Writer);
    C --> D[End];

This diagram clearly illustrates:

  1. Start: The starting point of the workflow, where we pass in the initial topic.
  2. Researcher: Receives the topic, conducts research, and updates the shared state with the findings.
  3. Writer: Reads the research findings from the shared state, drafts the content, and updates the shared state with the draft.
  4. End: The endpoint of the workflow, where we can retrieve the finalized content from the final state.

Simple, right? But it is exactly this simplicity that forms the cornerstone of complex systems.

💻 Hands-on Code Walkthrough

Alright, theory is great, but nothing beats rolling up our sleeves and getting to work! We will use Python to implement this "Research-Write" closed loop.

import operator
from typing import Annotated, List, TypedDict

from langchain_core.messages import BaseMessage
from langgraph.graph import StateGraph, END

# =============================================================================
# 1. Define the shared state of our AI Content Agency (Agency State)
#    This is the data shared and updated by all nodes during the Graph's execution.
# =============================================================================
class AgencyState(TypedDict):
    """
    This represents the shared state of our AI Content Agency during content creation.
    """
    topic: str  # The topic for content creation.
    research_data: str  # Data collected by the researcher.
    draft_content: str  # Draft content written by the writer.
    # (Optional) If you need to store message history, you can define it like this:
    # messages: Annotated[List[BaseMessage], operator.add] 

# =============================================================================
# 2. Simulate our Agents (Node Functions)
#    To focus on LangGraph itself, we use simple Python functions to simulate Agent behavior here.
#    In a real project, these would be more complex LangChain Agents or custom Runnables.
# =============================================================================

# Simulate the Researcher Agent
def research_agent(state: AgencyState) -> dict:
    """
    Simulates the Researcher Agent.
    It reads the topic from the shared state, simulates research,
    and returns a partial state update for 'research_data'.
    """
    print(f"\n--- Researcher is investigating topic: {state['topic']} ---")
    # Simulate a time-consuming operation and its research results.
    simulated_research_result = (
        f"Preliminary research findings on '{state['topic']}':\n"
        "1. Market trends show this type of content has drawn high interest recently.\n"
        "2. Competing content focuses on technical principles but lacks application scenarios.\n"
        "3. Users consistently ask for more hands-on case studies and pitfall guides."
    )

    print("--- Researcher finished, updating state ---")
    # Return only the fields to update; LangGraph merges them into the shared state.
    return {"research_data": simulated_research_result}

# Simulate the Writer Agent
def writer_agent(state: AgencyState) -> dict:
    """
    Simulates the Writer Agent.
    It reads 'research_data' from the shared state, simulates writing,
    and returns a partial state update for 'draft_content'.
    """
    print("\n--- Writer is drafting content based on the research ---")
    print(f"Research summary:\n{state['research_data'][:100]}...")

    # Simulate a time-consuming operation and the writing process.
    simulated_draft = (
        f"Title: {state['topic']} in Plain Language: A Hands-on Guide with Pitfalls to Avoid\n\n"
        f"Introduction: In today's fast-moving AI era, {state['topic']} has become a key "
        "technology that cannot be ignored. This guide combines the latest market trends and "
        "user feedback to give you a thorough, practical strategy.\n\n"
        f"Part 1: Market Insights and Competitive Analysis\n{state['research_data']}\n\n"
        "Part 2: Core Technical Principles and Application Scenarios\n"
        "(Technical details omitted here; we focus on practice.)\n\n"
        "Part 3: Common Problems and Pitfall Guide\n(Advice based on user feedback.)\n\n"
        f"Conclusion: Master {state['topic']} and power the next wave of AI applications!"
    )

    print("--- Writer finished, updating state ---")
    # Return only the fields to update; LangGraph merges them into the shared state.
    return {"draft_content": simulated_draft}

# =============================================================================
# 3. Build the LangGraph Workflow
#    Use StateGraph to define our Agent nodes and the sequential flow between them.
# =============================================================================

# Instantiate StateGraph, specifying our defined shared state type.
workflow = StateGraph(AgencyState)

# Add Nodes
# Each node is associated with an Agent function we defined above.
workflow.add_node("researcher", research_agent) # Register the researcher node.
workflow.add_node("writer", writer_agent)       # Register the writer node.

# Set Entry Point
# Define where the Graph starts execution.
workflow.set_entry_point("researcher") # Start from the researcher.

# Add Edges
# Define the sequential flow between nodes.
workflow.add_edge("researcher", "writer") # After the researcher completes its task, flow to the writer.

# Set Finish Point
# Define where the Graph finishes execution.
workflow.set_finish_point("writer") # After the writer completes its task, the Graph finishes.

# Compile the Graph
# Compile the defined workflow into an executable Runnable.
app = workflow.compile()

# =============================================================================
# 4. Run our AI Content Agency MVP
#    Pass in the initial state and observe the Graph's execution process and final result.
# =============================================================================
if __name__ == "__main__":
    print("--- AI Content Agency MVP started ---")

    # Define the initial state: the topic we want to create content about.
    initial_state = {
        "topic": "Applying LangGraph Multi-Agent Collaboration to Content Creation",
        "research_data": "",
        "draft_content": "",
    }

    # Run the Graph.
    # app.stream() lets you watch the state update at each node, step by step.
    final_state = None
    for s in app.stream(initial_state):
        # 's' is a dictionary whose key is the name of the node that just ran,
        # and whose value is that node's partial state update.
        print(s)
        print("----")  # Separator for readability.
        final_state = s  # Keep the last chunk for the final output.

    print("\n--- AI Content Agency MVP finished ---")

    # Print the final generated content.
    # In the default stream mode, the last chunk is {"writer": {...}}.
    if final_state and "writer" in final_state:
        print("\n🎉 Final generated content draft:")
        print(final_state["writer"]["draft_content"])
    else:
        print("\n⚠️ Failed to retrieve the final content draft.")

Code Walkthrough: Step by Step

  1. Defining AgencyState (TypedDict)

    • We defined AgencyState using TypedDict. This is the single source of truth shared across the entire Graph.
    • topic: When the workflow starts, we provide the agency with a topic.
    • research_data: The researcher writes its findings here.
    • draft_content: The writer writes the initial draft here.
    • Key Takeaway: This single, mutable state is the core of LangGraph. Every node reads the information it needs from here and writes its output back to it.
  2. Simulating Agents (research_agent, writer_agent)

    • To keep the focus on LangGraph's structure, I used simple Python functions to simulate the Agents here.
    • Each function receives an AgencyState object (or a subset of it), performs some simulated "work", and returns a dictionary. This dictionary is then used to update the current AgencyState.
    • Note: The returned dictionary is merged into the current state. By default, each returned key simply overwrites the existing value; fields declared with a reducer, such as Annotated[List[BaseMessage], operator.add], are combined (here, appended) instead of overwritten. In this session we simply overwrite research_data and draft_content.
  3. Building the LangGraph Workflow (workflow = StateGraph(AgencyState))

    • StateGraph(AgencyState): Initializes a state graph and explicitly tells it that our shared state type is AgencyState.
    • workflow.add_node("researcher", research_agent): Registers our research_agent function as a node named "researcher". When the Graph flows to this node, it executes the research_agent function.
    • workflow.set_entry_point("researcher"): Sets the entry point of the Graph. This means when the Graph starts, it begins execution at the "researcher" node.
    • workflow.add_edge("researcher", "writer"): Adds an edge from "researcher" to "writer". This indicates that after the "researcher" node finishes, the control flow unconditionally transfers to the "writer" node.
    • workflow.set_finish_point("writer"): Sets the finish point of the Graph. Once the "writer" node finishes executing, the Graph stops and returns the final state.
    • app = workflow.compile(): Compiles our defined workflow into an executable LangChain Runnable object. This app object serves as the "brain" of our entire AI Content Agency.
  4. Running Our MVP (app.stream(initial_state))

    • We create an initial_state dictionary containing only the topic. research_data and draft_content are initialized as empty strings.
    • app.stream(initial_state): Starts the Graph. It returns an iterator, and each iteration outputs a dictionary representing the state changes after the current node executes. This is incredibly helpful for debugging and understanding the flow.
    • Finally, we extract draft_content from the final_state. This is the very first piece of work completed by our agency!
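One detail worth internalizing from step 4: in its default mode, stream() yields per-node update chunks of the shape {node_name: partial_update}, not the full accumulated state. If you want the complete state at the end, you can fold the chunks yourself. Here is a hedged pure-Python sketch with hard-coded chunks in that shape (the chunk contents are invented for illustration):

```python
# Hard-coded chunks in the shape stream() yields by default:
# {node_name: partial_state_update}.
chunks = [
    {"researcher": {"research_data": "1. Market trends... 2. Competitors..."}},
    {"writer": {"draft_content": "Title: A Hands-on Guide..."}},
]

# Fold every node's partial update into one accumulated state dict.
accumulated = {"topic": "LangGraph multi-agent collaboration"}
for chunk in chunks:
    for node_name, update in chunk.items():
        accumulated.update(update)

# All three fields (topic, research_data, draft_content) are now present.
print(sorted(accumulated))
```

This is also why the main script above checks `"writer" in final_state`: it is reading the last chunk, which carries only the writer's update.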

Pitfalls and How to Avoid Them

As an architect, my job isn't just to teach you "how to do it", but also to warn you about "which traps to avoid".

  1. State Design Traps: Information Loss or Contamination

    • The Pitfall: Poorly designed AgencyState. For instance, forgetting to add an Agent's output field to the state, leaving the next Agent without the necessary information. Or, different Agents accidentally overwriting each other's critical data.
    • How to Avoid:
      • Plan the state in advance: Before writing code, draw a diagram listing what inputs each Agent needs and what outputs it produces. These inputs and outputs become the fields of your AgencyState.
      • Apply the Single Responsibility Principle (SRP) to state fields: Try to assign the update responsibility of each state field to a specific Agent. For example, research_data should primarily be updated by the researcher, and draft_content by the writer.
      • Use Annotated and operator.add for lists: If your state requires accumulating lists (like message history), using Annotated[List[BaseMessage], operator.add] ensures new messages are appended rather than overwritten. We didn't use it in this session, but we will in the future.
  2. Unclear Node Responsibilities: The "Kitchen Sink" Agent

    • The Pitfall: Cramming too much logic into a single Agent node, like having one Agent handle both research and writing. This makes the node bloated, hard to maintain, difficult to debug, and impossible to reuse.
    • How to Avoid:
      • Keep node granularity moderate: Each node should only be responsible for a clear, logically independent task. Our research_agent handling only research and writer_agent handling only writing in this session is a perfect example.
      • Facilitate debugging: Nodes with a single responsibility are easier to test and troubleshoot. If a node fails, you immediately know whether the research logic or the writing logic is at fault.
  3. Debugging Difficulties: Black Box Execution

    • The Pitfall: Directly calling app.invoke() without printing intermediate states. When the Graph's execution result isn't what you expected, you won't know which step went wrong.
    • How to Avoid:
      • Utilize app.stream(): As demonstrated in our code, the stream() method is your best debugging buddy. It lets you see the state updates after each node executes, helping you trace data flow and logic.
      • Print logs inside Agent functions: I added print statements inside research_agent and writer_agent. This is extremely useful during the development phase, allowing you to visually track what each Agent is doing and what inputs it received.
  4. When to Introduce Conditional Logic? Premature Optimization

    • The Pitfall: Obsessing over how to add conditional checks, retry loops, or human-in-the-loop approvals before the MVP is even running.
    • How to Avoid:
      • Get it running first, then optimize: Emphasizing MVP thinking once again. Ensure the simplest "Research-Write" closed loop runs stably and outputs the expected content. This is your solid foundation for moving toward complex workflows.
      • Iterate gradually: Once you have mastered sequential flows, then consider introducing add_conditional_edges to implement complex decision logic. That will be the challenge for the next stage, but with the foundation built in this session, you'll tackle it with confidence.
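The Annotated/operator.add point from pitfall 1 can be sketched in pure Python (a simplification of what LangGraph does internally, assumed here purely for illustration): a per-field reducer decides how a node's returned value combines with the existing one, instead of overwriting it. The `reducers` dict and `apply_update` helper below are hypothetical stand-ins, not LangGraph APIs.

```python
import operator

# Per-field reducers: how a returned value combines with the existing one.
# (In LangGraph you declare this on the state type,
#  e.g. messages: Annotated[List[BaseMessage], operator.add].)
reducers = {"messages": operator.add}


def apply_update(state: dict, update: dict) -> dict:
    new_state = dict(state)
    for key, value in update.items():
        reducer = reducers.get(key)
        if reducer is not None and key in new_state:
            new_state[key] = reducer(new_state[key], value)  # e.g. list concatenation
        else:
            new_state[key] = value  # default: overwrite
    return new_state


state = {"messages": ["user: hello"], "topic": "LangGraph"}
state = apply_update(state, {"messages": ["assistant: hi!"], "topic": "LangGraph v2"})
print(state["messages"])  # → ['user: hello', 'assistant: hi!']
print(state["topic"])     # → LangGraph v2
```

Note how "messages" accumulates while "topic" is overwritten: that is exactly the difference between a reducer-annotated field and a plain one, and why message history needs operator.add.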

📝 Session Summary

Congratulations! We have just built and run the first MVP version of our AI Content Agency—a "Research-Write" closed loop that goes from a topic to a content draft.

In this session, we:

  • Clarified the importance of MVP thinking in complex system development, and why starting with a fixed sequential flow is a wise move.
  • Deeply understood LangGraph's Node, Edge, and State concepts, which are the building blocks of all complex workflows.
  • Successfully orchestrated the simulated Researcher and Writer Agents into LangGraph through hands-on code, building a sequential flow using add_node, set_entry_point, add_edge, and set_finish_point.
  • Mastered the key to state management, ensuring seamless information transfer between Agents.
  • Learned common pitfalls and avoidance strategies during development, which is invaluable experience for a senior architect.

Now, you have a running "skeleton". Next time, we will start thinking: what if the researcher finds the topic too vague, or the writer feels the research data is insufficient? How do we enable them to provide feedback and retry? That's right, we will begin exploring LangGraph's conditional logic and looping mechanisms to make our agency smarter and more robust!

Ready? The excitement continues in the next session!