Part 07 | Conditional Edges: The "Branching Decider" of Workflows

⏱ Est. reading time: 20 min · Updated on 5/7/2026

🎯 Learning Objectives for This Session

Hello everyone! I'm your AI technical mentor. Today, we're skipping the fluff and diving straight into something hardcore: LangGraph's Conditional Edges. This is a crucial step in making our "AI Universal Content Creation Agency" a truly intelligent and dynamic workflow. By the end of this session, you will:

  1. Deeply understand the core mechanism of conditional edges: Master how LangGraph uses conditional logic to make your multi-agent workflow act like a smart traffic hub, making decisions based on real-time conditions.
  2. Master LLM intent recognition and dynamic branch design: Learn how to use the output of Large Language Models (LLMs) as the basis for decision-making to achieve intelligent workflow branching, avoiding unnecessary computation and resource consumption.
  3. Introduce "intelligent judgment" capabilities to the Agency project: We will specifically refactor our Planner agent so it's no longer a simple "waterfall" commander, but can intelligently assign tasks based on content needs. For example: calling tools only when necessary, and skipping straight to the next step when they aren't.
  4. Improve workflow efficiency and flexibility: Through hands-on practice, you will build a system capable of dynamically adjusting its execution path based on inputs, making your AI content agency more efficient and responsive.

📖 Principle Breakdown

In previous lessons, our LangGraph workflows were mostly linear or jumped between fixed nodes. This is like a one-way street or having only a few fixed intersections. But in the real world, especially in a complex system like our "AI Universal Content Creation Agency," requirements are constantly changing. Will a short tweet and an in-depth research report require the same workflow path? Obviously not!

This is where Conditional Edges step into the spotlight!

What are Conditional Edges? Simply put, conditional edges allow you to dynamically determine which node to jump to next after a node finishes executing, based on that node's output or the current global state. It's not a rigid "if-else," but rather a "switch-case" with multiple branches where you have complete control over the routing logic.

Imagine your Planner agent receives a content creation request.

  • If the request is "Write an in-depth report on AI ethics," the Planner might decide: "Hmm, this requires Research first, then Write, and finally Edit."
  • If the request is "Generate 5 social media captions for summer outfits," the Planner might decide: "No Research needed here. Let's call the TitleGeneratorTool directly, and then the Writer can do some minor tweaks."
  • If the request is "I just want to ask the AI what the latest news is," the Planner might decide: "This is a simple Query, just go straight to __end__."

See that? The same Planner makes different "branching decisions" based on different inputs. That's the magic of conditional edges!

How It Works:

  1. Node Execution: A node (e.g., Planner) completes its task and generates an output. This output updates our GraphState.
  2. Decision Function: This is the core of the conditional edge. You provide a Python function (in LangGraph) that takes the current GraphState as input. Its job is to return a string based on the information in the GraphState (especially the output of the previous node). That string is a routing key: either it maps to the name of the next node (see step 3), or it's the special __end__ (indicating the end of the workflow).
  3. Edge Mapping: You also need to provide a dictionary that maps the string returned by the decision function to the actual node. For example, if the decision function returns "research", it routes to the researcher node; if it returns "tool_call", it routes to the tool_executor node.
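
Before we wire this into our project, here's a minimal, self-contained sketch of steps 2 and 3 above. The state key next_step, the function name route_after_planner, and the node names are illustrative placeholders, not our Agency code yet:

from langgraph.graph import END

# Decision function: inspects the shared state and returns a routing string
def route_after_planner(state: dict) -> str:
    if state.get("next_step") == "research":
        return "research"   # a key, looked up in the edge mapping below
    if state.get("next_step") == "tool_call":
        return "tool_call"
    return END              # special marker: stop the workflow

# Edge mapping: routing string -> actual node name
edge_mapping = {
    "research": "researcher",
    "tool_call": "tool_executor",
    END: END,
}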

Implementation in LangGraph: add_conditional_edges()

This method is the key to building dynamic branches. Its signature looks roughly like this (in LangGraph itself the parameters are named source, path, and path_map): graph.add_conditional_edges(source_node, decision_function, edge_mapping)

  • source_node: The node that triggers the conditional check.
  • decision_function: A Python function that receives the GraphState and returns the next node name.
  • edge_mapping: A dictionary mapping the return values of the decision_function to actual node names.
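
Reusing the sketch above, the wiring is a single call. This assumes a StateGraph instance named graph to which planner, researcher, and tool_executor nodes have already been added:

graph.add_conditional_edges(
    "planner",            # source_node: the check runs after "planner" finishes
    route_after_planner,  # decision_function: reads the state, returns a routing string
    edge_mapping,         # maps that string to the next node (or END)
)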

Mermaid Diagram: The "Branching Decider" Workflow of the AI Content Agency

Alright, enough theory. Let's use a Mermaid diagram to visually see how our Planner agent utilizes conditional edges to become the "smart traffic commander" of this agency.

graph TD
    A[User Request] --> B(Planner Agent)

    B -- LLM Intent Recognition --> C{Decision Point: What to do?}

    C -- "Needs Tool Call" --> D[Tool Executor]
    D -- "Tool Result" --> B

    C -- "Needs In-depth Research" --> E[Researcher Agent]
    E -- "Research Result" --> F[Writer Agent]

    C -- "Direct Writing" --> F[Writer Agent]

    C -- "Task Complete / No Further Processing" --> G[__END__]

    F --> H[Editor Agent]
    H --> G

Diagram Explanation:

  1. User Request (A): The starting point of everything. The user submits a content creation request to our AI content agency.
  2. Planner Agent (B): This is our core decision-maker. It receives the user request and uses its internal LLM for intent recognition and planning.
  3. Decision Point: What to do? (C): This is where the conditional edge comes into play. The Planner agent determines the next step based on the LLM's output.
    • "Needs Tool Call": If the Planner determines that an external tool (like a keyword generator or content template generator) needs to be called, the workflow routes to the Tool Executor (D).
    • "Needs In-depth Research": If the Planner determines that the content requires extensive fact-checking or background knowledge (e.g., an in-depth report), the workflow routes to the Researcher (E).
    • "Direct Writing": If the Planner determines that the task can be handled directly by the Writer (e.g., simple social media copy), the workflow routes directly to the Writer (F).
    • "Task Complete / No Further Processing": If the Planner determines that the current request has been fulfilled, or if it's a query rather than a creation task, the workflow goes straight to __END__ (G).
  4. Tool Executor (D): Responsible for executing the tools specified by the Planner. Once executed, the tool's output is returned to the Planner (B), forming an Agentic Loop that allows the Planner to make the next decision based on the tool's results.
  5. Researcher Agent (E): Executes research tasks. Once the research is complete, it passes the results to the Writer (F).
  6. Writer Agent (F): Creates content based on the research results or direct instructions from the Planner.
  7. Editor Agent (H): Proofreads and polishes the output from the Writer.
  8. END (G): The end of the workflow, indicating task completion.

Through this structure, our Planner agent is no longer a simple "forwarder" but a true "branching decider" capable of intelligently scheduling resources based on actual conditions, greatly improving the flexibility and efficiency of the entire system.

💻 Hands-on Code Practice (Application in the Agency Project)

Now, let's turn theory into code. We will refactor the Planner so it can dynamically decide whether to call tools, conduct research, or write directly based on the LLM's output.

Core Concepts:

  1. Define GraphState: Extend our state to store LLM decisions and tool call information.
  2. Mock Tools: For demonstration purposes, we'll define some simple tools first.
  3. Refactor Planner Node: Make the Planner not only generate plans but also indicate the next action (next_action).
  4. Implement Decision Function: Determine the next node based on the Planner's output.
  5. Build LangGraph: Use add_conditional_edges to construct the dynamic workflow.

We will use Python and LangChain/LangGraph.

import json
import operator
from typing import Annotated, List, Tuple, Union, Literal, TypedDict
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.tools import Tool  # Tool lives in langchain_core.tools, not langchain_core.agents
from langchain_core.messages import BaseMessage, HumanMessage, AIMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END

# Ensure you have set the OPENAI_API_KEY environment variable
# import os
# os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"

# --- 1. Define GraphState ---
# GraphState is the "blackboard" shared by all our nodes
class AgentState(TypedDict):
    """
    Represents the state of our content agency's workflow.
    """
    input: str # The original user input request
    chat_history: Annotated[List[BaseMessage], operator.add] # Chat history for context
    agent_outcome: Union[AgentAction, AgentFinish, BaseMessage, None] # The previous agent's raw output (in our flow, the Planner's AIMessage, which may carry tool calls)
    intermediate_steps: Annotated[List[Tuple[AgentAction, str]], operator.add] # Tool calls and their results
    next_action: Literal["tool_call", "research", "write", "edit", "end"] # Planner's decision on the next action ("edit" is set by the Writer node)
    research_result: str # Result from the Researcher agent
    writer_output: str # Content produced by the Writer agent
    editor_output: str # Content produced by the Editor agent

# --- 2. Mock Tools ---
# For demonstration, we create a few simple tools
def get_keywords(topic: str) -> str:
    """
    Generates a list of relevant keywords for a given topic.
    """
    print(f"\n--- Calling Tool: get_keywords for '{topic}' ---")
    return f"Keywords for '{topic}': AI, Machine Learning, Deep Learning, Generative AI, LLMs"

def get_content_template(topic: str) -> str:
    """
    Provides a basic content structure template for a given topic.
    """
    print(f"\n--- Calling Tool: get_content_template for '{topic}' ---")
    return f"Template for '{topic}': Introduction, Main Points (3-5), Conclusion, Call to Action."

tools = [
    Tool(name="get_keywords", func=get_keywords, description="Useful for generating keywords related to a topic."),
    Tool(name="get_content_template", func=get_content_template, description="Useful for getting a content structure template for a topic."),
]

# --- 3. Define LLM Model ---
llm = ChatOpenAI(model="gpt-4o", temperature=0)

# --- 4. Define Agent Nodes ---

# 4.1 Planner Agent (Agent Node)
# Planner now not only plans but also determines the next step based on requirements
class PlannerAgent:
    def __init__(self, llm: ChatOpenAI, tools: List[Tool]):
        self.llm = llm
        self.tools = tools
        self.prompt = ChatPromptTemplate.from_messages([
            ("system", """
            You are a planner for an experienced content creation agency. Your task is to determine the best next action based on the user's request.
            Available actions include:
            - `tool_call`: If you need to call external tools to gather information or assist in content generation.
            - `research`: If in-depth background research is required.
            - `write`: If you can start writing directly.
            - `end`: If the task is complete, or the request is a simple question that doesn't require further creation.
            
            Please output your decision strictly in JSON format, including 'plan' (your plan) and 'next_action' (the next action).
            If 'next_action' is 'tool_call', you must also include tool call details in the 'tool_calls' field.
            
            Available tools: {tool_names}
            
            Example Output (Needs tools):
            {{
                "plan": "User requested keyword generation, need to call the get_keywords tool.",
                "next_action": "tool_call",
                "tool_calls": [
                    {{
                        "tool_name": "get_keywords",
                        "args": {{"topic": "Application of AI in Education"}}
                    }}
                ]
            }}
            
            Example Output (Needs research):
            {{
                "plan": "User requested an article on quantum computing, in-depth research is needed.",
                "next_action": "research"
            }}
            
            Example Output (Direct writing):
            {{
                "plan": "User requested a short social media post, can start writing directly.",
                "next_action": "write"
            }}
            
            Example Output (End):
            {{
                "plan": "User is just greeting, task complete.",
                "next_action": "end"
            }}
            """),
            MessagesPlaceholder(variable_name="chat_history"),
            ("user", "{input}"),
        ])
        # (Tool results are folded into chat_history in __call__, so no extra
        # MessagesPlaceholders are needed for agent_outcome/intermediate_steps:
        # neither is a list of messages, and passing them here would break the template.)
        
        # Bind tools so the LLM can emit structured tool calls in response.tool_calls
        self.runnable = self.prompt.partial(tool_names=", ".join([tool.name for tool in self.tools])) | self.llm.bind_tools(self.tools)

    def __call__(self, state: AgentState):
        print("\n--- Entering Planner Agent ---")
        current_input = state["input"]
        chat_history = state.get("chat_history", [])
        intermediate_steps = state.get("intermediate_steps", [])

        # If there are tool results, fold them into a *local copy* of the chat
        # history so the LLM can see them. We copy rather than mutate: because
        # chat_history uses an operator.add reducer, nodes must only return the
        # new messages, never append to the accumulated list in place.
        history_for_llm = list(chat_history)
        for action, observation in intermediate_steps:
            history_for_llm.append(AIMessage(content=f"Tool Call: {action.tool} with args {action.tool_input}"))
            history_for_llm.append(AIMessage(content=f"Tool Output: {observation}"))

        response = self.runnable.invoke({
            "input": current_input,
            "chat_history": history_for_llm,
        })
        
        # Parse the LLM output to determine next_action
        # Note: production code needs more robust JSON parsing (see the sketch after this listing)
        try:
            if response.tool_calls:
                # The model emitted native tool calls; route straight to the executor
                next_action = "tool_call"
                plan = "Tool call requested by the model."
            elif response.content:
                parsed_response = json.loads(response.content)
                next_action = parsed_response.get("next_action") or "end"
                plan = parsed_response.get("plan", "No specific plan.")
            else:
                # No usable output; default to ending (a "chat" node could be designed instead)
                next_action = "end"
                plan = "No specific plan."

            print(f"Planner Decision: {next_action}")
            print(f"Planner Plan: {plan}")

            # Return only the *new* messages: the operator.add reducer on
            # chat_history appends them to the existing history for us.
            return {
                "chat_history": [HumanMessage(content=current_input), response],
                "next_action": next_action,
                "agent_outcome": response # Raw LLM output; the tool executor reads its tool_calls
            }
        except Exception as e:
            print(f"Error parsing Planner output: {e}")
            print(f"LLM raw response: {response.content}")
            # If parsing fails, default to end or enter an error handling flow
            return {"next_action": "end", "chat_history": [HumanMessage(content=current_input), AIMessage(content=f"Error: {e}")]}

planner_agent = PlannerAgent(llm, tools)

# 4.2 Tool Executor Node
# Responsible for executing the tools instructed by the Planner agent
class ToolExecutorAgent:
    def __init__(self, tools: List[Tool]):
        self.tools_map = {tool.name: tool for tool in tools}

    def __call__(self, state: AgentState):
        print("\n--- Entering Tool Executor Agent ---")
        tool_calls = state["agent_outcome"].tool_calls # Extract tool calls from the Planner's output

        intermediate_steps = []
        for tool_call in tool_calls:
            # Each entry in AIMessage.tool_calls is a dict with "name", "args", and "id" keys
            tool_name = tool_call["name"]
            tool_args = tool_call["args"]
            
            if tool_name in self.tools_map:
                try:
                    tool_output = self.tools_map[tool_name].func(**tool_args)
                    intermediate_steps.append((AgentAction(tool=tool_name, tool_input=tool_args, log=""), tool_output))
                    print(f"Executed tool '{tool_name}' with args {tool_args}. Output: {tool_output}")
                except Exception as e:
                    error_msg = f"Error executing tool '{tool_name}': {e}"
                    intermediate_steps.append((AgentAction(tool=tool_name, tool_input=tool_args, log=""), error_msg))
                    print(error_msg)
            else:
                error_msg = f"Tool '{tool_name}' not found."
                intermediate_steps.append((AgentAction(tool=tool_name, tool_input=tool_args, log=""), error_msg))
                print(error_msg)
        
        # Clear agent_outcome because the tool executor is not the final result
        return {"intermediate_steps": intermediate_steps, "agent_outcome": None} 

tool_executor_agent = ToolExecutorAgent(tools)

# 4.3 Researcher Agent (Simplified)
def researcher_node(state: AgentState):
    print("\n--- Entering Researcher Agent ---")
    current_input = state["input"]
    # Simulate research process
    research_content = f"Research on '{current_input}': Detailed findings and insights. This would typically involve web searches, database queries, etc."
    print(f"Research completed for: {current_input}")
    return {"research_result": research_content, "next_action": "write"} # After research is complete, instruct the next step to be writing

# 4.4 Writer Agent (Simplified)
def writer_node(state: AgentState):
    print("\n--- Entering Writer Agent ---")
    current_input = state["input"]
    research_result = state.get("research_result", "No specific research provided.")
    # Simulate writing process
    writing_content = f"Article Title: {current_input}\n\n" \
                      f"Based on research: {research_result}\n\n" \
                      f"Content: This is a beautifully written piece about {current_input}, incorporating all key findings and creative flair. " \
                      f"It aims to engage the audience and fulfill the content brief."
    print(f"Writing completed for: {current_input}")
    return {"writer_output": writing_content, "next_action": "edit"} # After writing is complete, instruct the next step to be editing

# 4.5 Editor Agent (Simplified)
def editor_node(state: AgentState):
    print("\n--- Entering Editor Agent ---")
    writer_output = state["writer_output"]
    # Simulate editing process
    edited_content = f"--- Edited Version ---\n{writer_output}\n\n" \
                     f"Editor's notes: Checked grammar, improved flow, added a stronger call to action. Content is now polished and ready."
    print(f"Editing completed for the written content.")
    return {"editor_output": edited_content, "next_action": "end"} # After editing is complete, instruct to end

# --- 5. Decision Function: LangGraph's "Branching Decider" ---
# This function reads the next_action set by the previous node and returns a
# routing string. Whatever it returns must appear as a key in the edge mapping
# passed to add_conditional_edges() below.
def decide_next_step(state: AgentState) -> str:
    """
    Decides the next route based on the `next_action` field in the current state.
    """
    next_action = state["next_action"]
    print(f"\n--- Decision Point: Previous node recommended '{next_action}' ---")

    if next_action in ("tool_call", "research", "write", "edit"):
        return next_action # The edge mapping translates this into the actual node name
    elif next_action == "end":
        return END
    else:
        # Default handling, e.g., for errors or unknown actions
        print(f"Warning: Unknown next_action '{next_action}'. Ending workflow.")
        return END

# --- 6. Build LangGraph ---
workflow = StateGraph(AgentState)

# Add nodes
workflow.add_node("planner", planner_agent)
workflow.add_node("tool_executor", tool_executor_agent)
workflow.add_node("researcher", researcher_node)
workflow.add_node("writer", writer_node)
workflow.add_node("editor", editor_node)

# Set entry point
workflow.set_entry_point("planner")

# Add conditional edges - This is the core of this session!
workflow.add_conditional_edges(
    "planner", # Call decide_next_step function after the planner node finishes executing
    decide_next_step,
    {
        "tool_call": "tool_executor", # If decide_next_step returns "tool_call", route to tool_executor
        "research": "researcher",     # If decide_next_step returns "research", route to researcher
        "write": "writer",            # If decide_next_step returns "write", route to writer
        END: END                      # If decide_next_step returns END, then end
    }
)

# Add normal edges
# After tool_executor finishes, it usually needs to return to planner for re-evaluation (Agentic Loop)
workflow.add_edge("tool_executor", "planner") 

# After researcher finishes, it usually goes to writer
workflow.add_edge("researcher", "writer")

# After writer finishes, route to editor. A node cannot pick its own successor;
# it can only write to the state (here: "next_action": "edit"). The actual
# routing lives in this conditional edge, which reuses decide_next_step.
# A stricter design would route every node back through a single decision point.
workflow.add_conditional_edges(
    "writer",
    decide_next_step, # Call decide_next_step function after the writer node finishes executing
    {
        "edit": "editor",
        END: END
    }
)

# After editor finishes, end
workflow.add_edge("editor", END)


# Compile workflow
app = workflow.compile()

# --- 7. Run Workflow Demo ---

print("\n--- Demo 1: Request requiring tool call (Get keywords) ---")
inputs_1 = {"input": "Please help me generate keywords about 'Application of AI in Healthcare'.", "chat_history": []}
for s in app.stream(inputs_1):
    print(s)
# Expected flow: Planner -> Tool Executor -> Planner (agentic loop) -> END (because Planner might consider the task complete after the tool call)


print("\n\n--- Demo 2: Request requiring in-depth research (Write in-depth article) ---")
inputs_2 = {"input": "Please write an in-depth article about 'The Future Development of Quantum Computing'.", "chat_history": []}
for s in app.stream(inputs_2):
    print(s)
# Expected flow: Planner -> Researcher -> Writer -> Editor -> END


print("\n\n--- Demo 3: Request for direct writing (Short social media copy) ---")
inputs_3 = {"input": "Write a short social media copy for a summer promotional campaign.", "chat_history": []}
for s in app.stream(inputs_3):
    print(s)
# Expected flow: Planner -> Writer -> Editor -> END

print("\n\n--- Demo 4: Simple inquiry (Direct end) ---")
inputs_4 = {"input": "Hello, AI Content Agency!", "chat_history": []}
for s in app.stream(inputs_4):
    print(s)
# Expected flow: Planner -> END (Planner decides no creation is needed, ends directly)
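
A loose end flagged in the Planner's comments: json.loads will fail if the model wraps its JSON in markdown fences (```json ... ```), which chat models frequently do. Below is a minimal hardening sketch; strip_json_fences is our own illustrative helper, not a library function:

def strip_json_fences(text: str) -> str:
    """Remove ```json ... ``` fences that chat models often wrap around JSON."""
    text = text.strip()
    if text.startswith("```"):
        # Drop the opening fence line (with its optional language tag)...
        text = text.split("\n", 1)[1] if "\n" in text else ""
        # ...and the closing fence
        if text.rstrip().endswith("```"):
            text = text.rstrip()[:-3]
    return text.strip()

# Usage inside the Planner's parsing branch:
# parsed_response = json.loads(strip_json_fences(response.content))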

Code Breakdown:

  1. AgentState Extension: We added fields like next_action, research_result, writer_output, and editor_output. These are crucial for passing information and decisions between agents.
  2. PlannerAgent Refactoring:
    • Its Prompt was redesigned to explicitly require the LLM to output in JSON format, which must include the next_action field to guide the workflow's direction.
    • If next_action is tool_call, it also expects a tool_calls field.
    • We use llm.bind_tools(tools) to let the LLM know about available tools, so it can directly generate tool calls in response.tool_calls.
    • The __call__ method is now responsible for parsing the LLM's output and updating state["next_action"].
  3. ToolExecutorAgent: Responsible for executing the tools specified by the Planner agent. The execution results are returned via intermediate_steps, and the workflow is routed back to the Planner, forming an Agentic Loop. This is a common design pattern in advanced agentic architectures.
  4. researcher_node, writer_node, editor_node: These are simplified agent nodes that simulate their respective functions. Upon completing their tasks, they suggest the next recommended action by updating state["next_action"].
  5. decide_next_step Function: This is the core decision function for the conditional edges. It receives the current AgentState and returns a routing string (or END) based on the value of state["next_action"]; the edge mapping then translates that string into the actual node to jump to.
  6. add_conditional_edges():
    • We added a conditional edge to the planner node. This means that every time the planner node finishes executing, it will call the decide_next_step function to determine where to go next.
    • Note that we also added a conditional edge to the writer node, allowing it, upon completing its writing task, to route to the next step based on its returned state.
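
One nice property of this design: decide_next_step is a plain function over the state, so you can sanity-check your routing logic without invoking any LLM. A quick illustrative check with minimal fake states:

# Routing sanity checks, no LLM required (fake minimal states)
assert decide_next_step({"next_action": "research"}) == "research"
assert decide_next_step({"next_action": "edit"}) == "edit"
assert decide_next_step({"next_action": "end"}) == END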