Session 28 | Complex Workflow Orchestration: A Deep Dive into the Underlying Data Structures (EN)
🎯 Learning Objectives for This Session
Hey there, future AI architects! I'm your instructor, a veteran with a decade of experience in the AI trenches. In previous sessions, we taught our intelligent support copilot how to "listen" (receive input), "speak" (generate responses), and even "understand" (via prompt engineering and output parsers). But right now, it's still just a "single-celled organism"—far from being a "thinker" capable of solving problems independently.
In this session, we will dive deep into one of LangChain's core concepts—Chains—to empower your support copilot to "think" logically and step-by-step, just like a human. By the end of this session, you will:
- Understand the core concepts and roles of LangChain Chains: Say goodbye to single-step LLM calls and move towards breaking down and orchestrating complex tasks.
- Master the use of LLMChain: This is the foundation of all chains. Learn how to seamlessly integrate prompt templates with Large Language Models (LLMs).
- Master SimpleSequentialChain and SequentialChain: Build multi-step thinking workflows for your copilot, enabling it to handle complex tasks like intent recognition, information retrieval, and response generation.
- Learn how to enhance logical reasoning and problem-solving via chain programming: Transform your copilot from a simple "parrot" into an active "brain" that analyzes and processes information.
Ready? Let's unlock your AI's ability to "think"!
📖 Understanding the Concepts
Why Do We Need "Chains"?
Imagine you are a senior customer support agent receiving a message: "It's been three days and my order still hasn't shipped! What on earth is going on?!" How would you handle this?
- Identify emotion and intent: The customer is anxious and frustrated. The core request is to "check order shipping status" and they want it "resolved ASAP."
- Retrieve information: Look up the shipping records in the order system using the order number (if provided) or customer info.
- Analyze information: The order is indeed unshipped. The reason might be out-of-stock inventory or logistics delays.
- Generate response: Based on the findings and the customer's mood, craft a professional response that soothes their emotions, explains the reason, and offers a solution.
This process cannot be accomplished with a single question-and-answer exchange. It is a multi-step thinking process with logical dependencies.
Traditional LLM calls are like asking a question and getting a direct answer. For complex tasks, we'd have to constantly construct new prompts, manually feeding the output of the previous step as the input for the next. This is not only tedious but also hard to maintain.
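To feel that pain concretely, here is a minimal sketch of the manual approach. The `ask_llm()` helper is hypothetical, a stand-in for any raw model call, not a real library function:

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a raw LLM call (not a real library function)."""
    ...

def handle_query_manually(customer_query: str) -> str:
    # Step 1: build the classification prompt by hand.
    tone = ask_llm(f"Classify the tone of this query: {customer_query}")
    # Step 2: manually splice the previous output into the next prompt.
    draft = ask_llm(f"Write a draft reply to '{customer_query}' (tone: {tone})")
    # Step 3: splice again -- every new step adds more glue code to maintain.
    return ask_llm(f"Polish this draft for a {tone} customer: {draft}")
```

Every step is hand-wired string plumbing; reordering or inserting a step means rewriting the glue.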
LangChain's Chain concept was born exactly to solve this problem. It allows us to break down complex tasks into a series of smaller, more focused steps. Each step consists of one or more components (like an LLM, PromptTemplate, OutputParser, Tool, etc.) connected in a specific sequence. The output of the previous step can serve as the input for the next, forming a directional data and logic flow.
LLMChain: The Fundamental Unit of Chains
In LangChain, LLMChain is the most basic and commonly used chain. It wraps a prompt template (PromptTemplate) and a Large Language Model (LLM). Its workflow is: Receive input -> Format into a prompt -> Send to LLM -> Get raw LLM output.
You can think of it as "a single unit of thought" for the intelligent copilot. For example, "identifying customer emotion" is an LLMChain that takes a customer query and outputs an emotion tag.
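Here is a minimal sketch of that single unit of thought, assuming the same langchain and langchain_openai packages used in the hands-on section below, plus an OPENAI_API_KEY in your environment:

```python
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# One "unit of thought": customer query in, emotion tag out.
llm = ChatOpenAI(temperature=0, model_name="gpt-4o")
emotion_prompt = PromptTemplate(
    input_variables=["customer_query"],
    template="Label the emotion of this customer query in one word:\n{customer_query}\nEmotion:",
)
emotion_chain = LLMChain(llm=llm, prompt=emotion_prompt)

# invoke() takes a dict keyed by input_variables; the output lands under
# the chain's output_key, which defaults to "text".
result = emotion_chain.invoke({"customer_query": "My order is three days late!"})
print(result["text"])
```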
Sequential Chains: SimpleSequentialChain and SequentialChain
When we need to link multiple LLMChains or other types of chains together to form a complete thinking process, we use Sequential Chains.
SimpleSequentialChain
- Characteristics: The most straightforward chaining method. It requires the single output of the previous chain to be the single input of the next chain.
- Use Case: When you have a clear, linear workflow where each step's output is directly and entirely consumed by the next step (see the sketch after this list).
- Limitations: Inflexible; it cannot handle multiple input/output variables.
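As a minimal sketch of that single-string pipeline (same package and API-key assumptions as the hands-on section below):

```python
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = ChatOpenAI(temperature=0.7, model_name="gpt-4o")

# Chain 1: single input (a topic) -> single output (an FAQ question).
question_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Write one FAQ question about: {topic}"),
)
# Chain 2: consumes chain 1's entire output as its only input.
answer_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Briefly answer this FAQ question: {question}"),
)

# SimpleSequentialChain pipes exactly one string through: topic -> question -> answer.
pipeline = SimpleSequentialChain(chains=[question_chain, answer_chain], verbose=True)
print(pipeline.run("order shipping delays"))
```

Note that the variable names of adjacent chains do not need to match: because exactly one string flows through, each chain's output is simply handed to the next chain's sole input.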
SequentialChain (General Sequential Chain)
- Characteristics: More powerful and flexible. It allows you to define multiple input variables and multiple output variables for the entire chain, and to specify the inputs and outputs of each sub-chain. This means intermediate outputs can be preserved or used as inputs by multiple subsequent chains.
- Use Case: When you need a complex thinking process involving branching, passing multiple intermediate results, or outputting multiple pieces of information at the end.
- How it works: You explicitly specify input_variables (for the overall chain) and output_variables (the final outputs of the overall chain), as well as the input_variables and output_key of each sub-chain. The output_key saves a sub-chain's output into a shared dictionary for subsequent chains to use (see the simulation after this list).
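To make that bookkeeping concrete, here is a tiny simulation in plain Python (not LangChain code) of how a SequentialChain threads a growing dictionary of variables through its sub-chains:

```python
# Plain-Python illustration of SequentialChain's bookkeeping: each sub-chain
# reads the variables it needs from a shared dict and writes its result
# back under its output_key, so later chains can see every earlier output.

def run_sequential(chains, inputs):
    variables = dict(inputs)  # start from the overall input_variables
    for fn, input_keys, output_key in chains:
        step_inputs = {k: variables[k] for k in input_keys}
        variables[output_key] = fn(**step_inputs)
    return variables  # output_variables are then picked from this dict

chains = [
    # (callable standing in for a sub-chain, its input_variables, its output_key)
    (lambda customer_query: "Dissatisfied",
     ["customer_query"], "customer_tone"),
    (lambda customer_query, customer_tone: f"[{customer_tone}] We are sorry...",
     ["customer_query", "customer_tone"], "draft_response"),
]
print(run_sequential(chains, {"customer_query": "Where is my order?!"}))
# {'customer_query': 'Where is my order?!', 'customer_tone': 'Dissatisfied',
#  'draft_response': '[Dissatisfied] We are sorry...'}
```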
Using SequentialChain, we can build a human-like "thinking path" for our support copilot:
graph TD
A[Customer Query] --> B{Step 1: Customer Emotion Recognition Chain}
B -- Customer Tone --> C{Step 2: Draft Response Generation Chain}
C -- Draft Response --> D{Step 3: Professional Response Refinement Chain}
D -- Final Response --> E[Intelligent Copilot Response]
style B fill:#f9f,stroke:#333,stroke-width:2px
style C fill:#ccf,stroke:#333,stroke-width:2px
style D fill:#cfc,stroke:#333,stroke-width:2px

This diagram clearly illustrates how a customer query goes through an assembly line of different "thinking" stages to ultimately generate a coherent, high-quality response. Each stage is handled by an independent LLMChain, and the stages are linked together via a SequentialChain to implement the copilot's complex logic.
💻 Hands-On Code Practice (Application in the Copilot Project)
Alright, enough theory—it's time to roll up our sleeves and get to work! We will use SequentialChain to implement an advanced response workflow for our intelligent support copilot:
Scenario Simulation: When a customer reaches out, our intelligent copilot needs to:
- Identify customer tone: Determine if the customer is "Urgent", "Neutral", or "Dissatisfied".
- Generate an initial draft response: Create a polite preliminary draft based on the query and tone.
- Refine the final response: Combine the tone and the draft to generate a more professional, empathetic, and soothing final response.
This workflow perfectly showcases the power of SequentialChain, as it requires passing intermediate results (customer_tone, draft_response) to subsequent chains.
import os
from langchain_openai import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SequentialChain
from langchain.globals import set_debug # Useful for debugging, allows seeing inputs/outputs of each chain
# Set OpenAI API Key
# In a real project, read this from environment variables or secure configs; do not hardcode
# os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY" # Replace with your actual API Key
# Enable LangChain debug mode to print detailed inputs/outputs of each chain. Very useful!
set_debug(True)
print("--- Preparing to initialize the Intelligent Copilot Chain ---")
# 1. Initialize LLM
# We use gpt-4o because it excels at understanding complex instructions and generating high-quality text.
# temperature controls the randomness of the generated text. 0.7 means it will be somewhat creative but not too wild.
llm = ChatOpenAI(temperature=0.7, model_name="gpt-4o")
# --- Define the 1st Chain: Customer Tone Recognition ---
# Goal: Identify the tone (Urgent, Neutral, Dissatisfied) from the customer's query.
# output_key="customer_tone": Save this chain's output to the dictionary with the key "customer_tone".
prompt_template_tone = PromptTemplate(
input_variables=["customer_query"],
template="""You are a professional customer service tone analyzer. Please analyze the following customer query and determine if the tone is 'Urgent', 'Neutral', or 'Dissatisfied'.
Output only the tone label, without any explanation.
Customer Query: "{customer_query}"
Tone: """
)
tone_chain = LLMChain(llm=llm, prompt=prompt_template_tone, output_key="customer_tone", verbose=True)
print("\n--- 'Customer Tone Recognition' Chain Defined ---")
# --- Define the 2nd Chain: Initial Draft Response Generation ---
# Goal: Generate a polite initial draft response based on the customer query and identified tone.
# input_variables=["customer_query", "customer_tone"]: Requires the output of the previous chain as input.
# output_key="draft_response": Save this chain's output to the dictionary with the key "draft_response".
prompt_template_draft = PromptTemplate(
input_variables=["customer_query", "customer_tone"],
template="""You are an intelligent customer support assistant. Based on the customer's query and tone, generate an initial, polite draft response.
Note that this is just a draft and will be refined later.
Customer Query: "{customer_query}"
Customer Tone: "{customer_tone}"
Draft Response: """
)
draft_chain = LLMChain(llm=llm, prompt=prompt_template_draft, output_key="draft_response", verbose=True)
print("\n--- 'Initial Draft Response Generation' Chain Defined ---")
# --- Define the 3rd Chain: Professional Response Refinement ---
# Goal: Combine the customer tone and initial draft to generate the most professional and empathetic final response.
# input_variables=["customer_query", "customer_tone", "draft_response"]: Requires outputs from the previous two chains.
# output_key="final_response": Save this chain's output to the dictionary with the key "final_response".
prompt_template_refine = PromptTemplate(
input_variables=["customer_query", "customer_tone", "draft_response"],
template="""You are a top-tier customer service expert, skilled at generating the most professional, empathetic, and soothing final responses based on customer emotions and original draft responses.
Customer Query: "{customer_query}"
Customer Tone: "{customer_tone}"
Draft Response: "{draft_response}"
Please refine the draft response above to generate the final customer support reply. If the customer tone is 'Urgent' or 'Dissatisfied', ensure the response reflects extra care and a strong willingness to resolve the issue.
Final Response: """
)
refine_chain = LLMChain(llm=llm, prompt=prompt_template_refine, output_key="final_response", verbose=True)
print("\n--- 'Professional Response Refinement' Chain Defined ---")
# --- Combine into a SequentialChain ---
# input_variables: The initial inputs received by the entire SequentialChain.
# output_variables: The final output variables of the entire SequentialChain.
# (Note: We can include intermediate results here to easily inspect the thinking process)
# chains: The list of sub-chains to execute in order.
overall_copilot_chain = SequentialChain(
chains=[tone_chain, draft_chain, refine_chain],
input_variables=["customer_query"],
output_variables=["customer_tone", "draft_response", "final_response"], # Output intermediate results here for easier debugging and analysis
verbose=True # Enable verbose mode for the entire chain to see the execution of each sub-chain
)
print("\n--- 'Overall Intelligent Copilot Chain' Assembled ---")
print("-----------------------------------")
# --- Run the Chain for Testing ---
# Example 1: Urgent/Dissatisfied Customer Tone
user_query_1 = "It's been three days and my order still hasn't shipped! What on earth is going on?!"
print(f"\n--- Processing Customer Query: '{user_query_1}' ---")
try:
result_1 = overall_copilot_chain.invoke({"customer_query": user_query_1})
print("\n--- Chain Processing Results ---")
print(f"Customer Tone: {result_1['customer_tone']}")
print(f"Draft Response: {result_1['draft_response']}")
print(f"Final Response: {result_1['final_response']}")
except Exception as e:
print(f"An error occurred during processing: {e}")
print("\n-----------------------------------")
# Example 2: Neutral Customer Tone
user_query_2 = "Could you please check the shipping status of my order XYZ123?"
print(f"\n--- Processing Customer Query: '{user_query_2}' ---")
try:
result_2 = overall_copilot_chain.invoke({"customer_query": user_query_2})
print("\n--- Chain Processing Results ---")
print(f"Customer Tone: {result_2['customer_tone']}")
print(f"Draft Response: {result_2['draft_response']}")
print(f"Final Response: {result_2['final_response']}")
except Exception as e:
print(f"An error occurred during processing: {e}")
print("\n-----------------------------------")
# --- TypeScript / JavaScript Developers Look Here ---
# LangChain.js (TypeScript/JavaScript) has similar chaining concepts.
# You can use `RunnableSequence` from @langchain/core to build sequential execution logic.
# The concepts are exactly the same, just with slightly different syntax.
# For example:
# import { ChatOpenAI } from "@langchain/openai";
# import { PromptTemplate } from "@langchain/core/prompts";
# import { RunnableSequence } from "@langchain/core/runnables";
#
# const llm = new ChatOpenAI({ temperature: 0.7, modelName: "gpt-4o" });
#
# const promptTemplateTone = PromptTemplate.fromTemplate(
# `You are a professional customer service tone analyzer... Customer Query: {customer_query}\nTone:`
# );
#
# const toneChain = RunnableSequence.from([
# promptTemplateTone,
# llm,
# // You can add output parsers here, etc.
# ]);
#
# // Multiple RunnableSequences can be chained using .pipe() or RunnableSequence.from([chain1, chain2])
# // For more complex inputs/outputs, you can use .assign() or RunnableMap
# // The core idea is always to chain different components together to form a processing pipeline.
# console.log("--- TypeScript / JavaScript developers please refer to RunnableSequence ---");
Expected Output:
When you run this Python code, set_debug(True) and verbose=True will generate a large amount of log output, detailing the inputs, outputs, and LLM call specifics for each chain.
For the first urgent/dissatisfied query, you might see output similar to this (exact content is generated by the LLM):
--- Processing Customer Query: 'It's been three days and my order still hasn't shipped! What on earth is going on?!' ---
[chain/start] [1:chain:overall_copilot_chain] Entering Chain run with input: {
"customer_query": "It's been three days and my order still hasn't shipped! What on earth is going on?!"
}
[chain/start] [1:chain:overall_copilot_chain > 2:chain:tone_chain] Entering Chain run with input: {
"customer_query": "It's been three days and my order still hasn't shipped! What on earth is going on?!"
}
[llm/start] [1:chain:overall_copilot_chain > 2:chain:tone_chain > 3:llm:llm] Entering LLM run with input: {
"prompts": [
"Human: You are a professional customer service tone analyzer. Please analyze the following customer query and determine if the tone is 'Urgent', 'Neutral', or 'Dissatisfied'.\n Output only the tone label, without any explanation.\n\n Customer Query: \"It's been three days and my order still hasn't shipped! What on earth is going on?!\"\n Tone: "
]
}
[llm/end] [1:chain:overall_copilot_chain > 2:chain:tone_chain > 3:llm:llm] Exiting LLM run with output: {
"generations": [
[
{
"text": "Dissatisfied",
"generation_info": null,
"message": {
"lc_kwargs": {
"content": "Dissatisfied",
"additional_kwargs": {}
},
"lc_type": "AIMessage"
}
}
]
],
"llm_output": {
"token_usage": {
"completion_tokens": 2,
"prompt_tokens": 63,
"total_tokens": 65
},
"model_name": "gpt-4o"
}
}
[chain/end] [1:chain:overall_copilot_chain > 2:chain:tone_chain] Exiting Chain run with output: {
"customer_tone": "Dissatisfied"
}
[chain/start] [1:chain:overall_copilot_chain > 4:chain:draft_chain] Entering Chain run with input: {
"customer_query": "It's been three days and my order still hasn't shipped! What on earth is going on?!",
"customer_tone": "Dissatisfied"
}
[llm/start] [1:chain:overall_copilot_chain > 4:chain:draft_chain > 5:llm:llm] Entering LLM run with input: {
"prompts": [
"Human: You are an intelligent customer support assistant. Based on the customer's query and tone, generate an initial, polite draft response.\n Note that this is just a draft and will be refined later.\n\n Customer Query: \"It's been three days and my order still hasn't shipped! What on earth is going on?!\"\n Customer Tone: \"Dissatisfied\"\n Draft Response: "
]
}
[llm/end] [1:chain:overall_copilot_chain > 4:chain:draft_chain > 5:llm:llm] Exiting LLM run with output: {
"generations": [
[
{
"text": "We sincerely apologize for the inconvenience. We understand your frustration regarding the delay in shipping your order. Please provide your order number, and we will immediately check the specific...