Session 1 | LLM & PromptTemplate: The "Brain" and "Instruction Manual" of Your Smart Support Copilot (EN)

⏱ Est. reading time: 20 min · Updated on 5/7/2026

🎯 Learning Objectives for This Session

Hey, future AI architects! Welcome to the first session of the LangChain Full-Stack Masterclass. I'm your instructor, a ten-year veteran in the AI field and a technical mentor passionate about education.

Today, we embark on our LangChain journey by diving straight into its core—Large Language Models (LLMs) and PromptTemplates. Think of these two as the "brain" and the "instruction manual" you use to build your smart support copilot; they are the foundation of all complex functionalities. By the end of this session, you will:

  1. Thoroughly understand the roles and differences between LLMs and ChatModels in LangChain, knowing exactly when to call upon which "heavyweight."
  2. Master the art and science of the PromptTemplate, learning how to unlock the LLM's potential with precise instructions, just like a top-tier coach.
  3. Build the first conversational cornerstone of your smart support copilot hands-on, enabling your AI assistant to perform basic interactions based on instructions.
  4. Identify and avoid common Prompt Engineering pitfalls, cultivating high-quality AI application development habits right from the start.

Ready? Let's uncover the mysteries of LangChain together!

📖 Concept Breakdown

In our "Smart Support Knowledge Base" project, the ultimate goal is to enable an AI copilot to accurately and efficiently answer user questions, and even proactively offer help. To achieve this, we first need to give it a "brain" and a set of "communication rules."

1. LLM / ChatModel: The "Brain" of the Smart Support Copilot

Imagine your smart support copilot needs the ability to understand human language and communicate using it. This capability comes from a Large Language Model (LLM).

An LLM is a deep learning model trained on massive amounts of text data. It can understand, generate, translate, and summarize text, and even perform complex reasoning. In LangChain, the LLM is the core driving force behind all intelligent applications.

LangChain provides a unified interface for various LLMs, so you don't have to worry about whether the underlying model is OpenAI's GPT-4, Google's Gemini, or your own locally deployed Llama 2. This abstraction is a large part of LangChain's appeal.
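To make the unified interface concrete, here is a minimal sketch (assuming the langchain-openai package is installed and an OPENAI_API_KEY is set in your environment). Swapping providers only changes the constructor line; the .invoke() call site stays the same.

# Minimal sketch of LangChain's unified model interface.
# Assumes: pip install langchain-openai, and OPENAI_API_KEY is set.
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-3.5-turbo")

# Swapping providers would only change the constructor, e.g.:
# from langchain_anthropic import ChatAnthropic
# model = ChatAnthropic(model="claude-3-haiku-20240307")

print(model.invoke("Say hello in one sentence.").content)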

However, there is a minor "gotcha" here: LangChain distinguishes between two primary model interfaces: LLM and ChatModel.

  • LLM (Completion API): This model interface typically takes a simple string as input (e.g., a question) and returns a string as output (e.g., an answer). It operates more like a "text completion" or "text generation" mode.
    • Best for: Simple text generation, summarization, translation, etc.
    • Example: text-davinci-003 (an early OpenAI model, since deprecated)
  • ChatModel (Chat API): This model interface is much better suited for multi-turn conversations. It takes a "list of messages" as input, where each message has a specific role (System, Human, AI), and returns a "message" as output. The System message can be used to set the AI's persona and behavioral guidelines, which is especially crucial when building a support copilot!
    • Best for: Smart customer support, chatbots, multi-turn dialogue systems, etc.
    • Example: gpt-3.5-turbo, gpt-4 (OpenAI's recommended chat models)

Why is ChatModel so important for our smart support project?

Because a support copilot needs a clear "persona" (e.g., "You are a friendly, professional customer support agent") and must be able to understand conversational context. The System message allows us to clearly define the copilot's behavior, while the message list better simulates a real conversation flow. Therefore, for our smart support project, ChatModel will be our top choice.
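Before the full walkthrough below, here is a minimal sketch of what that looks like at the message level (same assumptions as above: langchain-openai installed and an API key configured):

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

chat = ChatOpenAI(model="gpt-3.5-turbo")

# A ChatModel takes a list of role-tagged messages...
messages = [
    SystemMessage(content="You are a friendly, professional customer support agent."),
    HumanMessage(content="How do I reset my password?"),
]

# ...and returns a single AIMessage, completing the System/Human/AI triad.
reply = chat.invoke(messages)
print(reply.content)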

2. PromptTemplate: The "Instruction Manual" and "Persona Setup"

Having a "brain" isn't enough; you also have to tell it how to think and how to answer. This brings us to the PromptTemplate.

Prompt Engineering is the art and science of interacting with LLMs. The higher the quality of the instructions (Prompts) you give the LLM, the better the answers it provides. The role of the PromptTemplate is to standardize and "engineer" this art.

A good PromptTemplate is like a detailed instruction manual. It can:

  • Set the Persona: "You are a professional smart customer support agent."
  • Define the Task: "Your task is to answer user questions about our products."
  • Provide Context: "Here is the user's question: {user_query}, along with relevant knowledge base snippets: {knowledge_chunk}."
  • Specify Format: "Please answer concisely and guide the user to visit the official website at the end."

The most powerful feature of a PromptTemplate is its variable placeholders. You can define variables (like {user_query}) and dynamically fill them at runtime, thereby generating a complete Prompt tailored to a specific scenario. This dramatically improves the reusability and maintainability of your Prompts.
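As a quick illustration, here is a minimal sketch of such a template; {user_query} and {knowledge_chunk} are just the example variable names used above:

from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template(
    "Here is the user's question: {user_query}\n"
    "Relevant knowledge base snippets: {knowledge_chunk}\n"
    "Answer concisely and point the user to the official website."
)

# The same template can be reused for any question/snippet pair.
print(template.format(
    user_query="How do I reset my password?",
    knowledge_chunk="Passwords can be reset from the account Settings page.",
))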

Mirroring the LLM/ChatModel split, LangChain provides two corresponding types of prompt templates:

  • PromptTemplate: Designed for LLM models. It takes a string template and outputs a formatted string.
  • ChatPromptTemplate: Designed for ChatModel models. It takes a template of a message list, which contains SystemMessagePromptTemplate, HumanMessagePromptTemplate, AIMessagePromptTemplate, etc., and outputs a formatted list of messages. This perfectly aligns with the input format of a ChatModel.

In our smart support project, ChatPromptTemplate is absolutely the star of the show! We can use SystemMessagePromptTemplate to set the support copilot's personality and rules, and HumanMessagePromptTemplate to insert the user's specific questions.

3. LLM + PromptTemplate: Collaborative Workflow

Now, let's combine the brain (ChatModel) and the instruction manual (ChatPromptTemplate) to see how they work together to provide basic conversational capabilities for our smart support copilot.

After a user asks a question, the workflow looks like this:

  1. User Query: For example, "How do I reset my password?"
  2. PromptTemplate Filling: The user's query ({user_query}) is injected into our predefined ChatPromptTemplate.
  3. Generate Complete Prompt (Message List): The ChatPromptTemplate generates a complete message list containing System and Human messages based on the template and the filled variables.
  4. Send to ChatModel: This message list is sent to the ChatModel (e.g., gpt-3.5-turbo).
  5. ChatModel Generates Response: The ChatModel understands the context and task from the message list and generates an AI reply.
  6. Copilot Answers: Finally, this reply is returned to the user as the smart support copilot's answer.

The entire process can be clearly illustrated with the following Mermaid diagram:

graph TD
    A[User Query] --> B{ChatPromptTemplate};
    B -- "Fill variables (e.g., {user_query})" --> C[Generate Formatted Message List];
    C --> D[LangChain ChatModel];
    D -- "Call LLM API (e.g., OpenAI GPT-4)" --> E[LLM Service];
    E -- Return Raw AI Response --> D;
    D -- Extract AI Message Content --> F[Smart Support Copilot Answer];
    F --> G[User Receives Answer];

    subgraph Smart Support Copilot Core
        B
        C
        D
        F
    end

This diagram clearly shows the data flow and the responsibilities of each component. The ChatPromptTemplate is responsible for "translating" user input and system instructions, while the ChatModel is responsible for "thinking" and "generating" the answer. This is the first simple yet powerful prototype of your smart support copilot!
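If you would like to see that data flow as code right away, here is a minimal end-to-end sketch using LangChain's LCEL pipe operator; the persona text is abbreviated, and the fully commented version follows in the next section:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a friendly, professional customer support agent."),
    ("human", "User question: {user_query}"),
])
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)

# prompt -> model -> plain string, exactly the flow in the diagram above
chain = prompt | model | StrOutputParser()

# Requires OPENAI_API_KEY to be set:
# print(chain.invoke({"user_query": "How do I reset my password?"}))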

💻 Practical Code Walkthrough

Alright, enough theory—let's roll up our sleeves and get to work! Now, let's translate these concepts into real code to lay a solid foundation for our smart support copilot.

We will use OpenAI's models as an example, so please ensure you have installed the necessary libraries and set the OPENAI_API_KEY environment variable.

Environment Setup

If you haven't installed them yet, please run:

# Python
pip install langchain langchain-openai python-dotenv

# TypeScript (Node.js)
npm install langchain @langchain/openai dotenv

And create a .env file in your project's root directory, adding your OpenAI API Key:

OPENAI_API_KEY="sk-YOUR_OPENAI_API_KEY_HERE"

1. Initialize LLM / ChatModel

First, let's initialize our "brain." Remember, for conversational systems, we prioritize ChatOpenAI.

Python Code:

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI, OpenAI
from langchain_core.prompts import PromptTemplate, ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate
from langchain_core.messages import HumanMessage, SystemMessage

# Load environment variables
load_dotenv()

# Ensure OPENAI_API_KEY is set
if not os.getenv("OPENAI_API_KEY"):
    raise ValueError("OPENAI_API_KEY 环境变量未设置。请在 .env 文件中配置。")

print("--- 初始化 LangChain LLM/ChatModel ---")

# 1. Initialize a traditional LLM (based on Completion API)
# Note: text-davinci-003 has been deprecated by OpenAI, used here for conceptual demonstration only
# llm = OpenAI(temperature=0.7, model_name="text-davinci-003")
# print(f"初始化 LLM 模型: {llm.model_name}")

# 2. Initialize a ChatModel (based on Chat API), the main workhorse for our smart support copilot
# temperature: Controls the randomness of the model's output. 0.0 is highly deterministic, 1.0 is highly creative.
#              Support scenarios typically require a lower temperature (0.0 - 0.7) to ensure accurate and consistent answers.
chat_model = ChatOpenAI(temperature=0.7, model_name="gpt-3.5-turbo")
print(f"初始化 ChatModel 模型: {chat_model.model_name}")

# You can also try gpt-4, but at a higher cost
# chat_model_gpt4 = ChatOpenAI(temperature=0.7, model_name="gpt-4")
# print(f"初始化 ChatModel 模型: {chat_model_gpt4.model_name}")

print("\n模型初始化完成。\n")

TypeScript Code:

import 'dotenv/config'; // Ensure this is imported at the top of the file to load environment variables
import { ChatOpenAI, OpenAI } from '@langchain/openai';
import { PromptTemplate, ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate } from '@langchain/core/prompts';
import { HumanMessage, SystemMessage } from '@langchain/core/messages';

// Ensure OPENAI_API_KEY is set
if (!process.env.OPENAI_API_KEY) {
    throw new Error("OPENAI_API_KEY 环境变量未设置。请在 .env 文件中配置。");
}

console.log("--- 初始化 LangChain LLM/ChatModel ---");

// 1. Initialize a traditional LLM (based on Completion API)
// Note: text-davinci-003 has been deprecated by OpenAI, used here for conceptual demonstration only
// const llm = new OpenAI({ temperature: 0.7, modelName: "text-davinci-003" });
// console.log(`Initialized LLM model: ${llm.modelName}`);

// 2. Initialize a ChatModel (based on Chat API), the main workhorse for our smart support copilot
// temperature: Controls the randomness of the model's output. 0.0 is highly deterministic, 1.0 is highly creative.
//              Support scenarios typically require a lower temperature (0.0 - 0.7) to ensure accurate and consistent answers.
const chatModel = new ChatOpenAI({ temperature: 0.7, modelName: "gpt-3.5-turbo" });
console.log(`Initialized ChatModel: ${chatModel.modelName}`);

// You can also try gpt-4, but at a higher cost
// const chatModelGpt4 = new ChatOpenAI({ temperature: 0.7, modelName: "gpt-4" });
// console.log(`Initialized ChatModel: ${chatModelGpt4.modelName}`);

console.log("\n模型初始化完成。\n");

2. Build PromptTemplate

Next, let's define the "instruction manual" for our smart support copilot. We will demonstrate how to use both PromptTemplate and ChatPromptTemplate, focusing primarily on ChatPromptTemplate.

Python Code:

# --- 1. Classic PromptTemplate Example (for LLM models) ---
print("--- 经典 PromptTemplate 示例 ---")
# Define a simple Prompt template containing a variable {question}
classic_template = "你是一个专业的客服助手。请根据以下问题提供简洁明了的答案:\n问题: {question}\n答案:"
classic_prompt = PromptTemplate.from_template(classic_template)

# Format the Prompt
formatted_classic_prompt = classic_prompt.format(question="如何重置我的账户密码?")
print(f"格式化后的经典 Prompt:\n{formatted_classic_prompt}\n")

# --- 2. ChatPromptTemplate Example (for ChatModel models, the main workhorse for our smart support copilot) ---
print("--- ChatPromptTemplate 示例 ---")
# Use ChatPromptTemplate to define the "persona" and "task" of the smart support copilot
# SystemMessagePromptTemplate: Used to set the AI's behavioral guidelines and role
# HumanMessagePromptTemplate: Used to carry the user's input
chat_cs_template = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(
        "你是一个友好的、专业的智能客服,专门解答关于我们产品的问题。你的回答应该简洁、准确,并引导用户访问官方文档获取更多信息。"
    ),
    HumanMessagePromptTemplate.from_template("用户问题: {user_query}") # {user_query} is the variable we will dynamically fill
])

# Format the ChatPromptTemplate to generate a message list
# Note: This returns a list where each element is a Message object (SystemMessage, HumanMessage, etc.)
formatted_chat_messages = chat_cs_template.format_messages(user_query="我的订单状态是什么?")
print("格式化后的 ChatPromptTemplate 消息列表:")
for msg in formatted_chat_messages:
    print(f"  - 角色: {msg.type}, 内容: {msg.content}")
print("\nChatPromptTemplate 构建完成。\n")

TypeScript Code:

// --- 1. Classic PromptTemplate Example (for LLM models) ---
console.log("--- 经典 PromptTemplate 示例 ---");
// Define a simple Prompt template containing a variable {question}
const classicTemplateString = "你是一个专业的客服助手。请根据以下问题提供简洁明了的答案:\n问题: {question}\n答案:";
const classicPrompt = PromptTemplate.fromTemplate(classicTemplateString);

// Format the Prompt
const formattedClassicPrompt = await classicPrompt.format({ question: "如何重置我的账户密码?" });
console.log(`格式化后的经典 Prompt:\n${formattedClassicPrompt}\n`);

// --- 2. ChatPromptTemplate Example (for ChatModel models, the main workhorse for our smart support copilot) ---
console.log("--- ChatPromptTemplate 示例 ---");
// Use ChatPromptTemplate to define the "persona" and "task" of the smart support copilot
// SystemMessagePromptTemplate: Used to set the AI's behavioral guidelines and role
// HumanMessagePromptTemplate: Used to carry the user's input
const chatCsTemplate = ChatPromptTemplate.fromMessages([
    SystemMessagePromptTemplate.fromTemplate(
        "你是一个友好的、专业的智能客服,专门解答关于我们产品的问题。你的回答应该简洁、准确,并引导用户访问官方文档获取更多信息。"
    ),
    HumanMessagePromptTemplate.fromTemplate("用户问题: {user_query}") // {user_query} is the variable we will dynamically fill
]);

// Format the ChatPromptTemplate to generate a message list
// Note: This returns a list where each element is a Message object (SystemMessage, HumanMessage, etc.)
const formattedChatMessages = await chatCsTemplate.formatMessages({ user_query: "我的订单状态是什么?" });
console.log("格式化后的 ChatPromptTemplate 消息列表:");
for (const msg of formattedChatMessages) {
    console.log(`  - 角色: ${msg._getType()}, 内容: ${msg.content}`);
}
console.log("\nChatPromptTemplate 构建完成。\n");

3. Combining LLM + PromptTemplate: The First Smart Support Prototype

Now, we will combine the ChatModel and ChatPromptTemplate to build our first smart support copilot. It will generate a professional answer based on our predefined "persona" and the user's question.

Python Code:

# Ensure chat_model and chat_cs_template are already defined from the code blocks above
# If you are running this block independently, make sure to run the initialization and template definition code above first
import asyncio

print("--- Smart Support Copilot Conversation Simulation ---")

async def ask_customer_service(question: str):
    """
    Smart support copilot query function.
    It fills the user's query into the ChatPromptTemplate, then calls the ChatModel to get the answer.
    """
    print(f"\nUser question: {question}")
    # 1. Format the PromptTemplate to generate the message list required by ChatModel
    #    Note: format_messages() is synchronous, so no await is needed here
    messages = chat_cs_template.format_messages(user_query=question)

    # Print the specific message content sent to the model for debugging purposes
    # print("--- Messages sent to the ChatModel ---")
    # for msg in messages:
    #     print(f"  - {msg.type.capitalize()}: {msg.content}")
    # print("--------------------------------------")

    # 2. Call ChatModel to get the response
    #    .ainvoke() is asynchronous; in a plain script it must run inside an event loop,
    #    which is why the simulation below is wrapped in asyncio.run()
    response = await chat_model.ainvoke(messages)

    # response is an AIMessage object; its .content property is the text generated by the AI
    return response.content

# Simulate a few smart support copilot conversations
async def main():
    # Scenario 1: User asks about product features
    answer1 = await ask_customer_service("What are your product's standout features?")
    print(f"Support copilot: {answer1}")

    # Scenario 2: User asks about after-sales support
    answer2 = await ask_customer_service("If I run into a technical problem, how should I get help?")
    print(f"Support copilot: {answer2}")

    # Scenario 3: User asks about the refund policy (even without a knowledge base, the model
    # will answer reasonably based on its general knowledge and the persona we set)
    answer3 = await ask_customer_service("What is your refund policy?")
    print(f"Support copilot: {answer3}")

asyncio.run(main())

print("\n--- Smart support conversation simulation finished ---")

TypeScript Code:

// Ensure chatModel and chatCsTemplate are already defined from the code blocks above
// If you are running this block independently, make sure to run the initialization and template definition code above first

console.log("--- Smart Support Copilot Conversation Simulation ---");

async function askCustomerService(question: string): Promise<string> {
    /**
     * Smart support copilot query function.
     * It fills the user's query into the ChatPromptTemplate, then calls the ChatModel to get the answer.
     */
    console.log(`\nUser question: ${question}`);
    // 1. Format the PromptTemplate to generate the message list required by ChatModel
    const messages = await chatCsTemplate.formatMessages({ user_query: question });

    // Print the specific message content sent to the model for debugging purposes
    // console.log("--- Messages sent to the ChatModel ---");
    // for (const msg of messages) {
    //     console.log(`  - ${msg._getType()}: ${msg.content}`);
    // }
    // console.log("--------------------------------------");

    // 2. Call ChatModel to get the response
    const response = await chatModel.invoke(messages);

    // response is an AIMessage; .content is typed as string | structured content blocks,
    // so cast it for the simple text replies we expect here
    return response.content as string;
}

// Simulate a few smart support copilot conversations
(async () => {
    // Scenario 1: User asks about product features
    const answer1 = await askCustomerService("What are your product's standout features?");
    console.log(`Support copilot: ${answer1}`);

    // Scenario 2: User asks about after-sales support
    const answer2 = await askCustomerService("If I run into a technical problem, how should I get help?");
    console.log(`Support copilot: ${answer2}`);

    // Scenario 3: User asks about the refund policy (even without a knowledge base, the model
    // will answer reasonably based on its general knowledge and the persona we set)
    const answer3 = await askCustomerService("What is your refund policy?");
    console.log(`Support copilot: ${answer3}`);

    console.log("\n--- Smart support conversation simulation finished ---");
})();
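In a real support UI you usually want the answer to appear token by token rather than all at once. Here is a Python sketch of a streaming variant built on .astream(), the standard async streaming method on LangChain chat models; stream_customer_service is a hypothetical helper name, reusing the chat_model and chat_cs_template objects defined above:

import asyncio

async def stream_customer_service(question: str) -> None:
    messages = chat_cs_template.format_messages(user_query=question)
    # Each chunk is an AIMessageChunk; printing .content as it arrives
    # produces the familiar "typing" effect in a chat UI.
    async for chunk in chat_model.astream(messages):
        print(chunk.content, end="", flush=True)
    print()

# asyncio.run(stream_customer_service("How do I reset my password?"))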