Session 01 | Course Blueprint & Architecture Design: Where Should You Start Building an Intelligent Support Copilot? (EN)

⏱ Est. reading time: 20 min · Updated on 5/7/2026

🎯 Learning Objectives for this Session

Hello, future AI architects! Welcome to the first stop of the LangChain Full-Stack Masterclass. Today, we're going to get our hands dirty and build a foundational intelligent support copilot prototype from scratch. Don't worry, we'll take it step-by-step to ensure you're ready to build by the end of this lesson!

In this session, you will:

  1. Thoroughly understand LangChain's core value and basic architecture: Figure out exactly what it does and why it's indispensable.
  2. Master connecting and using Large Language Models (LLMs): Enable your code to converse with "brains" like OpenAI and Anthropic.
  3. Learn to use PromptTemplates to precisely guide model output: Give LLMs clear "marching orders" so they behave exactly as you want.
  4. Build your first LangChain-based intelligent support prototype: Handcraft a capable little assistant that can answer customer questions!

📖 Core Concepts Explained

What is LangChain and Why Do We Need It?

Imagine you are developing a complex AI application, like our "Intelligent Support Knowledge Base". It's not just about asking an LLM a simple question; it might need to:

  • Dynamically adjust prompts based on user roles and conversation history.
  • Retrieve information from external databases (like our knowledge base) and feed it to the LLM.
  • Make the LLM not only answer questions but also execute actions (like querying orders or generating reports).
  • Structure the LLM's output for easier downstream processing.

If you had to write all these requirements from scratch using raw API calls, it would be a nightmare! The code would become bloated, convoluted, and a pain to maintain.

LangChain is here to save the day!

It is a powerful framework designed to help developers build applications powered by Large Language Models (LLMs) more easily and efficiently. It provides a suite of modular components and Chains, allowing you to break down, assemble, and reuse the complex logic of various LLM applications just like building with LEGO bricks.

In a nutshell: LangChain upgrades LLM capabilities from "single-turn conversations" to "complex workflows", making your AI applications smarter, more controllable, and highly scalable.

For our intelligent support knowledge base project, LangChain is an absolute game-changer! It helps us:

  1. Unify access to various LLMs: No need to worry about rewriting tons of code if you switch models (see the provider-swap sketch after this list).
  2. Flexibly construct Prompts: Dynamically generate the most effective prompts based on customer questions and knowledge base context.
  3. Chain complex logic: For example, first retrieve from the knowledge base, then have the LLM summarize, and finally output the answer.
  4. Process outputs: Ensure the LLM's responses match the expected format for our support copilot.
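To make point 1 concrete, here is a minimal, illustrative sketch of what "switching models" looks like in practice. It is not part of the project code; it assumes you have installed langchain-openai (and langchain-anthropic, if you try the swap) and set the corresponding API keys, and the model names are only examples:

from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic  # assumes: pip install langchain-anthropic

# The rest of your application code does not care which provider sits behind `llm`.
llm = ChatOpenAI(model="gpt-3.5-turbo")
# llm = ChatAnthropic(model="claude-3-haiku-20240307")  # swapping providers is a one-line change

response = llm.invoke("Hello! Briefly introduce yourself.")
print(response.content)  # chat models return a message object; .content holds the text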

A First Look at LangChain's Core Components

In the world of LangChain, there are a few "star" components that are indispensable for building any application:

  1. LLMs (Large Language Models): This is the heart of everything, the "brain" of your AI application. It is responsible for understanding and generating text. LangChain provides a unified interface to call various large models like OpenAI, Anthropic, Google Gemini, etc.
  2. PromptTemplates: Having a brain isn't enough; you need to know how to "ask" to get good answers. PromptTemplates are the "art" of communicating with LLMs. They allow you to define reusable templates and dynamically insert variables to generate structured, high-quality prompts.
  3. OutputParsers: What an LLM spits out might be plain text, a JSON string, or even a snippet of Python code. The role of an OutputParser is to parse these raw outputs into structured data that our programs can easily handle.
  4. Chains: This is LangChain's most distinctive feature. It allows you to link multiple components together (like a PromptTemplate, an LLM, an OutputParser, or even other chains) to form an end-to-end workflow. This way, complex multi-step tasks can be clearly organized; a minimal sketch of all four components working together follows this list.
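Before we apply these to the project, here is that sketch: a minimal, self-contained example of the four components composed into one chain. The prompt text and model are illustrative; it assumes langchain-openai and langchain-core are installed and OPENAI_API_KEY is set:

from langchain_openai import ChatOpenAI                    # LLM
from langchain_core.prompts import ChatPromptTemplate      # PromptTemplate
from langchain_core.output_parsers import StrOutputParser  # OutputParser

prompt = ChatPromptTemplate.from_template("Summarize in one sentence: {text}")
llm = ChatOpenAI(model="gpt-3.5-turbo")
parser = StrOutputParser()

# Chain: the | operator pipes prompt -> LLM -> parser (LangChain Expression Language)
chain = prompt | llm | parser
print(chain.invoke({"text": "LangChain is a framework for building LLM applications."}))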

Initial Architecture of the Intelligent Support Copilot (Mermaid Diagram)

To make things more intuitive, let's first look at the core workflow of our version 1.0 intelligent support copilot. Although simple, it captures the essence of LangChain:

graph TD
    A[Customer Question] --> B(PromptTemplate: Role Setting + Question Formatting)
    B --> C{LLM: Large Language Model OpenAI/Anthropic}
    C --> D(StrOutputParser: Extract Plain Text Answer)
    D --> E[Support Copilot Response]

    style A fill:#f9f,stroke:#333,stroke-width:2px
    style E fill:#ccf,stroke:#333,stroke-width:2px
    linkStyle 0 stroke-width:2px,fill:none,stroke:green;
    linkStyle 1 stroke-width:2px,fill:none,stroke:blue;
    linkStyle 2 stroke-width:2px,fill:none,stroke:orange;
    linkStyle 3 stroke-width:2px,fill:none,stroke:purple;

Diagram Explanation:

  • Customer Question (A): This is the input to our support copilot, the user's raw question.
  • PromptTemplate (B): After receiving the customer's question, LangChain uses our preset template to add a "role setting" (e.g., "You are a customer support expert") and "instructions" for the LLM, packaging the raw question into a complete, clear prompt.
  • LLM (C): Upon receiving the formatted prompt, the large language model begins to think, process, and generate an initial answer.
  • StrOutputParser (D): The LLM's raw output might contain extra metadata. This parser helps us extract the core plain text answer.
  • Support Copilot Response (E): Finally, our support copilot returns the parsed plain text answer to the customer.

This workflow is concise yet powerful. It demonstrates how LangChain orchestrates different modules to work together and accomplish a task.
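To make step B concrete, the following sketch shows what that "packaging" produces. The wording is illustrative, and it runs without an API key because no model is actually called:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a customer support expert."),  # role setting
    ("human", "Customer question: {question}"),        # formatted raw question
])

# format_messages fills the placeholder and returns the message list the LLM would receive:
# a SystemMessage with the role setting, followed by a HumanMessage with the question.
print(prompt.format_messages(question="What is your return policy?"))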

💻 Hands-On Code Practice (Practical Application in the Copilot Project)

Alright, enough theory—let's roll up our sleeves and get to work! We are now going to use LangChain to build the intelligent support copilot prototype illustrated above.

Environment Setup

Before we begin, make sure you have a Python or Node.js environment installed; you only need the one that matches the language track you plan to follow.

Python Environment:

  1. Create a virtual environment (Recommended):

    python -m venv .venv
    source .venv/bin/activate # macOS/Linux
    .venv\Scripts\activate # Windows
    
  2. Install LangChain and related libraries:

    pip install langchain-openai langchain-core python-dotenv
    
    • langchain-openai: Used to connect to OpenAI models.
    • langchain-core: LangChain's core components, including PromptTemplate, OutputParser, etc.
    • python-dotenv: Used to load environment variables and securely manage your API Key.
  3. Obtain an API Key: Head over to the OpenAI website to get your OpenAI API Key. Create a .env file in your project's root directory and add your Key:

    OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    

    Remember: NEVER commit your .env file to version control systems (like Git)! A quick way to verify that your key actually loads is sketched just below.
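As a quick sanity check (an optional snippet, not part of the project code; it assumes python-dotenv is installed), you can confirm the key loads before writing any LangChain code:

    from dotenv import load_dotenv
    import os

    load_dotenv()  # reads .env from the current working directory
    # Only confirm that the key is present; never print the key itself
    print("OPENAI_API_KEY loaded:", bool(os.getenv("OPENAI_API_KEY")))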

TypeScript/JavaScript Environment:

  1. Initialize the project:

    mkdir langchain-copilot && cd langchain-copilot
    npm init -y
    
  2. Install LangChain and related libraries:

    npm install langchain @langchain/openai @langchain/core dotenv
    npm install -D typescript @types/node # TypeScript compiler and Node.js type definitions
    
    • langchain: The main LangChain library.
    • @langchain/openai: Used to connect to OpenAI models.
    • @langchain/core: LangChain's core components.
    • dotenv: Used to load environment variables.
  3. Configure TypeScript (if using TS): Create a tsconfig.json file:

    {
      "compilerOptions": {
        "target": "es2021",
        "module": "commonjs",
        "lib": ["es2021"],
        "strict": true,
        "esModuleInterop": true,
        "skipLibCheck": true,
        "forceConsistentCasingInFileNames": true,
        "outDir": "./dist"
      },
      "include": ["src/**/*.ts"],
      "exclude": ["node_modules"]
    }
    

    Add a start script to your package.json:

    "scripts": {
      "start": "tsc && node dist/index.js"
    }
    
  4. Obtain an API Key: Similarly, create a .env file in your project's root directory and add your Key:

    OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    

Python Hands-On Coding

# filename: src/copilot_v0_1.py
from dotenv import load_dotenv
import os

# Import LangChain core components
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Load environment variables from the .env file to ensure the API Key is available
load_dotenv()

# 1. Initialize the LLM (Large Language Model)
# We use OpenAI's gpt-3.5-turbo model as the 'brain' of our support copilot.
# The temperature parameter controls the model's creativity/randomness. 0.7 is a balanced value, neither too rigid nor too wild.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)

# 2. Define the Prompt Template
# This is the 'soul' of our support copilot. It tells the model who it is and how to answer questions.
# We use ChatPromptTemplate, which is better suited for interacting with message-based chat models.
# SystemMessagePromptTemplate defines the copilot's role and behavioral guidelines.
# HumanMessagePromptTemplate defines the format of the customer's question.
system_template = (
    "You are an experienced, helpful intelligent customer support assistant. "
    "Your job is to answer customer questions clearly and concisely, in a friendly, professional tone. "
    "Avoid giving personal advice or discussing sensitive topics. "
    "If a question is outside your knowledge, politely tell the customer that you cannot answer it."
)
human_template = "Customer question: {question}"  # {question} is a placeholder filled with the actual question at runtime

chat_prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(system_template),
    HumanMessagePromptTemplate.from_template(human_template),
])

# 3. Define the Output Parser
# The model's raw output might be a complex object. We need to parse it into plain text for easy display to the user.
# StrOutputParser converts the model's output directly into a string.
output_parser = StrOutputParser()

# 4. Combine into a Chain
# This is the core of LangChain: piping together the PromptTemplate, LLM, and OutputParser defined above.
# The | operator (LangChain Expression Language, LCEL) builds the workflow:
# Receive input -> Format into Prompt (via chat_prompt) -> Send to LLM -> Parse LLM output (via output_parser) -> Return final result.
initial_copilot_chain = chat_prompt | llm | output_parser
# Tip: to see LangChain's internal execution details in the console (very helpful for debugging!),
# call set_debug(True) from langchain_core.globals.

# 5. Hands-on Practice: Call your intelligent support copilot
def ask_intelligent_copilot(question: str) -> str:
    """
    Simulate the intelligent support copilot answering a customer question.
    Pass the customer question into the LangChain chain and return the copilot's answer.
    """
    print(f"\n--- 客户提问 ---\n{question}")
    # The invoke method is the recommended way to execute a chain in LangChain 0.1.x and later.
    # It accepts a dictionary as input. The dictionary keys must match the placeholder names in the PromptTemplate (here, "question").
    response = initial_copilot_chain.invoke({"question": question})
    print(f"\n--- 客服助手回复 ---\n{response}")
    return response

# Simulate a few customer questions in the main program to test our support copilot
if __name__ == "__main__":
    print("🚀 智能客服助手 V0.1 启动!\n")

    # Scenario 1: Question about product return policy
    ask_intelligent_copilot("你们的退货政策是什么?我购买的商品不满意想退货。")

    # Scenario 2: Question about LangChain itself (as an example to show its general answering capability)
    ask_intelligent_copilot("LangChain 是什么?它有什么用?")

    # Scenario 3: Question beyond its knowledge scope
    ask_intelligent_copilot("请告诉我宇宙的起源和终结。")

    # Scenario 4: Question about software installation
    ask_intelligent_copilot("我下载了你们的软件,但是不知道怎么安装,能给个详细步骤吗?")

    print("\n🎉 智能客服助手 V0.1 演示结束!")

TypeScript Hands-On Coding

// filename: src/copilot_v0_1.ts
// Dependencies to install:
// npm install langchain @langchain/openai @langchain/core dotenv
// npm install -D typescript @types/node

import { config } from 'dotenv'; // Import dotenv configuration
import { ChatOpenAI } from '@langchain/openai'; // Import OpenAI chat model
import { ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate } from '@langchain/core/prompts'; // Import Prompt templates
import { StringOutputParser } from '@langchain/core/output_parsers'; // Import string output parser

// Load environment variables from the .env file to ensure the API Key is available
config();

// 1. Initialize the LLM (Large Language Model)
// We use OpenAI's gpt-3.5-turbo model as the 'brain' of our support copilot.
// The model option specifies the model name, temperature controls creativity/randomness.
const llm = new ChatOpenAI({
    model: "gpt-3.5-turbo",
    temperature: 0.7, // Slightly higher temperature for more natural answers
});

// 2. Define the Prompt Template
// This is the 'soul' of our support copilot. It tells the model who it is and how to answer questions.
// SystemMessagePromptTemplate defines the copilot's role and behavioral guidelines.
// HumanMessagePromptTemplate defines the format of the customer's question.
const systemTemplate = (
    "You are an experienced, helpful intelligent customer support assistant. " +
    "Your job is to answer customer questions clearly and concisely, in a friendly, professional tone. " +
    "Avoid giving personal advice or discussing sensitive topics. " +
    "If a question is outside your knowledge, politely tell the customer that you cannot answer it."
);
const humanTemplate = "Customer question: {question}"; // {question} is a placeholder

const chatPrompt = ChatPromptTemplate.fromMessages([
    SystemMessagePromptTemplate.fromTemplate(systemTemplate),
    HumanMessagePromptTemplate.fromTemplate(humanTemplate),
]);

// 3. Define the Output Parser
// The model's raw output might be a complex object. We need to parse it into plain text.
// StringOutputParser converts the model's output directly into a string.
const outputParser = new StringOutputParser();

// 4. Combine into a Chain
// This is the core of LangChain: piping together the PromptTemplate, LLM, and OutputParser defined above.
// The .pipe() method (LangChain Expression Language, LCEL) builds the workflow:
// Receive input -> Format into Prompt -> Send to LLM -> Parse LLM output -> Return result.
const initialCopilotChain = chatPrompt.pipe(llm).pipe(outputParser);
// Tip: constructing ChatOpenAI with { verbose: true } logs model calls to the console,
// which is very helpful for debugging.

// 5. Hands-on Practice: Call your intelligent support copilot
async function askIntelligentCopilot(question: string): Promise<string> {
    console.log(`\n--- Customer Question ---\n${question}`);
    // invoke is the standard way to execute an LCEL chain. It accepts an object as input,
    // whose keys must match the placeholder names in the PromptTemplate (here, "question").
    // Because the chain ends in StringOutputParser, the result is already a plain string.
    const answer = await initialCopilotChain.invoke({ question });

    console.log(`\n--- Copilot Response ---\n${answer}`);
    return answer;
}

// Simulate a few customer questions in the main program to test our support copilot
(async () => {
    console.log("🚀 智能客服助手 V0.1 启动!\n");

    // Scenario 1: Question about product return policy
    await askIntelligentCopilot("你们的退货政策是什么?我购买的商品不满意想退货。");

    // Scenario 2: Question about LangChain itself
    await askIntelligentCopilot("LangChain 是什么?它有什么用?");

    // Scenario 3: Question beyond its knowledge scope
    await askIntelligentCopilot("请告诉我宇宙的起源和终结。");

    // Scenario 4: Question about software installation
    await askIntelligentCopilot("我下载了你们的软件,但是不知道怎么安装,能给个详细步骤吗?");

    console.log("\n🎉 智能客服助手 V0.1 演示结束!");
})();

Running the code:

  • Python: python src/copilot_v0_1.py
  • TypeScript: npm start (Make sure you have configured package.json and tsconfig.json according to the steps above)

You will see the support copilot provide corresponding answers based on your questions. Even for questions beyond its knowledge scope, it will politely decline to answer—this is exactly the magic of the SystemMessagePromptTemplate at work!
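If you want to verify that the system message really is what drives this behavior, try a quick experiment. The sketch below is a hypothetical variation on the Python chain above (it reuses the llm and output_parser objects already defined): rebuild the chain without the system message and ask the same out-of-scope question.

from langchain_core.prompts import ChatPromptTemplate

# Same LLM and parser as before, but no system message, i.e. no role setting or guardrails
bare_prompt = ChatPromptTemplate.from_messages([("human", "{question}")])
bare_chain = bare_prompt | llm | output_parser

# Without the guardrails, the model will typically just attempt an answer
print(bare_chain.invoke({"question": "Please tell me the origin and fate of the universe."}))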

Pitfalls and How to Avoid Them