Issue 01 | Course Blueprint and Architecture Design: Where to Start Building an "Intelligent Customer Service Knowledge Base"?
🎯 Learning Objectives for this Issue
Hello, future AI architects! Welcome to the first stop of the "LangChain Full-Stack Masterclass". Today, we will get our hands dirty and build a foundational smart customer service assistant prototype from scratch. Don't worry, we will take it step by step, ensuring you can apply it right after learning!
In this issue, you will:
- Thoroughly understand the core value and basic architecture of LangChain: Figure out what this thing does and why we absolutely need to use it.
- Master how to connect and use Large Language Models (LLMs): Enable your code to converse with "brains" like OpenAI and Anthropic.
- Learn to use PromptTemplate to precisely guide model output: Give the LLM clear, structured instructions so it reliably follows them.
- Build your first LangChain-based smart customer service prototype: Personally craft a "little helper" capable of answering customer questions!
📖 Principle Analysis
What is LangChain? Why do we need it?
Imagine you are developing a complex AI application, such as our "Smart Customer Service Knowledge Base". It's not just simply asking an LLM a question; it might need to:
- Dynamically adjust the questioning method based on user roles and historical conversations.
- Retrieve information from external databases (like our knowledge base) and feed it to the LLM.
- Have the LLM not only answer questions but also execute actions (like querying orders or generating reports).
- Structure the LLM's output for convenient subsequent processing.
If you had to implement all these requirements from scratch using raw API calls, it would be a nightmare! The code would quickly become long and tangled, and maintaining it would be painful.
LangChain is here to save you!
It is a powerful framework designed to help developers build applications powered by Large Language Models (LLMs) more easily and efficiently. It provides a series of modular components and Chains, allowing you to dismantle, combine, and reuse the complex logic of various LLM applications just like building with Lego blocks.
To summarize in one sentence: LangChain upgrades the capabilities of LLMs from "single conversations" to "complex workflows", making your AI applications smarter, more controllable, and more scalable.
For our smart customer service knowledge base project, LangChain is an absolute godsend! It can help us:
- Uniformly integrate various LLMs: No need to worry about changing a ton of code when switching models.
- Flexibly construct Prompts: Dynamically generate the most effective prompts based on customer questions and knowledge base content.
- Chain complex logic: For example, first retrieve from the knowledge base, then have the LLM summarize, and finally provide the answer.
- Process outputs: Ensure the answers provided by the LLM conform to the expected format of our customer service assistant.
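Before touching any LangChain API, it helps to see that a "chain" is, at its core, nothing more than composed processing steps. Here is a minimal plain-Python sketch of the idea; the `fake_llm` function is a hypothetical stand-in for a real model call, not part of LangChain:

```python
# A "chain" is just composed steps: format prompt -> call model -> parse output.
# fake_llm is a made-up stand-in for a real LLM API call.

def format_prompt(question: str) -> str:
    # Wrap the raw question with a role setting, like a PromptTemplate would.
    return f"You are a customer service expert.\nCustomer Question: {question}"

def fake_llm(prompt: str) -> dict:
    # A real call would hit OpenAI/Anthropic; here we just echo the question back.
    return {"content": f"[model answer to] {prompt.splitlines()[-1]}"}

def parse_output(raw: dict) -> str:
    # Extract the plain-text answer from the raw response object, like an OutputParser.
    return raw["content"]

def mini_chain(question: str) -> str:
    # The "chain": each step's output feeds the next step's input.
    return parse_output(fake_llm(format_prompt(question)))

print(mini_chain("What is your return policy?"))
```

LangChain's value is that it gives you these steps as tested, swappable components instead of hand-rolled functions.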
A First Look at LangChain's Core Components
In the world of LangChain, there are several "star" components that we cannot do without when building any application:
- LLMs (Large Language Models): This is the core of the core, the "brain" of your AI application. It is responsible for understanding and generating text. LangChain provides a unified interface to call various large models like OpenAI, Anthropic, Google Gemini, etc.
- PromptTemplates: Having a brain is not enough; you also need to know how to "ask" to get good answers. PromptTemplate is the "art" of communicating with the LLM. It allows you to define reusable templates and dynamically insert variables, thereby generating structured, high-quality prompts.
- OutputParsers: What the LLM spits out is sometimes plain text, sometimes a JSON string, or even a piece of Python code. The role of the OutputParser is to parse these raw outputs into structured data that our programs can easily process.
- Chains: This is the most distinctive feature of LangChain. It allows you to link multiple components (such as PromptTemplate, LLM, OutputParser, or even other chains) together to form an end-to-end workflow. In this way, complex multi-step tasks can be clearly organized.
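To make the OutputParser idea concrete: suppose the model replies with a JSON string. A parser's job is to turn that raw text into structured data your program can use. A plain-Python sketch (the raw output and its fields here are invented purely for illustration; a real LangChain parser also handles malformed output):

```python
import json

# Hypothetical raw model output: a JSON string instead of free-form text.
raw_model_output = '{"answer": "You can return items within 30 days.", "confidence": 0.92}'

def parse_json_output(raw: str) -> dict:
    # This sketch assumes the model produced valid JSON.
    return json.loads(raw)

parsed = parse_json_output(raw_model_output)
print(parsed["answer"])  # structured field access instead of raw string handling
```

This is exactly the gap OutputParsers fill: downstream code gets dictionaries and objects, not strings to regex through.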
Initial Architecture of the Smart Customer Service Assistant (Mermaid Diagram)
To make it more intuitive to understand, let's first look at the core workflow of our first version of the smart customer service assistant. Although simple, it contains the essence of LangChain:
```mermaid
graph TD
    A["Customer Question"] --> B("PromptTemplate: Role Setting + Question Formatting")
    B --> C{"LLM: Large Language Model (OpenAI/Anthropic)"}
    C --> D("StrOutputParser: Extract Plain Text Answer")
    D --> E["Smart Customer Service Assistant Response"]
    style A fill:#f9f,stroke:#333,stroke-width:2px
    style E fill:#ccf,stroke:#333,stroke-width:2px
    linkStyle 0 stroke-width:2px,fill:none,stroke:green;
    linkStyle 1 stroke-width:2px,fill:none,stroke:blue;
    linkStyle 2 stroke-width:2px,fill:none,stroke:orange;
    linkStyle 3 stroke-width:2px,fill:none,stroke:purple;
```

Diagram Explanation:
- Customer Question (A): This is the input for our customer service assistant, the user's raw question.
- PromptTemplate (B): After receiving the customer question, LangChain uses our preset template to add "role settings" (e.g., "You are a customer service expert") and "instructions" to the LLM, packaging the raw question into a complete, clear prompt.
- LLM (C): Upon receiving the formatted prompt, the large language model begins to think, process, and generate an initial answer.
- StrOutputParser (D): The LLM's raw output might contain some extra information; this parser helps us extract the core plain text answer.
- Smart Customer Service Assistant Response (E): Finally, our customer service assistant returns the parsed plain text answer to the customer.
This workflow is concise and powerful, demonstrating how LangChain orchestrates different modules to work together to complete a task.
💻 Practical Code Drill (Specific Application in the Customer Service Project)
Alright, enough theory, let's roll up our sleeves and get to work! Now we will use LangChain to build the smart customer service assistant prototype illustrated above.
Environment Preparation
Before starting, please ensure you have a Python or Node.js environment installed.
Python Environment:
Create a virtual environment (recommended):

```shell
python -m venv .venv
source .venv/bin/activate   # macOS/Linux
.venv\Scripts\activate      # Windows
```

Install LangChain and related libraries:

```shell
pip install langchain langchain-openai langchain-core python-dotenv
```

- `langchain`: The main LangChain library (provides `LLMChain`).
- `langchain-openai`: Used to connect to OpenAI models.
- `langchain-core`: LangChain's core components, including PromptTemplate, OutputParser, etc.
- `python-dotenv`: Used to load environment variables and securely manage API Keys.

Obtain an API Key: Go to the OpenAI Official Website to get your OpenAI API Key. Create a `.env` file in the project root directory and add your Key:

```shell
OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```

Remember: do not commit the `.env` file to version control systems (like Git)!
TypeScript/JavaScript Environment:
Initialize the project:
```shell
mkdir langchain-copilot && cd langchain-copilot
npm init -y
```

Install LangChain and related libraries:

```shell
npm install langchain @langchain/openai @langchain/core dotenv
npm install -D typescript @types/node   # TypeScript compiler and Node.js type definitions
```

- `langchain`: The main LangChain library.
- `@langchain/openai`: Used to connect to OpenAI models.
- `@langchain/core`: LangChain's core components.
- `dotenv`: Used to load environment variables.

Configure TypeScript (if using TS): Create a `tsconfig.json` file:

```json
{
  "compilerOptions": {
    "target": "es2021",
    "module": "commonjs",
    "lib": ["es2021"],
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "outDir": "./dist"
  },
  "include": ["src/**/*.ts"],
  "exclude": ["node_modules"]
}
```

Add a `start` script in `package.json`. Note that our source file is `src/copilot_v0_1.ts`, so the compiled output is `dist/copilot_v0_1.js`:

```json
"scripts": {
  "start": "tsc && node dist/copilot_v0_1.js"
}
```

Obtain an API Key: Similarly, create a `.env` file in the project root directory and add your Key:

```shell
OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
Python Code Practice
```python
# filename: src/copilot_v0_1.py
from dotenv import load_dotenv

# Import LangChain core components
from langchain_openai import ChatOpenAI
from langchain_core.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)
from langchain_core.output_parsers import StrOutputParser
from langchain.chains import LLMChain  # The most basic chain, connecting LLM and Prompt

# Load environment variables from the .env file so the API Key is available
load_dotenv()

# 1. Initialize the LLM (Large Language Model)
# We use OpenAI's gpt-3.5-turbo model as the "brain" of the customer service assistant.
# The temperature parameter controls the model's creativity/randomness; 0.7 is a
# balanced value, neither too rigid nor too wild.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)

# 2. Define the Prompt Template
# This is the "soul" of our customer service assistant: it tells the model who it is
# and how to answer questions.
# ChatPromptTemplate is the natural fit for message-based chat models.
# SystemMessagePromptTemplate defines the assistant's role and code of conduct;
# HumanMessagePromptTemplate defines the format of the customer's question.
system_template = (
    "You are an experienced and helpful smart customer service assistant. Your task is to answer customer questions clearly and concisely, "
    "maintaining a friendly and professional tone. Please avoid providing personal advice or discussing sensitive topics. "
    "If a question is beyond your knowledge scope, politely inform the customer that you cannot answer."
)
human_template = "Customer Question: {question}"  # {question} is filled with the actual question at runtime

chat_prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(system_template),
    HumanMessagePromptTemplate.from_template(human_template),
])

# 3. Define the Output Parser
# The model's raw output is a message object; StrOutputParser converts it into a
# plain string that we can show directly to the user.
output_parser = StrOutputParser()

# 4. Combine everything into a Chain
# This is the core of LangChain: it links the LLM, PromptTemplate, and OutputParser
# we defined above. LLMChain's workflow:
# Receive input -> format into a prompt (chat_prompt) -> send to the LLM ->
# parse the LLM's output (output_parser) -> return the final result.
# (Note: newer LangChain releases deprecate LLMChain in favor of LCEL's
# `chat_prompt | llm | output_parser`; LLMChain still works and keeps the three
# roles explicit for this first version.)
initial_copilot_chain = LLMChain(
    llm=llm,
    prompt=chat_prompt,
    output_parser=output_parser,
    verbose=True,  # Print LangChain's internal execution details; very helpful for debugging!
)

# 5. Practical drill: call your smart customer service assistant
def ask_intelligent_copilot(question: str) -> str:
    """
    Simulate the smart customer service assistant answering a customer question.
    Passes the question into the LangChain chain and returns the assistant's answer.
    """
    print(f"\n--- Customer Question ---\n{question}")
    # invoke is the recommended entry point since LangChain 0.1; it takes a dict
    # whose keys must match the placeholder names in the PromptTemplate
    # (here, "question").
    response = initial_copilot_chain.invoke({"question": question})
    # LLMChain.invoke returns a dict; the parsed answer lives under the "text" key.
    answer = response["text"]
    print(f"\n--- Customer Service Assistant Reply ---\n{answer}")
    return answer

# Simulate a few customer questions to test our customer service assistant
if __name__ == "__main__":
    print("🚀 Smart Customer Service Assistant V0.1 Started!\n")

    # Scenario 1: product return policy
    ask_intelligent_copilot("What is your return policy? I am not satisfied with the product I purchased and want to return it.")

    # Scenario 2: LangChain itself (shows its general answering capability)
    ask_intelligent_copilot("What is LangChain? What is it used for?")

    # Scenario 3: a question beyond the knowledge scope
    ask_intelligent_copilot("Please tell me the origin and end of the universe.")

    # Scenario 4: software installation
    ask_intelligent_copilot("I downloaded your software, but I don't know how to install it. Can you give me detailed steps?")

    print("\n🎉 Smart Customer Service Assistant V0.1 Demo Ended!")
```
TypeScript Code Practice
```typescript
// filename: src/copilot_v0_1.ts
// Dependencies:
//   npm install langchain @langchain/openai @langchain/core dotenv
//   npm install -D typescript @types/node

import { config } from 'dotenv'; // dotenv configuration loader
import { ChatOpenAI } from '@langchain/openai'; // OpenAI chat model
import {
  ChatPromptTemplate,
  SystemMessagePromptTemplate,
  HumanMessagePromptTemplate,
} from '@langchain/core/prompts'; // Prompt templates
import { StringOutputParser } from '@langchain/core/output_parsers'; // String output parser
import { LLMChain } from 'langchain/chains'; // The most basic chain

// Load environment variables from the .env file so the API Key is available
config();

// 1. Initialize the LLM (Large Language Model)
// We use OpenAI's gpt-3.5-turbo model as the "brain" of the customer service assistant.
// modelName specifies the model; temperature controls creativity/randomness.
const llm = new ChatOpenAI({
  modelName: 'gpt-3.5-turbo',
  temperature: 0.7, // A slightly higher temperature makes answers more natural
});

// 2. Define the Prompt Template
// This is the "soul" of our customer service assistant, telling the model who it is
// and how to answer questions.
// SystemMessagePromptTemplate defines the assistant's role and code of conduct;
// HumanMessagePromptTemplate defines the format of the customer's question.
const systemTemplate =
  'You are an experienced and helpful smart customer service assistant. Your task is to answer customer questions clearly and concisely, ' +
  'maintaining a friendly and professional tone. Please avoid providing personal advice or discussing sensitive topics. ' +
  'If a question is beyond your knowledge scope, politely inform the customer that you cannot answer.';

const humanTemplate = 'Customer Question: {question}'; // {question} is a placeholder

const chatPrompt = ChatPromptTemplate.fromMessages([
  SystemMessagePromptTemplate.fromTemplate(systemTemplate),
  HumanMessagePromptTemplate.fromTemplate(humanTemplate),
]);

// 3. Define the Output Parser
// StringOutputParser converts the model's raw output into a plain string.
const outputParser = new StringOutputParser();

// 4. Combine into a Chain
// This is the core of LangChain, linking the LLM, PromptTemplate, and OutputParser.
// LLMChain's workflow: receive input -> format into a prompt -> send to the LLM ->
// parse the LLM's output -> return the result.
const initialCopilotChain = new LLMChain({
  llm: llm,
  prompt: chatPrompt,
  outputParser: outputParser,
  verbose: true, // Logs internal execution details to the console, just like in Python
});

// 5. Practical drill: call your smart customer service assistant
async function askIntelligentCopilot(question: string): Promise<string> {
  console.log(`\n--- Customer Question ---\n${question}`);
  // invoke is the recommended entry point since LangChain 0.1; it takes an object
  // whose keys must match the placeholder names in the PromptTemplate (here, "question").
  const response = await initialCopilotChain.invoke({ question: question });
  // LLMChain.invoke returns an object; the parsed answer lives under the "text" key.
  const answer = response.text as string;
  console.log(`\n--- Customer Service Assistant Reply ---\n${answer}`);
  return answer;
}

// Simulate a few customer questions to test our customer service assistant
(async () => {
  console.log('🚀 Smart Customer Service Assistant V0.1 Started!\n');

  // Scenario 1: product return policy
  await askIntelligentCopilot('What is your return policy? I am not satisfied with the product I purchased and want to return it.');

  // Scenario 2: LangChain itself
  await askIntelligentCopilot('What is LangChain? What is it used for?');

  // Scenario 3: a question beyond the knowledge scope
  await askIntelligentCopilot('Please tell me the origin and end of the universe.');

  // Scenario 4: software installation
  await askIntelligentCopilot("I downloaded your software, but I don't know how to install it. Can you give me detailed steps?");

  console.log('\n🎉 Smart Customer Service Assistant V0.1 Demo Ended!');
})();
```
Running the code:
- Python: `python src/copilot_v0_1.py`
- TypeScript: `npm start` (ensure you have configured `package.json` and `tsconfig.json` according to the steps above)
You will see the customer service assistant provide corresponding answers to your questions. Even for questions beyond its knowledge scope, it politely declines, and that is exactly the role constraints we set in the SystemMessagePromptTemplate at work!