Episode 15 | Agent Core: Building an Autonomous Decision-Making Copilot (EN)
🎯 Learning Objectives for This Session
Hey there, future AI architects! Welcome to the first stop of the LangChain Full-Stack Masterclass. I'm your instructor—a ten-year veteran in the AI trenches and a passionate "evangelist" for education. Today, we're going to lay the foundation of your LangChain skyscraper: understanding and mastering LLMs and Prompt Templates.
By the end of this session, you will be able to:
- Grasp the core role and abstraction of LLMs in LangChain: Understand how LangChain lets you easily "tame" various large language models.
- Master the construction and application of Prompt Templates: Learn how to write highly efficient and flexible "magic spells" to make LLMs follow your commands.
- Build foundational Q&A capabilities for intelligent customer support: Combine LLMs with Prompt Templates to give our support copilot the ability to "speak."
- Master prompt iteration and optimization strategies: Learn how to improve the accuracy and professionalism of the copilot's responses through continuous trial and error.
Ready? Fasten your seatbelts, and let's go!
📖 Under the Hood
First, let's be clear: The LLM (Large Language Model) is the "brain" of our intelligent customer support. It's responsible for understanding user intent, digesting information, and generating coherent, logical responses. Without it, our support copilot is just an empty shell.
But here's the catch: the market is flooded with LLMs. There's OpenAI's GPT series, Google's Gemini, and countless open-source models on Hugging Face. Does this mean we have to rewrite our code every time we switch models? That's neither reasonable nor efficient!
Enter LangChain. It acts as both a "translator" and a "dispatcher," providing us with a unified interface to interact with various LLMs. Whether you're using GPT-3.5, GPT-4, or even your own fine-tuned model, LangChain lets you call them using a standardized approach. It encapsulates the underlying API call details of different models, allowing you to focus on business logic rather than model adaptation.
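To make this concrete, here is a minimal sketch of the unified interface. This is an illustration, not project code: it assumes the langchain-openai and langchain-anthropic packages are installed, the corresponding API keys are set in the environment, and the model names are just examples.
# A minimal sketch of LangChain's unified chat-model interface.
# Assumes: pip install langchain-openai langchain-anthropic, plus
# OPENAI_API_KEY / ANTHROPIC_API_KEY set in the environment.
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

def answer(llm, question: str) -> str:
    # The exact same .invoke() call works for any model behind the abstraction.
    return llm.invoke(question).content

gpt = ChatOpenAI(model="gpt-3.5-turbo")
claude = ChatAnthropic(model="claude-3-haiku-20240307")

# Swapping providers requires no change to the calling code:
print(answer(gpt, "Hello!"))
print(answer(claude, "Hello!"))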
So, how do we communicate with this "brain"? Can we just throw a sentence at it? Sure, but the results are often disappointing. It's like assigning a task to a new colleague by simply saying, "Get that done." They probably won't know what "that" is or what "done" looks like.
This is where the Prompt Template comes into play. It's not just a simple prompt; it's a reusable "blueprint for conversation." By defining a template, we tell the LLM its role (e.g., "You are a professional customer support assistant"), the task it needs to complete ("Please answer user questions politely and in detail"), and the input information ("The user's question is: {user_query}"). Through this structured approach, we significantly improve the LLM's accuracy in understanding tasks and the quality of its responses.
In short, the LLM is the "brain" of intelligent customer support, and the Prompt Template is the "language specification" we use to communicate with that brain. Combining the two is what makes our support copilot both smart and obedient.
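Before we get to the project code, here is a tiny sketch of the "blueprint" idea using a plain string PromptTemplate (illustrative only; the chat-style variant we will actually use appears in the practical drill below):
# A reusable "conversation blueprint": written once, filled in per question.
from langchain_core.prompts import PromptTemplate

template = PromptTemplate.from_template(
    "You are a professional customer support assistant. "
    "Please answer user questions politely and in detail.\n"
    "The user's question is: {user_query}"
)

# The same template serves every incoming question:
print(template.format(user_query="What is my order status?"))
# -> the fully assembled prompt string, ready to send to an LLM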
Let's look at a diagram to intuitively understand this core workflow:
graph TD
    subgraph Intelligent Support System
        A["User Input: 'What is my order status?'"]
        A --> B{PromptTemplate}
        B -- "Fill variable: {user_query}" --> C["Build Complete Prompt: 'You are a professional support assistant, please help based on the user's question. Question: What is my order status?'"]
        C --> D(LangChain LLM Abstraction Layer)
        D -- "API Call" --> E["Large Language Model (LLM Provider, e.g., OpenAI GPT-3.5)"]
        E -- "Return Raw Response" --> F(LangChain OutputParser Module)
        F --> G[Structure/Clean Response]
        G --> H["Support Copilot Response: 'Please provide your order number, and I will check it for you.'"]
    end
    style A fill:#FFDDC1,stroke:#FF8C00,stroke-width:2px
    style B fill:#D1FFC1,stroke:#008000,stroke-width:2px
    style C fill:#C1E0FF,stroke:#1E90FF,stroke-width:2px
    style D fill:#E0C1FF,stroke:#8A2BE2,stroke-width:2px
    style E fill:#FFC1E0,stroke:#FF1493,stroke-width:2px
    style F fill:#C1FFD1,stroke:#3CB371,stroke-width:2px
    style G fill:#FFD1C1,stroke:#FF6347,stroke-width:2px
    style H fill:#FFDDC1,stroke:#FF8C00,stroke-width:2px

As you can see from the diagram, the user's question is first received by the Prompt Template. Combined with the preset template, a complete, structured prompt is generated and sent to the underlying LLM via LangChain's LLM abstraction layer. After processing, the LLM returns a raw response, which LangChain can further parse and clean with an OutputParser, ultimately producing the support response we show the user. Throughout this process, LangChain plays the crucial bridging role.
💻 Practical Code Drill (Application in the Support Project)
Alright, theory is great, but nothing beats running actual code! Now, let's build the most foundational Q&A module for our "Intelligent Support Knowledge Base" project.
1. Environment Setup
First, ensure you have installed LangChain and the SDK of the LLM provider you intend to use. We'll use OpenAI as an example here.
pip install langchain langchain-openai python-dotenv  # For Python users
# Or
npm install langchain @langchain/openai @langchain/core dotenv  # For TypeScript users
2. Setting the API Key
Take note! Never hardcode your API Key directly into your code! It's neither safe nor professional. The best practice is to use environment variables.
Python:
In your .env file (or set directly as a system environment variable):
OPENAI_API_KEY="your OpenAI API Key"
Then load it in your code:
import os
from dotenv import load_dotenv
load_dotenv() # Load environment variables from the .env file
openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    raise ValueError("The OPENAI_API_KEY environment variable is not set!")
TypeScript:
In your .env file:
OPENAI_API_KEY="your OpenAI API Key"
Load it in your code (ensure dotenv is installed: npm install dotenv):
import * as dotenv from 'dotenv';
dotenv.config(); // Load environment variables from the .env file
const openaiApiKey = process.env.OPENAI_API_KEY;
if (!openaiApiKey) {
  throw new Error("The OPENAI_API_KEY environment variable is not set!");
}
// Pass it in when initializing the OpenAI client
// new OpenAI({ apiKey: openaiApiKey, ... });
3. LLM Initialization
Next, let's initialize our LLM. Here we choose OpenAI's gpt-3.5-turbo because it offers great value for money and fast response times, making it a perfect starting point for customer support scenarios.
Python Code:
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI # Import ChatOpenAI, highly recommended for conversational scenarios
from langchain.prompts import ChatPromptTemplate
from langchain.chains import LLMChain
# 1. Load environment variables
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    raise ValueError("The OPENAI_API_KEY environment variable is not set! Please configure it in the .env file.")
# 2. Initialize the LLM model
# The temperature parameter controls the model's creativity.
# 0 means the most conservative, deterministic response; 1 means the most creative, divergent response.
# Support scenarios usually require accurate and consistent responses, so we set a lower value, like 0.1-0.5.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.3, api_key=openai_api_key)
print("LLM 模型初始化成功!")
# Simulated run
# response = llm.invoke("你好,你是一个什么样的模型?")
# print(f"LLM 原始回复: {response.content}")
TypeScript Code:
import * as dotenv from 'dotenv';
import { ChatOpenAI } from '@langchain/openai'; // Import ChatOpenAI
import { ChatPromptTemplate } from '@langchain/core/prompts';
import { LLMChain } from 'langchain/chains';
// 1. Load environment variables
dotenv.config();
const openaiApiKey = process.env.OPENAI_API_KEY;
if (!openaiApiKey) {
  throw new Error("The OPENAI_API_KEY environment variable is not set! Please configure it in the .env file.");
}
// 2. Initialize the LLM model
// The temperature parameter controls the model's creativity.
// 0 means the most conservative, deterministic response; 1 means the most creative, divergent response.
// Support scenarios usually require accurate and consistent responses, so we set a lower value, like 0.1-0.5.
const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo",
  temperature: 0.3, // A lower temperature value ensures response stability and accuracy
  openAIApiKey: openaiApiKey,
});
console.log("LLM 模型初始化成功!");
// Simulated run
// async function runLLMDemo() {
// const response = await llm.invoke("你好,你是一个什么样的模型?");
// console.log(`LLM 原始回复: ${response.content}`);
// }
// runLLMDemo();
4. Creating the Prompt Template
Now, let's define a Prompt Template tailored for the intelligent support scenario. We will set the copilot's role and tell it how to handle user questions.
Python Code:
# ... (Continuing from the LLM initialization code above)
# 3. Create the Prompt Template
# We use ChatPromptTemplate because it's better suited for chat models like GPT.
# MessagesPlaceholder can be used later to introduce memory or external context; for now, a plain ("human", ...) message is enough.
prompt_template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a professional intelligent customer support assistant. Your duty is to answer users' questions about products and services politely, accurately, and in detail."),
        ("human", "User question: {user_query}"),
    ]
)
print("Prompt Template created successfully!")
# Simulated run
# formatted_prompt = prompt_template.format_messages(user_query="What is my order number?")
# print("Formatted prompt:")
# for message in formatted_prompt:
#     print(f"  {message.type}: {message.content}")
TypeScript Code:
// ... (Continuing from the LLM initialization code above)
// 3. Create the Prompt Template
// We use ChatPromptTemplate because it's better suited for chat models like GPT.
// MessagesPlaceholder can be used later to introduce memory or external context; for now, a plain ["human", ...] message is enough.
const promptTemplate = ChatPromptTemplate.fromMessages([
  ["system", "You are a professional intelligent customer support assistant. Your duty is to answer users' questions about products and services politely, accurately, and in detail."],
  ["human", "User question: {user_query}"],
]);
console.log("Prompt Template created successfully!");
// Simulated run
// async function runPromptDemo() {
//   const formattedPrompt = await promptTemplate.formatMessages({ user_query: "What is my order number?" });
//   console.log("Formatted prompt:");
//   formattedPrompt.forEach(message => {
//     console.log(`  ${message._getType()}: ${message.content}`);
//   });
// }
// runPromptDemo();
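The comments above mention MessagesPlaceholder without using it. As a quick preview (a Python sketch, not part of this session's project code), this is how it reserves a slot for conversation history:
# Preview only: MessagesPlaceholder holds a slot that a list of prior
# messages will fill at format time (we'll wire this to real memory later).
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import AIMessage, HumanMessage

prompt_with_history = ChatPromptTemplate.from_messages([
    ("system", "You are a professional intelligent customer support assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "User question: {user_query}"),
])

messages = prompt_with_history.format_messages(
    history=[HumanMessage(content="Hi!"), AIMessage(content="Hello! How can I help?")],
    user_query="What is my order number?",
)
for m in messages:
    print(f"{m.type}: {m.content}")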
5. Combining LLM and Prompt Template (Building a Chain)
One of LangChain's most powerful features is its ability to combine different components (LLM, Prompt Template, Memory, Tools, etc.) like Lego bricks to form a "Chain." Here, we use the foundational LLMChain to connect the LLM and the Prompt Template.
Python Code:
# ... (Continuing from the Prompt Template creation code above)
# 4. Combine LLM and Prompt Template (Build Chain)
# LLMChain takes the output of the Prompt Template as the input for the LLM.
qa_chain = LLMChain(llm=llm, prompt=prompt_template, verbose=True) # verbose=True allows you to see the execution details of the Chain
print("LLMChain 构建成功!")
# 5. Demonstration of the support copilot's basic Q&A capabilities
async def ask_customer_copilot(user_question: str):
"""
Function to ask the intelligent support copilot a question.
"""
print(f"\n--- 用户提问 ---")
print(f"用户: {user_question}")
print(f"--- 小助手思考中 ---")
# Run the Chain, passing the user's question as the user_query variable
response = await qa_chain.ainvoke({"user_query": user_question}) # Use ainvoke for asynchronous calling
print(f"--- 智能客服回复 ---")
# The result of LLMChain is usually found in the 'text' key
print(f"小助手: {response['text']}")
return response['text']
# Run demonstration
import asyncio
async def main():
await ask_customer_copilot("我的订单号是多少?")
await ask_customer_copilot("你们的产品有什么特色?")
await ask_customer_copilot("如何申请退货?")
if __name__ == "__main__":
asyncio.run(main())
Python Code Output Example:
LLM model initialized successfully!
Prompt Template created successfully!
LLMChain built successfully!
--- User Question ---
User: What is my order number?
--- Copilot thinking ---
> Entering new LLMChain chain...
Prompt after formatting:
System: You are a professional intelligent customer support assistant. Your duty is to answer users' questions about products and services politely, accurately, and in detail.
Human: User question: What is my order number?
> Finished chain.
--- Copilot Response ---
Copilot: Hello! To help me look up your order number, please provide your name, registered email, or phone number so that I can verify your account information. Thank you for your cooperation!
--- User Question ---
User: What are the key features of your products?
--- Copilot thinking ---
> Entering new LLMChain chain...
Prompt after formatting:
System: You are a professional intelligent customer support assistant. Your duty is to answer users' questions about products and services politely, accurately, and in detail.
Human: User question: What are the key features of your products?
> Finished chain.
--- Copilot Response ---
Copilot: Hello! Our products have the following key features:
1. **Innovative technology**: We use the latest [core technology, e.g., an AI-driven personalized recommendation algorithm] to keep the user experience ahead of the curve.
2. **User-friendly**: The interface is clean and intuitive, and the workflows are streamlined, so even first-time users can get started quickly.
3. **Stable and reliable**: After rigorous testing and optimization, the system runs stably and processes data efficiently, so you can use it worry-free.
4. **Quality service**: We provide 24/7 professional customer support, ready to answer your questions and solve problems at any time.
Which aspect of our products are you most interested in? I can provide more detailed information.
--- User Question ---
User: How do I request a return?
--- Copilot thinking ---
> Entering new LLMChain chain...
Prompt after formatting:
System: You are a professional intelligent customer support assistant. Your duty is to answer users' questions about products and services politely, accurately, and in detail.
Human: User question: How do I request a return?
> Finished chain.
--- Copilot Response ---
Copilot: Hello! The return process is as follows:
1. **Log in to your account**: First, log in to your [platform name/official site] account.
2. **Find the order**: Locate the order you want to return on the "My Orders" or "Order Management" page.
3. **Submit the return request**: Click the "Request Return" button on the order details page, then fill in the return reason and select the items and quantities as prompted.
4. **Wait for review**: After submission, our support team will review the request within [typical review time, e.g., 1-3 business days]. Please watch for in-app or email notifications.
5. **Ship the items back**: Once approved, you will receive the return address and shipping instructions. Please make sure the packaging is intact and all accessories are included.
6. **Refund**: After we receive and verify the returned items, your refund will be processed within [typical refund time, e.g., 7 business days].
If you run into any problems along the way, feel free to contact online support or call our service hotline.
TypeScript Code:
// ... (Continuing from the Prompt Template creation code above)
// 4. Combine LLM and Prompt Template (Build Chain)
// LLMChain takes the output of the Prompt Template as the input for the LLM.
const qaChain = new LLMChain({
  llm: llm,
  prompt: promptTemplate,
  verbose: true, // verbose: true lets you see the execution details of the Chain
});
console.log("LLMChain built successfully!");
// 5. Demonstration of the support copilot's basic Q&A capabilities
async function askCustomerCopilot(userQuestion: string): Promise<string> {
  // Ask the intelligent support copilot a question.
  console.log(`\n--- User Question ---`);
  console.log(`User: ${userQuestion}`);
  console.log(`--- Copilot thinking ---`);
  // Run the Chain, passing the user's question as the user_query variable.
  // invoke returns an object; the LLMChain result is found under the 'text' key.
  const response = await qaChain.invoke({ user_query: userQuestion });
  console.log(`--- Copilot Response ---`);
  console.log(`Copilot: ${response.text}`);
  return response.text;
}
// Run demonstration
async function main() {
  await askCustomerCopilot("What is my order number?");
  await askCustomerCopilot("What are the key features of your products?");
  await askCustomerCopilot("How do I request a return?");
}
// Note: require.main is unavailable when using ES module imports, so we simply call main() here.
main().catch(console.error);
TypeScript Code Output Example:
(Identical to the Python output example above — both implementations use the same prompt, model, and questions.)
See that? Through a simple combination of an LLM and a Prompt Template, our intelligent support copilot is already capable of generating decent responses based on user questions! Although it doesn't have memory yet and doesn't know specific product details, the foundational conversational capability is there.
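One version note before we move on: on recent LangChain releases, LLMChain is deprecated in favor of composing the same pieces with the LCEL pipe operator. A minimal sketch, reusing the llm and prompt_template defined above (Python shown; the TypeScript equivalent uses .pipe()):
# The same chain expressed in LCEL (LangChain Expression Language).
from langchain_core.output_parsers import StrOutputParser

lcel_chain = prompt_template | llm | StrOutputParser()

# invoke takes the same variable dict, but the result is a plain string
# rather than LLMChain's {"text": ...} dictionary.
answer = lcel_chain.invoke({"user_query": "How do I request a return?"})
print(answer)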
Pitfalls and Best Practices
As a senior developer, I've seen too many beginners stumble here. So, make sure you keep these "survival tips" handy!
Prompt Engineering is an art, but more importantly, a science!
- Pitfall: Thinking a Prompt is just casually writing a sentence.
- Best Practice: The Prompt is your "bible" for communicating with the LLM. A good Prompt pushes the LLM's capabilities to the limit; a bad Prompt can turn the LLM into an "artificial idiot."
- Be clear and specific: Avoid vague, open-ended questions. For example, instead of asking "How is the product?", ask "Please detail the highlights and applicable scenarios for Product A."
- Role assignment: Give the LLM a clear role, and it will play it better. Like our "professional intelligent customer support assistant."
- Explicit instructions: Tell it what to do and what not to do (e.g., "Do not fabricate information").
- Provide examples (Few-shot Prompting): If you have specific formatting requirements, provide a few Q&A examples directly; the LLM will learn much faster. We'll cover this in depth later, but see the quick sketch below.
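Here is that sketch: the simplest form of few-shot prompting just seeds the chat template with an example exchange. The example Q&A pair below is invented purely for illustration:
# A minimal few-shot sketch: the model sees one example exchange and
# imitates its tone and format when answering the real question.
from langchain_core.prompts import ChatPromptTemplate

few_shot_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a professional intelligent customer support assistant."),
    # Example exchange the model should imitate:
    ("human", "User question: What are your business hours?"),
    ("ai", "Hello! Our support team is available [e.g., 9:00-18:00, Mon-Fri]. Is there anything else I can help you with?"),
    # The real question is filled in at runtime:
    ("human", "User question: {user_query}"),
])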
The metaphysics and science of the temperature parameter
- Pitfall: Knowing nothing about the temperature parameter, or setting it randomly.
- Best Practice: temperature controls the "randomness" or "creativity" of the LLM's responses.
  - 0 approaches determinism: Responses are the most stable and repeatable, suitable for scenarios requiring high accuracy and consistency (like customer support or code generation).
  - 1 approaches randomness: Responses are the most creative and divergent, suitable for brainstorming or creative writing.
  - Support scenarios: It is recommended to set temperature between 0.1 and 0.5. Setting it too high might cause the copilot to "talk nonsense," while setting it too low might make it overly rigid. (See the sketch after this list.)
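Here is the sketch referenced above: an unscientific but instructive way to feel the difference (assumes OPENAI_API_KEY is set in the environment; your outputs will vary):
# Run the same question at two temperatures and compare.
from langchain_openai import ChatOpenAI

question = "Suggest a name for our customer support bot."
for temp in (0.0, 0.9):
    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=temp)
    print(f"temperature={temp}: {llm.invoke(question).content}")
# temperature=0.0 tends to return the same name on every run;
# temperature=0.9 varies noticeably between runs.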
API Key Security: A lesson written in blood!
- Pitfall: Hardcoding the API Key directly into the code or committing it to a version control system (like Git).
- Best Practice: Let me emphasize this again: Never hardcode your API Key! Using environment variables is the industry standard. Once your Key is leaked, at best