Part 01 | LangChain Fundamentals: LLMs, Prompts, and Chains

⏱ Est. reading time: 19 min · Updated on 5/7/2026

🎯 Learning Objectives for This Session

Hey there, future AI masters! Welcome to Part 01 of the LangChain Full-Stack Masterclass. I'm your instructor—a ten-year veteran in the AI space and a passionate tech evangelist. Starting today, we're going to build a production-ready Intelligent Support Knowledge Base using LangChain.

In this session, we will take the first step of our long journey. Our goals are clear, and the takeaways will be massive:

  1. Understand LangChain's core value and design philosophy: Figure out why we need LangChain and what pain points it solves for us.
  2. Master the fundamental building blocks of LangChain: Get to know LLMs, Prompts, and Chains, and understand how they work together synergistically.
  3. Build and run your first LLM Chain: Get your hands dirty by building a simple intelligent support prototype that can receive user questions and provide initial responses.
  4. Gain insights into the cornerstone of LLM application development: Lay a solid foundation for more complex support features later on, and know exactly where to go next.

📖 Under the Hood: Core Concepts

Alright everyone, take a deep breath and get ready to dive into the world of AI. Have you ever wondered how the "brain" of an intelligent support copilot actually works? How does it understand your questions and generate answers that sound incredibly human?

In the pre-LangChain era, building an application based on Large Language Models (LLMs) was like building a house barehanded in the wilderness. You had to manually handle interactions with the LLM, manage prompts, process conversation history, integrate external tools... It was exhausting!

LangChain, on the other hand, is like a Swiss Army Knife. It didn't reinvent the wheel; instead, it elegantly integrated all those scattered wheels, screws, and wrenches, allowing you to build complex LLM applications quickly and gracefully, just like snapping Lego blocks together. It provides a standardized set of interfaces and components, helping us abstract and modularize the common patterns and complex workflows of LLM app development.

The Backbone of LangChain: LLMs, Prompts, and Chains

In the LangChain universe, there are three core concepts that make up the "brain" and "nervous system" of our intelligent support copilot:

  1. LLMs (Large Language Models):

    • What is it? This is the "brain" of our intelligent support system—the core AI model capable of understanding language, generating text, and reasoning. Examples include OpenAI's GPT series, Google's Gemini, and Anthropic's Claude.
    • What role does it play in customer support? It provides the foundational ability to answer user questions, summarize conversations, and generate replies. It is responsible for "thinking" and "expressing."
    • How does LangChain handle it? LangChain provides a consistent interface to interact with various LLMs. Whether you are using OpenAI or Hugging Face, your code remains highly consistent, which drastically simplifies model switching and management.
  2. Prompts:

    • What is it? This is the "language" you use to communicate with the LLM. It consists of instructions, context, and examples. The kind of questions and background information you feed the LLM dictates the kind of answers it will produce.
    • What role does it play in customer support? It acts as the "script" that guides the LLM on how to play the role of a support agent and how to answer specific types of questions. For example, telling the LLM: "You are a professional support assistant, please answer user questions politely and concisely."
    • How does LangChain handle it? LangChain provides PromptTemplate, allowing you to define and manage prompts in a structured way. This makes it easy to dynamically insert variables and avoid hardcoding, which is crucial for support systems that need to generate different prompts based on varying scenarios.
  3. Chains:

    • What is it? This is one of LangChain's most central concepts. It wires together multiple components (like LLMs, Prompts, parsers, etc.) to form an end-to-end workflow. You can think of a Chain as a series of actions executed in a specific sequence.
    • What role does it play in customer support? It is the "nervous system" of our intelligent copilot, connecting user inputs, prompts, LLM processing, and the final output. A simple Chain might be "User Question -> Format Prompt -> LLM Generates Answer." A more complex Chain could be "User Question -> Retrieve from Knowledge Base -> LLM Generates Answer based on Knowledge Base."
    • How does LangChain handle it? LangChain offers various pre-defined Chain types (like LLMChain, SequentialChain, RetrievalQAChain, etc.) and also allows you to build custom Chains, greatly improving the efficiency and maintainability of building complex LLM apps. A minimal sketch of all three pieces snapping together follows this list.
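
To make these three building blocks concrete before the full walkthrough, here is a minimal sketch that wires a PromptTemplate and an LLM into a Chain using LCEL's pipe syntax, a modern alternative to LLMChain. It assumes langchain-openai is installed and OPENAI_API_KEY is set in your environment; the prompt wording and the sample question are placeholders.

from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Prompt (the instructions), LLM (the brain), and an output parser, wired into a Chain
prompt = PromptTemplate.from_template("You are a support assistant. Answer briefly: {question}")
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.0)
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"question": "How do I track my order?"}))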

The Foundation of an Intelligent Copilot: A Simple Q&A Chain

For our Intelligent Support Knowledge Base project, the very first and most basic capability it needs is—answering questions. Even if it's as simple as tossing the user's question directly to the LLM and letting it generate a reply. This is the task of our very first Chain.

Imagine this process:

  1. The user inputs a question, such as "What is my order status?"
  2. This question is sent into a Prompt Template, which tells the LLM: "You are a professional support agent, please answer this question: [User Question]".
  3. The formatted prompt is sent to the LLM.
  4. The LLM processes it and generates an answer, such as "Please provide your order number, and I will check it for you."
  5. This answer serves as the initial response from our intelligent copilot.

All of this can be handled effortlessly by LangChain's LLMChain.

Mermaid Diagram: The Copilot's First Chain

graph TD
    A["User Input: What is my order status?"] --> B{"PromptTemplate: You are a customer support agent, please answer: {question}"}
    B --> C["LLM (e.g., gpt-3.5-turbo)"]
    C --> D[LLMChain]
    D --> E["Output: Please provide your order number, and I will check it for you."]

    style A fill:#f9f,stroke:#333,stroke-width:2px
    style B fill:#bbf,stroke:#333,stroke-width:2px
    style C fill:#ccf,stroke:#333,stroke-width:2px
    style D fill:#ddf,stroke:#333,stroke-width:2px
    style E fill:#f9f,stroke:#333,stroke-width:2px
    linkStyle 0 stroke:#666,stroke-width:2px,fill:none,stroke-dasharray: 5 5;
    linkStyle 1 stroke:#666,stroke-width:2px,fill:none,stroke-dasharray: 5 5;
    linkStyle 2 stroke:#666,stroke-width:2px,fill:none,stroke-dasharray: 5 5;
    linkStyle 3 stroke:#666,stroke-width:2px,fill:none,stroke-dasharray: 5 5;

This diagram clearly illustrates how the "brain" of our first intelligent support copilot works. The user's question is wrapped by the PromptTemplate, sent to the LLM for processing, and finally coordinated by the LLMChain to generate a reply. Simple yet powerful, isn't it?

💻 Hands-On Coding (Practical Application in the Copilot Project)

Theory is great, but nothing beats writing actual code. Now, let's implement this simple Q&A chain together in our intelligent support project.

1. Environment Setup

First, ensure your Python environment is ready. We need to install the core LangChain library and the OpenAI integration library (since we will default to using OpenAI's models as examples in this course).

# Ensure you are using Python 3.8+
# Create and activate a virtual environment (recommended)
python -m venv venv
source venv/bin/activate # macOS/Linux
# venv\Scripts\activate # Windows

# Install LangChain, the OpenAI integration, and dotenv support
pip install langchain langchain-openai python-dotenv

Next, you will need an OpenAI API Key. For security reasons, we typically store sensitive information in a .env file.

Create a file named .env in your project's root directory and add your API Key:

# .env file
OPENAI_API_KEY="YOUR_OPENAI_API_KEY_HERE"

Important Note: Please make sure to replace YOUR_OPENAI_API_KEY_HERE with your actual API Key. Never hardcode your API Key into your code, and never commit it to a public repository!
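
To reduce the risk of an accidental commit, also tell Git to ignore the file. A one-line .gitignore entry is enough (we return to this in the pitfalls section):

# .gitignore
.env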

2. Building the Copilot's First Chain

Now, let's write the code to implement the first feature of our intelligent support copilot—receiving user questions and having the LLM provide an initial answer.

import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
from langchain.chains import LLMChain

# 1. Load environment variables
# This step ensures we can read OPENAI_API_KEY from the .env file
load_dotenv()

# 2. Initialize the LLM
# We will use OpenAI's GPT-3.5 Turbo model.
# The temperature parameter controls the randomness of the generated text. 0.0 means highly deterministic output, ideal for customer support scenarios.
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.0)

# 3. Define the PromptTemplate
# This sets the "persona" and "instructions" for our intelligent support copilot.
# {question} is a placeholder that will be replaced by the user's actual question at runtime.
prompt_template = PromptTemplate(
    input_variables=["question"],
    template=(
        "You are a professional, friendly customer support assistant. "
        "Please give a concise, accurate answer to the following user question:\n\n"
        "User question: {question}\n\nYour answer:"
    ),
)

# 4. Create the LLMChain
# Combine the LLM and PromptTemplate into a Chain.
# This is the core workflow of our support copilot: receive question -> format question -> submit to LLM -> get LLM's answer.
# verbose=True prints the Chain's detailed execution process to the console, which is very helpful for debugging.
llm_chain = LLMChain(llm=llm, prompt=prompt_template, verbose=True)

# 5. Simulate user interaction and run the Chain
print("--- Intelligent support copilot started ---")
while True:
    user_input = input("\nWhat is your question? (type 'exit' to quit): ")
    if user_input.lower() == 'exit':
        print("Thanks for using the copilot. Goodbye!")
        break

    # Call the Chain to process user input
    # The chain.invoke method is the recommended usage in LangChain 0.1.0+
    # It accepts a dictionary as input, where the keys correspond to the input_variables in prompt_template
    response = llm_chain.invoke({"question": user_input})

    # Print the intelligent support copilot's answer
    # For LLMChain, the output of invoke is a dictionary, where the 'text' key contains the final generated result from the LLM.
    print(f"\nSupport copilot: {response['text']}")

Python Code Breakdown:

  • load_dotenv(): Loads environment variables from the .env file, ensuring os.environ["OPENAI_API_KEY"] can correctly retrieve your key.
  • ChatOpenAI(model="gpt-3.5-turbo", temperature=0.0): Initializes a ChatOpenAI instance. model specifies the model to use, and setting temperature to 0.0 means we want the LLM to provide more deterministic and less creative answers, which is ideal for support scenarios.
  • PromptTemplate(...): Defines the instruction template we send to the LLM. input_variables=["question"] declares the variables in the template that can be dynamically replaced. template is the actual text, where {question} will be populated with the user's input.
  • LLMChain(llm=llm, prompt=prompt_template, verbose=True): This is the core of today's session! It binds our LLM and PromptTemplate together, creating a basic Q&A workflow. When this llm_chain is invoked, it takes the user input, inserts it into the prompt_template, sends the complete prompt to the llm, and finally returns the llm's generated result. verbose=True is a fantastic debugging aid here: it prints the Chain's internal execution details, including the exact Prompt sent to the LLM, which is crucial for understanding and troubleshooting.
  • llm_chain.invoke({"question": user_input}): This is the method to execute the Chain. It accepts a dictionary where the keys must match the input_variables defined in the PromptTemplate.
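
If you ever want to inspect the exact prompt the LLM will receive without spending an API call, PromptTemplate also exposes a format method. A quick sketch, using a hypothetical sample question:

# Preview the fully formatted prompt without calling the LLM
preview = prompt_template.format(question="Where is my order?")
print(preview)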

3. Demonstration of the Output

Run the Python code above, and you will see an interaction similar to this:

--- Intelligent support copilot started ---

What is your question? (type 'exit' to quit): What is my order number?

> Entering new LLMChain chain...
Prompt after formatting:
You are a professional, friendly customer support assistant. Please give a concise, accurate answer to the following user question:

User question: What is my order number?

Your answer:

> Finished chain.

Support copilot: I'm sorry, I can't look up your order number directly. To protect your account security and privacy, please log in to your account or contact a human agent to retrieve your order information.

What is your question? (type 'exit' to quit): What are the key features of your company's products?

> Entering new LLMChain chain...
Prompt after formatting:
You are a professional, friendly customer support assistant. Please give a concise, accurate answer to the following user question:

User question: What are the key features of your company's products?

Your answer:

> Finished chain.

Support copilot: Our products' key features include:
1.  **Innovative design:** Committed to delivering unique, user-friendly solutions.
2.  **High performance:** Ensuring the product runs stably and efficiently in all scenarios.
3.  **Quality customer service:** We provide comprehensive support to ensure the best user experience.
4.  **Sustainability:** We emphasize environmental and social responsibility in product development and operations.
Hope this information helps!

What is your question? (type 'exit' to quit): exit
Thanks for using the copilot. Goodbye!

See that? Our intelligent support copilot is already capable of basic Q&A! Thanks to verbose=True, you can also clearly see the exact Prompt the LLM received, which is vital for understanding the LLM's behavior.

Of course, this initial version of the copilot is still quite "dumb". It has no memory and cannot access external knowledge. It is merely providing generalized answers based on the prompt you gave it and the current question. But this is exactly the starting point from which we will gradually build a complex system!

Traps and Pitfalls Guide

As an experienced mentor, I must give you a heads-up to help you avoid some common pitfalls that beginners often fall into:

  1. API Key Leakage: Fatal Error!

    • Pitfall: Hardcoding OPENAI_API_KEY directly in your code or committing it to public repositories like GitHub.
    • Best Practice: Never do this! Using a .env file and the python-dotenv library is standard practice. In production environments, you also need to consider more secure key management solutions, such as a cloud provider's Key Management Service (KMS). Also, ensure your .gitignore file includes .env to prevent accidental commits.
  2. Prompt Engineering: Garbage In, Garbage Out (GIGO)

    • Pitfall: Writing a sloppy Prompt and then complaining that the LLM's answer isn't good enough.
    • Best Practice: LLMs are powerful, but they aren't mind readers. The clearer, more specific, and more structured your instructions are, the better it will perform. We only used a very simple Prompt in this session, but we will dive deep into the art of Prompt Engineering later. Remember, the Prompt is your bridge to communicating with the LLM and the key to shaping its behavior. Carefully observe the full Prompt printed by verbose=True and think about how to improve it.
  3. Understanding and Using the temperature Parameter

    • Pitfall: Not understanding what temperature does and setting it randomly, resulting in LLM answers that are either too erratic or too rigid.
    • Best Practice: temperature controls the "creativity" or "randomness" of the LLM's output. 0.0 means the LLM will try to give the most likely, deterministic answer, which is highly suitable for support and QA tasks that require accuracy and consistency. If you are writing a story or generating poetry, you might want a higher temperature (like 0.7-1.0). For customer support scenarios, we generally lean towards a lower temperature. A sketch after this list illustrates this side by side with pitfall 4.
  4. Choosing Between LLM and ChatModel

    • Pitfall: Confusing langchain.llms with langchain_openai.ChatOpenAI (or langchain.chat_models).
    • Best Practice: LangChain distinguishes between two types of language models:
      • LLM: Takes a string as input and returns a string as output (e.g., langchain_openai.OpenAI).
      • ChatModel: Takes a list of messages (like user messages, system messages, AI messages) as input and returns a message object as output (e.g., langchain_openai.ChatOpenAI). For models optimized for dialogue like GPT-3.5-turbo, using ChatOpenAI is highly recommended, as it handles conversational context and role information much better. While LLMChain can accept both types of models, the advantages of ChatModel will become increasingly apparent in real-world conversational applications.
  5. Debugging with verbose=True

    • Pitfall: The code isn't working, you don't know where the problem is, and you resort to blind guessing.
    • Best Practice: verbose=True is your best friend! It prints out detailed execution logs inside the Chain, including the exact Prompt sent to the LLM. This is incredibly valuable for understanding why the LLM gave a certain answer and for troubleshooting Prompt issues. Cultivate good debugging habits!
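
To illustrate pitfalls 3 and 4 side by side, here is a minimal sketch. It assumes the same .env setup as our main example; the persona and question strings are placeholders.

from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

load_dotenv()

# Pitfall 3: temperature=0.0 keeps answers near-deterministic (good for support);
# a higher value such as 0.9 would make them noticeably more varied.
support_llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.0)

# Pitfall 4: a ChatModel takes a list of role-tagged messages rather than a bare
# string, cleanly separating the system persona from the user's question.
messages = [
    SystemMessage(content="You are a professional, friendly customer support assistant."),
    HumanMessage(content="How do I reset my password?"),
]
print(support_llm.invoke(messages).content)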

📝 Session Summary

Congratulations! In this session, we successfully took the first step toward building an Intelligent Support Knowledge Base. We:

  • Deeply understood LangChain's core value, realizing how it acts as a "Swiss Army Knife" for LLM app development by simplifying complexity.
  • Mastered LangChain's three foundational pillars: LLMs (the brain), Prompts (the instructions), and Chains (the workflows).
  • Built and ran our first LLMChain hands-on, equipping our intelligent support copilot with basic Q&A capabilities.
  • Learned crucial best practices and pitfalls, clearing obstacles for our future development journey.

Right now, our intelligent copilot is still in its infancy and can only handle simple Q&A, but it already possesses the core ability to interact with an LLM. It's like we've installed a rudimentary "brain" into it.

However, simple Q&A isn't enough! A truly intelligent copilot needs memory (to remember previous conversations), it needs knowledge (to access company product docs and FAQs), and it needs to take action (like checking orders or creating support tickets). These are exactly the powerful capabilities we will unlock step by step!

In the next session, we will dive deep into how to add "memory" to our support copilot, enabling it to remember past interactions and provide a more coherent, personalized service. Stay tuned!