Ep 12: Birthing a Conversational Bot — Chat Trigger & AI Agent Node Deep Dive
AI Agent Internal Architecture
The AI Agent node isn't a single "call the API once" step — it runs an agentic decision loop:
graph TB
subgraph "AI Agent Internal Execution Loop"
Input[📥 User Input] --> Think[🧠 LLM Thinks]
Think --> Decision{Need a tool?}
Decision -->|"No → direct answer"| Answer[📤 Generate Reply]
Decision -->|"Yes → pick tool"| ToolCall[🔧 Call Tool]
ToolCall --> ToolResult[📋 Get Result]
ToolResult --> Think2[🧠 Think Again]
Think2 --> Decision2{Need more tools?}
Decision2 -->|"Yes"| ToolCall
Decision2 -->|"No"| Answer
end
style Think fill:#8b5cf6,stroke:#7c3aed,color:#fff
style ToolCall fill:#ff6d5b,stroke:#e55a4e,color:#fff

AI Agent ≠ single API call. It's a multi-turn inner loop — the LLM might decide it needs weather data, then realize it also needs exchange rates, and finally synthesize everything into one comprehensive answer.
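The loop in the diagram can be sketched in code. This is a minimal sketch of the think → tool → think-again cycle, not n8n's internals — the `llm` callback, `Tool` type, and `runAgent` driver are all hypothetical names for illustration:

```typescript
// One "think" step: the model either requests a tool or gives a final answer.
type LlmStep =
  | { kind: "tool_call"; tool: string; args: string }
  | { kind: "final"; answer: string };

type Tool = (args: string) => string;

// Hypothetical driver loop: think → maybe call a tool → feed result back → think again.
function runAgent(
  llm: (transcript: string[]) => LlmStep,
  tools: Record<string, Tool>,
  userInput: string,
  maxIterations = 5,
): string {
  const transcript = [`user: ${userInput}`];
  for (let i = 0; i < maxIterations; i++) {
    const step = llm(transcript);
    if (step.kind === "final") return step.answer;      // No tool needed → direct answer
    const result = tools[step.tool](step.args);          // Pick tool → call it
    transcript.push(`tool(${step.tool}): ${result}`);    // Result goes back into the context
  }
  return "Sorry, I could not finish within the iteration limit.";
}
```

With a mock `llm` that first requests weather, then exchange rates, the loop runs three model turns before synthesizing one answer — exactly the multi-tool pattern described above. The iteration cap mirrors the "Max Iterations" safety valve agent frameworks typically expose.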
1. Chat Trigger
Chat Trigger provides a built-in web chat UI as the entry point for conversational workflows.
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
// Chat Trigger output Item structure
// ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
{
"json": {
"chatInput": "What's the weather in Beijing?",
"sessionId": "sess_abc123", // Session ID for memory management
"action": "sendMessage"
}
}
// ⚠️ Enable authentication in production! Otherwise anyone can chat with your Agent
2. Complete Chatbot Workflow
graph TB
CT[💬 Chat Trigger] --> Agent[🤖 AI Agent]
subgraph "Agent Sub-nodes"
Agent --> Model[🧠 OpenAI gpt-4o]
Agent --> Mem[💾 Window Buffer Memory]
end
style Agent fill:#ff6d5b,stroke:#e55a4e,color:#fff

3. System Prompt Design (4 Layers)
graph TB
L1["🎭 Layer 1: Role Definition
Who are you? Who do you serve?"]
L2["📏 Layer 2: Behavior Rules
Length, tone, language"]
L3["🚫 Layer 3: Boundaries
What's forbidden?"]
L4["📋 Layer 4: Output Format
JSON? Markdown? Plain text?"]
L1 --> L2 --> L3 --> L4

4. Multi-Turn Conversation Sequence
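Before walking through the turns, note that the four prompt layers above simply concatenate into one system message. A sketch — all wording here is placeholder, not a recommended production prompt:

```typescript
// Each layer maps to one block of the final system prompt.
const systemPrompt = [
  // Layer 1 — role: who the assistant is, who it serves
  "You are a support assistant for an n8n user community.",
  // Layer 2 — behavior rules: length, tone, language
  "Answer in under 150 words, in a friendly tone, in the user's language.",
  // Layer 3 — boundaries: what is forbidden
  "Never reveal credentials or internal URLs; decline legal and medical advice.",
  // Layer 4 — output format
  "Reply in Markdown, starting with a one-line summary.",
].join("\n\n");
```

Keeping the layers as separate strings makes each one independently reviewable and testable — you can tighten Layer 3 without touching the tone rules in Layer 2.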
sequenceDiagram
participant User as 👤 User
participant CT as 💬 Chat Trigger
participant Agent as 🤖 AI Agent
participant Mem as 💾 Memory
participant LLM as 🧠 GPT-4o
Note over User,LLM: === Turn 1 ===
User->>CT: "Hello, what can you do?"
CT->>Agent: {chatInput: "Hello...", sessionId: "s1"}
Agent->>Mem: Read session "s1" history → empty
Agent->>LLM: [System Prompt] + [User message]
LLM-->>Agent: "Hi! I'm your AI assistant..."
Agent->>Mem: Save: [user Q, AI A]
Note over User,LLM: === Turn 2 ===
User->>CT: "How do I configure Webhooks?"
Agent->>Mem: Read history → [Turn 1 context]
Agent->>LLM: [System] + [History] + [New message]
Note over LLM: Model sees full context
LLM-->>Agent: "Here are the steps..."

Next Episode
Ep 13 dives deep into Memory node internals — Window Buffer, Token Buffer, and cross-session long-term memory management.
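As a preview of that topic: the Memory reads and writes in the sequence diagram above boil down to a per-session sliding window. A minimal sketch of the idea (my own simplification, not n8n's actual Window Buffer Memory implementation):

```typescript
type Turn = { user: string; ai: string };

// Keeps only the last `windowSize` turns for each sessionId.
class WindowBufferMemory {
  private sessions = new Map<string, Turn[]>();
  constructor(private windowSize = 5) {}

  // Turn 1 for a fresh sessionId returns an empty history.
  load(sessionId: string): Turn[] {
    return this.sessions.get(sessionId) ?? [];
  }

  // After each reply, append the [user Q, AI A] pair and trim the oldest turns.
  save(sessionId: string, turn: Turn): void {
    const history = [...this.load(sessionId), turn];
    this.sessions.set(sessionId, history.slice(-this.windowSize));
  }
}
```

Because histories are keyed by `sessionId`, two users chatting at the same time never see each other's context — which is why the Chat Trigger's `sessionId` field matters so much.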