Phase 5 / Ep 23: Plugin vs Skill - When to Use a Plugin?
🎯 Learning Objective: Clearly distinguish the use cases of Plugin and Skill.
1. Core Differences
```mermaid
graph LR
    subgraph Skill["🧩 Skill"]
        direction TB
        S1["📄 Markdown Declarative"]
        S2["💰 Consumes Tokens"]
        S3["🔄 Loaded per Conversation"]
        S4["📦 Injected into Context"]
    end
    subgraph Plugin["🔌 Plugin"]
        direction TB
        P1["⚙️ Independent Executable Process"]
        P2["🆓 Does Not Consume Tokens"]
        P3["🏃 Continuously Running"]
        P4["🔗 Message Pipeline Middleware"]
    end
```
2. Comparison Table
| Dimension | 🧩 Skill | 🔌 Plugin |
|---|---|---|
| Execution Mode | Injected into LLM Context | Independent Process |
| Token Consumption | ✅ On Every Load | ❌ Zero Consumption |
| Language | Markdown + Shell | Any Language (Node/Python/Go/Rust) |
| Applicable Scenarios | Capability Declaration, Knowledge Injection | Message Filtering, Data Processing, Middleware |
| Performance | Limited by Context Window | Limited Only by Process Resources |
| Development Difficulty | ⭐ Low | ⭐⭐⭐ Medium-High |
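To make the "independent process, zero Tokens" row concrete, here is a minimal sketch of a plugin-style middleware pipeline. The names (`Message`, `Middleware`, `Pipeline`) are illustrative assumptions, not the framework's real API; the point is that every handler runs as ordinary code outside the LLM context, so nothing here is billed as tokens.

```typescript
// Illustrative types only -- a real plugin framework defines its own.
type Message = { from: string; text: string };
type Next = () => Promise<void>;
type Middleware = (msg: Message, next: Next) => Promise<void>;

class Pipeline {
  private middlewares: Middleware[] = [];

  use(mw: Middleware): this {
    this.middlewares.push(mw);
    return this;
  }

  // Run middlewares in onion order: each one can act both
  // before and after it calls next().
  async dispatch(msg: Message): Promise<void> {
    const run = (i: number): Promise<void> => {
      if (i >= this.middlewares.length) return Promise.resolve();
      return this.middlewares[i](msg, () => run(i + 1));
    };
    await run(0);
  }
}

// A logging plugin: pure pipeline work, no LLM call, zero token cost.
const logs: string[] = [];
const loggingPlugin: Middleware = async (msg, next) => {
  logs.push(`in: ${msg.text}`);   // on the way in
  await next();                   // rest of the pipeline
  logs.push(`out: ${msg.text}`);  // on the way out
};

const pipeline = new Pipeline().use(loggingPlugin);
```

Each middleware wraps the rest of the chain: code before `next()` runs on the way in, code after it on the way out, which is the "Onion Model" the pipeline episodes build on.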
3. Selection Decision Tree
```mermaid
graph TD
    A["I need to extend Agent capabilities"] --> B{"Does the LLM need to understand instructions?"}
    B -->|"Yes"| C{"Large codebase / heavy computation?"}
    C -->|"No"| D["✅ Use Skill"]
    C -->|"Yes"| E["🔌 Use Plugin\n+\n🧩 Use Skill for declaration"]
    B -->|"No"| F{"Need to intercept/filter messages?"}
    F -->|"Yes"| G["🔌 Use Plugin"]
    F -->|"No"| H["🔌 Use Plugin"]
```
4. Typical Scenarios
| Scenario | Recommended Solution | Reason |
|---|---|---|
| Teach Agent to check weather | Skill | Lightweight instructions + API call |
| Log all messages | Plugin | No AI involvement needed, pure pipeline |
| Sensitive word filtering | Plugin | Message interception, zero Tokens |
| Code review | Skill | Requires AI to understand code |
| Message translation (Pre-processing) | Plugin | Translate before AI processing |
Next Episode Teaser: Ep 24, Plugin Pipeline Architecture - understanding message flow through the Plugin pipeline's "Onion Model".