Lesson 11: Synergy Strategies and Content Division

⏱ Est. reading time: 3 min · Updated on 5/7/2026

For the best development experience, we recommend enabling both the built-in auto memory and the third-party claude-mem. This lesson explains how to divide content between them so each is used for what it does best.

11.1 The Three-Layer Memory Model

We can visualize Claude's memory systems as a three-tier model:

  1. Hard Rules Layer (CLAUDE.md): Stores the "Project Constitution." Manually written and fully loaded at every startup. Best for: coding standards, core architecture diagrams.
  2. Curated Notes Layer (auto memory): Stores the LLM's subjective insights. Written autonomously by the LLM or prompted by the user; only the first 200 lines of its index are loaded at startup. Best for: personal preferences, core project decisions.
  3. Full Log Layer (claude-mem): Stores all operational details. Automatically captured by Hooks and retrieved via MCP on demand. Best for: debugging logs, specific fix details, cross-project experience.
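The loading behavior of the three layers can be sketched as a simple retrieval pipeline. This is purely illustrative: the function and data structures below are made up for the sketch, not the actual implementation of any of these tools.

```python
# Illustrative sketch of the three-layer loading model (not a real implementation).

def build_context(claude_md, memory_index, full_log, query=None):
    """Assemble startup context: hard rules in full, notes index truncated,
    full log searched only on demand."""
    context = [claude_md]                  # Layer 1: CLAUDE.md, always fully loaded
    context += memory_index[:200]          # Layer 2: first 200 index lines only
    if query:                              # Layer 3: retrieved on demand (mem-search)
        context += [line for line in full_log if query.lower() in line.lower()]
    return context


ctx = build_context(
    "Use 4-space indent.",
    ["prefers pnpm"],
    ["fixed deadlock by sorting IDs", "renamed module"],
    query="deadlock",
)
```

The key property the sketch captures: layers 1 and 2 cost context at every startup, while layer 3 costs nothing until a query pulls a matching detail back in.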

11.2 Content Allocation Matrix

| Content Type | CLAUDE.md | auto memory | claude-mem |
| --- | --- | --- | --- |
| Indentation/naming specs | ✅ Primary (human-set) | Alternative | ❌ Not recommended |
| Tool preference (e.g. pnpm) | Alternative | ✅ Primary (LLM learns) | (Auto-captured) |
| Deadlines & milestones | ❌ Changes too often | ✅ Best (project type) | (Auto-captured) |
| Specific bug-fix details | ❌ Too verbose | ❌ (Unless iconic) | ✅ Primary (auto) |
| Cross-project general tips | ❌ (Isolated) | ❌ (Isolated) | ✅ Only choice |
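The matrix can also be read as a routing rule: given a type of content, pick its primary store. The helper below just restates the table; the category strings and the fallback choice are illustrative, not part of any tool's API.

```python
# Illustrative routing of content types to their primary store,
# restating the allocation matrix above.
PRIMARY_STORE = {
    "indentation/naming specs": "CLAUDE.md",
    "tool preference": "auto memory",
    "deadlines & milestones": "auto memory",
    "specific bug-fix details": "claude-mem",
    "cross-project general tips": "claude-mem",
}

def primary_store(content_type):
    # Fallback to auto memory is an assumption for the sketch, not a rule
    # from the matrix.
    return PRIMARY_STORE.get(content_type.lower(), "auto memory")
```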

11.3 Synergy Case Study

Scenario: Learning a trick to prevent Postgres concurrent deadlocks.

  • auto memory: You tell the LLM: "Note that when handling concurrent updates in Postgres, we must sort IDs first to prevent deadlocks." The LLM records this as a memory entry. Next time you start this project, the LLM immediately receives this guiding principle.
  • claude-mem: The Hook automatically captures the summary of the entire conversation where the deadlock was solved. Six months later, in a different project, you encounter a similar issue and ask: "Have we dealt with deadlocks before?" The LLM uses mem-search to retrieve the detailed solution and even points to the specific file paths from the past.
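The trick itself is worth spelling out: two transactions that update rows {1, 2} and {2, 1} acquire row locks in opposite orders and can deadlock; sorting the IDs first makes every transaction lock rows in the same global order. A minimal sketch, with a hypothetical `accounts` table (the real fix is usually `SELECT … ORDER BY id FOR UPDATE` before updating):

```python
# Illustrative: update rows in a consistent global order to avoid deadlocks.
# Table and column names here are hypothetical.

def ordered_updates(ids, amount):
    """Build UPDATE statements sorted by id, so every transaction
    acquires row locks in the same order."""
    return [
        f"UPDATE accounts SET balance = balance - {amount} WHERE id = {i};"
        for i in sorted(ids)
    ]


stmts = ordered_updates([42, 7], 10)
```

Whatever IDs each transaction starts with, both now touch row 7 before row 42, so neither can wait on a lock the other already holds.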

This synergy ensures core principles are loaded into "conscious" context while providing a massive library of details for "subconscious" retrieval.