Memento: Building Long-Term Memory for LLM Coding Assistants


LLM coding assistants like Claude are powerful, but they have a fundamental limitation: every conversation starts from scratch. Each session is isolated, with no memory of previous interactions beyond what you manually provide in prompts.

After working with Claude Code intensively for a year, I built memento, a system that gives LLMs persistent, searchable, long-term memory across all sessions.

The Problem: Ephemeral Knowledge

When you work with an LLM on a project, you make decisions:

  • “We’re using argparse for CLI parsing, not manual argument handling”
  • “All services deploy to k8s, not systemd”
  • “Use uv for Python, not pip”
  • “Smoke tests are preferred over comprehensive unit test suites”

Next session, the LLM doesn’t remember any of this. You either:

  1. Re-explain everything (wastes time)
  2. Hope the LLM infers it from code (unreliable)
  3. Add it to a CLAUDE.md file (static, grows unwieldy)

What you really need is a living knowledge base that:

  • Persists across sessions
  • Evolves as you discover new patterns
  • Is searchable and queryable
  • Can be referenced explicitly or discovered automatically
  • Supports links between related concepts

Enter: The Zettelkasten Method

The solution comes from an unlikely source: a 20th-century German sociologist named Niklas Luhmann.

Luhmann developed the Zettelkasten method, a note-taking system where:

  • Each note is a single concept
  • Notes link to related notes
  • Knowledge emerges from the network of connections
  • Notes accumulate over time into a “second brain”

Luhmann wrote 70 books and 400 papers using this method. His secret? The Zettelkasten wasn’t just storage; it was a thinking partner that helped him discover connections and develop ideas.

Sound familiar? That’s exactly what we need for LLM workflows.

Memento: Zettelkasten for LLMs

Memento is my implementation of this idea, built on org-roam, an Emacs package that brings Zettelkasten to org-mode files.

The Architecture

┌─────────────┐
│  Org-Roam   │  ← Personal knowledge graph (local files)
│   Notes     │     - Emacs editing
└──────┬──────┘     - Git version control
       │
       │
┌──────▼──────┐
│   Memento   │  ← Clojure CLI tool
│     CLI     │     - Full-text search
└──────┬──────┘     - CRUD operations
       │
       │
┌──────▼──────┐
│ Claude Code │  ← Invokes CLI via skill
│   Skill     │     - Structured JSON output
└──────┬──────┘     - Escalating search strategy
       │
       │
┌──────▼──────────────┐
│  Claude / LLMs      │
│  All Sessions       │
└─────────────────────┘

Key Features

1. Public Notes Directory

Notes live in ~/.roam_public/. Private notes in ~/.roam/ are never accessed by mementoβ€”hard separation by directory, not tags.

#+TITLE: Homelab Deployment Guide
#+TAGS: PUBLIC deployment k8s

* Docker images must be built on backbone.lan
The Mac is arm64, k3s nodes are amd64...

* Always use Ansible for changes
Never SSH and run ad-hoc commands...

2. Full-Text Search

LLMs search notes via CLI:

# Search with previews (fast, low context)
~/bin/memento search "deployment patterns" --format json

# Search with full content (detailed)
~/bin/memento search "k8s" --include-content --limit 5 --format json

Returns relevant notes instantly. No need to remember exact note titles.
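Memento’s real search is implemented in Clojure, but the core idea (scan org files for a query, return titles with short previews) is easy to sketch. Here is a minimal Python sketch, assuming one note per `.org` file; the preview length and result shape are my illustrative assumptions, not memento’s actual output:

```python
from pathlib import Path

def search_notes(root: str, query: str, preview_chars: int = 120):
    """Naive full-text search over org files, returning title + preview.

    Illustrative sketch only; memento's real Clojure implementation
    and its ranking are not shown here.
    """
    results = []
    for path in Path(root).glob("*.org"):
        text = path.read_text(encoding="utf-8")
        idx = text.lower().find(query.lower())
        if idx == -1:
            continue
        # Take the first #+TITLE: line, falling back to the file name.
        title = next(
            (line.split(":", 1)[1].strip()
             for line in text.splitlines()
             if line.upper().startswith("#+TITLE:")),
            path.stem,
        )
        results.append({"title": title,
                        "file": path.name,
                        "preview": text[idx:idx + preview_chars]})
    return results
```

The preview-first shape mirrors the CLI’s two-step design: cheap previews by default, full content only when the LLM asks for it.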

3. Linked Knowledge

Notes link to each other using org-roam’s ID system:

* Pattern: Input Validation
See also [[id:abc123][Error Handling Patterns]]
Related: [[id:def456][Testing Patterns]]

Claude can follow these links to explore related conceptsβ€”just like you would.
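Because org-roam links follow a fixed `[[id:TARGET][DESCRIPTION]]` syntax, extracting the link graph from a note is a one-liner. A small Python sketch (the regex and return shape are my assumptions, not memento’s code):

```python
import re

# Matches org-roam ID links of the form [[id:TARGET][DESCRIPTION]].
LINK_RE = re.compile(r"\[\[id:([^\]]+)\]\[([^\]]+)\]\]")

def extract_links(org_text: str):
    """Return (target-id, description) pairs for every org-roam link."""
    return LINK_RE.findall(org_text)
```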

4. Version Control

Since notes are plain text files, everything lives in git:

  • Full history of every change
  • Rollback to any previous version
  • Diff notes over time
  • Branch and merge knowledge bases

5. Structured Metadata

Each note has rich metadata:

:PROPERTIES:
:ID:       b24c6cd5-c677-4792-893a-7e85856c4572
:END:
#+TITLE: Claude Global Context
#+CREATED: [2025-09-26 Thu]
#+MODIFIED: [2025-09-30 Mon 14:23]
#+TAGS: PUBLIC ai-context development

LLMs can query by creation date, modification date, tags, or any custom property.

How I Use It: Real Examples

The Global Context Note

Every Claude session starts by reading claude-global-context:

#+TITLE: Claude Global Context

* Testing Approach
- Write tests for new features
- Rely on smoke tests (trigger with =make test=)
- *Whenever tests pass after a change, commit with a descriptive message*

* Homelab Rules
- All services deploy to k8s, not systemd
- Docker images built on backbone.lan (amd64), not Mac (arm64)
- Use Ansible for all changes, never ad-hoc SSH

* Python Rules
- Always use =uv run= for Python execution
- Never use system Python directly

This note evolved from 50 lines to 500+ lines over months. New sessions automatically get all accumulated wisdom.

Project-Specific Guides

For complex projects, I maintain dedicated guides:

#+TITLE: Homelab Deployment Guide
#+TAGS: PUBLIC deployment k8s

* Building Docker Images
Always build on backbone.lan:

#+begin_src bash
ssh backbone.lan "cd /tmp && git clone ... && docker build ..."
#+end_src

* Deploying to k8s
Use Ansible, never kubectl apply directly:

#+begin_src bash
ansible-playbook playbook_deploy_k8s.yml --tags myservice
#+end_src

When Claude works on a homelab service, I say: “Read the note homelab-deployment-guide and follow those patterns.”

Session Handoffs

When a session ends, I create a checkpoint note:

#+TITLE: Zerg Refactor - Session 2025-12-28

* What We Did
- Added --robot flag for full automation
- Fixed barrier synchronization bug
- All tests pass

* What's Next
- [ ] Add retry logic for failed agents
- [ ] Improve TUI status display

* Key Decisions
- Robot mode auto-exits ALL agents, not just non-final phases
- State stored in ~/.zerg/{run-id}/

The next session reads this and continues seamlessly.

The Notes Index

To make knowledge discoverable, I maintain an index:

#+TITLE: Memento Notes Index

* Development & Technical Guides
- [[id:homelab-deployment][Homelab Deployment Guide]]
- [[id:hammerspoon-ui][Hammerspoon UI Style Guide]]
- [[id:smoke-tests][Smoke Test Paradigm]]

* Quick Lookup by Use Case
- Deploying services → [[id:homelab-deployment][Homelab Deployment Guide]]
- Building UIs → [[id:hammerspoon-ui][Hammerspoon UI Style Guide]]

Claude can explore this index to find relevant notes for any task.

The CLI Integration

Memento exposes operations via CLI:

# Read a specific note
~/bin/memento read claude-global-context --format json

# Search notes (with previews)
~/bin/memento search "deployment" --format json

# Search with full content
~/bin/memento search "k8s" --include-content --limit 5 --format json

# List notes
~/bin/memento list --limit 10 --format json

# Create a new note
~/bin/memento create "New Pattern" "Content here" --tags "PUBLIC,patterns" --format json

# Append to existing note
~/bin/memento append homelab-guide "New section content" --format json

# Get statistics
~/bin/memento stats --format json

Claude Code invokes these via a skill that wraps the CLI commands.

Real Impact: Before and After

Before Memento:

  • Each Claude session started from zero knowledge
  • I’d re-explain the same patterns weekly
  • Architectural decisions were forgotten between sessions
  • No way to share discoveries across different Claude instances

After Memento:

  • New sessions automatically load accumulated knowledge
  • Patterns discovered once are applied forever
  • Architectural decisions are documented and referenced
  • Multiple Claude sessions coordinate through shared notes

Concrete example:

Before: “Build Docker images on backbone.lan, not locally” → explained in 3 different sessions over 2 weeks

After: Add to homelab-deployment-guide once → every future session applies it automatically

The Zettelkasten Effect

Here’s what surprised me: just like Luhmann’s Zettelkasten became a thinking partner, my memento system has become more than a memory aid.

By linking notes together, patterns emerge:

  • “Oh, this deployment pattern is similar to the one in the other service”
  • “These three testing approaches all relate to the smoke test philosophy”
  • “This architectural decision has implications for that other project”

The LLM can follow these links and make connections I didn’t explicitly draw. It’s not just remembering; it’s thinking through the knowledge graph.

Implementation Details

Memento is written in Clojure, chosen for its excellent text processing and rapid prototyping. The CLI outputs JSON for structured consumption:

;; Core note structure (org-roam compatible)
{:id "homelab-deployment-guide"
 :title "Homelab Deployment Guide"
 :content "..."
 :tags ["PUBLIC" "deployment" "k8s"]
 :created "2025-09-26"
 :modified "2025-12-28"
 :links [{:target "docker-patterns" :type :reference}]}

The CLI handles all operations: search, read, create, update, append, archive. Claude Code invokes it via a skill that documents the escalating search strategy:

  1. Check global context first
  2. Search with previews
  3. Search with full content
  4. Load all notes (guaranteed fallback)
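The escalation itself is just “try the cheapest strategy first, fall back to the next.” A hedged Python sketch of that driver logic; the strategy callables stand in for the CLI subcommands shown earlier and are my assumptions, not memento’s code:

```python
def escalating_search(query, strategies):
    """Try each search strategy in order, returning the first non-empty result.

    `strategies` is an ordered list of callables, e.g. global-context lookup,
    preview search, full-content search, and a load-everything fallback.
    """
    for strategy in strategies:
        results = strategy(query)
        if results:
            return results
    return []
```

Because the final strategy loads every note, the search is guaranteed to terminate with an answer, while the cheap strategies keep context usage low in the common case.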

The Skill Layer

Claude Code uses a “skill” to interact with memento: a markdown file that documents the CLI commands and search strategy:

# Memento Skill

## Session Initialization
At the start of every session:
~/bin/memento read claude-global-context --format json

## Escalating Search
1. Check global context first
2. Search with previews: ~/bin/memento search "query"
3. Full content: ~/bin/memento search "query" --include-content
4. Load all: ~/bin/memento load-all --format json

This skill-based approach means the LLM knows exactly how to use memento without hardcoding behavior.

Key Lessons

After 6 months using memento:

Static files aren’t enough. A CLAUDE.md file works for small projects but becomes unwieldy at scale. You need searchable, linked, queryable knowledge.

Directory separation beats tags. I tried using PUBLIC tags for access control. Simpler: put public notes in ~/.roam_public/, never touch ~/.roam/.

Links create unexpected value. Following note links leads Claude to relevant context I forgot existed.

Knowledge accumulates exponentially. Each session adds a bit more knowledge. Over time, new sessions become dramatically more effective.

Version control is essential. Being able to diff notes over time, see what changed, and roll back mistakes is invaluable.

Skills beat MCP for simplicity. I originally planned an MCP server. A CLI + skill documentation is simpler to maintain and debug.

Future Directions

Ideas I’m exploring:

1. Automatic note generation: Have Claude automatically create notes summarizing important decisions during sessions

2. Similarity search: Use embeddings to find conceptually similar notes, not just keyword matches

3. Knowledge pruning: Identify stale or contradictory notes and flag them for review

4. Cross-project insights: Detect patterns that appear across multiple projects and surface them

Conclusion

LLMs are incredibly powerful, but without memory they’re like having amnesia: brilliant in the moment but unable to learn from experience.

Memento solves this by giving LLMs a persistent, searchable, networked knowledge base. It’s not just a better way to store notes; it’s a fundamentally different approach to human-AI collaboration.

The Zettelkasten method taught us that external memory systems can be thinking partners. Memento proves the same is true for LLMs.

Every session builds on previous sessions. Knowledge compounds. Patterns emerge. And over time, you create a shared intellectual foundation that makes every future interaction more productive.

That’s the power of giving an LLM long-term memory.


Memento is built on org-roam, which implements the Zettelkasten method.
