ClawKit Reliability Toolkit

Give Your OpenClaw Agent Long-Term Memory (2026)

By default, OpenClaw agents start each session with a blank slate — they have no memory of previous conversations. With the elite-longterm-memory skill, your agent can store facts, decisions, and context in a vector database and recall them across sessions, days, or weeks later.

Why Persistent Memory Changes Everything

Without memory, your agent is like an employee with amnesia — you must re-explain everything every session. With persistent memory, the agent remembers your preferences, past decisions, learned patterns, and ongoing context. It gets smarter the more you use it.

Why Agents Need Persistent Memory

Cross-Session Context

Remember decisions from last week: "Don't use Provider X — had rate limit issues on March 3rd."

Learned Preferences

Store user preferences, coding style, writing tone, and workflow preferences that persist indefinitely.

Reduced Token Cost

Instead of re-explaining context each session (expensive), retrieve only relevant memories (cheap vector lookup).

Option 1: elite-longterm-memory (Recommended)

The elite-longterm-memory skill is a high-performance memory layer built for OpenClaw. It uses semantic search to find relevant memories automatically — no manual lookup required.

// clawhub.json — add elite-longterm-memory
{
  "mcpServers": {
    "elite-longterm-memory": {
      "command": "npx",
      "args": ["-y", "@openclaw/elite-longterm-memory"],
      "env": {
        "MEMORY_BACKEND": "chromadb",
        "CHROMADB_URL": "http://localhost:8000",
        "MEMORY_COLLECTION": "agent-memory",
        "MEMORY_MAX_RESULTS": "10",
        "MEMORY_SIMILARITY_THRESHOLD": "0.75"
      }
    }
  }
}

Start ChromaDB Locally

ChromaDB is an open-source vector database that runs locally — no cloud account needed:

# Install and start ChromaDB
pip install chromadb
chroma run --host localhost --port 8000

# Or via Docker (recommended for production)
docker run -p 8000:8000 chromadb/chroma
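Once the server is up, you can sanity-check the collection the skill will use directly from Python. A minimal sketch, assuming `pip install chromadb` and the `agent-memory` collection name from the config above; `store_and_recall` and `memory_id` are illustrative helpers, not part of either skill:

```python
import hashlib

def memory_id(text: str) -> str:
    """Deterministic id so re-storing the same fact upserts instead of duplicating."""
    return hashlib.sha1(text.encode()).hexdigest()[:16]

def store_and_recall(fact: str, question: str) -> str:
    """Write one memory to ChromaDB, then query it back by meaning."""
    import chromadb  # requires the server started above

    client = chromadb.HttpClient(host="localhost", port=8000)
    collection = client.get_or_create_collection("agent-memory")
    collection.upsert(ids=[memory_id(fact)], documents=[fact])
    hits = collection.query(query_texts=[question], n_results=1)
    return hits["documents"][0][0]
```

With the server reachable, `store_and_recall("Hetzner is cheapest", "which VPS is cheapest?")` should return the stored fact, confirming the skill's backend is wired up correctly.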

Option 2: chromadb-memory (Simpler Setup)

If you want a lighter-weight option, the chromadb-memory skill connects directly to ChromaDB with a simpler configuration. Good for personal projects and experimentation.

// clawhub.json — chromadb-memory (alternative)
{
  "mcpServers": {
    "chromadb-memory": {
      "command": "npx",
      "args": ["-y", "@openclaw/chromadb-memory"],
      "env": {
        "CHROMA_HOST": "localhost",
        "CHROMA_PORT": "8000",
        "COLLECTION_NAME": "my-agent-memories"
      }
    }
  }
}

How Memory Works in Practice

1. Session Start: The agent receives your mission and queries memory: "What do I know relevant to this task?"
2. Memory Retrieval: Vector search returns the top-N semantically similar memories. Only relevant context is injected.
3. Task Execution: The agent completes the task using both current context and retrieved memories.
4. Memory Storage: At session end, the agent summarizes key learnings and stores them as new memories.
5. Next Session: The new session retrieves these stored memories; the cycle continues and the agent keeps getting smarter.
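Steps 1 and 2 can be sketched in a few lines of pure Python: score each stored memory against the new mission and keep only the top-N above a similarity threshold. The real skills use learned embeddings (where a threshold like the 0.75 in the config above makes sense); a bag-of-words cosine with a lower toy threshold stands in here:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: word counts. Real skills use learned vector embeddings."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(mission: str, memories: list[str], threshold=0.1, top_n=3) -> list[str]:
    """Return the top-N memories most similar to the mission, above the threshold."""
    q = embed(mission)
    scored = sorted(((cosine(q, embed(m)), m) for m in memories), reverse=True)
    return [m for score, m in scored[:top_n] if score >= threshold]

memories = [
    "VPS research: Hetzner cheapest, DigitalOcean best SLA",
    "User prefers concise answers",
    "Provider X had rate limit issues on March 3rd",
]
print(recall("set up a VPS for the agent", memories))
# → ['VPS research: Hetzner cheapest, DigitalOcean best SLA']
```

Only the VPS memory clears the threshold, so only that context gets injected; the unrelated preferences stay out of the prompt, which is exactly where the token savings come from.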

Real-World Example: Cross-Session Memory

Session 1 (Monday)
You: "Research the top 5 VPS providers for our project"
Agent: "Completed research. Hetzner is cheapest, but has EU-only datacenter. DigitalOcean has best uptime SLA. Storing results in memory."
→ Memory saved: "VPS Research 2026-02-28: Hetzner cheapest, DO best SLA, Vultr good for US East Coast"
Session 2 (Wednesday — new session, no context)
You: "Set up our OpenClaw agent on a VPS"
Agent: "Recalling VPS research from Monday... Based on your earlier research, I recommend DigitalOcean for best SLA. Should I start the setup there?"

Comparison: elite-longterm-memory vs chromadb-memory

| Feature | elite-longterm-memory | chromadb-memory |
| --- | --- | --- |
| Setup complexity | Medium | Simple |
| Auto-summarization | ✓ Yes | ✗ Manual |
| Memory compression | ✓ Automatic | ✗ None |
| Semantic search quality | High | Medium |
| Vector backend options | ChromaDB, Pinecone, Weaviate | ChromaDB only |
| Best for | Production agents | Prototyping |

Cost Impact of Memory

Memory retrieval costs ~$0.0001 per lookup (vector search is cheap). But the token savings from not re-explaining context each session can reduce session cost by 30-60%. Use our Cost Estimator to model the savings.
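A back-of-envelope model shows where the 30-60% figure comes from. All numbers here are illustrative assumptions (token counts and price are made up for the sketch, not measured values):

```python
# Compare a session that re-pastes full background context every time
# against one that injects only retrieved memories.
FULL_CONTEXT_TOKENS = 8_000   # background re-explained without memory
RETRIEVED_TOKENS = 1_200      # top-N relevant memories instead
TASK_TOKENS = 4_000           # the mission itself, paid either way
PRICE_PER_1K = 0.003          # hypothetical input price, USD per 1K tokens

without = (TASK_TOKENS + FULL_CONTEXT_TOKENS) / 1000 * PRICE_PER_1K
with_mem = (TASK_TOKENS + RETRIEVED_TOKENS) / 1000 * PRICE_PER_1K
savings = 1 - with_mem / without
print(f"${without:.4f} -> ${with_mem:.4f} per session ({savings:.0%} saved)")
# → $0.0360 -> $0.0156 per session (57% saved)
```

Under these assumptions a session lands near the top of the quoted 30-60% range; the smaller your retrieved slice is relative to the context you used to paste in, the bigger the saving.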

Ready to Give Your Agent Memory?

Add the elite-longterm-memory skill via the Skill Registry, then configure it with the Config Wizard.

Need Help?

Try our automated tools to solve common issues instantly.