chromadb-memory
Long-term memory via ChromaDB with local Ollama embeddings. Auto-recall injects relevant context every turn. No cloud APIs required — fully self-hosted.
Why use this skill?
Enhance your OpenClaw agent with ChromaDB Memory for self-hosted, long-term semantic recall using local Ollama embeddings. Perfect for persistent context.
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/msensintaffar/chromadb-memory
What This Skill Does
The ChromaDB Memory skill provides your OpenClaw agent with long-term semantic memory, utilizing ChromaDB for vector storage and local Ollama embeddings for natural language understanding. This means your agent can "remember" past conversations and information without needing to send data to cloud APIs, ensuring complete self-hosting and privacy. The core feature is "auto-recall," which automatically searches your ChromaDB collection for relevant memories based on the current user message and injects them into the agent's context before each turn. This allows the agent to maintain context over extended interactions and access stored knowledge dynamically. For more granular control, a chromadb_search tool is available for manual semantic searches over your collection.
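The auto-recall flow described above can be sketched roughly as follows. This is an illustrative TypeScript sketch, not the skill's actual index.ts: the function names, the score-from-distance conversion, and the injected context format are assumptions; the HTTP calls follow Ollama's /api/embeddings and ChromaDB's v1 REST API.

```typescript
// Sketch of the auto-recall flow (assumed plugin internals): embed the
// user message with Ollama, query ChromaDB, then format the hits into a
// context block injected before the agent's turn.

interface RecallConfig {
  chromaUrl: string;        // e.g. "http://localhost:8100"
  ollamaUrl: string;        // e.g. "http://localhost:11434"
  collectionId: string;     // Chroma collection id
  embeddingModel: string;   // e.g. "nomic-embed-text"
  autoRecallResults: number;
  minScore: number;
}

// Ollama's embeddings endpoint: POST /api/embeddings { model, prompt }.
async function embed(cfg: RecallConfig, text: string): Promise<number[]> {
  const res = await fetch(`${cfg.ollamaUrl}/api/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: cfg.embeddingModel, prompt: text }),
  });
  return (await res.json()).embedding;
}

// Chroma's v1 query endpoint returns distances; convert to a similarity
// score (assumed here as 1 - distance) and drop anything below minScore.
async function recall(cfg: RecallConfig, message: string): Promise<string[]> {
  const queryEmbedding = await embed(cfg, message);
  const res = await fetch(
    `${cfg.chromaUrl}/api/v1/collections/${cfg.collectionId}/query`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        query_embeddings: [queryEmbedding],
        n_results: cfg.autoRecallResults,
      }),
    },
  );
  const { documents, distances } = await res.json();
  return documents[0].filter(
    (_: string, i: number) => 1 - distances[0][i] >= cfg.minScore,
  );
}

// Pure helper: render recalled memories as a context block for the agent.
function formatMemories(docs: string[]): string {
  if (docs.length === 0) return "";
  return ["Relevant memories:", ...docs.map((d) => `- ${d}`)].join("\n");
}
```

When no memory clears minScore, nothing is injected, so quiet turns carry no recall overhead.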
Installation
To install the ChromaDB Memory skill, you first need to ensure you have the prerequisites in place:
- ChromaDB: Run ChromaDB, preferably using Docker:
  docker run -d --name chromadb -p 8100:8000 chromadb/chroma:latest
- Ollama: Pull an embedding model, such as nomic-embed-text:
  ollama pull nomic-embed-text
- Indexed Documents: Populate your ChromaDB collection with documents. You can use any ChromaDB-compatible indexing tool for this.
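As one way to satisfy the third prerequisite, a hypothetical indexing script might look like the following. The chunk size, ID scheme, and hard-coded URLs are illustrative choices, not part of the skill; the endpoints assume Chroma's v1 REST API and Ollama's /api/embeddings.

```typescript
// Hypothetical indexing script (not shipped with the skill): chunk a text,
// embed each chunk with Ollama, and add it to a ChromaDB collection.

const CHROMA = "http://localhost:8100";
const OLLAMA = "http://localhost:11434";

// Pure helper: split text into chunks of at most maxChars on word boundaries.
function chunkText(text: string, maxChars = 800): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks: string[] = [];
  let current = "";
  for (const w of words) {
    if (current && current.length + w.length + 1 > maxChars) {
      chunks.push(current);
      current = w;
    } else {
      current = current ? `${current} ${w}` : w;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}

async function indexDocument(text: string): Promise<void> {
  // get_or_create the collection; the response includes its id.
  const coll = await (
    await fetch(`${CHROMA}/api/v1/collections`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ name: "longterm_memory", get_or_create: true }),
    })
  ).json();

  for (const [i, chunk] of chunkText(text).entries()) {
    const { embedding } = await (
      await fetch(`${OLLAMA}/api/embeddings`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model: "nomic-embed-text", prompt: chunk }),
      })
    ).json();
    await fetch(`${CHROMA}/api/v1/collections/${coll.id}/add`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        ids: [`doc-${i}`],
        embeddings: [embedding],
        documents: [chunk],
      }),
    });
  }
}
```

Whatever tool you use, keep the embedding model consistent between indexing and recall so vector dimensions match.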
Once prerequisites are met, follow these steps:
- Copy Plugin Files: Copy the necessary script and configuration files to the OpenClaw extensions directory:
  mkdir -p ~/.openclaw/extensions/chromadb-memory
  cp {baseDir}/scripts/index.ts ~/.openclaw/extensions/chromadb-memory/
  cp {baseDir}/scripts/openclaw.plugin.json ~/.openclaw/extensions/chromadb-memory/
- Configure OpenClaw: Add the chromadb-memory plugin to your OpenClaw configuration file (~/.openclaw/openclaw.json). Adjust the config section to match your ChromaDB and Ollama setup, including chromaUrl, collectionName, ollamaUrl, and embeddingModel, and optionally fine-tune autoRecall, autoRecallResults, and minScore.
  {
    "plugins": {
      "entries": {
        "chromadb-memory": {
          "enabled": true,
          "config": {
            "chromaUrl": "http://localhost:8100",
            "collectionName": "longterm_memory",
            "ollamaUrl": "http://localhost:11434",
            "embeddingModel": "nomic-embed-text",
            "autoRecall": true,
            "autoRecallResults": 3,
            "minScore": 0.5
          }
        }
      }
    }
  }
Restart Gateway: Ensure the changes take effect by restarting the OpenClaw gateway:
openclaw gateway restart
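After restarting, it can help to confirm both services are reachable before relying on auto-recall. A minimal check, assuming Chroma's /api/v1/heartbeat and Ollama's /api/tags endpoints (checkServices and summarizeHealth are hypothetical helper names, not part of the skill):

```typescript
// Hypothetical connectivity check: ping ChromaDB and Ollama and report status.

async function checkServices(
  chromaUrl: string,
  ollamaUrl: string,
): Promise<{ chromaOk: boolean; ollamaOk: boolean }> {
  const chroma = await fetch(`${chromaUrl}/api/v1/heartbeat`); // Chroma liveness
  const ollama = await fetch(`${ollamaUrl}/api/tags`);         // lists pulled models
  return { chromaOk: chroma.ok, ollamaOk: ollama.ok };
}

// Pure helper: turn the check result into a human-readable status line.
function summarizeHealth(r: { chromaOk: boolean; ollamaOk: boolean }): string {
  return [
    `ChromaDB: ${r.chromaOk ? "ok" : "unreachable"}`,
    `Ollama: ${r.ollamaOk ? "ok" : "unreachable"}`,
  ].join(", ");
}
```

If either service is unreachable, auto-recall will silently have nothing to inject, which can look like the skill is not working.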
Use Cases
This skill is invaluable for agents that require persistent knowledge and context across multiple interactions. Potential use cases include:
- Personal Assistants: Remembering user preferences, past appointments, or frequently requested information.
- Customer Support Bots: Recalling previous support tickets, customer history, or product-specific knowledge bases.
- Research Assistants: Maintaining context from long research sessions, remembering key findings, and connecting disparate pieces of information.
- Content Creation Agents: Remembering stylistic preferences, previous drafts, or project-specific guidelines.
- Personalized Tutors: Adapting explanations based on a student's past performance and learning history.
Example Prompts
- After a long discussion about project requirements: "Summarize the key technical constraints we identified for the new web app."
- Following a conversation where a specific API was discussed: "What was the endpoint we decided to use for user authentication?"
- When asking for advice on a topic previously explored: "Based on our last conversation about sustainable gardening, what are some beginner-friendly herbs?"
Tips & Limitations
- Tuning minScore and autoRecallResults: If auto-recall is injecting too much irrelevant information ("too noisy"), increase minScore. Conversely, if important context is being missed, lower minScore or increase autoRecallResults.
- Manual Search: For precise retrieval, the chromadb_search tool allows you to explicitly query your memory.
- Token Cost: While auto-recall adds some tokens to the context, the overhead is generally negligible for agents with large context windows (e.g., 200K+ tokens).
- Data Indexing: The effectiveness of this skill heavily relies on the quality and relevance of the data indexed in your ChromaDB collection. Ensure your indexing process is robust.
- No Cloud Dependencies: A significant advantage is the 100% local operation, enhancing privacy and reducing reliance on external services. This means no API keys or cloud costs for memory storage and retrieval.
- Embedding Model Choice: While nomic-embed-text is recommended and configured by default, you can experiment with other Ollama-compatible embedding models if needed, provided they match the dimensionality expected by your stored data.
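To make the minScore tuning advice concrete, here is a small sketch of how score filtering plausibly works, assuming score = 1 - cosine distance (the plugin's actual scoring may differ):

```typescript
// Illustration of minScore filtering (assumed scoring: score = 1 - distance).
// Raising minScore drops weaker matches; lowering it admits more.

interface Hit {
  doc: string;
  distance: number; // cosine distance from ChromaDB, lower = closer
}

function filterByScore(hits: Hit[], minScore: number): string[] {
  return hits
    .map((h) => ({ doc: h.doc, score: 1 - h.distance }))
    .filter((h) => h.score >= minScore)
    .sort((a, b) => b.score - a.score)
    .map((h) => h.doc);
}
```

Under this assumption, with minScore at 0.5 a hit at distance 0.3 (score 0.7) is injected while one at distance 0.6 (score 0.4) is dropped; lowering minScore to 0.35 would admit both.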
Metadata
Paste this into your clawhub.json to enable this plugin.
{
"plugins": {
"official-msensintaffar-chromadb-memory": {
"enabled": true,
"auto_update": true
}
}
}
Related Skills
autodream-core
Universal Memory Consolidation Engine: an adapter-based, cross-platform memory-consolidation skill that automatically deduplicates, merges, and prunes stale entries.
context-compressor
Intelligently compress context — conversations, code, logs. Preserve key information while reducing token usage. Auto-detects content type and applies optimal compression.
auto-context
Intelligent context-hygiene checker. Analyzes the current session's degree of context pollution (long conversations, topic drift, noise accumulation) and recommends: continue, /fork, /btw, or a new session. Supports manual triggering (/auto-context) and automatic triggering (implemented at the response layer). Uses a multi-dimensional evaluation framework based on arXiv papers and cognitive-psychology research.
memory-stack
AI memory-stack architecture: an AI memory system aligned with the 2026 frontier. A three-layer design (fine-tuning + RAG + context) that mirrors how human memory works.
mempalace-integration
MemPalace memory-system integration: AAAK compression + Hall classification + L0-L3 layering; 30x lossless compression (1000→33 tokens); facts/events/preferences/advice classification; and load prioritization.