index1
AI memory system for coding agents — code index + cognitive facts, persistent across sessions.
Why use this skill?
Enhance your coding agent with index1, a high-performance hybrid-search memory system for code and docs with built-in vector support and MCP integration.
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/gladego/index1
What This Skill Does
index1 is a sophisticated AI memory system designed specifically for coding agents. It provides a dual-layer architecture combining a code index with episodic cognitive facts. By utilizing hybrid search—fusing BM25 full-text search with vector semantic search via RRF—index1 enables agents to navigate large codebases with remarkable precision. It features built-in ONNX embeddings, allowing for vector search capabilities out-of-the-box without requiring external dependencies like Ollama. The system supports structure-aware chunking for common languages (Python, Rust, JavaScript, Markdown, etc.) and is specifically optimized for cross-language retrieval (CJK to English), making it a powerful tool for global software development teams.
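index1's fusion internals are not documented here, but the Reciprocal Rank Fusion (RRF) step it names is a well-known technique that can be sketched in a few lines. The following is a minimal, illustrative sketch only; the function and ranking names (`rrf_fuse`, `bm25_ranking`, `vector_ranking`) and the conventional constant `k = 60` are assumptions, not index1's actual API:

```python
# Minimal Reciprocal Rank Fusion (RRF) sketch: merge a BM25 ranking and a
# vector-similarity ranking for the same query into one ordered result list.
# All names here are illustrative; this is NOT index1's internal API.

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Score each document by sum(1 / (k + rank)) over every ranking it appears in."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical top results from the two retrievers for one query:
bm25_ranking = ["auth.py", "middleware.py", "routes.py"]
vector_ranking = ["auth.py", "limits.py", "middleware.py"]

fused = rrf_fuse([bm25_ranking, vector_ranking])
```

Documents ranked highly by both retrievers (here `auth.py`) float to the top even when neither ranking alone is trusted, which is why RRF is a common choice for fusing lexical and semantic search.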
Installation
index1 supports multiple package managers. The recommended install is pipx install index1; pip install index1 and npx index1@latest also work. After installing, run index1 setup to automatically configure hooks and MCP for Claude Code, then run index1 doctor to verify that all dependencies and paths are correctly configured for your operating system.
Use Cases
index1 is ideal for complex software engineering tasks. It excels at:
- Large-scale codebase navigation where standard grep provides no semantic context.
- Cross-language knowledge retrieval, such as searching an English codebase with Chinese, Japanese, or Korean queries.
- Persistent memory across chat sessions, so agents remember architectural decisions and "cognitive" notes.
- Resource-constrained environments that need advanced semantic search without the overhead of a heavy external LLM-based embedding server.
Example Prompts
- "recall how we implemented the user authentication middleware in the new API service, I need to add a new rate limiter there."
- "learn that the legacy payment processing module is being deprecated in favor of the Stripe integration, please store this in the cognitive facts."
- "search the codebase for all components handling user profile images and explain the current error handling logic for upload failures."
Tips & Limitations
- For optimal performance, define a search strategy in your .claude/CLAUDE.md file so your AI agent knows when to use index1 versus standard grep.
- For heavy usage, consider offloading embeddings to a dedicated service such as Ollama with the bge-m3 model for superior multilingual support.
- index1 is local-first: excellent for privacy, but the vector index consumes local storage, so ensure you have sufficient disk space for very large projects.
- If search results seem irrelevant, run index1 status to confirm that all necessary paths were included during indexing.
Metadata
Paste this into your clawhub.json to enable this plugin.
{
  "plugins": {
    "official-gladego-index1": {
      "enabled": true,
      "auto_update": true
    }
  }
}
Tags
Flags: file-read, file-write
Related Skills
autodream-core
Universal memory consolidation engine: an adapter-based, cross-platform memory organization skill that automatically dedupes, merges, and prunes stale entries.
context-compressor
Intelligently compress context — conversations, code, logs. Preserve key information while reducing token usage. Auto-detects content type and applies optimal compression.
auto-context
Intelligent context hygiene checker. Analyzes context pollution in the current session (long conversations, topic drift, noise accumulation) and recommends continue, /fork, /btw, or a new session. Supports manual triggering (/auto-context) and automatic triggering (implemented in the response layer). Multi-dimensional evaluation framework based on arXiv papers and cognitive psychology research.
local-rag-index-planner
Plans the directory layout, chunking granularity, naming, update cadence, and access boundaries of a local knowledge base rather than piling on RAG directly. Use for: RAG, indexing, knowledge workflows. Do not use for: directly deploying a vector database or ignoring permission isolation.
memory-stack
AI memory stack architecture: an AI memory system aligned with the 2026 frontier. Three-layer design (fine-tuning + RAG + context) that mirrors how human memory works.