ClawKit Reliability Toolkit
Official · Verified · Developer Tools · Safety 4/5

index1

AI memory system for coding agents — code index + cognitive facts, persistent across sessions.

Why use this skill?

Enhance your coding agent with index1, a high-performance hybrid-search memory system for code and docs with built-in vector support and MCP integration.


Install via CLI (Recommended)

clawhub install openclaw/skills/skills/gladego/index1

What This Skill Does

index1 is an AI memory system designed specifically for coding agents. It provides a dual-layer architecture combining a code index with episodic cognitive facts. By fusing BM25 full-text search with vector semantic search via Reciprocal Rank Fusion (RRF), index1 enables agents to navigate large codebases with precision. It ships with built-in ONNX embeddings, providing vector search out of the box without external dependencies such as Ollama. The system supports structure-aware chunking for common languages (Python, Rust, JavaScript, Markdown, etc.) and is optimized for cross-language retrieval (CJK to English), making it useful for global software development teams.
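To give a feel for the hybrid-search step, here is a minimal, illustrative sketch of Reciprocal Rank Fusion. This is not index1's actual implementation or API; the function name, the `k` constant (60 is the value commonly used in the RRF literature), and the sample document lists are all assumptions for demonstration.

```python
def rrf_fuse(rankings, k=60):
    """Merge several ranked lists of document IDs into one fused ranking.

    Each document's fused score is the sum over lists of 1 / (k + rank),
    where rank is its 1-based position in that list. Documents that rank
    well in multiple lists accumulate the highest scores.
    """
    scores = {}
    for ranked_ids in rankings:
        for rank, doc_id in enumerate(ranked_ids, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Sort document IDs by fused score, highest first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results from a BM25 pass and a vector-similarity pass:
bm25_hits = ["auth.py", "middleware.py", "routes.py"]
vector_hits = ["middleware.py", "session.py", "auth.py"]

fused = rrf_fuse([bm25_hits, vector_hits])
print(fused)  # ['middleware.py', 'auth.py', 'session.py', 'routes.py']
```

Note how `middleware.py` and `auth.py`, which appear in both lists, outrank documents found by only one retriever; this is why RRF works well for combining keyword and semantic search without score normalization.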

Installation

Installation supports multiple package managers: pipx install index1 (recommended), pip install index1, or npx index1@latest. Once installed, run index1 setup to automatically configure hooks and MCP for Claude Code, then run index1 doctor to verify that all dependencies and paths are correctly configured for your operating system.

Use Cases

index1 is ideal for complex software engineering tasks. It excels at:

  1. Large-scale codebase navigation, where standard grep fails to provide semantic context.
  2. Projects requiring cross-language knowledge retrieval, such as searching English-based codebases with Chinese, Japanese, or Korean queries.
  3. Teams that need persistent memory across chat sessions, allowing agents to remember architectural decisions or "cognitive" notes.
  4. Resource-constrained environments that need semantic search without the overhead of heavy external LLM-based embedding servers.

Example Prompts

  1. "recall how we implemented the user authentication middleware in the new API service, I need to add a new rate limiter there."
  2. "learn that the legacy payment processing module is being deprecated in favor of the Stripe integration, please store this in the cognitive facts."
  3. "search the codebase for all components handling user profile images and explain the current error handling logic for upload failures."

Tips & Limitations

For optimal performance:

  1. Define a search strategy in your .claude/CLAUDE.md file to help your AI agent decide when to use index1 versus standard grep.
  2. For heavy usage, consider offloading embeddings to a dedicated service such as Ollama with the bge-m3 model for superior multilingual support.
  3. index1 is local-first; this is excellent for privacy, but the vector index consumes local storage, so ensure you have sufficient disk space for very large projects.
  4. If search results seem irrelevant, run index1 status to confirm that all necessary paths were included during indexing.
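As a starting point, a search-strategy note in .claude/CLAUDE.md might look like the following. The wording is purely illustrative; index1 does not mandate any particular format.

```markdown
## Search strategy

- Use index1 semantic search for conceptual questions
  ("where is rate limiting handled?").
- Use plain grep for exact identifiers, error strings, and regex patterns.
- Store durable architectural decisions as index1 cognitive facts
  instead of re-deriving them each session.
```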

Metadata

Author: @gladego
Stars: 2387
Views: 1
Updated: 2026-03-09
Add to Configuration

Paste this into your clawhub.json to enable this plugin.

{
  "plugins": {
    "official-gladego-index1": {
      "enabled": true,
      "auto_update": true
    }
  }
}

Tags

#mcp #memory #semantic-search #bm25 #rag #cognitive #coding-agent
Safety Score: 4/5

Flags: file-read, file-write