ClawKit Reliability Toolkit
Official · Verified · Developer Tools · Safety 4/5

openclaw-reflect

Self-improvement layer with evaluation separation, rollback, and tiered operator gates. Observes outcomes across sessions, detects recurring patterns, proposes improvements, validates proposals through a separate evaluator invocation, and applies changes safely with snapshot/rollback capability.

Why use this skill?

Learn how openclaw-reflect enables your AI agent to automatically detect, analyze, and resolve recurring errors through a tiered safety system.


Install via CLI (Recommended)

clawhub install openclaw/skills/skills/atlaspa/openclaw-reflect
Or add the plugin manually to your clawhub.json (see Add to Configuration below).

What This Skill Does

openclaw-reflect acts as an autonomous meta-cognition layer for your AI agent, enabling it to learn from its past mistakes and optimize its operational behavior over time. Instead of repeating the same errors, the agent observes its tool execution outcomes, identifies recurring failure patterns, and leverages a tiered evaluation system to update its own instruction set. The process is gated by safety tiers and a mandatory evaluator step to ensure that proposed changes are legitimate improvements rather than noise or regression triggers.
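The observe → detect → propose → evaluate → apply loop described above can be sketched as follows. This is a minimal illustration only; `reflect_cycle`, the outcome record shape, and the frequency threshold are all assumptions, not openclaw-reflect's actual API.

```python
# Hypothetical sketch of the self-improvement loop described above.
# None of these names come from openclaw-reflect itself.
from collections import Counter

def reflect_cycle(outcomes, evaluator, apply_change, min_frequency=3):
    """Run one self-improvement cycle over recorded tool outcomes."""
    # 1. Observe: keep only failed tool executions.
    failures = [o for o in outcomes if not o["ok"]]
    # 2. Detect: count recurring failure signatures across sessions.
    counts = Counter(f["signature"] for f in failures)
    applied = []
    for signature, n in counts.items():
        if n < min_frequency:
            continue  # treated as noise, not a pattern
        # 3. Propose an instruction update for the recurring pattern.
        proposal = {"signature": signature, "seen": n}
        # 4. Validate via a separate evaluator invocation.
        if evaluator(proposal):
            # 5. Apply behind the safety gate (snapshot/rollback upstream).
            apply_change(proposal)
            applied.append(signature)
    return applied
```

Note how a pattern must both recur often enough and survive the evaluator before any change is applied, which matches the "noise or regression triggers" safeguard mentioned above.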

Installation

To install the openclaw-reflect skill, run the following command in your project terminal:

clawhub install openclaw/skills/skills/atlaspa/openclaw-reflect

Ensure that your environment has write access to the .reflect/ directory, as the skill maintains state, logs, and audit trails for all self-improvement actions performed during session lifecycles.
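To give a feel for the kind of state kept under .reflect/, here is a hedged sketch of an append-only audit trail. The file name (audit.jsonl) and record shape are assumptions for illustration, not the skill's documented format.

```python
# Illustrative audit-trail writer for a .reflect/ directory.
# File name and record fields are assumed, not openclaw-reflect's real layout.
import json
import time
from pathlib import Path

def append_audit(reflect_dir, action, detail):
    """Append one audit record as a JSON line; requires write access."""
    reflect_dir = Path(reflect_dir)
    reflect_dir.mkdir(parents=True, exist_ok=True)
    record = {"ts": time.time(), "action": action, "detail": detail}
    with (reflect_dir / "audit.jsonl").open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only JSON Lines file is a common choice for audit trails because each self-improvement action becomes one immutable, greppable record.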

Use Cases

  • Debugging Persistent Errors: If the agent consistently fails a specific CLI command due to environment variables or flag formatting, openclaw-reflect identifies this pattern and updates MEMORY.md to ensure future commands are formatted correctly.
  • Refining Procedural Guidelines: When the agent repeatedly struggles to follow a multi-step project protocol, it can propose updates to CLAUDE.md to clarify its behavioral constraints.
  • Audit Trails: Teams can use the skill to maintain a history of how an agent has evolved its "thinking" process, providing visibility into the logic behind architectural or policy changes.
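The snapshot/rollback capability that makes edits to files like MEMORY.md or CLAUDE.md safe can be sketched as below. This only illustrates the idea of saving a copy before an automated edit; openclaw-reflect's actual mechanism is not documented here.

```python
# Sketch of snapshot-before-write with rollback on failure.
# The ".bak" naming is an assumption for illustration.
import shutil
from pathlib import Path

def apply_with_snapshot(target, new_text):
    """Snapshot an instruction file, then write the proposed change."""
    target = Path(target)
    snapshot = Path(str(target) + ".bak")
    shutil.copy2(target, snapshot)      # snapshot before changing
    try:
        target.write_text(new_text, encoding="utf-8")
    except OSError:
        shutil.copy2(snapshot, target)  # roll back the partial write
        raise
    return snapshot
```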

Example Prompts

  1. "I noticed you keep making the same mistake with the database connection strings. Can you check if openclaw-reflect has any pending proposals to fix this?"
  2. "Show me the status of the recent reflection cycle and tell me if any changes were applied to my project instructions."
  3. "Run a manual observation for the last tool failure; I want to ensure the agent understands why the deployment script didn't trigger correctly."

Tips & Limitations

  • The Evaluator Gate: Always trust the evaluator's skepticism. The system is designed to prevent "hallucinated" improvements. If a proposal is rejected, it is likely because the pattern lacked sufficient frequency or the proposed solution was deemed risky.
  • Blast Radius: Understand that while Tier 1 changes are often automatic, Tier 3 changes (SOUL.md) will always pause for your manual approval. Do not bypass these warnings; they protect the core identity of your agent.
  • Manual Observations: While the system is automated, your input is still valuable. If you see a failure the agent misses, use the --manual flag to bootstrap the learning process, which significantly reduces the time required for the system to detect recurring issues.
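The tiered gating behavior described above (Tier 1 auto-applies, Tier 3 always pauses for operator approval) can be summarized in a short decision sketch. The tier semantics follow this page; the function itself is illustrative, not part of the skill.

```python
# Illustrative tiered operator gate. Tier 1 = low blast radius (auto),
# Tier 3 = SOUL.md changes (always needs explicit operator approval).
def gate(tier, operator_approves=None):
    """Decide whether a proposed change may be applied."""
    if tier == 1:
        return True                      # auto-apply
    if tier == 3:
        if operator_approves is None:
            return False                 # always pause without an operator
        return operator_approves()
    # Intermediate tiers: assumed here to also require approval.
    return operator_approves() if operator_approves else False
```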

Metadata

Author: @atlaspa
Stars: 2387
Views: 1
Updated: 2026-03-09
Add to Configuration

Paste this into your clawhub.json to enable this plugin.

{
  "plugins": {
    "official-atlaspa-openclaw-reflect": {
      "enabled": true,
      "auto_update": true
    }
  }
}

Tags

#self-improvement #reflection #memory #safety #hooks #evaluation
Safety Score: 4/5

Flags: file-write, file-read, code-execution