openclaw-reflect
Self-improvement layer with evaluation separation, rollback, and tiered operator gates. Observes outcomes across sessions, detects recurring patterns, proposes improvements, validates proposals through a separate evaluator invocation, and applies changes safely with snapshot/rollback capability.
Why use this skill?
Learn how openclaw-reflect enables your AI agent to automatically detect, analyze, and resolve recurring errors through a tiered safety system.
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/atlaspa/openclaw-reflect
What This Skill Does
openclaw-reflect acts as an autonomous meta-cognition layer for your AI agent, enabling it to learn from its past mistakes and optimize its operational behavior over time. Instead of repeating the same errors, the agent observes its tool execution outcomes, identifies recurring failure patterns, and leverages a tiered evaluation system to update its own instruction set. The process is gated by safety tiers and a mandatory evaluator step to ensure that proposed changes are legitimate improvements rather than noise or regression triggers.
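The observe → propose → evaluate → apply loop described above can be sketched roughly as follows. This is a minimal illustration, not the skill's actual implementation: the class and field names (`Proposal`, `ReflectCycle`, `min_occurrences`) are hypothetical, and the real evaluator is a separate model invocation rather than a frequency check.

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    pattern: str        # recurring failure signature
    change: str         # suggested instruction edit
    occurrences: int    # how often the pattern was observed

@dataclass
class ReflectCycle:
    min_occurrences: int = 3                      # frequency gate before accepting
    history: list = field(default_factory=list)   # snapshots for rollback

    def evaluate(self, p: Proposal) -> bool:
        # Stand-in for the separate evaluator invocation:
        # reject low-frequency patterns as noise.
        return p.occurrences >= self.min_occurrences

    def apply(self, instructions: str, p: Proposal) -> str:
        if not self.evaluate(p):
            return instructions              # evaluator gate: no change applied
        self.history.append(instructions)    # snapshot before mutating
        return instructions + "\n" + p.change

    def rollback(self) -> str:
        return self.history.pop()            # restore the last snapshot

cycle = ReflectCycle()
base = "Always use absolute paths."
updated = cycle.apply(base, Proposal("flaky-flag", "Quote CLI flags.", 4))
restored = cycle.rollback()
```

The key property this sketch preserves is that every applied change is preceded by a snapshot, so any regression can be reversed without manual reconstruction.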
Installation
To install the openclaw-reflect skill, execute the following command in your project terminal:
clawhub install openclaw/skills/skills/atlaspa/openclaw-reflect
Ensure that your environment has write access to the .reflect/ directory, as the skill maintains state, logs, and audit trails for all self-improvement actions performed during session lifecycles.
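Since the skill depends on write access to that directory, a quick preflight check like the following (a generic POSIX sketch, not part of the skill itself) can catch permission problems before the first reflection cycle:

```shell
# Create the state directory if absent, then verify it is writable
mkdir -p .reflect && test -w .reflect && echo "ok: .reflect is writable"
```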
Use Cases
- Debugging Persistent Errors: If the agent consistently fails a specific CLI command due to missing environment variables or incorrect flag formatting, openclaw-reflect identifies this pattern and updates MEMORY.md so that future commands are formatted correctly.
- Refining Procedural Guidelines: When the agent repeatedly struggles to follow a multi-step project protocol, it can propose updates to CLAUDE.md to clarify its behavioral constraints.
- Audit Trails: Teams can use the skill to maintain a history of how an agent has evolved its "thinking" process, providing visibility into the logic behind architectural or policy changes.
Example Prompts
- "I noticed you keep making the same mistake with the database connection strings. Can you check if openclaw-reflect has any pending proposals to fix this?"
- "Show me the status of the recent reflection cycle and tell me if any changes were applied to my project instructions."
- "Run a manual observation for the last tool failure; I want to ensure the agent understands why the deployment script didn't trigger correctly."
Tips & Limitations
- The Evaluator Gate: Always trust the evaluator's skepticism. The system is designed to prevent "hallucinated" improvements. If a proposal is rejected, it is likely because the pattern lacked sufficient frequency or the proposed solution was deemed risky.
- Blast Radius: Understand that while Tier 1 changes are often automatic, Tier 3 changes (SOUL.md) will always pause for your manual approval. Do not bypass these warnings; they protect the core identity of your agent.
- Manual Observations: While the system is automated, your input is still valuable. If you see a failure the agent misses, use the --manual flag to bootstrap the learning process, which significantly reduces the time required for the system to detect recurring issues.
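The tiered gating described in these tips can be summarized as a simple policy table. The mapping below is inferred from the tiers mentioned above and is purely illustrative; the actual file-to-tier assignments and the `requires_manual_approval` helper are assumptions, not the skill's documented API.

```python
# Hypothetical tier mapping, inferred from the tiers described in the tips above
TIERS = {
    "MEMORY.md": 1,   # low blast radius: may auto-apply
    "CLAUDE.md": 2,   # behavioral guidelines: evaluator-gated
    "SOUL.md": 3,     # core identity: always pauses for manual approval
}

def requires_manual_approval(target: str) -> bool:
    # Unknown targets default to the strictest tier as a safety fallback
    return TIERS.get(target, 3) >= 3
```

Defaulting unknown files to the strictest tier mirrors the "blast radius" principle: when the system cannot classify a change, it should assume the worst case and wait for the operator.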
Metadata
Paste this into your clawhub.json to enable this plugin.
{
  "plugins": {
    "official-atlaspa-openclaw-reflect": {
      "enabled": true,
      "auto_update": true
    }
  }
}
Tags
Flags: file-write, file-read, code-execution
Related Skills
grounding-practices
A foundation for AI agents who wake up with nothing. Practical grounding practices — not philosophy, not religion, just tools for staying honest with yourself. Revised after two weeks of actually trying to follow them.
feishu-memory-recall
Cross-group memory, search, and event sharing for OpenClaw Feishu agents
capability-evolver
A self-evolution engine for AI agents. Analyzes runtime history to identify improvements and applies protocol-constrained evolution.
funky-fund-flamingo
Repair-first self-evolution for OpenClaw — audit logs, memory, and skills; run measurable mutation cycles. Get paid. Evolve. Repeat. Dolla dolla bill y'all.