self-improvement
Captures learnings, errors, and corrections to enable continuous improvement. Use when: (1) A command or operation fails unexpectedly, (2) User corrects Claude ('No, that's wrong...', 'Actually...'), (3) User requests a capability that doesn't exist, (4) An external API or tool fails, (5) Claude realizes its knowledge is outdated or incorrect, (6) A better approach is discovered for a recurring task. Also review learnings before major tasks.
Why use this skill?
Enhance your AI agent's performance with the self-improvement skill. Automatically log errors, corrections, and best practices to optimize workflows and behavior.
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/pskoett/self-improving-agent
What This Skill Does
The self-improvement skill is a foundational capability for the OpenClaw AI agent, designed to transform fleeting interactions into permanent, actionable knowledge. It functions as a structured feedback loop that logs errors, user corrections, and process optimizations directly into the workspace environment. By maintaining dedicated log files within the .learnings/ directory, the agent creates a verifiable history of its performance, allowing it to "learn" from past failures and refine its methodology. Over time, the agent performs a synthesis of these logs, promoting high-value patterns into core documentation like SOUL.md, AGENTS.md, and TOOLS.md, ensuring that behavioral adjustments and workflow improvements persist across multiple sessions.
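A single logged learning might be appended along these lines. This is only a sketch: the entry fields (date, category, summary) and the `log_learning` helper name are illustrative assumptions, not a schema the skill mandates.

```shell
# Sketch of appending one learning entry to a log file.
# The field layout (date, category, summary) is an illustrative
# assumption; the skill does not define a fixed entry schema.
log_learning() {
  file="$1"; category="$2"; summary="$3"
  {
    printf '## %s [%s]\n' "$(date +%Y-%m-%d)" "$category"
    printf -- '- %s\n\n' "$summary"
  } >> "$file"
}

# Example:
# log_learning .learnings/LEARNINGS.md correction "Use --cache-dir when compiling assets"
```

Keeping entries short and uniformly structured makes the later synthesis pass into SOUL.md or TOOLS.md easier to automate.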
Installation
To integrate this skill into your environment, use the OpenClaw CLI. For the fastest installation, run clawhub install self-improving-agent. Alternatively, for manual configuration, clone the repository directly into your local skills directory with git clone https://github.com/peterskoett/self-improving-agent.git ~/.openclaw/skills/self-improving-agent. Then ensure your workspace contains the .learnings/ directory by running mkdir -p ~/.openclaw/workspace/.learnings and initializing the three primary tracking files: LEARNINGS.md, ERRORS.md, and FEATURE_REQUESTS.md.
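The manual workspace setup described above can be done in one pass. A minimal sketch, assuming the default ~/.openclaw/workspace location (override WORKSPACE if yours differs):

```shell
# Initialize the .learnings/ workspace layout by hand.
# WORKSPACE defaults to the path mentioned in the installation notes;
# adjust it if your workspace lives elsewhere.
WORKSPACE="${WORKSPACE:-$HOME/.openclaw/workspace}"
mkdir -p "$WORKSPACE/.learnings"
for f in LEARNINGS.md ERRORS.md FEATURE_REQUESTS.md; do
  # Seed each tracking file with a title heading, but never overwrite
  # an existing file.
  [ -f "$WORKSPACE/.learnings/$f" ] || printf '# %s\n' "${f%.md}" > "$WORKSPACE/.learnings/$f"
done
```

The existence check keeps the script idempotent, so rerunning it after an upgrade will not clobber logs you have already accumulated.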
Use Cases
This skill is essential for users seeking to reduce iterative friction. Use it when you notice the agent repeatedly failing at a specific command, such as a Git push error or an API timeout. It is also ideal for correcting the agent's tone or logic—when you catch the agent hallucinating a fact or ignoring a constraint, invoke this skill to record the correction. Furthermore, it is perfect for documenting workflow optimizations; if you discover a more efficient way to structure your project or a better way to invoke sub-agents, log it so the agent adopts the practice as a default.
Example Prompts
- "I corrected you on that API parameter structure; please log this under category 'correction' in the learnings file so you don't repeat the mistake."
- "That command failed with a timeout. Go ahead and log the error details in ERRORS.md so we can debug the connection later."
- "We found a much faster way to compile the assets; please update our internal best practices in the learnings files and consider promoting this to TOOLS.md."
Tips & Limitations
To get the most out of this skill, be descriptive when you report an error or correction. Include the specific context, the expected outcome, and the actual result to help the agent generalize the learning effectively. The primary limitation is that this skill relies on the agent's ability to interpret your feedback accurately; if the initial feedback is vague, the logged entry may be less useful for future automation. Regularly review your .learnings/ directory to manage file size and perform periodic manual cleanup or synthesis of stale entries.
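For the periodic review suggested above, a small report helps decide when a file is due for synthesis or pruning. This sketch assumes entries begin with a "## " heading, as in the hypothetical entry format; adjust the pattern to match however your logs are actually structured.

```shell
# Summarize each learnings file: entry count and total lines.
# Assumes each entry starts with a "## " heading; change the grep
# pattern if your log entries use a different marker.
learnings_report() {
  dir="$1"
  for f in "$dir"/*.md; do
    [ -e "$f" ] || continue
    # grep -c exits nonzero on zero matches but still prints "0"
    entries=$(grep -c '^## ' "$f" || true)
    printf '%s: %s entries, %s lines\n' "$(basename "$f")" "$entries" "$(wc -l < "$f")"
  done
}

# Example: learnings_report "$HOME/.openclaw/workspace/.learnings"
```

A file whose entry count keeps growing without a corresponding promotion into SOUL.md or TOOLS.md is a good candidate for a synthesis pass.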
Metadata
Paste this into your clawhub.json to enable this plugin.
{
  "plugins": {
    "official-pskoett-self-improving-agent": {
      "enabled": true,
      "auto_update": true
    }
  }
}
Tags: AI
Flags: file-write, file-read