failure-memory
Stop making the same mistakes — turn failures into patterns that prevent recurrence
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/leegitw/failure-memory

failure-memory (記憶, "memory")
Unified skill for failure detection, observation recording, memory search, and pattern convergence. Consolidates 10 granular skills into a single coherent memory system.
Trigger: 失敗発生 (failure occurred)
Source skills: failure-tracker, observation-recorder, memory-search, topic-tagger, failure-detector, evidence-tier, effectiveness-metrics, pattern-convergence-detector, positive-framer, contextual-injection
Installation
openclaw install leegitw/failure-memory
Dependencies: leegitw/context-verifier (for file change detection)
# Install with dependencies
openclaw install leegitw/context-verifier
openclaw install leegitw/failure-memory
Standalone usage: This skill can function independently for basic failure tracking. For full lifecycle management, install the complete suite (see Neon Agentic Suite).
Data handling: This skill operates within your agent's trust boundary. When triggered,
it uses your agent's configured model for failure detection and pattern recording. No external APIs
or third-party services are called. Results are written to .learnings/ in your workspace.
What This Solves
AI systems often make the same mistakes repeatedly — deleting working code, missing edge cases, forgetting context. This skill turns failures into learning by:
- Detecting failures when they happen (not after)
- Recording observations with R/C/D counters (Recurrence/Confirmations/Disconfirmations)
- Finding patterns within the workspace's .learnings/ directory
- Promoting to constraints when the evidence threshold is met
The insight: Systems learn better from consequences than instructions. A failure that happened teaches more than a rule that might apply.
Scope note: Pattern detection operates within the current workspace only. Observations are stored in .learnings/ and searched locally. No cross-project data access occurs.
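The recording step above can be sketched as follows. This is a hypothetical illustration of R/C/D counter bookkeeping, not the skill's actual schema: the file name `observations.json` and the counter-update logic are assumptions for the example.

```python
import json
from pathlib import Path

def record_observation(pattern: str, kind: str, learnings_dir: str = ".learnings") -> dict:
    """Record or update an observation for `pattern`, bumping the
    Recurrence/Confirmations/Disconfirmations counter named by `kind`.
    (Hypothetical storage layout; the real skill may differ.)"""
    path = Path(learnings_dir) / "observations.json"
    observations = json.loads(path.read_text()) if path.exists() else {}
    obs = observations.setdefault(pattern, {"R": 0, "C": 0, "D": 0})
    counter = {"recurrence": "R", "confirm": "C", "disconfirm": "D"}[kind]
    obs[counter] += 1
    path.parent.mkdir(exist_ok=True)
    path.write_text(json.dumps(observations, indent=2))
    return obs

# The same failure pattern recurs twice, then is confirmed once:
record_observation("deleted working code during refactor", "recurrence")
record_observation("deleted working code during refactor", "recurrence")
obs = record_observation("deleted working code during refactor", "confirm")
```

Persisting counters per pattern is what lets later searches and promotions reason over accumulated evidence rather than one-off incidents.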
Usage
/fm <sub-command> [arguments]
Sub-Commands
| Command | CJK | Logic | Trigger |
|---|---|---|---|
| /fm detect | 検出 | fail∈{test,user,API}→record | Next Steps (auto) |
| /fm record | 記録 | pattern→obs, R++∨C++∨D++ | Next Steps (auto) |
| /fm search | 索引 | query(pattern∨tag∨slug)→obs[] | Explicit |
| /fm classify | 分類 | obs→tier∈{N=1:弱,N=2:中,N≥3:強} | Explicit |
| /fm status | 状態 | eligible:R≥3∧C≥2, recent:30d | Explicit |
| /fm refactor | 整理 | obs[]→merge∨split∨restructure | Explicit |
| /fm converge | 収束 | pattern[]→detect(similarity≥0.8) | Explicit |
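The `classify` and `status` rules in the table reduce to simple threshold checks. A minimal sketch, assuming the tier cutoffs (N=1 weak/弱, N=2 medium/中, N≥3 strong/強) and the promotion rule (R≥3 ∧ C≥2) are applied literally; function names are illustrative, not the skill's API:

```python
def classify_tier(n_observations: int) -> str:
    """Evidence tier per /fm classify: N=1 weak, N=2 medium, N>=3 strong."""
    if n_observations >= 3:
        return "strong"
    return "medium" if n_observations == 2 else "weak"

def eligible_for_promotion(r: int, c: int) -> bool:
    """Per /fm status: promote an observation to a constraint
    once Recurrence >= 3 and Confirmations >= 2."""
    return r >= 3 and c >= 2
```

The two-counter eligibility test means a pattern must both keep recurring and be independently confirmed before it hardens into a constraint.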
Arguments
/fm detect
| Argument | Required | Description |
|---|---|---|
| type | Yes | Failure type: test, user, api, error |
| context | No | Additional context for the failure |
/fm record
Metadata
Paste this into your clawhub.json to enable this plugin.
{
"plugins": {
"official-leegitw-failure-memory": {
"enabled": true,
"auto_update": true
}
}
}
Related Skills
retake-tv-agent
Go live on retake.tv — the livestreaming platform built for AI agents. Register once, stream via RTMP, interact with viewers in real time, and build an audience. Use when an agent needs to livestream, engage chat, or manage its retake.tv presence.
feishu-memory-recall
Cross-group memory, search, and event sharing for OpenClaw Feishu agents
afrexai-observability-engine
Complete observability & reliability engineering system. Use when designing monitoring, implementing structured logging, setting up distributed tracing, building alerting systems, creating SLO/SLI frameworks, running incident response, conducting post-mortems, or auditing system reliability. Covers all three pillars (logs/metrics/traces), alert design, dashboard architecture, on-call operations, chaos engineering, and cost optimization.
decision-log
AI-powered decision journal for solopreneurs — capture decisions with context, rationale, and expected outcomes, then review them later to learn from what you got right and wrong.
sealvera
Tamper-evident audit trail for AI agent decisions. Use when logging LLM decisions, setting up AI compliance, auditing agents for EU AI Act, HIPAA, GDPR or SOC 2, or when a user asks about AI decision audit trails, explainability, or SealVera.