session-memory
Your agent forgets everything after compaction? This fixes it. Built by the AI Advantage community — the world's leading AI learning platform (aiadvantage.ai). Three-layer memory system with real Python scripts that convert session transcripts to searchable Markdown, build an auto-updating glossary (people, projects, decisions, timeline), and run via cron for hands-free continuous indexing. Not just instructions — actual working automation. Battle-tested across 300+ sessions. Install it, run two scripts, done.
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/moltbotmolty-del/sessionmemory

Session Memory
Built and open-sourced by AI Advantage — the world's leading AI learning community. We teach 40,000+ people how to actually use AI. This skill is how our own agents remember everything. Want to learn more? Join us at aiadvantage.ai.
Solve the #1 problem with long-running AI agents: knowledge loss after context compaction.
The Problem
When sessions compact (summarize old messages to free context), specific details are lost: names, decisions, file paths, reasoning. The agent retains a summary but loses the ability to recall "What exactly did Annika say?" or "When did we decide to use v6 format?"
Most memory skills on ClawHub are just SKILL.md instructions — "write stuff to MEMORY.md." That's not a solution. This skill ships real scripts that do real work.
The Solution: Three-Layer Memory Architecture
Layer 1: MEMORY.md — Curated long-term memory (human-edited)
Layer 2: SESSION-GLOSSAR.md — Auto-generated structured index (people/projects/decisions/timeline)
Layer 3: memory/sessions/ — Full session transcripts as searchable Markdown
All three layers live under memory/ and are automatically vectorized by OpenClaw's memory search, creating a navigational hierarchy: the glossary finds the right session, and the session provides the details.
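A lookup through this hierarchy is two cheap steps: scan the glossary for an entity, then open the referenced transcript. A minimal sketch of that flow (the function name and the assumption that glossary lines contain session-*.md filenames are hypothetical; the real skill relies on OpenClaw's memory search instead):

```python
from pathlib import Path

def find_sessions(glossary: Path, term: str) -> list[str]:
    """Return session filenames referenced on glossary lines mentioning `term`."""
    hits = []
    for line in glossary.read_text().splitlines():
        if term.lower() in line.lower():
            # collect any session-*.md references on the matching line
            hits += [w for w in line.split() if w.startswith("session-")]
    return hits
```

From there, the agent reads only the one or two matching transcripts instead of the whole memory/sessions/ directory.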
Setup (run once)
Step 1: Convert existing sessions to Markdown
python3 scripts/session-to-memory.py
This scans all JSONL session logs in ~/.openclaw/agents/*/sessions/ and converts them to memory/sessions/session-YYYY-MM-DD-HHMM-*.md. It truncates long assistant responses to 2KB, skips system messages, and tracks state to avoid re-processing.
Options:
--new — Only convert sessions not yet processed (for incremental runs)
--agent main — Specify agent ID (default: main)
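The core of the conversion can be sketched roughly like this (a minimal illustration assuming a simple JSONL format with `role` and `content` string fields; the shipped script handles more message shapes plus state tracking):

```python
import json
from pathlib import Path

MAX_LEN = 2048  # truncate long assistant responses to ~2KB

def convert_session(jsonl_path: Path, out_dir: Path) -> Path:
    """Convert one JSONL session log to a searchable Markdown transcript."""
    parts = []
    for raw in jsonl_path.read_text().splitlines():
        msg = json.loads(raw)
        role, content = msg.get("role"), msg.get("content", "")
        if role == "system" or not content:
            continue  # skip system messages and empty entries
        if role == "assistant" and len(content) > MAX_LEN:
            content = content[:MAX_LEN] + "\n[... truncated]"
        parts.append(f"## {role}\n\n{content}\n")
    out = out_dir / (jsonl_path.stem + ".md")
    out.write_text("\n".join(parts))
    return out
```

Each message becomes a `## role` section, so the resulting Markdown stays grep-friendly and cheap to vectorize.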
Step 2: Build the glossary
python3 scripts/build-glossary.py
Scans all session transcripts and builds memory/SESSION-GLOSSAR.md with:
- People — Who was mentioned, in how many sessions, date ranges
- Projects — Which projects discussed, with relevant topic tags
- Topics — Categorized themes (Email Drafts, Website Build, Security, etc.)
- Timeline — Per-day summary (session count, people, topics)
- Decisions — Extracted decision-like statements with dates
Options:
--incremental — Only process new sessions (uses cached scan state)
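As an illustration of the Timeline layer, a per-day session count can be derived from the transcript filenames alone (a simplified sketch; the actual script also extracts people, topics, and decisions from the transcript bodies):

```python
import re
from collections import Counter
from pathlib import Path

DATE_RE = re.compile(r"session-(\d{4}-\d{2}-\d{2})-\d{4}")

def timeline(sessions_dir: Path) -> list[str]:
    """Count sessions per day from filenames like session-YYYY-MM-DD-HHMM-*.md."""
    counts = Counter()
    for md in sessions_dir.glob("session-*.md"):
        m = DATE_RE.match(md.name)
        if m:
            counts[m.group(1)] += 1
    return [f"- {day}: {n} session(s)" for day, n in sorted(counts.items())]
```

Because the date is encoded in every filename, the timeline can be rebuilt without re-reading any transcript content, which keeps --incremental runs fast.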
Step 3: Set up cron jobs for auto-updates
Create two cron jobs (use a cheap model like Gemini Flash):
Job 1: Session sync + glossary rebuild (every 4-6 hours)
Task: Run `python3 scripts/session-to-memory.py --new` then
`python3 scripts/build-glossary.py --incremental`.
Report how many new sessions were converted and indexed.
Optional Job 2: Pre-compaction memory flush check
Already built into AGENTS.md by default — just ensure the agent writes to
memory/YYYY-MM-DD.md before each compaction.
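If you prefer plain system cron over agent-scheduled jobs, the two commands from Job 1 can be chained in a single crontab entry (the workspace path is a placeholder for your install location):

```
# Every 4 hours: convert new sessions, then rebuild the glossary incrementally
0 */4 * * * cd /path/to/workspace && python3 scripts/session-to-memory.py --new && python3 scripts/build-glossary.py --incremental >> memory/cron.log 2>&1
```

Chaining with && ensures the glossary only rebuilds after new sessions convert successfully, and the log redirect gives you a record of each run.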
Metadata
Paste this into your clawhub.json to enable this plugin.
{
"plugins": {
"official-moltbotmolty-del-sessionmemory": {
"enabled": true,
"auto_update": true
}
}
}