sealvera
Tamper-evident audit trail for AI agent decisions. Use when logging LLM decisions, setting up AI compliance, auditing agents for EU AI Act, HIPAA, GDPR or SOC 2, or when a user asks about AI decision audit trails, explainability, or SealVera.
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/ahessami123/sealvera

SealVera Skill for OpenClaw
Cryptographically-sealed, tamper-evident audit trails for every AI decision your agents make.
SealVera is AI Decision Audit Infrastructure. This skill wires any OpenClaw agent into SealVera so every decision it makes is logged, cryptographically signed, chained, and monitored — automatically. Built for teams shipping AI in finance, healthcare, legal, and insurance.
EU AI Act · SOC 2 · HIPAA · GDPR · FINRA · ISO 42001
Why SealVera?
- Tamper-evident — every decision is cryptographically hashed (RSA-2048) and chained; any alteration is detectable
- Works in 2 lines — `init()` + one patch call and every LLM decision is logged
- Full explainability — captures inputs, outputs, reasoning steps, confidence scores, and the model used
- Real-time dashboard — search, filter, export at app.sealvera.com
- Drift detection — alerts when agent behaviour deviates from its baseline
- Any LLM — OpenAI, Anthropic, Gemini, Ollama, LangChain, CrewAI, AutoGen
- Zero friction — no new infrastructure, installs in seconds
First time? Run setup
Check if SealVera is configured:
```javascript
const fs = require('fs');
const path = require('path');

const configured = fs.existsSync(
  path.join(process.env.OPENCLAW_WORKSPACE || process.cwd(), '.sealvera.json')
);
```
If .sealvera.json does NOT exist, run setup:
node <skill_dir>/scripts/setup.js
Setup will:
- Ask for your API key (get one free at app.sealvera.com)
- Verify connectivity and show org/plan info
- Copy `sealvera-log.js` into the workspace
- Patch `AGENTS.md` with the mandatory sub-agent audit rule
- Write the `.sealvera.json` config
- Fire a test log to confirm end-to-end
One-time setup. After this, every sub-agent you spawn is audited automatically.
If the user says "install SealVera", "set up SealVera", or "connect SealVera" — run setup, don't just describe it.
How logging works
After setup, two logging paths are active:
1. Sub-agent logging (automatic via AGENTS.md)
Every `sessions_spawn` task prompt gets a MANDATORY footer — the sub-agent logs itself on completion.
2. Main-session logging (call after every significant task)
The agent running the main session must log its own work. This is what shows your day-to-day activity in the dashboard.
```javascript
const log = require('./sealvera-log');

await log({
  action: 'fix_login_bug',  // short description of what you did
  decision: 'COMPLETED',    // COMPLETED | FAILED | ESCALATED
  input: { task: '...' },
  output: { result: '...' },
  reasoning: [{ factor: 'outcome', value: 'ok', signal: 'safe', explanation: '...' }]
});
```
Log after: code written, bug fixed, feature deployed, investigation done, judgment call made. Don't log: routine chat, heartbeat pings, trivial lookups.
Setup patches SOUL.md with this rule so the agent follows it automatically.
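A convenient pattern for main-session logging is a small wrapper that runs a task and logs the outcome either way. A sketch, assuming `./sealvera-log` exports the async `log` function shown above; the `runAudited` helper itself is hypothetical, not part of the skill:

```javascript
// Run a task and log COMPLETED or FAILED to SealVera accordingly.
// `log` is injected so the wrapper stays decoupled from the skill's file.
async function runAudited(log, action, task, fn) {
  try {
    const result = await fn();
    await log({
      action,
      decision: 'COMPLETED',
      input: { task },
      output: { result },
      reasoning: [{ factor: 'outcome', value: 'ok', signal: 'safe', explanation: 'task finished without errors' }]
    });
    return result;
  } catch (err) {
    await log({
      action,
      decision: 'FAILED',
      input: { task },
      output: { error: String(err) },
      reasoning: [{ factor: 'outcome', value: 'error', signal: 'review', explanation: err.message }]
    });
    throw err; // re-throw so callers still see the failure
  }
}
```

Usage: `await runAudited(require('./sealvera-log'), 'fix_login_bug', 'fix the login bug', doFix)` — the audit entry is written whether `doFix` succeeds or throws.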
Metadata
Paste this into your clawhub.json to enable this plugin.
```json
{
  "plugins": {
    "official-ahessami123-sealvera": {
      "enabled": true,
      "auto_update": true
    }
  }
}
```