ops-framework
A 0-token jobs + monitoring framework for OpenClaw: run long-running read tasks via scripts, checkpoint/resume safely, and send periodic progress + immediate alerts to Telegram. Write jobs are blocked by default and must be explicitly approved and verified.
Why use this skill?
Optimize OpenClaw with the ops-framework. Manage long-running tasks, automate progress reports via Telegram, and ensure safe execution of write jobs with built-in monitoring.
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/zjianru/ops-framework
What This Skill Does
The ops-framework is a specialized, 0-token automation engine designed to manage long-running tasks and operational monitoring within OpenClaw. By shifting heavy-duty monitoring and task execution from the model's active token stream to a local background script, you can maintain state, track progress, and receive real-time alerts without incurring high compute costs. The framework consists of a CLI monitor (ops-monitor.py) and a declarative job configuration (ops-jobs.json), enabling checkpointing, stall detection, and safe execution workflows.
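The checkpoint-and-stall-detection idea can be sketched as below. This is a conceptual illustration only, not the actual ops-monitor.py code; the checkpoint file layout and the stall threshold are assumptions for the sketch:

```python
import json
import time
from pathlib import Path

STATE_FILE = Path("checkpoint.json")  # hypothetical checkpoint location
STALL_AFTER_S = 600                   # hypothetical stall threshold (10 min)

def tick(current_progress: str) -> str:
    """One monitor tick: compare the job's progress key against the last
    checkpoint and decide whether the job looks stalled."""
    now = time.time()
    state = {"progress": None, "last_change": now}
    if STATE_FILE.exists():
        state = json.loads(STATE_FILE.read_text())
    if current_progress != state["progress"]:
        # Progress moved: checkpoint the new value and timestamp.
        state = {"progress": current_progress, "last_change": now}
        STATE_FILE.write_text(json.dumps(state))
        return "running"
    if now - state["last_change"] > STALL_AFTER_S:
        return "stalled"  # in the real framework this would raise a Telegram alert
    return "running"
```

Because each tick persists state to disk, the monitor itself can be restarted at any time and resume from the last checkpoint, which is what makes cron-driven, out-of-band monitoring safe.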
Installation
- Ensure your OpenClaw environment is set up and navigate to your configuration directory.
- Install the skill via the OpenClaw hub:
  clawhub install openclaw/skills/skills/zjianru/ops-framework
- Map your OpenClaw state directory to OPENCLAW_HOME.
- Copy your configuration file to ~/.openclaw/net/config/ops-jobs.json.
- Validate the configuration to ensure syntax integrity:
  python3 ops-monitor.py validate-config --config-file ~/.openclaw/net/config/ops-jobs.json
Use Cases
- Continuous Monitoring: Keep tabs on long-running data syncs or inventory scans that run for hours. The system monitors process PID, progress keys, and status, alerting you via Telegram only when anomalies or 'ACTION REQUIRED' tags are triggered.
- Automated Health Checks: Schedule periodic linting or service health checks where the system remains silent during normal operation and only notifies you upon failure or warning-level output.
- Safe Write Execution: Execute sensitive system tasks by defining them as one_shot_write jobs. These remain locked by default and require explicit human-in-the-loop approval before execution, coupled with a mandatory verification step to confirm the integrity of the write operation.
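A declarative job file for the use cases above might look like the sketch below. The actual ops-jobs.json schema is not documented here, so every key name (type, status_command, verify_command, approved, stall_after_seconds) is an assumption for illustration:

```json
{
  "jobs": {
    "sync_inventory": {
      "type": "read",
      "command": "python3 sync_inventory.py --resume",
      "status_command": "python3 sync_inventory.py --status",
      "stall_after_seconds": 900,
      "notify": "telegram"
    },
    "rotate_credentials": {
      "type": "one_shot_write",
      "command": "python3 rotate_credentials.py",
      "verify_command": "python3 rotate_credentials.py --verify",
      "approved": false
    }
  }
}
```

Note how the write job carries an explicit approval flag and a verification command, while the read job only declares how to observe its progress.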
Example Prompts
- "Ops-framework, list the status of all current long-running read jobs and check if any have stalled."
- "Start the sync_inventory job and notify me on Telegram only if the job encounters an error or reaches a critical state."
- "Force-stop the current database_backup job and provide a summary of the progress before termination."
Tips & Limitations
- Efficiency: Leverage the 0-token design by setting up cron or systemd timers to call ops-monitor.py tick. This keeps your OpenClaw context window free for actual intelligence tasks.
- Contract Adherence: Ensure your status commands output strictly formatted JSON. If the output schema breaks, the monitor cannot track progress or detect stalls.
- Safety: Never modify the one_shot_write protection logic. Treat these jobs as read-only templates until a manual security review is performed.
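The contract-adherence tip can be enforced with a small self-check: run your status command and fail loudly if its stdout is not a single strict-JSON object. The required field names below (status, progress, pid) are assumptions; match them to whatever schema your ops-jobs.json actually declares:

```python
import json
import subprocess
import sys

REQUIRED_FIELDS = {"status", "progress", "pid"}  # assumed contract fields

def check_status_output(cmd: list[str]) -> dict:
    """Run a job's status command and raise if its stdout is not a
    strict-JSON object containing the expected keys."""
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    payload = json.loads(out)  # raises ValueError on malformed JSON
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"status output missing fields: {sorted(missing)}")
    return payload

# Example: a stub status command that prints well-formed JSON.
stub = "import json; print(json.dumps({'status': 'running', 'progress': '42/100', 'pid': 1234}))"
payload = check_status_output([sys.executable, "-c", stub])
```

Running a check like this from CI or a pre-commit hook catches schema drift before the monitor silently loses track of a job.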
Metadata
Paste this into your clawhub.json to enable this plugin.
{
"plugins": {
"official-zjianru-ops-framework": {
"enabled": true,
"auto_update": true
}
}
}
Tags: AI
Flags: file-read, file-write, external-api, code-execution
Related Skills
restart-guard
Safely restart the OpenClaw Gateway with context preservation, health monitoring, and failure notification. Use when the agent needs to restart the Gateway (config changes, model switches, plugin reloads, or any reason requiring a restart). Handles pre-restart context saving, guardian process spawning, gateway restart triggering, post-restart verification, and fallback notifications.
web-search-pro
Multi-engine web search with full parameter control. Supports Tavily, Exa, Serper, and SerpAPI with domain filtering, date ranges, deep search, news mode, and content extraction. Auto-selects the best engine based on query type and available API keys.