clawzempic
Save 70-95% on LLM costs with smart routing, caching, and memory.
Install via CLI (Recommended)

```shell
clawhub install openclaw/skills/skills/naveenspark/clawzempic
```

Drop-in LLM proxy that routes simple requests to cheaper models, caches repeated context, and adds cross-session memory. Works with Anthropic and OpenRouter keys.
Install

```shell
openclaw plugins install clawzempic
```

The plugin handles signup, auth, and model registration automatically.

Or standalone:

```shell
npx clawzempic
```
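Since the standalone server is a drop-in proxy, existing clients use it by pointing their base URL at it instead of the upstream API. A minimal sketch, assuming the proxy mirrors Anthropic's `/v1/messages` endpoint on a local port; both the port and the path are assumptions here, not documented values — check the proxy's startup output:

```typescript
// Hypothetical request builder for a local clawzempic proxy.
// The baseUrl (port) and the Anthropic-style /v1/messages path are assumptions.
function proxyRequest(baseUrl: string, apiKey: string, prompt: string) {
  return {
    url: `${baseUrl}/v1/messages`,
    headers: {
      "x-api-key": apiKey, // your existing Anthropic or OpenRouter key
      "content-type": "application/json",
    },
    // The proxy may transparently downgrade the model per its routing tier.
    body: JSON.stringify({
      model: "claude-sonnet-latest", // placeholder model name
      max_tokens: 1024,
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

// Usage: const req = proxyRequest("http://localhost:8080", key, "hello");
//        fetch(req.url, { method: "POST", headers: req.headers, body: req.body });
```

The point is that no client-side code changes beyond the base URL: the request shape is whatever your provider already expects.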
How It Works
Every request is scored for complexity in <2ms and routed to the right tier:
| Tier | Traffic | Savings |
|---|---|---|
| Simple | ~45% | up to 95% |
| Mid | ~25% | up to 80% |
| Complex | ~20% | 0% (full quality) |
| Reasoning | ~10% | 0% (full quality) |
No LLM classifier sits in the hot path; a weighted multi-dimension scorer handles routing.
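To make the idea concrete, here is a sketch of what a weighted multi-dimension scorer can look like. The dimensions, weights, and tier thresholds below are illustrative assumptions, not clawzempic's actual values — the real scorer is not documented here:

```typescript
// Hypothetical complexity scorer: cheap lexical signals, weighted and summed,
// then bucketed into the four tiers from the table above. All numbers are
// illustrative assumptions.
type Tier = "simple" | "mid" | "complex" | "reasoning";

function scoreComplexity(prompt: string): Tier {
  const words = prompt.trim().split(/\s+/).length;
  const dims = {
    length: Math.min(words / 500, 1),                          // longer prompts score higher
    code: /```|\bfunction\b|\bclass\b/.test(prompt) ? 1 : 0,   // code suggests mid+ work
    reasoning: /\b(prove|derive|step[- ]by[- ]step|why)\b/i.test(prompt) ? 1 : 0,
    multiPart: Math.min((prompt.match(/\?/g) ?? []).length / 3, 1), // many questions
  };
  const weights = { length: 0.3, code: 0.25, reasoning: 0.3, multiPart: 0.15 };
  const score =
    dims.length * weights.length +
    dims.code * weights.code +
    dims.reasoning * weights.reasoning +
    dims.multiPart * weights.multiPart;
  if (score < 0.15) return "simple";
  if (score < 0.35) return "mid";
  if (score < 0.6) return "complex";
  return "reasoning";
}
```

Because every dimension is a regex or word count, scoring stays well under the stated 2 ms budget — no model call is needed to decide where a request goes.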
Memory
Server-side memory across sessions. No plugins, no extra API keys, no config. Your agent doesn't need to "remember" to remember.
- Recent activity (per-session)
- Scratchpad (cross-session working notes)
- Context windowing (per-request)
- Core memory (permanent facts and preferences)
- Long-term recall (embedding-based)
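One way to picture how these tiers combine at request time: permanent core memory is always included, and the remaining tiers are windowed to fit a per-request budget. This is an illustrative sketch of that layering — the store shape, tier priority, and budgeting are assumptions, not clawzempic's internals:

```typescript
// Hypothetical memory layering: core facts always survive, everything else is
// windowed to the newest entries that fit the per-request context budget.
interface MemoryStore {
  coreFacts: string[]; // permanent facts and preferences
  scratchpad: string[]; // cross-session working notes
  recent: string[]; // per-session activity
}

function buildContext(store: MemoryStore, budget: number): string[] {
  const ctx = [...store.coreFacts]; // core memory is never evicted
  const rest = [...store.scratchpad, ...store.recent];
  const room = Math.max(budget - ctx.length, 0);
  return ctx.concat(rest.slice(-room)); // keep only the newest entries that fit
}
```

The useful property is that the agent does nothing: the proxy assembles this context server-side on every request.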
Verify

```shell
npx clawzempic doctor   # Check config + test connection
npx clawzempic savings  # Savings dashboard
```
Links
- Website: https://clawzempic.ai
- Dashboard: https://www.clawzempic.ai/dash
- npm: https://www.npmjs.com/package/clawzempic
Metadata
Paste this into your clawhub.json to enable this plugin.
```json
{
  "plugins": {
    "official-naveenspark-clawzempic": {
      "enabled": true,
      "auto_update": true
    }
  }
}
```