Clawhub Skill 2
Skill by pathemata-mathemata
Why use this skill?
Optimize LLM costs with Xrouter, an open-source inference router for OpenClaw. Route queries to cheap or frontier models using smart classification.
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/pathemata-mathemata/clawhub-skill-2
What This Skill Does
Clawhub Skill 2 provides an integration for Xrouter, an open-source, hardware-aware inference router designed to optimize LLM requests. Sitting between OpenClaw and various LLM providers, it intelligently routes each query to the most cost-effective model based on task complexity, using a 3-tier classification system:
- Tier 0: simple tasks (cheap)
- Tier 1: intermediate requirements (medium)
- Tier 2: advanced reasoning (frontier)
This ensures you aren't overspending on compute for trivial queries while maintaining performance for complex ones. The skill also includes a dashboard for token tracking, supports both Redis and in-memory LRU caching for performance, and provides a seamless OpenAI-compatible proxy interface.
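Because the proxy speaks the OpenAI chat-completions format, any OpenAI-compatible client can point at it. The sketch below builds such a request in TypeScript; the base URL, port, `model: "auto"` convention, and `x-xrouter-tier` hint header are illustrative assumptions, not documented Xrouter API.

```typescript
// Build an OpenAI-style chat-completions request aimed at the local Xrouter proxy.
// Assumptions (not from the Xrouter docs): the proxy listens on localhost:3000,
// accepts model "auto" to let the classifier pick a tier, and honors an
// optional "x-xrouter-tier" hint header.
interface ChatRequest {
  url: string;
  headers: Record<string, string>;
  body: { model: string; messages: { role: string; content: string }[] };
}

function buildChatRequest(prompt: string, tierHint?: number): ChatRequest {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
    Authorization: "Bearer local-key", // proxy-level key, if you configured one
  };
  if (tierHint !== undefined) {
    headers["x-xrouter-tier"] = String(tierHint); // hypothetical routing hint
  }
  return {
    url: "http://localhost:3000/v1/chat/completions",
    headers,
    body: { model: "auto", messages: [{ role: "user", content: prompt }] },
  };
}

// Usage with the fetch built into Node.js 20+:
// const req = buildChatRequest("Summarize this changelog in bullet points");
// const res = await fetch(req.url, {
//   method: "POST",
//   headers: req.headers,
//   body: JSON.stringify(req.body),
// });
```

Keeping the request shape OpenAI-compatible means existing SDKs can be repointed at the router by changing only the base URL.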
Installation
To install this skill, use the OpenClaw CLI command: clawhub install openclaw/skills/skills/pathemata-mathemata/clawhub-skill-2. You will also need Node.js 20+ on your system. Navigate to the installation directory, run npm install to resolve dependencies, then execute npm run configure. This interactive wizard walks you through setting up your local and cloud provider endpoints and lets you select a preferred model and API key for each tier. Finally, run npm run dev to start the proxy on your local machine.
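The steps above, collected as a shell session (the installation directory is a placeholder; substitute wherever the CLI placed the skill):

```shell
# Install the skill via the OpenClaw CLI
clawhub install openclaw/skills/skills/pathemata-mathemata/clawhub-skill-2

# From the skill's installation directory:
node --version       # should report v20 or later
npm install          # resolve dependencies
npm run configure    # interactive wizard: provider endpoints, models, API keys per tier
npm run dev          # start the proxy locally
```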
Use Cases
This skill is ideal for developers and architects of AI-powered applications who want to balance cost and latency. It works well for production pipelines where simple classification or summarization tasks run locally (using models like Llama 3.1 via Ollama) while complex creative or coding tasks are automatically routed to premium frontier models like GPT-4 or Claude 3.5. It is also useful for monitoring usage patterns via the built-in dashboard, which allows granular analysis of token consumption and routing efficiency over time.
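The local-vs-frontier split described above can be sketched as a toy tier classifier. The real router uses a model-based classifier, so the keyword heuristics below are purely illustrative of the decision it makes, not its actual logic.

```typescript
// Toy heuristic mirroring Xrouter's 3-tier idea:
// 0 = cheap/local, 1 = medium, 2 = frontier.
// The actual classifier is model-based; these keyword rules are illustrative only.
type Tier = 0 | 1 | 2;

function classifyTier(prompt: string): Tier {
  const p = prompt.toLowerCase();
  // Advanced reasoning, refactoring, or debugging escalates to a frontier model.
  if (/\b(refactor|prove|architect|debug)\b/.test(p)) return 2;
  // Moderate tasks (explanations, drafting, comparison) get a mid-tier model.
  if (/\b(explain|draft|compare|translate)\b/.test(p)) return 1;
  // Everything else (summaries, classification, bullet points) stays cheap/local.
  return 0;
}
```

A useful property of this shape is that misclassification fails soft: a Tier 0 model that struggles can be retried one tier up, while an unnecessary Tier 2 call only costs money, not correctness.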
Example Prompts
- "Route my request through Xrouter to summarize the provided technical documentation, using the most cost-efficient model available that can handle simple bullet points."
- "Configure my OpenClaw environment to automatically switch to my frontier provider whenever I ask for complex code refactoring, using the Tier 2 routing logic."
- "Check the Xrouter usage dashboard and generate a summary report of my token expenditure for the last 24 hours."
Tips & Limitations
For the best results, ensure your local model server (such as Ollama) is running before you start the router to avoid failed attempts at local classification. While the classifier is highly efficient, it relies on a consistent connection to your chosen upstream providers. If you have limited hardware, you can disable the local classifier and force the router into 'Full Cloud Mode' via the configuration wizard. Note that the accuracy of routing decisions depends on the quality of the classifier model selected; consider testing different configurations if you notice unnecessary escalation to frontier models for simple tasks.
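If you disable the local classifier, the wizard persists a 'Full Cloud Mode' setting along these lines. The file name, keys, and model identifiers below are assumptions for illustration only; prefer re-running npm run configure over hand-editing.

```json
{
  "mode": "full-cloud",
  "classifier": { "enabled": false },
  "tiers": {
    "0": { "provider": "openai", "model": "gpt-4o-mini" },
    "2": { "provider": "anthropic", "model": "claude-3-5-sonnet" }
  }
}
```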
Metadata
Paste this into your clawhub.json to enable this plugin.
{
"plugins": {
"official-pathemata-mathemata-clawhub-skill-2": {
"enabled": true,
"auto_update": true
}
}
}
Tags: AI
Flags: network-access, external-api