Ollama Hub
Manage, benchmark, and switch between local Ollama models with performance comparison.
Why use this skill?
Efficiently manage, test, and benchmark local Ollama LLMs with the OpenClaw Ollama Hub skill. Optimize your AI performance today.
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/sa9saq/ollama-hub

What This Skill Does
The Ollama Hub skill for OpenClaw is an administrative and benchmarking interface for managing local large language models. It provides a structured set of commands for interacting with the Ollama service, covering model deployment, removal, and performance testing. With it, users can maintain an optimized local AI environment and choose the right model for a given task based on empirical data such as tokens per second (t/s) and resource utilization.
Installation
To integrate this skill into your OpenClaw environment, run the following command in your terminal: `clawhub install openclaw/skills/skills/sa9saq/ollama-hub`. Ensure that the Ollama binary is installed on your host system and that the service is running (typically on port 11434). Before using the skill, verify service health with `systemctl status ollama` or by querying the API endpoint directly.
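Querying the API endpoint directly can be sketched as below. This is a minimal health check, assuming the standard Ollama `/api/version` endpoint on the default port; the function name `ollama_healthy` is illustrative, not part of the skill.

```python
import json
from urllib.request import urlopen
from urllib.error import URLError


def ollama_healthy(base_url: str = "http://localhost:11434",
                   timeout: float = 2.0) -> bool:
    """Return True if the Ollama API answers on its version endpoint."""
    try:
        with urlopen(f"{base_url}/api/version", timeout=timeout) as resp:
            # A healthy service returns 200 with a JSON body like {"version": "..."}
            return resp.status == 200 and "version" in json.load(resp)
    except (URLError, OSError, ValueError):
        # Connection refused, timeout, or malformed response: treat as unhealthy
        return False
```

A `False` result before invoking the skill usually means the service is not running or a firewall is blocking port 11434.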
Use Cases
- Model Lifecycle Management: Quickly pull new models from the Ollama library or prune unused models to recover disk space.
- Performance Benchmarking: Use the built-in timing functions to determine which model architectures (e.g., Llama 3, Mistral, Phi-3) offer the best speed-to-quality ratio on your specific hardware configuration.
- System Diagnostics: Troubleshoot model loading issues, verify GPU offloading status with `nvidia-smi`, and check for port conflicts on the default Ollama API port.
- Comparative Analysis: Generate structured tables comparing response times and output tokens for identical prompts across multiple model sizes.
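The benchmarking idea above can be sketched against Ollama's REST API: a non-streaming `/api/generate` response includes `eval_count` (tokens generated) and `eval_duration` (nanoseconds), from which t/s follows directly. The `benchmark` helper below is an illustrative sketch, not the skill's actual implementation.

```python
import json
from urllib.request import Request, urlopen


def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Convert Ollama's eval_count / eval_duration (nanoseconds) into t/s."""
    return eval_count / (eval_duration_ns / 1e9)


def benchmark(model: str, prompt: str,
              base_url: str = "http://localhost:11434") -> float:
    """Run one non-streaming generation and return its decode speed in t/s."""
    body = json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode()
    req = Request(f"{base_url}/api/generate", data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        data = json.load(resp)
    return tokens_per_second(data["eval_count"], data["eval_duration"])


# Usage (requires a running Ollama service with the models already pulled):
# for m in ("llama3:8b", "mistral:7b"):
#     print(m, round(benchmark(m, "Explain the importance of clean code"), 1), "t/s")
```

Running the same prompt across several models this way yields the comparative tables described above.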
Example Prompts
- "List all my currently installed Ollama models and tell me which one is taking up the most disk space."
- "Can you benchmark llama3.3:8b and mistral:7b using the prompt 'Explain the importance of clean code' and show me the tokens per second for each?"
- "I need to free up space; remove the llama3:70b model and then pull the latest version of phi3 to replace it."
Tips & Limitations
- Hardware Awareness: Always monitor your system's RAM and VRAM usage. Large models (70B+) require significant memory; running them on insufficient hardware will result in extremely slow performance or process crashes.
- Disk Space: Pulling large models can consume tens of gigabytes; ensure you have sufficient headroom on your partition.
- Networking: While Ollama runs locally, ensure your firewall allows traffic on port 11434 if you are connecting from a remote interface.
- Model Naming: Always use the `name:tag` format to avoid version ambiguity. If a model fails to run, verify the installation with `ollama list` and attempt a clean re-pull to resolve corruption issues.
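The `ollama list` verification step can be scripted. The sketch below assumes the listing's first whitespace-delimited column holds the `name:tag` reference under a header row; actual columns may differ between Ollama versions, and the sample output is illustrative only.

```python
def installed_models(listing: str) -> list[str]:
    """Parse `ollama list` output into name:tag strings.

    Assumes the first whitespace-delimited column of each row is the model
    name and that the first line is the column header.
    """
    lines = listing.strip().splitlines()[1:]  # skip the NAME/ID/SIZE header
    return [line.split()[0] for line in lines if line.strip()]


def has_explicit_tag(name: str) -> bool:
    """True when a model reference pins a tag, e.g. 'phi3:latest'."""
    return ":" in name


# Illustrative output shape (not captured from a real system):
sample = """\
NAME            ID            SIZE    MODIFIED
phi3:latest     4f2222927938  2.2 GB  3 days ago
mistral:7b      61e88e884507  4.1 GB  2 weeks ago
"""
print(installed_models(sample))  # ['phi3:latest', 'mistral:7b']
```

Checking `has_explicit_tag` over every installed model is a quick way to spot ambiguous references before they cause version confusion.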
Metadata
Paste this into your clawhub.json to enable this plugin.
{
"plugins": {
"official-sa9saq-ollama-hub": {
"enabled": true,
"auto_update": true
}
}
}

Tags: AI
Flags: file-read, file-write, code-execution
Related Skills
threat-model
Threat modeling and attack scenario design. Identify risks before they become vulnerabilities. STRIDE, attack trees, risk matrix.
Sns Auto Poster
Schedule and automate social media posts to X/Twitter with cron-based queue management.
security-review
Comprehensive security review for code, configs, and operations. OWASP, prompt injection, crypto security. Auto-triggers on security-related changes.
Process Monitor
Monitor system processes, identify top CPU/memory consumers, and alert on resource thresholds.
Readme Generator
Auto-generate comprehensive README.md files by analyzing project structure and configuration.