ClawKit Reliability Toolkit
Official Verified

lmstudio-subagents

Reduces token usage from paid providers by offloading work to local LM Studio models. Use when: (1) Cutting costs—use local models for summarization, extraction, classification, rewriting, first-pass review, brainstorming when quality suffices, (2) Avoiding paid API calls for high-volume or repetitive tasks, (3) No extra model configuration—JIT loading and REST API work with existing LM Studio setup, (4) Local-only or privacy-sensitive work. Requires LM Studio 0.4+ with server (default :1234). No CLI required.


Install via CLI (Recommended)

clawhub install openclaw/skills/skills/t-sinclair2500/lm-studio-subagents

LM Studio Models

Offload tasks to local models when quality suffices. Base URL: http://127.0.0.1:1234. Auth header: Authorization: Bearer lmstudio. instance_id = loaded_instances[].id (the same model can have multiple loaded instances, e.g. key and key:2).

Key Terms

  • model: The key field from GET /api/v1/models; use it in chat requests and the optional load call.
  • lm_studio_api_url: Default http://127.0.0.1:1234 (paths /api/v1/...).
  • response_id / previous_response_id: Chat returns response_id; pass it back as previous_response_id for stateful multi-turn conversations.
  • instance_id: For unload, use only the loaded_instances[].id values returned by GET /api/v1/models for that model. Do not assume it equals the model key; with multiple instances, ids can look like key:2. LM Studio docs: List (loaded_instances[].id), Unload (instance_id).
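The instance_id rule above can be sketched as a small helper (a sketch assuming the models[] / loaded_instances shape described in this document; the sample data is illustrative only):

```javascript
// Collect valid unload targets for a model key from a GET /api/v1/models
// response body. Only loaded_instances[].id values are valid instance_ids;
// the model key itself is not (ids can look like "key" or "key:2").
function unloadIdsFor(modelsResponse, modelKey) {
  const entry = (modelsResponse.models ?? []).find((m) => m.key === modelKey);
  if (!entry) return [];
  return (entry.loaded_instances ?? []).map((inst) => inst.id);
}

// Hypothetical sample response with two loaded instances of the same model:
const sample = {
  models: [
    {
      key: "qwen2.5-7b",
      loaded_instances: [{ id: "qwen2.5-7b" }, { id: "qwen2.5-7b:2" }],
    },
    { key: "llama-3.2-1b", loaded_instances: [] },
  ],
};
```

Note that the second instance's id (`qwen2.5-7b:2`) would be silently wrong to guess from the key alone, which is why the doc insists on reading it from the list response.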

Triggering is defined in the frontmatter; the sections below cover implementation.

Prerequisites

LM Studio 0.4+ with the server on :1234 and models on disk; load/unload via API (JIT loading optional); Node for the script (curl also works).

Quick start

Minimal path: list models, then one chat. Replace <model> with a key from GET /api/v1/models and <task> with the task text.

curl -s -H 'Authorization: Bearer lmstudio' http://127.0.0.1:1234/api/v1/models
node scripts/lmstudio-api.mjs <model> '<task>' --temperature=0.5 --max-output-tokens=200

Stateful multi-turn: pass --previous-response-id=<id> from the prior script output. Or use --stateful to persist response_id automatically. Optional --log <path> for request/response.

node scripts/lmstudio-api.mjs <model> 'First turn...'
node scripts/lmstudio-api.mjs <model> 'Second turn...' --previous-response-id=$ID1
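Under the hood, threading turns just means passing the last response_id forward. A minimal sketch of the request-body chaining (the previous_response_id field and returned response_id are from this document; the input field name and everything else here are assumptions for illustration):

```javascript
// Build the body for one chat turn. `previousResponseId` is the
// response_id returned by the prior turn; omit it on the first turn.
// The `input` field name is an assumption, not a documented contract.
function chatBody(model, input, previousResponseId) {
  const body = { model, input };
  if (previousResponseId) body.previous_response_id = previousResponseId;
  return body;
}

// First turn carries no previous id; the second chains the returned id.
const turn1 = chatBody("qwen2.5-7b", "First turn...");
const turn2 = chatBody("qwen2.5-7b", "Second turn...", "resp_abc123");
```

The design point is that state lives server-side: the client never resends the transcript, only the opaque id from the previous response.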

Complete Workflow

Step 0: Preflight

GET <base>/api/v1/models; a non-200 status or a connection error means the server is not ready.

exec command:"curl -s -o /dev/null -w '%{http_code}' -H 'Authorization: Bearer lmstudio' http://127.0.0.1:1234/api/v1/models"
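The same preflight in Node, as a sketch: only HTTP 200 counts as ready, and a thrown connection error is treated the same as a bad status (the live fetch call is commented out so the logic stands alone):

```javascript
// Interpret a preflight result: only HTTP 200 means the server is ready.
function serverReady(status) {
  return status === 200;
}

// Live usage against a running LM Studio server (uncomment to run):
// const res = await fetch("http://127.0.0.1:1234/api/v1/models", {
//   headers: { Authorization: "Bearer lmstudio" },
// }).catch(() => null); // connection refused -> null -> not ready
// console.log(res ? serverReady(res.status) : false);
```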

Step 1: List Models and Select

GET /api/v1/models to list models. Parse each entry: key, type, loaded_instances, max_context_length, capabilities, params_string. If a model already has loaded_instances.length > 0 and fits the task, skip to Step 5; otherwise pick a key for chat (and optional load in Step 3). Choose by task: vision -> capabilities.vision; embedding -> type=embedding; long context -> max_context_length. Prefer already-loaded models; prefer smaller models for speed, larger for reasoning. Note loaded_instances[].id for optional unload later.

Example — list models:

exec command:"curl -s -H 'Authorization: Bearer lmstudio' http://127.0.0.1:1234/api/v1/models"

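The Step 1 selection rules can be sketched as a filter-then-rank pass (assuming the fields listed above; the ranking here prefers already-loaded models, then larger context, which is one reasonable ordering — for latency-sensitive work you would instead prefer smaller models, as the step notes):

```javascript
// Pick a model key per Step 1: filter by task need, prefer already-loaded,
// then prefer a larger max_context_length. Returns null if nothing fits.
function pickModel(models, { needVision = false, needEmbedding = false } = {}) {
  const candidates = models.filter((m) => {
    if (needEmbedding) return m.type === "embedding";
    if (needVision) return m.capabilities?.vision === true;
    return m.type !== "embedding";
  });
  candidates.sort((a, b) => {
    const loadedA = (a.loaded_instances?.length ?? 0) > 0 ? 1 : 0;
    const loadedB = (b.loaded_instances?.length ?? 0) > 0 ? 1 : 0;
    if (loadedA !== loadedB) return loadedB - loadedA; // loaded first
    return (b.max_context_length ?? 0) - (a.max_context_length ?? 0);
  });
  return candidates[0]?.key ?? null;
}

// Hypothetical models[] entries for illustration:
const models = [
  { key: "small", type: "llm", loaded_instances: [], max_context_length: 8192, capabilities: {} },
  { key: "loaded-mid", type: "llm", loaded_instances: [{ id: "loaded-mid" }], max_context_length: 4096, capabilities: {} },
  { key: "embed", type: "embedding", loaded_instances: [], max_context_length: 512, capabilities: {} },
];
```

With this sample, the loaded model wins the default pick despite its smaller context window, matching the "prefer already-loaded" rule.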

Step 2: Model Selection

Metadata

Stars: 982
Views: 0
Updated: 2026-02-14
Add to Configuration

Paste this into your clawhub.json to enable this plugin.

{
  "plugins": {
    "official-t-sinclair2500-lm-studio-subagents": {
      "enabled": true,
      "auto_update": true
    }
  }
}
Safety Note: ClawKit audits metadata but not runtime behavior. Use with caution.