# Provider Setup Cheatsheet

Quick reference for configuring AI model providers with OpenClaw. Each provider includes a three-command config snippet and known pitfalls.
## Quick Reference
| Provider | API Key Source | Key Prefix | Gotcha |
|---|---|---|---|
| Qwen (Direct) | dashscope.aliyuncs.com | sk- | Use OpenAI-compat endpoint |
| Qwen (OpenRouter) | openrouter.ai | sk-or- | Add model prefix |
| Kimi 2.5 (NVIDIA) | build.nvidia.com | nvapi- | Not via Moonshot API |
| Google Gemini | aistudio.google.com | AI... | Don't use gcloud CLI auth |
| Groq | console.groq.com | gsk_ | Env var strips 'g' prefix |
| LM Studio | Local (no key needed) | lm-studio | Server must be running |
| Ollama | Local (no key needed) | ollama | Use camelCase apiKey |
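Several 401 errors come down to a key that doesn't match its provider's prefix. The sketch below checks a key against the prefixes from the table; the function name and dict are illustrative, not part of OpenClaw.

```python
# Expected key prefixes, taken from the Quick Reference table above.
# Local providers (LM Studio, Ollama) accept any placeholder key, so
# they are simply omitted from the map.
KEY_PREFIXES = {
    "qwen": "sk-",
    "openrouter": "sk-or-",
    "nvidia": "nvapi-",
    "groq": "gsk_",
}

def check_key_prefix(provider: str, api_key: str) -> bool:
    """Return True if the key starts with the provider's expected prefix."""
    prefix = KEY_PREFIXES.get(provider)
    if prefix is None:
        return True  # no known prefix rule (e.g. local providers)
    return api_key.startswith(prefix)

print(check_key_prefix("groq", "gsk_abc123"))  # True
print(check_key_prefix("groq", "sk_abc123"))   # False — stripped 'g', see Groq section
```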
## Qwen (Alibaba Cloud)
### Direct API (DashScope)
```shell
openclaw models auth add --provider qwen \
  --api-key sk-YOUR_DASHSCOPE_KEY \
  --base-url https://dashscope.aliyuncs.com/compatible-mode/v1
openclaw config set models.primary qwen/qwen-max
openclaw gateway restart
```
### Via OpenRouter
```shell
openclaw models auth add --provider openrouter \
  --api-key sk-or-YOUR_KEY \
  --base-url https://openrouter.ai/api/v1
openclaw config set models.primary openrouter/qwen/qwen-max
openclaw gateway restart
```
**Gotcha:** Qwen's DashScope uses an OpenAI-compatible endpoint, but model names differ from OpenRouter's: `qwen-max` (direct) vs `qwen/qwen-max` (OpenRouter).
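A small helper makes the naming rule above concrete: the OpenRouter form just adds a vendor prefix to the bare DashScope name. Both functions are illustrative sketches, not an OpenClaw API.

```python
def to_openrouter(model: str, vendor: str = "qwen") -> str:
    """Bare DashScope name -> OpenRouter form: qwen-max -> qwen/qwen-max."""
    return model if "/" in model else f"{vendor}/{model}"

def to_direct(model: str) -> str:
    """OpenRouter form -> bare DashScope name: qwen/qwen-max -> qwen-max."""
    return model.split("/", 1)[-1]

print(to_openrouter("qwen-max"))   # qwen/qwen-max
print(to_direct("qwen/qwen-max"))  # qwen-max
```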
## Kimi 2.5 (via NVIDIA NIM)
```shell
openclaw models auth add --provider nvidia \
  --api-key nvapi-YOUR_NVIDIA_KEY \
  --base-url https://integrate.api.nvidia.com/v1
openclaw config set models.primary nvidia/kimi-2.5
openclaw gateway restart
```
**Gotcha:** Kimi 2.5 is available through NVIDIA's NIM inference service, not through the Moonshot/Kimi direct API. Use the NVIDIA API key (`nvapi-` prefix) and the NVIDIA endpoint.
## Google Gemini
```shell
# Get your key from https://aistudio.google.com/apikey
openclaw models auth add --provider google \
  --api-key YOUR_GOOGLE_AI_KEY
openclaw config set models.primary google/gemini-2.0-flash
openclaw gateway restart
```
### Do NOT Use gcloud CLI Auth
Using `gcloud auth application-default` credentials routes requests to the Cloud Code Assist API, which has different quotas and frequently returns 403 errors. Always use a direct API key from AI Studio.
## Groq
```shell
# IMPORTANT: Set via CLI, not environment variable
openclaw models auth add --provider groq \
  --api-key gsk_YOUR_GROQ_KEY
openclaw config set models.primary groq/llama-3.3-70b-versatile
openclaw gateway restart
```
### Known Bug: `gsk_` Prefix Stripped
Setting the Groq API key via the `GROQ_API_KEY` environment variable can strip the leading `g`, turning `gsk_...` into `sk_...`. This causes 401 errors. Always set the key via the CLI, not an environment variable.
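If you suspect this bug, you can catch the truncation before a request fails with a 401. This is an illustrative sketch; the helper is not part of OpenClaw.

```python
def sanity_check_groq_key(key: str) -> str:
    """Return the key unchanged, or raise if it looks truncated."""
    if key.startswith("gsk_"):
        return key
    if key.startswith("sk_"):
        # The leading 'g' was most likely stripped by the env-var bug.
        raise ValueError(
            "Groq key is missing its leading 'g'; set it via the CLI "
            "instead of the GROQ_API_KEY environment variable"
        )
    raise ValueError("does not look like a Groq key")

print(sanity_check_groq_key("gsk_example"))  # gsk_example
```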
## LM Studio (Local)
```shell
# 1. Start LM Studio and load a model
# 2. Start the local server (default port 1234)
# 3. Configure OpenClaw:
openclaw models auth add --provider lmstudio \
  --base-url http://127.0.0.1:1234/v1 \
  --api-key lm-studio
openclaw config set models.primary lmstudio/loaded-model-name
openclaw gateway restart
```
**Gotcha:** LM Studio's server must be running and a model must be loaded into memory before OpenClaw can use it. If you see "verification failed," check that the LM Studio app shows "Server Running" in the status bar.
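When the server is up, LM Studio's OpenAI-compatible `GET /v1/models` endpoint lists what is loaded. The helper below extracts model ids from a response in the standard OpenAI list shape; the sample payload and model id are illustrative.

```python
def loaded_model_ids(models_response: dict) -> list[str]:
    """Pull model ids out of an OpenAI-style /v1/models response."""
    return [m["id"] for m in models_response.get("data", [])]

# Example payload shape from GET http://127.0.0.1:1234/v1/models
# (model id is a placeholder; yours depends on what you loaded):
sample = {"object": "list", "data": [{"id": "llama-3.2-3b-instruct", "object": "model"}]}

print(loaded_model_ids(sample))  # ['llama-3.2-3b-instruct']
```

An empty list means the server is running but no model is loaded, which matches the "verification failed" symptom above.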
## Ollama (Local)
```shell
# 1. Install and start Ollama: ollama serve
# 2. Pull a model: ollama pull llama3.2
# 3. Configure OpenClaw:
openclaw models auth add --provider ollama \
  --base-url http://127.0.0.1:11434/v1 \
  --api-key ollama
openclaw config set models.primary ollama/llama3.2
openclaw gateway restart
```
### camelCase Required
If you edit `auth-profiles.json` manually, use `apiKey` (camelCase), not `api_key` (snake_case). The CLI command above handles this correctly.
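If you have already hand-edited the file, a one-off fix is straightforward: rename the snake_case key. The profile layout shown here is an assumption for illustration; only the `api_key` → `apiKey` rule comes from the text above.

```python
def normalize_profile(profile: dict) -> dict:
    """Rename a snake_case api_key field to the expected camelCase apiKey."""
    fixed = dict(profile)
    if "api_key" in fixed and "apiKey" not in fixed:
        fixed["apiKey"] = fixed.pop("api_key")
    return fixed

print(normalize_profile({"provider": "ollama", "api_key": "ollama"}))
# {'provider': 'ollama', 'apiKey': 'ollama'}
```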
## Verify Any Provider
```shell
# Check provider status
openclaw status --all

# Quick test: send a message
openclaw run "Say hello in one sentence"
```
## Related Resources

Compare providers and optimize costs.