ClawKit Reliability Toolkit

Provider Setup Cheatsheet

A quick reference for configuring AI model providers with OpenClaw. Each provider entry includes a three-line config snippet and its known pitfalls.

Quick Reference

| Provider | API Key Source | Key Prefix | Gotcha |
| --- | --- | --- | --- |
| Qwen (Direct) | dashscope.aliyuncs.com | sk- | Use the OpenAI-compatible endpoint |
| Qwen (OpenRouter) | openrouter.ai | sk-or- | Add the model prefix |
| Kimi 2.5 (NVIDIA) | build.nvidia.com | nvapi- | Not via the Moonshot API |
| Google Gemini | aistudio.google.com | AIza... | Don't use gcloud CLI auth |
| Groq | console.groq.com | gsk_ | Env var can strip the 'g' prefix |
| LM Studio | Local (no key needed) | lm-studio | Server must be running |
| Ollama | Local (no key needed) | ollama | Use camelCase apiKey |

Qwen (Alibaba Cloud)

Direct API (DashScope)

```shell
# Qwen via DashScope
openclaw models auth add --provider qwen \
  --api-key sk-YOUR_DASHSCOPE_KEY \
  --base-url https://dashscope.aliyuncs.com/compatible-mode/v1
openclaw config set models.primary qwen/qwen-max
openclaw gateway restart
```

Via OpenRouter

```shell
# Qwen via OpenRouter
openclaw models auth add --provider openrouter \
  --api-key sk-or-YOUR_KEY \
  --base-url https://openrouter.ai/api/v1
openclaw config set models.primary openrouter/qwen/qwen-max
openclaw gateway restart
```

Gotcha: Qwen's DashScope uses an OpenAI-compatible endpoint, but model names differ from OpenRouter. Use qwen-max (direct) vs qwen/qwen-max (OpenRouter).
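If you want to confirm the direct route works before wiring it into OpenClaw, a guarded curl against DashScope's OpenAI-compatible endpoint is a quick smoke test. This is a sketch: `DASHSCOPE_API_KEY` is an assumed environment variable name, and the snippet skips cleanly when it is not set.

```shell
# Smoke test for DashScope's OpenAI-compatible route.
# DASHSCOPE_API_KEY is an assumed env var name; the script skips when unset.
if [ -n "${DASHSCOPE_API_KEY:-}" ]; then
  # Note the direct model ID is "qwen-max", not "qwen/qwen-max".
  resp=$(curl -s https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions \
    -H "Authorization: Bearer $DASHSCOPE_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"model":"qwen-max","messages":[{"role":"user","content":"ping"}]}')
else
  resp="skipped: DASHSCOPE_API_KEY not set"
fi
echo "$resp"
```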

Kimi 2.5 (via NVIDIA NIM)

```shell
# Kimi 2.5 via NVIDIA NIM
openclaw models auth add --provider nvidia \
  --api-key nvapi-YOUR_NVIDIA_KEY \
  --base-url https://integrate.api.nvidia.com/v1
openclaw config set models.primary nvidia/kimi-2.5
openclaw gateway restart
```

Gotcha: Kimi 2.5 is available through NVIDIA's NIM inference service, not through the Moonshot/Kimi direct API. Use the NVIDIA API key (nvapi- prefix) and the NVIDIA endpoint.
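Since the key prefix is the easiest way to catch this mix-up, a small local check can tell an NVIDIA key from a Moonshot-style one before you run the auth command. `NVIDIA_API_KEY` is an assumed variable name, and the placeholder default exists only so the snippet runs standalone.

```shell
# Distinguish an NVIDIA NIM key (nvapi-...) from a Moonshot/OpenAI-style key.
# NVIDIA_API_KEY is an assumed env var; the default is a placeholder.
key="${NVIDIA_API_KEY:-nvapi-example}"
case "$key" in
  nvapi-*) verdict="NVIDIA NIM key format: correct for this setup" ;;
  sk-*)    verdict="Moonshot/OpenAI-style key: wrong provider for this setup" ;;
  *)       verdict="unrecognized key format" ;;
esac
echo "$verdict"
```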

Google Gemini

```shell
# Google Gemini with a direct API key
# Get your key from https://aistudio.google.com/apikey
openclaw models auth add --provider google \
  --api-key YOUR_GOOGLE_AI_KEY
openclaw config set models.primary google/gemini-2.0-flash
openclaw gateway restart
```

Do NOT Use gcloud CLI Auth

Using gcloud auth application-default credentials hits the Cloud Code Assist API, which has different quotas and frequently returns 403 errors. Always use a direct API key from AI Studio.
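One local sanity check: `GOOGLE_APPLICATION_CREDENTIALS` is the standard env var for Application Default Credentials, and leaving it set can cause clients to pick up ADC instead of the key you configured (exact precedence depends on the client). This snippet only inspects the variable.

```shell
# Warn if Application Default Credentials are configured in the environment,
# since ADC may take precedence over a direct AI Studio key in some clients.
if [ -n "${GOOGLE_APPLICATION_CREDENTIALS:-}" ]; then
  adc_status="warning: ADC file set at $GOOGLE_APPLICATION_CREDENTIALS"
else
  adc_status="ok: no ADC environment override"
fi
echo "$adc_status"
```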

Groq

```shell
# Groq setup
# IMPORTANT: Set the key via the CLI, not an environment variable
openclaw models auth add --provider groq \
  --api-key gsk_YOUR_GROQ_KEY
openclaw config set models.primary groq/llama-3.3-70b-versatile
openclaw gateway restart
```

Known Bug: gsk_ Prefix Stripped

Setting the Groq API key via the GROQ_API_KEY environment variable can strip the leading 'g', turning gsk_... into sk_..., which causes 401 errors. Always set the key via the CLI, not the env var.
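If you suspect you've hit this bug, checking the prefix of whatever key value actually reached your configuration takes one line. This is a generic pattern check, not an OpenClaw command; swap in however you retrieve the stored key (the default here is a placeholder so the snippet runs standalone).

```shell
# Check whether the stored Groq key still carries its full gsk_ prefix.
# GROQ_API_KEY here is just the value being inspected; the default is a placeholder.
key="${GROQ_API_KEY:-gsk_example_key}"
case "$key" in
  gsk_*) status="prefix intact" ;;
  sk_*)  status="leading 'g' was stripped; re-add the key via the CLI" ;;
  *)     status="unexpected key format" ;;
esac
echo "$status"
```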

LM Studio (Local)

```shell
# LM Studio local setup
# 1. Start LM Studio and load a model
# 2. Start the local server (default port 1234)
# 3. Configure OpenClaw:
openclaw models auth add --provider lmstudio \
  --base-url http://127.0.0.1:1234/v1 \
  --api-key lm-studio
openclaw config set models.primary lmstudio/loaded-model-name
openclaw gateway restart
```

Gotcha: LM Studio's server must be running and a model must be loaded into memory before OpenClaw can use it. If you see "verification failed," check that the LM Studio app shows "Server Running" in the status bar.
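A quick reachability probe can confirm the server is up before blaming OpenClaw's config. This assumes the default port 1234; the `/v1/models` path is the standard OpenAI-compatible listing route that LM Studio's server exposes.

```shell
# Probe the local LM Studio server (assumes the default port 1234).
if curl -sf --max-time 2 http://127.0.0.1:1234/v1/models >/dev/null 2>&1; then
  lmstudio_state="server reachable"
else
  lmstudio_state="server not reachable: start it from the LM Studio app"
fi
echo "$lmstudio_state"
```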

Ollama (Local)

```shell
# Ollama setup
# 1. Install and start Ollama: ollama serve
# 2. Pull a model: ollama pull llama3.2
# 3. Configure OpenClaw:
openclaw models auth add --provider ollama \
  --base-url http://127.0.0.1:11434/v1 \
  --api-key ollama
openclaw config set models.primary ollama/llama3.2
openclaw gateway restart
```

camelCase Required

If you edit auth-profiles.json manually, use apiKey (camelCase), not api_key (snake_case). The CLI command above handles this correctly.
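To illustrate the casing rule, here is a hypothetical fragment in the shape auth-profiles.json might take (the exact schema is OpenClaw's; the point is only that the key must be `apiKey`, camelCase), plus a one-line check you can run against your own file.

```shell
# Hypothetical auth-profiles.json fragment; the field names besides apiKey
# are assumptions. The casing check is the part that matters.
profile='{ "ollama": { "baseUrl": "http://127.0.0.1:11434/v1", "apiKey": "ollama" } }'
if printf '%s' "$profile" | grep -q '"apiKey"'; then
  echo "camelCase ok"
else
  echo "check key casing: apiKey expected, not api_key"
fi
```

Against a real file you would replace the inline `profile` string with `grep -q '"apiKey"' auth-profiles.json`.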

Verify Any Provider

```shell
# Test that the provider works

# Check provider status
openclaw status --all

# Quick test: send a message
openclaw run "Say hello in one sentence"
```

