Fix OpenRouter Rate Limit & "model not allowed" Errors
OpenClaw with OpenRouter can fail with two related errors: model not allowed (the model requires higher account access) or 429 Too Many Requests (you hit the rate limit). This guide shows how to diagnose and fix both.
Next Step
Fix now, then reduce repeat incidents: if this issue keeps coming back, validate your setup in Doctor first, then harden your config.
Identify the Error Type
model set failed: Error: model not allowed: anthropic/claude-opus-4-5
403 Model Not Allowed: Your OpenRouter account does not have access to this model. This usually requires adding credits or upgrading your tier.
Error: 429 Too Many Requests - Rate limit exceeded
429 Rate Limit: You're sending requests faster than your tier allows. Free tier: 10 requests/minute. Add credits to raise the limit.
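If you are handling these failures programmatically, the split above can be sketched as a small classifier. The function name and return labels below are illustrative only; they are not part of OpenClaw or any OpenRouter SDK:

```python
# Sketch: map the two OpenRouter failure modes described above to a
# remediation hint. Status codes come from the error messages shown;
# the helper and its return values are hypothetical.

def classify_openrouter_error(status: int, message: str = "") -> str:
    """Return a remediation hint for an OpenRouter API error."""
    if status == 403 or "model not allowed" in message.lower():
        return "add-credits-or-switch-model"  # 403: account lacks access
    if status == 429:
        return "throttle-or-add-credits"      # 429: rate limit exceeded
    return "unknown"
```

A dispatcher like this lets an agent loop decide whether to switch models or back off instead of failing outright.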
Fix: model not allowed (403)
Check your OpenRouter account at openrouter.ai/account and verify:
- Credit balance: most models require a positive credit balance. Add at least $5 to unlock pay-as-you-go models.
- Model availability: some models (like Claude Opus) require a minimum balance or are restricted to certain regions.
- API key scope: regenerate your API key if it was created under a different account tier.
As a quick fix, switch to a model you know you have access to:
# openclaw.json - switch to a model available on the free tier
{
"model": "openrouter/auto",
"parameters": {}
}
# Or use a specific free model:
{
"model": "mistralai/mistral-7b-instruct",
"parameters": {}
}
Fix: Rate Limit Exceeded (429)
If you're on the free tier, the limit is 10 requests/minute. To increase it:
Add credits to your OpenRouter account
Go to openrouter.ai/account → Credits → Add Credits. Even $5 significantly raises your rate limit.
Reduce agent concurrency in OpenClaw
Set maxConcurrentAgents to 1 in openclaw.json to serialize requests and avoid bursts.
Add request delay between turns
Use requestDelay: 2000 (ms) to add a 2-second gap between API calls.
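The mitigations above amount to client-side throttling. A minimal sketch of that behavior, assuming a stand-in `send()` callable in place of the real OpenRouter request (this is not OpenClaw's actual implementation):

```python
import time

# Serialize calls, wait `request_delay` seconds between attempts, and
# retry on 429 with exponential backoff (2s, 4s, 8s... by default).
# `send` is a hypothetical stand-in returning (status, body).

def call_with_throttle(send, request_delay=2.0, max_retries=3):
    delay = request_delay
    for attempt in range(max_retries + 1):
        status, body = send()
        if status != 429:
            return status, body
        time.sleep(delay)  # back off before retrying
        delay *= 2         # double the gap each time we are limited
    return status, body    # still rate-limited after all retries
```

Running requests through a single wrapper like this is the moral equivalent of `maxConcurrentAgents: 1` plus `requestDelay`.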
Configure Model Fallbacks
OpenClaw supports automatic model fallback. When the primary model returns a rate-limit or access error, it will try the next model in the list:
{
"model": "anthropic/claude-sonnet-4-6",
"fallbackModels": [
"openai/gpt-4o",
"google/gemini-2.0-flash",
"openrouter/auto"
]
}
openrouter/auto is a special model that automatically selects the most capable available model for your account. It's a good final fallback.
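The fallback behavior described above can be approximated like this. `complete` is a hypothetical stand-in for the actual API call; the loop mirrors the described behavior but is not OpenClaw's real code:

```python
# Try each model in order; on a rate-limit (429) or access (403)
# error, fall through to the next one in the list.

def complete_with_fallback(complete, models):
    last_error = None
    for model in models:
        status, result = complete(model)
        if status == 200:
            return model, result
        if status in (403, 429):   # recoverable: try the next model
            last_error = (model, status)
            continue
        raise RuntimeError(f"unrecoverable error {status} from {model}")
    raise RuntimeError(f"all models failed; last: {last_error}")

models = ["anthropic/claude-sonnet-4-6", "openai/gpt-4o",
          "google/gemini-2.0-flash", "openrouter/auto"]
```

Note that only 403/429 trigger fallback here; other errors surface immediately, which matches the "rate-limit or access error" condition above.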
Tune Request Rate
{
"model": "openrouter/auto",
"advanced": {
"maxConcurrentAgents": 1,
"requestDelay": 1500
}
}
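As a rough sanity check on these numbers: with maxConcurrentAgents at 1, requests are serialized, so throughput is bounded by requestDelay plus model response latency. The latency figure below is an assumption for illustration, not a measured value:

```python
# Upper bound on request rate when calls are fully serialized:
# one call per (requestDelay + response latency).

def max_requests_per_minute(request_delay_ms, avg_latency_ms=0):
    return 60_000 / (request_delay_ms + avg_latency_ms)

# Assuming ~4.5s of model latency (a guess), requestDelay=1500 keeps
# you at roughly 10 requests/minute, i.e. around the free-tier limit.
rate = max_requests_per_minute(1500, 4500)  # 10.0
```

If you see 429s with these settings, raise requestDelay until the computed ceiling sits comfortably under your tier's limit.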