ClawKit Reliability Toolkit

Fix "reasoning and reasoning_effort" Conflict (400)

Model Compatibility Issue

OpenClaw sends both the reasoning and reasoning_effort parameters in the same request. Some providers accept only one of them. A single config flag fixes it in under 60 seconds.

This error occurs with models that have partial reasoning support — they accept either reasoning or reasoning_effort, but not both at the same time. Common providers affected: GLM (Zhipu AI), DeepSeek, and certain OpenRouter model routes.

What the Error Looks Like

error=400 Only one of "reasoning" and "reasoning_effort" may be provided
Bad Request: Only one of reasoning and reasoning_effort may be provided

This appears in the gateway logs or as an error response in your chat interface. The agent will typically retry and fail again unless the compat flag is set.
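For reference, the conflicting request body looks roughly like this. The model name, message content, and the shape of the reasoning object are illustrative placeholders; only the presence of both top-level fields matters:

```json
{
  "model": "example-model",
  "messages": [{ "role": "user", "content": "Hello" }],
  "reasoning": { "enabled": true },
  "reasoning_effort": "medium"
}
```

A provider with partial reasoning support rejects this payload with the 400 above because both fields are present at once.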

Fix: Disable Reasoning for the Provider

Set supportsReasoning: false for Your Provider

# Replace <provider> with your provider name (e.g. glm, deepseek, openrouter)
openclaw config set models.providers.<provider>.models.compat '{"supportsReasoning": false}' --json
# Then restart the gateway
openclaw gateway restart
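After the command runs, the provider entry in your config should contain the compat flag at the path used above. A sketch of the resulting structure (surrounding keys depend on your setup, with glm as the example provider):

```json
{
  "models": {
    "providers": {
      "glm": {
        "models": {
          "compat": { "supportsReasoning": false }
        }
      }
    }
  }
}
```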

Provider-Specific Commands

GLM (Zhipu AI)

openclaw config set models.providers.glm.models.compat '{"supportsReasoning": false}' --json

DeepSeek

openclaw config set models.providers.deepseek.models.compat '{"supportsReasoning": false}' --json

OpenRouter

openclaw config set models.providers.openrouter.models.compat '{"supportsReasoning": false}' --json

Restart After Any Change
openclaw gateway restart

Why This Happens

OpenClaw supports extended reasoning for providers that advertise it. When a model supports reasoning, OpenClaw sends both the reasoning object (for extended thinking) and reasoning_effort (the intensity level). Some providers implement partial support — they accept one parameter but return a 400 error if both are included in the same request.

Full reasoning support

Accepts both reasoning and reasoning_effort, e.g. OpenAI's o-series reasoning models and Anthropic's extended-thinking Claude models.

Partial reasoning support

Accepts only one of the two parameters. Set supportsReasoning: false to disable dual-parameter injection.

No reasoning support

Does not accept either parameter. For these standard models, also set supportsReasoning: false.
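The three tiers above come down to a single gate on the compat flag. OpenClaw's actual injection code isn't shown here, but the behavior amounts to something like this sketch (function and parameter names are hypothetical):

```python
def build_reasoning_params(compat: dict, effort: str = "medium") -> dict:
    """Return the reasoning-related fields to merge into a request.

    If the provider's compat entry sets supportsReasoning to False,
    neither parameter is sent, which avoids the 400 conflict on
    partial-support and no-support providers.
    """
    if not compat.get("supportsReasoning", True):
        return {}
    # Full-support providers accept both the reasoning object and the
    # effort level in a single request.
    return {"reasoning": {"enabled": True}, "reasoning_effort": effort}

# With the fix applied, no reasoning fields are injected:
build_reasoning_params({"supportsReasoning": False})  # -> {}
```

Disabling the flag trades extended thinking for compatibility: requests succeed, but the model runs without the reasoning parameters.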

Still Not Working?

Verify the compat flag was saved correctly:

Check Config
openclaw config get models.providers.<provider>.models.compat

Run the Doctor

npx clawkit-doctor@latest

Checks your model provider configuration and identifies compatibility issues.

