Fix OpenClaw 400 Reasoning Loop (GPT-5.2 / o-Series)
Persistent 400 Loop
Once this error appears, every subsequent request to OpenClaw returns the same 400 until the bad history is cleared. The agent becomes completely unresponsive.
When you use GPT-5.2, o3, o4-mini, or any OpenAI reasoning model with OpenClaw, an abruptly terminated agent turn can leave a reasoning item (rs_...) in the conversation history without its required follow-up assistant message. OpenAI's API then rejects every subsequent request with a 400, trapping you in an unrecoverable loop.
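The invariant the API enforces can be sketched as a simple check over the conversation history: every reasoning item must be immediately followed by an assistant message. The history format below (a list of dicts with `type` and `id` fields) is an assumption for illustration, not OpenClaw's actual internal representation.

```python
def find_orphaned_reasoning(history):
    """Return indices of reasoning items that are not immediately followed
    by an assistant message. Hypothetical history shape: list of dicts
    with a 'type' key ('user', 'reasoning', 'assistant', ...)."""
    orphans = []
    for i, item in enumerate(history):
        if item.get("type") == "reasoning":
            nxt = history[i + 1] if i + 1 < len(history) else None
            if nxt is None or nxt.get("type") != "assistant":
                orphans.append(i)
    return orphans

# A turn cut short after the reasoning item but before the assistant reply:
history = [
    {"type": "user", "content": "Refactor this module"},
    {"type": "reasoning", "id": "rs_abc123"},  # orphaned: no assistant follows
]
print(find_orphaned_reasoning(history))  # [1]
```

Any non-empty result here corresponds to a history the API will reject with the 400 described below.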
Next Step
Fix now, then reduce repeat incidents
If this issue keeps coming back, validate your setup in Doctor first, then harden your config.
Identifying the Error
400 Bad Request
{
  "error": {
    "message": "Item rs_abc123... of type `reasoning` was provided without its required following item of type `assistant`.",
    "type": "invalid_request_error",
    "code": "invalid_conversation_structure"
  }
}

This error appears in the OpenClaw gateway logs and in the chat interface. The key phrase is "of type `reasoning` was provided without its required following item". Once it appears for one message, it will repeat for every subsequent message in the same session.
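If you script around the gateway, it is more robust to match on the error code than on the message text. This sketch mirrors the JSON shape shown above; treat the exact field names as assumptions.

```python
import json

def is_orphaned_reasoning_error(response_body: str) -> bool:
    """Check a 400 response body for the orphaned-reasoning error code."""
    try:
        err = json.loads(response_body).get("error", {})
    except json.JSONDecodeError:
        return False
    return err.get("code") == "invalid_conversation_structure"

body = '{"error": {"type": "invalid_request_error", "code": "invalid_conversation_structure", "message": "Item rs_abc123... of type `reasoning` was provided without its required following item of type `assistant`."}}'
print(is_orphaned_reasoning_error(body))  # True
```

A wrapper that detects this code can trigger one of the recovery steps below automatically instead of retrying the doomed request.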
Fix 1: Clear Conversation History (Fastest)
The orphaned reasoning item lives in the conversation history that OpenClaw sends to the API. Clearing history removes it immediately.
# In the OpenClaw chat interface, type:
/history clear

# Or via CLI:
openclaw history clear
After clearing history, send a fresh message. The 400 loop should stop immediately. Your previous conversation context will be lost, but the agent will be responsive again.
Fix 2: Switch to a Non-Reasoning Model Temporarily
If you need to preserve the conversation, switch to a model that does not use reasoning tokens (e.g., gpt-4o or claude-sonnet-4-6). The bad history can then be summarised by the non-reasoning model before you switch back.
# openclaw.json - temporarily set a non-reasoning model
{
"model": "openai/gpt-4o",
"parameters": {}
}
# After clearing context, switch back to your preferred model

Fix 3: Lower Reasoning Effort to Prevent Future Loops
High reasoning effort (reasoning_effort: "high") produces longer reasoning chains, which are more likely to be orphaned if a run is interrupted. Reduce the effort level:
{
"model": "openai/gpt-5.2",
"parameters": {
"reasoning_effort": "medium"
}
}

- low: Minimal reasoning tokens. Fastest, cheapest. Rarely produces orphaned items.
- medium: Balanced. Recommended for most agent workflows. Good recovery on interruption.
- high: Maximum reasoning depth. More likely to produce orphaned items if the run is cut short.
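A quick sanity check before restarting the gateway can catch an invalid effort value. The config keys follow the openclaw.json fragments above; the validation itself is only a sketch, not part of OpenClaw.

```python
import json

ALLOWED_EFFORT = {"low", "medium", "high"}

def check_effort(config_text: str) -> str:
    """Validate reasoning_effort in an openclaw.json fragment.
    Falls back to 'medium' when the key is absent."""
    config = json.loads(config_text)
    effort = config.get("parameters", {}).get("reasoning_effort", "medium")
    if effort not in ALLOWED_EFFORT:
        raise ValueError(f"reasoning_effort must be one of {sorted(ALLOWED_EFFORT)}")
    return effort

print(check_effort('{"model": "openai/gpt-5.2", "parameters": {"reasoning_effort": "medium"}}'))
# medium
```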
Prevention Tips
- Never force-quit OpenClaw mid-turn. Use Ctrl+C once and wait for the gateway to flush the turn cleanly.
- Keep sessions short. Start a new session for unrelated tasks instead of reusing a long-running context.
- Use "Save summary" before ending long sessions. Ask the agent to summarise the conversation, then start fresh with the summary as context.
- Pin a stable model. Avoid using preview or daily-build model aliases that may change reasoning format between calls.