ClawKit Reliability Toolkit

# Fixing "LLM request timed out" in OpenClaw

## TL;DR: Quick Fix

The error occurs because OpenClaw's default 60-second idle timer interrupts long inference runs. Set `agents.defaults.llm.idleTimeoutSeconds` to a higher value (or `0` to disable the check) in your `openclaw.json`.


**Fix now, then reduce repeat incidents:** if this issue keeps coming back, validate your setup in Doctor first, then harden your config.

## Error Signal

```
LLM request timed out
```

## What's Happening

Your LLM requests are hitting a hard limit and failing with "LLM request timed out", even though you've already raised your standard timeouts. That's because OpenClaw has a specific, separate configuration for stream inactivity that overrides your general connection settings.

## The Fix

Open your `openclaw.json` file and adjust the `idleTimeoutSeconds` property within your agent configuration. Add or modify this key to allow more time for your model to produce tokens, or set it to `0` to disable the limit entirely:

```json
{
  "agents": {
    "defaults": {
      "llm": {
        "idleTimeoutSeconds": 1800
      }
    }
  }
}
```

If you are using reasoning models like DeepSeek-R1 or GLM that take a long time to start streaming output, setting this to `1800` (30 minutes) or `0` should resolve the crash.

## Why This Occurs

OpenClaw monitors the token stream for activity, and `idleTimeoutSeconds` defaults to 60 seconds. If the model is busy computing (for example, the initial tokenization in KoboldCpp or a long reasoning chain) and doesn't push a token to the stream within that 60-second window, OpenClaw assumes the connection has hung and kills the request. Your primary timeout settings control the total duration of the request, but this idle check fires first, and it is what keeps cutting you off.

## Prevention

Always verify your idle settings when switching to larger models or to local inference backends that require significant pre-computation time. If you continue to experience timeouts despite raising this value, check your model provider's logs to see whether the provider itself is dropping the connection before OpenClaw reaches the idle limit. Keeping your configuration modular in `openclaw.json` lets you override this per agent if only specific models are causing the issue.
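The per-agent override mentioned above might look like the sketch below. This is an illustration, not a verified schema: the agent name `local-reasoner` is hypothetical, and it assumes per-agent blocks mirror the shape of `defaults`; check your existing `openclaw.json` structure for the exact layout your OpenClaw version expects.

```json
{
  "agents": {
    "defaults": {
      "llm": { "idleTimeoutSeconds": 60 }
    },
    "local-reasoner": {
      "llm": { "idleTimeoutSeconds": 1800 }
    }
  }
}
```

This keeps the stricter default for fast hosted models while giving only the slow local reasoning agent the longer idle budget.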
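The difference between the idle check and the total-duration limit described above can be illustrated with a small, self-contained Python sketch. This is a simulation on a virtual clock, not OpenClaw's actual implementation; the function name and signature are invented for illustration.

```python
def stream_with_timeouts(token_gaps, idle_timeout, total_timeout):
    """Simulate consuming a token stream on a virtual clock.

    token_gaps    -- seconds of silence before each token arrives
    idle_timeout  -- max silence allowed between tokens (0 disables the check)
    total_timeout -- max cumulative duration for the whole request

    Returns the number of tokens streamed, or raises TimeoutError.
    """
    elapsed = 0.0
    for gap in token_gaps:
        # The idle check fires first: a single long silent gap kills the
        # request even when the total budget is nowhere near exhausted.
        if idle_timeout and gap > idle_timeout:
            raise TimeoutError(f"idle: no token for {gap:.0f}s (limit {idle_timeout}s)")
        elapsed += gap
        if elapsed > total_timeout:
            raise TimeoutError(f"total: {elapsed:.0f}s exceeds {total_timeout}s")
    return len(token_gaps)


# A reasoning model that "thinks" for 90s before its first token trips a
# 60s idle limit, even though a 600s total budget is fine:
# stream_with_timeouts([90, 1, 1], idle_timeout=60, total_timeout=600) -> TimeoutError
# Disabling the idle check (0) lets the same stream finish:
# stream_with_timeouts([90, 1, 1], idle_timeout=0, total_timeout=600) -> 3
```

The point of the sketch is that raising only the total timeout changes the second check, while the crash in this guide comes from the first one.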


Last Updated: April 2026
