ClawKit Reliability Toolkit

Fix "Message Ordering Conflict"

Real Issue from the Community

This guide is based on a real troubleshooting session from the OpenClaw Discord where a user's Telegram bot repeatedly died with this error. The root cause was hidden behind a misleading friendly error message.

Your Telegram bot shows "Message ordering conflict — please try again" and eventually goes completely dead. The real cause? Your LLM provider rejected an unsupported developer role in the prompt — not a message sync issue.

What You See

The error appears in stages:

Stage 1: Intermittent Errors

Message ordering conflict - please try again. If this persists, use /new to start a fresh session.

The bot still responds sometimes, but fails on many messages.

Stage 2: System Dead

The bot goes completely silent, responding to nothing. The Gateway service may have crashed, or (on Windows) the Scheduled Task reports Last Result: 1, a non-zero exit code.

Stage 3: Wipe & Repeat

Deleting the .openclaw folder "fixes" it temporarily, but the error returns after a few days. You lose all chat history and memory each time.

Why It Happens

The error chain works like this:

1. Auto-Detection

OpenClaw detects your model supports "reasoning" (reasoning: true) and switches the system prompt role from system to developer.
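The role-selection behavior described above can be sketched as follows. This is an illustrative reconstruction, not OpenClaw's actual internals; the function and field names (`system_prompt_role`, `compat`) are assumptions:

```python
def system_prompt_role(model: dict) -> str:
    """Pick the role used for the system prompt.

    Illustrative sketch: when a model is detected as reasoning-capable,
    the prompt is sent under the "developer" role unless the provider's
    compat settings opt out of that role.
    """
    compat = model.get("compat", {})
    if model.get("reasoning") and compat.get("supportsDeveloperRole", True):
        return "developer"
    return "system"

# A reasoning-capable model defaults to the "developer" role...
print(system_prompt_role({"reasoning": True}))  # developer
# ...unless compat says the provider doesn't support it.
print(system_prompt_role({"reasoning": True,
                          "compat": {"supportsDeveloperRole": False}}))  # system
```

This is why the `supportsDeveloperRole: false` compat override in the fix below stops the failure at its source.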

2. API Rejection

Your LLM provider (e.g., Moonshot/Kimi) does NOT support the developer role. The API returns: 400 invalid request: unsupported role ROLE_UNSPECIFIED

3. Error Masking

OpenClaw's error handler sees a "role" error and translates it into the friendly "Message ordering conflict" message — hiding the real cause.
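A minimal sketch of how this kind of over-eager sanitizing swallows the real cause. This is a hypothetical reconstruction for illustration, not OpenClaw's actual handler code:

```python
def friendly_error(raw_api_error: str) -> str:
    """Hypothetical error sanitizer: any provider error mentioning
    "role" is assumed to be a message-ordering problem, so a
    role-support rejection gets the same generic message."""
    if "role" in raw_api_error.lower():
        return ("Message ordering conflict - please try again. "
                "If this persists, use /new to start a fresh session.")
    return raw_api_error

# The real rejection is replaced by the generic ordering message:
print(friendly_error("400 invalid request: unsupported role ROLE_UNSPECIFIED"))
```

Since the keyword match fires on any error containing "role", the user never sees the provider's actual 400 response.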

4. Cascade Failure

Every message triggers the same rejection. The session becomes permanently broken. Eventually the Gateway crashes.

# The actual error hidden in session transcripts:
400 invalid request: unsupported role ROLE_UNSPECIFIED

The Fix

Tell OpenClaw not to use the developer role for your model's provider:

One Command Fix

# Replace "moonshot" with your provider name
openclaw config set models.providers.moonshot.models.compat \
  '{"supportsDeveloperRole": false}' --json

openclaw gateway restart
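After the command runs, the provider's compat section of your config should contain something like the following (the exact file layout may vary between versions; this fragment only illustrates the nesting implied by the config path above):

```json
{
  "models": {
    "providers": {
      "moonshot": {
        "models": {
          "compat": {
            "supportsDeveloperRole": false
          }
        }
      }
    }
  }
}
```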

Alternatively, you can disable reasoning detection entirely for the model. Note that this turns off reasoning mode altogether, not just the developer-role switch:

# Alternative: disable reasoning mode
openclaw config set models.providers.moonshot.models.compat \
  '{"reasoning": false}' --json

You Do NOT Need to Wipe .openclaw

The config fix resolves the issue permanently. No need to delete your folder and lose chat history. If you already have a broken session, use /new in Telegram to start a fresh one.

How to Find the Real Error

The "Message ordering conflict" message hides the actual API error. Here's how to dig it out:

Method 1: Session Transcript (Most Reliable)

# Find the session transcript file
# Windows
dir "%USERPROFILE%\.openclaw\agents\main\sessions\*.jsonl"

# macOS / Linux
ls ~/.openclaw/agents/main/sessions/*.jsonl

Open the most recent .jsonl file and search for 400 or error. The raw provider response will show the real rejection reason.

Method 2: Enable Raw Logging

# PowerShell — set before starting gateway
$dir = "$env:TEMP\openclaw-debug"
New-Item -ItemType Directory -Force -Path $dir | Out-Null
$env:OPENCLAW_RAW_STREAM=1
$env:OPENCLAW_RAW_STREAM_PATH="$dir\raw-stream.jsonl"

openclaw gateway run --verbose

Method 3: Gateway Logs

# Filter for the actual API error
openclaw logs --tail 200 | grep -i "400\|role\|ordering\|unspecified"

Known UX Flaw

OpenClaw sanitizes error messages too aggressively, hiding vital debugging information. The community has recommended that future versions append a pointer to the logs (e.g., "Check logs: openclaw logs") to friendly error messages.

Affected Models

This issue affects any model/provider where OpenClaw auto-detects reasoning: true but the API doesn't support the developer role:

| Provider / Model | Developer Role | Fix Needed |
| --- | --- | --- |
| Moonshot / Kimi-k2.5 | Not supported | supportsDeveloperRole: false |
| Custom OpenAI-compat proxies | Varies | supportsDeveloperRole: false |
| OpenAI (GPT-4o, o1, o3) | Supported | No fix needed |
| Anthropic (Claude) | Supported | No fix needed |
| Ollama (local models) | Model-dependent | Test per model |
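For the "test per model" cases, one low-tech check is to send a minimal OpenAI-compatible chat request that uses the developer role and see whether the API returns a 400. The payload builder below is a sketch under that assumption; the endpoint URL and auth header are up to your provider:

```python
import json

def developer_role_probe(model_name: str) -> dict:
    """Build a minimal OpenAI-compatible chat payload using the
    "developer" role. POST it to the provider's /chat/completions
    endpoint; a 400 mentioning "role" means the compat fix above
    is needed for that model."""
    return {
        "model": model_name,
        "max_tokens": 1,
        "messages": [
            {"role": "developer", "content": "You are a test probe."},
            {"role": "user", "content": "ping"},
        ],
    }

print(json.dumps(developer_role_probe("kimi-k2.5"), indent=2))
```

A 200 response means the developer role is safe; a 400 with an "unsupported role" message means set `supportsDeveloperRole: false`.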

Telegram Recovery Steps

If your Telegram bot is completely dead, follow this sequence:

# Step 1: Apply the config fix
openclaw config set models.providers.moonshot.models.compat \
  '{"supportsDeveloperRole": false}' --json
# Step 2: Run deep diagnostics
openclaw doctor --deep
# Step 3: Restart the Gateway
openclaw gateway restart
# Step 4: In Telegram, start a fresh session
/new

Backup Before Any Drastic Action

Instead of deleting .openclaw, back it up first:

# Windows PowerShell
Compress-Archive -Path "$env:USERPROFILE\.openclaw\*" `
  -DestinationPath "$env:USERPROFILE\openclaw-backup.zip" -Force

Common Red Herring: "Anthropic API Key Missing"

You might see errors about a missing Anthropic API key in your logs even though you're using Moonshot/Kimi. This is unrelated — it comes from the session-memory hook trying to generate a session name using a fallback model. It does not cause the message ordering error.

Still Stuck?

Run our diagnostic tool to catch issues automatically:

npx clawkit-doctor@latest
