Goal Decomposition
A user says "Book me a flight to Tokyo next Tuesday." The agent must break this into dozens of concrete steps: open the browser, navigate to the airline site, search flights, compare prices, fill forms, and confirm. This process is called Goal Decomposition.
Why Decomposition Matters
LLMs are good at reasoning about individual steps, but they struggle with holding an entire complex plan in "working memory." By breaking a high-level goal into a tree of sub-goals, we:
- Reduce cognitive load on each LLM call
- Make progress measurable (how many sub-goals completed?)
- Enable recovery — if one sub-goal fails, retry just that branch
- Allow parallelism — independent sub-goals can run concurrently
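The tree of sub-goals described above can be sketched as a small data structure. This is illustrative only; the `SubGoal` type and helper names are hypothetical, not OpenClaw's actual API:

```typescript
// Illustrative sub-goal tree; names are hypothetical, not OpenClaw's API.
type Status = "pending" | "done" | "failed";

interface SubGoal {
  description: string;
  status: Status;
  children: SubGoal[]; // empty array => an atomic action
}

// Progress is measurable: count completed leaves over total leaves.
function progress(goal: SubGoal): { done: number; total: number } {
  if (goal.children.length === 0) {
    return { done: goal.status === "done" ? 1 : 0, total: 1 };
  }
  return goal.children
    .map(progress)
    .reduce((a, b) => ({ done: a.done + b.done, total: a.total + b.total }));
}

const flight: SubGoal = {
  description: "Book a flight to Tokyo next Tuesday",
  status: "pending",
  children: [
    { description: "Open the browser", status: "done", children: [] },
    { description: "Search flights", status: "done", children: [] },
    { description: "Fill passenger form", status: "pending", children: [] },
  ],
};

console.log(progress(flight)); // { done: 2, total: 3 }
```

Because each failed leaf is addressable, a failed branch can be retried in isolation, and siblings with no data dependency can run concurrently.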
Decomposition Strategies
Hierarchical (plan-first): Break the goal into 3-5 major sub-goals, then recursively decompose each until reaching atomic actions the agent can execute directly.
Reactive (plan-as-you-go): Don't plan the whole sequence ahead of time. At each step, look at the current state and decide the single next action that brings you closer to the goal.
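The reactive strategy amounts to a loop: observe, pick one action, execute, repeat. A minimal sketch, where `observe`, `decideNextAction`, and `execute` are hypothetical stand-ins for the page-state snapshot, the LLM call, and the browser tooling:

```typescript
// Reactive planning loop (illustrative; the injected helpers are hypothetical).
type Action = { name: string; arg?: string };

async function runReactive(
  mission: string,
  observe: () => Promise<string>, // snapshot of current page state
  decideNextAction: (mission: string, state: string) => Promise<Action | null>,
  execute: (a: Action) => Promise<void>,
  maxSteps = 20, // guard against runaway loops
): Promise<boolean> {
  for (let step = 0; step < maxSteps; step++) {
    const state = await observe();
    // One LLM call per step, seeing only mission + current state,
    // never the whole plan.
    const action = await decideNextAction(mission, state);
    if (action === null) return true; // model signals the goal is reached
    await execute(action);
  }
  return false; // step budget exhausted without finishing
}
```

Note that each LLM call only has to reason about one step, which is exactly the reduced working-memory load argued for above.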
OpenClaw's Approach: Hybrid
OpenClaw uses a hybrid strategy: reactive planning at the execution layer, with optional high-level goal hints. The user provides the top-level mission, and the agent decomposes reactively:
// User's mission (high-level goal)
"Go to github.com/openai/openai-python and star the repo"
// Agent's internal decomposition (reactive):
// Step 1: navigate("https://github.com/openai/openai-python")
// Step 2: wait(page_loaded)
// Step 3: click("Star" button)
// Step 4: verify(star_count_increased)
// Done.
Failure & Recovery
Decomposition also defines how agents handle failure. When a sub-goal fails:
Retry
Re-attempt the same sub-goal. Useful for transient failures like network timeouts or slow page loads.
Re-Plan
Discard the current plan and re-decompose from the current state. Handles unexpected UI changes or navigation errors.
Abort
Stop execution and report the failure. Used when the goal is impossible given current constraints (e.g., login required but no credentials).
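The three recovery modes above can be sketched as a small decision function. This is an illustrative policy under assumed failure signals (`transient`, `uiChanged`), not OpenClaw's actual implementation:

```typescript
// Illustrative failure-handling policy; field and function names are hypothetical.
type Recovery = "retry" | "replan" | "abort";

interface Failure {
  transient: boolean; // e.g. network timeout, slow page load
  uiChanged: boolean; // e.g. unexpected dialog, button moved or renamed
}

function chooseRecovery(f: Failure, attempts: number, maxRetries = 2): Recovery {
  if (f.transient && attempts < maxRetries) return "retry"; // re-attempt same sub-goal
  if (f.uiChanged) return "replan"; // re-decompose from current state
  return "abort"; // impossible under current constraints: report and stop
}

console.log(chooseRecovery({ transient: true, uiChanged: false }, 0)); // "retry"
```

Bounding retries matters: without `maxRetries`, a persistent failure misclassified as transient would loop forever instead of escalating to a re-plan or abort.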
The Cost of Over-Decomposition
More steps = more LLM calls = higher cost. A well-decomposed task might take 5-10 steps. A poorly decomposed one can take 50+, burning through your API budget. Tips to keep decomposition efficient:
- Be specific in your mission prompt: "Star the repo at github.com/openai/openai-python" is better than "Go star some OpenAI repo"
- Provide URLs when possible: Skip the search/navigation steps entirely
- Set step limits: Configure maxSteps in your OpenClaw config to prevent runaway agents
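A step limit might look like the following. Only the maxSteps option is named above; the surrounding config shape is a guess for illustration:

```typescript
// Hypothetical OpenClaw config sketch; only maxSteps is documented above.
export default {
  maxSteps: 15, // abort any mission that exceeds 15 agent steps
  mission: "Star the repo at github.com/openai/openai-python",
};
```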
Research Note
Goal decomposition in LLM agents is an active area of research. Techniques like Tree-of-Thought (ToT), ReAct, and Reflexion all address different aspects of this problem. OpenClaw's reactive approach is closest to the ReAct pattern.