Tool Use Patterns
Tool use is what separates an AI agent from a chatbot. When an LLM can call functions — click buttons, read files, make API requests — it becomes capable of acting in the real world. This page explains common patterns and best practices.
How Tool Use Works
Modern LLMs support "function calling" or "tool use" natively. The flow is:
- Registration: You tell the LLM what tools are available, along with their descriptions and parameter schemas.
- Invocation: During generation, the LLM outputs a structured tool call instead of regular text, e.g. `{"tool": "click", "params": {"selector": "#submit"}}`.
- Execution: Your application intercepts the tool call, executes it, and feeds the result back to the LLM.
- Continuation: The LLM processes the tool result and decides whether to call another tool or respond with text.
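The four steps above can be sketched as a single loop. This is a minimal illustration: `fake_llm`, `TOOLS`, and `run_turn` are hypothetical names, and a real implementation would call a provider SDK instead of the stand-in:

```python
import json

# Stand-in LLM (hypothetical): a real agent would call a provider SDK here.
def fake_llm(message):
    """Emit a tool call first; answer in plain text once a tool result arrives."""
    if "tool_result" in message:
        return {"text": "Done: " + json.loads(message)["tool_result"]}
    return {"tool": "click", "params": {"selector": "#submit"}}

# Registration: tool name -> callable, as exposed to the model
TOOLS = {"click": lambda selector: f"clicked {selector}"}

def run_turn(message, llm=fake_llm):
    while True:
        reply = llm(message)                              # Invocation
        if "tool" not in reply:                           # plain text: finished
            return reply["text"]
        result = TOOLS[reply["tool"]](**reply["params"])  # Execution
        message = json.dumps({"tool_result": result})     # Continuation
```

The loop only exits when the model responds with text rather than another tool call, which is the Continuation step deciding the task is done.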
In OpenClaw, every tool is an MCP Skill
OpenClaw wraps all tools in the Model Context Protocol (MCP) standard. This means your tools work across any MCP-compatible client, and you can use community Skills without modification. Learn more in Skill System Design.
Common Tool Patterns
Observe and act
The most basic pattern. The agent observes the environment (e.g., takes a screenshot), then acts on it (e.g., clicks a button).
Read, process, write
Read data from one source, process it, and write it to another. Common for data collection and report generation.
Gather and synthesize
Visit multiple sites, gather information, and synthesize the findings. Useful for research and comparison tasks.
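The observe-and-act pattern can be shown with a toy environment. Everything here is illustrative: `ToyPage`, `screenshot`, and `click` are hypothetical stand-ins for real browser tools, and the decision step that an LLM would make is reduced to a simple condition:

```python
# Toy environment for the observe-and-act pattern; all names are hypothetical.
class ToyPage:
    def __init__(self):
        self.submitted = False

    def screenshot(self):            # Observe: report the current state
        return {"submitted": self.submitted}

    def click(self, selector):       # Act: mutate the environment
        if selector == "#submit":
            self.submitted = True

def observe_then_act(page):
    state = page.screenshot()        # Observe
    if not state["submitted"]:       # in a real agent, the LLM decides here
        page.click("#submit")        # Act
    return page.screenshot()         # re-observe to confirm the effect
```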
Tool Design Best Practices
Whether you're building custom Skills or configuring OpenClaw, follow these principles:
Atomic tools
Each tool should do one thing. "click_and_wait" is better split into "click" + "wait".
Clear descriptions
The LLM picks tools based on descriptions. Write them for an LLM audience, not a human developer.
Strict schemas
Use JSON Schema with required fields, enums, and descriptions. Vague schemas cause tool call errors.
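As an illustration, a strict schema for a hypothetical `click` tool might look like the following. The inner schema is standard JSON Schema; the exact wrapper format (`name`, `parameters`, etc.) varies by provider:

```python
# Hypothetical tool definition with a strict JSON Schema for its parameters.
click_tool = {
    "name": "click",
    "description": "Click a single element identified by a CSS selector.",
    "parameters": {
        "type": "object",
        "properties": {
            "selector": {
                "type": "string",
                "description": "CSS selector of the element to click, e.g. '#submit'.",
            },
            "button": {
                "type": "string",
                "enum": ["left", "right", "middle"],  # enum narrows the model's choices
                "description": "Mouse button to use. Defaults to 'left'.",
            },
        },
        "required": ["selector"],  # required fields prevent incomplete calls
    },
}
```

The `enum` and `required` keywords do most of the work: they turn "the model usually guesses right" into "malformed calls are rejected before execution."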
Safe defaults
Tools that modify state should require explicit confirmation parameters. Never auto-delete without a flag.
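A sketch of what a confirmation parameter can look like on a destructive tool; `delete_file` and its return shape are illustrative, not a real OpenClaw API:

```python
import os

# Safe-by-default destructive tool (hypothetical): refuses to act
# unless the caller explicitly opts in with confirm=True.
def delete_file(path, confirm=False):
    """Delete `path` only if the model explicitly passed confirm=true."""
    if not confirm:
        return {"ok": False, "error": "delete_file requires confirm=true"}
    os.remove(path)
    return {"ok": True, "deleted": path}
```

Because the refusal comes back as a readable error rather than an exception, the model can retry with `confirm=true` once it has verified the deletion is intended.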
Common Pitfalls
Too many tools
Registering 50+ tools confuses the LLM. Keep it under 15 active tools per session.
Overlapping tools
If two tools can do the same thing (e.g., "fetch_url" and "http_get"), the LLM chooses between them unpredictably, which makes behavior hard to reproduce. Remove duplicates.
Missing error handling
Tools should return clear error messages. A silent failure leaves the agent stuck in a retry loop.
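One way to guarantee this is to wrap every tool so exceptions become structured errors. This is a sketch under assumed names (`safe_tool` is not a library function):

```python
# A decorator (illustrative) that turns tool exceptions into structured,
# readable errors instead of letting them vanish or crash the loop.
def safe_tool(fn):
    def wrapper(**params):
        try:
            return {"ok": True, "result": fn(**params)}
        except Exception as exc:  # surface the failure instead of swallowing it
            return {"ok": False, "error": f"{type(exc).__name__}: {exc}"}
    return wrapper

@safe_tool
def divide(a, b):
    return a / b
```

The error string names the exception type, so the model sees "ZeroDivisionError: division by zero" and can change its inputs instead of retrying the same call.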
No timeout limits
A tool that hangs forever blocks the entire agent loop. Always set execution timeouts.
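A minimal timeout wrapper, assuming tools run synchronously in Python (`run_with_timeout` is a hypothetical helper, built on the standard `concurrent.futures` module):

```python
import concurrent.futures

def run_with_timeout(tool_fn, timeout_s=30.0, **params):
    """Run a tool in a worker thread; stop waiting after timeout_s seconds.

    Note: the worker thread itself is not killed, so long-running tools
    should also enforce internal timeouts (e.g., on network requests).
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(tool_fn, **params)
        try:
            return {"ok": True, "result": future.result(timeout=timeout_s)}
        except concurrent.futures.TimeoutError:
            return {"ok": False, "error": f"tool timed out after {timeout_s}s"}
```

Returning a structured timeout error (rather than raising) keeps the agent loop alive: the model can report the failure or try a different approach.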
Tool Use Across Providers
Not all LLMs handle tool use equally. Here's how they compare for agent workloads:
| Provider | Parallel Tools | Reliability | Notes |
|---|---|---|---|
| GPT-4.1 | Yes | High | Best schema adherence |
| Claude Sonnet 4.5 | Yes | High | Strong reasoning |
| DeepSeek V3.2 | Limited | Medium | Great value for cost |
| Gemini 2.5 Flash | Yes | Medium | Fast, budget-friendly |
| Llama 3.3 (Ollama) | No | Medium | Fully local / free |