Data Sovereignty
OpenClaw is built on a core principle: your data never leaves your machine unless you explicitly choose an external LLM. This page explains what data flows where, and how to stay in full control.
The Privacy Guarantee
ClawKit and OpenClaw are open-source tools that run entirely on your machine. We have no servers, no telemetry, no analytics. Your configs, API keys, and browsing data stay local.
Data Flow Map
Understanding what data goes where is critical when working with AI agents that interact with your browser and file system.
Stays on your machine:
- Your config files (JSON/YAML)
- Browser DOM snapshots
- Screenshot images
- Local file contents
- Agent execution logs
- Skill Registry cache

Sent to the LLM provider:
- Compressed DOM text (not raw HTML)
- Your task description / prompt
- Tool call results (text only)
- Conversation history for context

Data leaves your machine only when using a cloud LLM (OpenAI, Anthropic, DeepSeek, Google). Using Ollama keeps everything local.
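The "compressed DOM text" item above can be pictured with a minimal sketch. This illustrates the general idea — visible text only, no tags, scripts, or styling — and is not OpenClaw's actual compressor:

```python
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> content."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

def compress_dom(html: str) -> str:
    """Toy 'compressed DOM': visible text only, whitespace collapsed."""
    parser = TextExtractor()
    parser.feed(html)
    return re.sub(r"\s+", " ", " ".join(parser.parts)).strip()

raw = "<html><body><script>track()</script><h1>Hello</h1> <p>world</p></body></html>"
print(compress_dom(raw))  # → Hello world
```

Note that even a compressed representation still contains the page's visible text — which is exactly why the sensitive-data warning at the bottom of this page matters.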
Running Fully Local with Ollama
For maximum privacy, run OpenClaw with a local LLM via Ollama. No data leaves your network:
```json
{
  "llm": {
    "provider": "ollama",
    "model": "llama3.3",
    "baseURL": "http://localhost:11434/v1"
  }
}
```

Trade-off: Local models are less capable than cloud models like GPT-4.1 or Claude Sonnet 4.5, but they handle simple automation tasks well. Use our Config Wizard to set this up in seconds.
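If you want to double-check that a given config keeps traffic on-device, a small validation sketch helps. The `is_local_llm` helper below is hypothetical — it is not part of ClawKit — and simply checks that the provider is Ollama and the endpoint host resolves to this machine:

```python
import json
from urllib.parse import urlparse

def is_local_llm(config: dict) -> bool:
    """True if the configured LLM endpoint points at this machine."""
    llm = config.get("llm", {})
    host = urlparse(llm.get("baseURL", "")).hostname
    return llm.get("provider") == "ollama" and host in ("localhost", "127.0.0.1")

config = json.loads("""
{
  "llm": {
    "provider": "ollama",
    "model": "llama3.3",
    "baseURL": "http://localhost:11434/v1"
  }
}
""")
print(is_local_llm(config))  # → True
```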
API Key Security
Keys stay in your config file
ClawKit never transmits your API keys anywhere. They are read locally and passed directly to the provider SDK.
No key logging
Keys are excluded from debug logs. Even if you share your logs, keys won't leak.
No proxy servers
API calls go directly from your machine to the LLM provider. There is no ClawKit intermediary.
Best Practices
Use Environment Variables
Instead of hardcoding API keys in your config, reference environment variables:
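One way to consume such a variable is to read it at process start, so the key is never written into a config file at all. A minimal Python sketch — the `get_api_key` helper and its error message are illustrative, not ClawKit APIs:

```python
import os

def get_api_key(name: str = "DEEPSEEK_API_KEY") -> str:
    """Read an API key from the environment; fail loudly if unset."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; export it in your shell first")
    return key

os.environ["DEEPSEEK_API_KEY"] = "sk-example"  # for demonstration only
print(get_api_key())  # → sk-example
```

Failing loudly on a missing key is deliberate: a silent empty-string fallback would produce confusing provider errors later.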
```bash
DEEPSEEK_API_KEY=sk-...
```

Rotate Keys Regularly
Regenerate API keys monthly from your provider dashboard. If a key leaks, revoke it immediately from the provider's console.
Set Spending Limits
Configure billing alerts and hard caps on your LLM provider account to prevent unexpected costs from runaway agents.
Review Agent Logs
Periodically check what your agent sends to the LLM. OpenClaw logs all prompts locally for your review.
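A quick way to review those local logs is to grep them for patterns that suggest leaked secrets or personal data. This is a hedged sketch — the log location and format are whatever your OpenClaw install writes, and the patterns below are illustrative, not exhaustive:

```python
import re

# Illustrative patterns that often indicate secrets or personal data.
PATTERNS = {
    "api_key": re.compile(r"sk-[A-Za-z0-9]{8,}"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def audit_log(text: str) -> dict:
    """Count suspicious matches per category in a log blob."""
    return {name: len(p.findall(text)) for name, p in PATTERNS.items()}

sample = "prompt: contact alice@example.com, key=sk-abcdef123456"
print(audit_log(sample))  # → {'api_key': 1, 'email': 1}
```

Any nonzero count is worth investigating: it means that data was part of a prompt and, with a cloud provider configured, left your machine.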
Sensitive Data Warning
If your agent browses pages with passwords, bank accounts, or personal data, that information may be included in the compressed DOM sent to the cloud LLM. Use Ollama for tasks involving sensitive information, or configure content filters in your OpenClaw setup.
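OpenClaw's content-filter configuration is not shown here; as an illustration of the underlying technique, a redaction pass over outgoing text might look like the sketch below. The patterns and placeholder strings are assumptions for the example, not OpenClaw defaults:

```python
import re

# Each entry: (pattern to match, replacement). Illustrative, not exhaustive.
REDACTIONS = [
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),             # card-like digit runs
    (re.compile(r"(?i)(password\s*[:=]\s*)\S+"), r"\1[REDACTED]"),  # password values
]

def redact(text: str) -> str:
    """Mask card-like numbers and password values before text leaves the machine."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

print(redact("login password: hunter2, card 4111 1111 1111 1111"))
```

Regex-based filters are a best-effort safety net, not a guarantee — for genuinely sensitive workflows, running fully local with Ollama remains the stronger option.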