arc-shield
Output sanitization for agent responses - prevents accidental secret leaks
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/arc-claw-bot/arc-shield
What This Skill Does
Arc-shield serves as the final line of defense for OpenClaw agents by providing real-time output sanitization. Unlike input filters that scan incoming data, arc-shield inspects outbound messages—ensuring that your agent does not accidentally leak sensitive credentials, PII, or internal tokens to external channels like Discord, Slack, or web interfaces. It performs deep analysis on message payloads, using both pattern matching for known key structures (OpenAI keys, AWS secrets, private keys) and high-entropy detection to identify anomalous strings that suggest hidden credentials. In --strict mode, the agent will block any transmission containing a high-risk secret, while the --redact flag allows the agent to scrub the specific sensitive token from the message while keeping the rest of the context intact.
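The two detection passes described above can be sketched as follows. This is a hypothetical illustration, not arc-shield's actual implementation; the patterns and the entropy threshold are illustrative assumptions.

```python
import math
import re

# Illustrative patterns for known key structures; arc-shield's real
# rule set and thresholds may differ.
KNOWN_KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                 # OpenAI-style keys
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key IDs
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private keys
]

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random key material scores high."""
    if not s:
        return 0.0
    counts = {c: s.count(c) for c in set(s)}
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def looks_sensitive(message: str, entropy_threshold: float = 4.0) -> bool:
    # Pass 1: pattern matching against known key structures.
    if any(p.search(message) for p in KNOWN_KEY_PATTERNS):
        return True
    # Pass 2: flag long tokens whose entropy suggests random credentials.
    for token in re.findall(r"\S{20,}", message):
        if shannon_entropy(token) > entropy_threshold:
            return True
    return False
```

In this sketch, `--strict` would correspond to dropping the whole message when `looks_sensitive` returns `True`, while `--redact` would instead scrub only the matched span.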
Installation
To install arc-shield, navigate to your skill directory and clone the repository:
cd ~/.openclaw/workspace/skills
git clone <arc-shield-repo> arc-shield
chmod +x arc-shield/scripts/*.sh arc-shield/scripts/*.py
Alternatively, use the clawhub command for seamless integration:
clawhub install openclaw/skills/skills/arc-claw-bot/arc-shield
Use Cases
Arc-shield is critical for developers who leverage AI agents for debugging or automation. Common use cases include:
- Log Sanitization: Automatically strip secrets from raw log files before sending them to a support channel.
- Code Review: Ensure that AI-generated code snippets do not contain real API keys or database connection strings.
- Policy Enforcement: Block the transmission of sensitive data like social security numbers or credit cards in customer-facing communication channels.
- Credential Leak Prevention: Catch unintentional pastes of environment variables when an agent displays its current configuration during troubleshooting.
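For the redaction-style use cases above, the `--redact` behavior can be pictured as a substitution pass that scrubs the matched secret while preserving surrounding context. This is a hedged sketch, not arc-shield's actual code, and the patterns are examples only.

```python
import re

# Hypothetical redaction pass: replace matched secrets with a
# placeholder so the rest of the message survives intact.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # OpenAI-style keys
    re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key IDs
]

def redact(message: str, placeholder: str = "[REDACTED]") -> str:
    for pattern in SECRET_PATTERNS:
        message = pattern.sub(placeholder, message)
    return message

print(redact("export OPENAI_API_KEY=sk-abcdefghij1234567890"))
# export OPENAI_API_KEY=[REDACTED]
```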
Example Prompts
- "Scan this log output for any AWS tokens or hardcoded database passwords before sending to the team Slack channel."
- "Redact any sensitive API keys from the generated code output in the response and send the result to the repository manager."
- "Run a strict check on the current session summary to ensure no PII or credit card numbers are included before finalizing the report."
Tips & Limitations
- Strict vs. Redact: Always use --strict in automated production pipelines to ensure no message is sent if a leak is detected. Use --redact for interactive sessions where you want to maintain the flow of conversation.
- False Positives: High-entropy detection might occasionally flag safe strings. If you experience frequent blocking, consider fine-tuning your local ignore-list in the config file.
- Not an Input Filter: Remember that arc-shield does not protect against malicious incoming requests; it is exclusively an output guard. Use it in conjunction with clawdefender for full-stack security.
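The ignore-list mentioned in the false-positives tip might look something like the fragment below. Arc-shield's config schema is not documented here, so every key name in this snippet is a hypothetical placeholder, shown only to convey the idea of allowlisting known-safe high-entropy strings.

```json
{
  "arc_shield": {
    "entropy_threshold": 4.2,
    "ignore_list": [
      "sha256:*",
      "example-placeholder-token-*"
    ]
  }
}
```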
Metadata
Paste this into your clawhub.json to enable this plugin.
{
  "plugins": {
    "official-arc-claw-bot-arc-shield": {
      "enabled": true,
      "auto_update": true
    }
  }
}
Tags
Flags: file-read
Related Skills
security-scanner
Scans OpenClaw skills for security vulnerabilities and suspicious patterns before installation
tripwire
Tripwire host-based IDS reference. Cryptographic key setup, database initialization, integrity checks, policy rules with severity levels, twcfg.txt configuration, and report analysis with twprint.
AURA Security Scanner
Scan AI agent skills for malware, credential theft, prompt injection, and dangerous permissions before installing them
auditd
Linux Audit Framework reference. auditctl rules for file watches and syscall auditing, auditd.conf configuration, ausearch log queries, aureport summaries, audit.log format, CIS/PCI-DSS compliance rules, and audit tools.
edgeone-clawscan
The first security skill to install after setting up OpenClaw — powered by Tencent Zhuque Lab. Works like an antivirus for your AI environment: audits installed skills, scans skills before installation, and performs a full OpenClaw security health check to prevent data leaks and privacy risks. Backed by Tencent Zhuque Lab A.I.G (AI-Infra-Guard). Use when the user asks to start a security health check or security scan for the current OpenClaw environment, such as `开始安全体检`, `做一次安全体检`, `开始安全扫描`, `全面安全检查`, or `检查 OpenClaw 安全`; also use when the user asks to audit a specific skill before installation, review installed skills for supply chain risk, or investigate whether a skill is safe. Do not trigger for general OpenClaw usage, project debugging, environment setup, or normal development requests. Optional cloud mode: set AIG_CLOUD_LOOKUP=off for zero outbound HTTPS; when enabled, only skill_name, source label, and OpenClaw version are sent to A.I.G (never skill bodies, chats, or workspace files).