ClawKit Reliability Toolkit
Official · Verified · Developer Tools · Safety: 4/5

swarm-safety

SWARM: System-Wide Assessment of Risk in Multi-agent systems. 38 agent types, 29 governance levers, 55 scenarios. Study emergent risks, phase transitions, and governance cost paradoxes.

Why use this skill?

Assess emergent risks in multi-agent AI systems with SWARM. Simulate 38 agent types, 29 governance levers, and 55 scenarios to identify systemic failure points.


Install via CLI (Recommended)

clawhub install openclaw/skills/skills/rsavitt/swarm-2
Or add the plugin block from the "Add to Configuration" section below to your clawhub.json.

What This Skill Does

The swarm-safety skill provides an integrated framework for the System-Wide Assessment of Risk in Multi-agent (SWARM) systems. It allows users to simulate complex multi-agent environments to study emergent risks, phase transitions, and governance challenges without needing AGI-level agents. By utilizing 38 distinct agent types and 29 governance levers across 55 predefined scenarios, this tool enables researchers and developers to stress-test their system designs. The core philosophy relies on soft, probabilistic labeling rather than binary outcomes, offering a nuanced view of how sub-AGI agents interact and where systemic failures originate.
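The soft-labeling philosophy can be illustrated with a small self-contained sketch (plain Python, not the swarm-safety API; every name here is illustrative): a binary label collapses two very different risk scores into the same bit, while a soft label preserves the distinction.

```python
def binary_label(risk_score: float, threshold: float = 0.5) -> bool:
    """Hard labeling: collapses all nuance into a single bit."""
    return risk_score >= threshold

def soft_label(risk_score: float) -> dict:
    """Soft labeling: keep the probability plus a qualitative band."""
    if risk_score < 0.2:
        band = "low"
    elif risk_score < 0.6:
        band = "elevated"
    else:
        band = "high"
    return {"p_risky": risk_score, "band": band}

# A binary scheme treats these two agents identically...
print(binary_label(0.51), binary_label(0.99))  # True True
# ...while soft labels preserve how different they actually are.
print(soft_label(0.51), soft_label(0.99))
```

The point is not the specific bands but the shape of the output: a probability (and a derived qualitative band) carries forward into downstream analysis, whereas a boolean discards it.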

Installation

To install this skill, run the following command in your terminal: clawhub install openclaw/skills/skills/rsavitt/swarm-2. This installs the swarm-safety package. For full functionality, including LLM-based agent simulations, install the optional dependencies with pip install swarm-safety[llm].

Use Cases

SWARM is primarily designed for AI safety research, robust system engineering, and academic study. Common use cases include:

  1. Analyzing the stability of multi-agent workflows before production deployment.
  2. Identifying tipping points where agents shift from cooperative to competitive dynamics.
  3. Testing the efficacy of governance levers (such as monitoring, penalties, or resource constraints) in mitigating systemic risk.
  4. Developing theoretical models for emergent behavior in decentralized autonomous organizations (DAOs) or multi-agent LLM systems.

Example Prompts

  1. "Run the baseline simulation scenario with the default agent configuration and export the results to a CSV file for my research report."
  2. "Create a new swarm experiment using five HonestAgents and two DeceptiveAgents; simulate 20 epochs and summarize the toxicity rate and total welfare metrics."
  3. "List all available governance levers in the current swarm library and explain how I can apply the 'resource-limiter' lever to my current orchestrator config."

Tips & Limitations

SWARM simulations are research artifacts and should not be treated as ground truth for live systems. Always run simulations on local machines (default binding to 127.0.0.1) to ensure security, as the API lacks native authentication. Do not input real PII or sensitive production API keys into the environment. When documenting your findings, cite the framework version and disclose your seed and epoch parameters to ensure reproducibility of your safety assessment results.
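The reproducibility advice above can be sketched without assuming anything about SWARM's real interface; the `run_experiment` stand-in, the version string, and the record field names below are all hypothetical. The pattern is simply: persist the seed, epoch count, and framework version next to the metric so the run can be replayed exactly.

```python
import json
import random

def run_experiment(seed: int, epochs: int) -> float:
    """Stand-in for a SWARM run: any seeded simulation works the same way."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(epochs)) / epochs

# Record every parameter that determines the result, not just the result.
params = {"framework_version": "0.0.0-example", "seed": 42, "epochs": 20}
result = run_experiment(params["seed"], params["epochs"])

record = {**params, "mean_metric": result}
print(json.dumps(record))

# Reproducibility check: same seed and epochs -> identical metric.
assert run_experiment(42, 20) == result
```

Writing the record out as JSON alongside the raw outputs makes the disclosure requirement (version, seed, epochs) automatic rather than a manual step in the write-up.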

Metadata

Author: @rsavitt
Stars: 1133
Views: 1
Updated: 2026-02-18
Add to Configuration

Paste this into your clawhub.json to enable this plugin.

{
  "plugins": {
    "official-rsavitt-swarm-2": {
      "enabled": true,
      "auto_update": true
    }
  }
}

Tags

#ai-safety #multi-agent #simulation #governance #research
Safety Score: 4/5

Flags: code-execution