elevenlabs
Text-to-speech, sound effects, music generation, voice management, and quota checks via the ElevenLabs API. Use when generating audio with ElevenLabs or managing voices.
Why use this skill?
Integrate ElevenLabs AI audio generation into OpenClaw. Create high-quality text-to-speech, sound effects, music, and voice clones with ease.
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/odrobnik/elevenlabs-skill
What This Skill Does
The ElevenLabs skill for OpenClaw provides a powerful interface for high-fidelity audio production directly within your agent workflow. It leverages the industry-leading ElevenLabs API to handle complex tasks including text-to-speech (TTS) synthesis, bespoke sound effect generation, full musical composition, and professional-grade voice cloning. Whether you need to generate a narration track with specific emotional stability settings, create unique audio assets for a multimedia project, or manage your organization's voice repository, this skill streamlines the entire lifecycle of audio generation. It supports a wide array of output formats ranging from high-bitrate MP3s to raw PCM data, making it versatile for both creative applications and technical engineering tasks.
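The TTS call described above can be sketched as a plain HTTP request. This is a minimal illustration of how such a request is assembled against the ElevenLabs v1 REST API (base URL, `xi-api-key` header, and `voice_settings` fields follow the public API docs); the helper name, the default model, and the sample voice ID are assumptions for the example, not part of the skill itself:

```python
import json

API_BASE = "https://api.elevenlabs.io"  # public ElevenLabs REST base URL


def build_tts_request(text: str, voice_id: str, api_key: str,
                      output_format: str = "mp3_44100_128",
                      stability: float = 0.5,
                      similarity_boost: float = 0.75):
    """Assemble the URL, headers, and JSON body for a text-to-speech call.

    Field names follow the ElevenLabs v1 REST API; the defaults and the
    model_id below are illustrative assumptions.
    """
    url = f"{API_BASE}/v1/text-to-speech/{voice_id}?output_format={output_format}"
    headers = {
        "xi-api-key": api_key,  # per-account API key
        "Content-Type": "application/json",
    }
    body = {
        "text": text,
        "model_id": "eleven_multilingual_v2",  # assumed default model
        "voice_settings": {
            "stability": stability,            # lower = more expressive
            "similarity_boost": similarity_boost,
        },
    }
    return url, headers, json.dumps(body)


# Inspect the request without sending it (voice ID shown is illustrative)
url, headers, payload = build_tts_request(
    "Hello, world.", "21m00Tcm4TlvDq8ikWAM", "YOUR_KEY")
```

Sending the request (e.g. with `urllib.request` or `requests`) and writing the binary response to an `.mp3` file is left out so the sketch stays offline.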
Installation
To integrate this skill into your OpenClaw environment, execute the following command in your terminal:
clawhub install openclaw/skills/skills/odrobnik/elevenlabs-skill
Ensure that you have an active ElevenLabs account and have exported your ELEVENLABS_API_KEY as an environment variable to permit secure API communication.
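A script that wraps the skill can fail fast when the key is missing. This is a hypothetical helper (not part of the skill) showing one way to validate the `ELEVENLABS_API_KEY` environment variable before making any API calls:

```python
import os
import sys


def require_api_key(env=None) -> str:
    """Return ELEVENLABS_API_KEY or exit with a clear message.

    `env` defaults to os.environ; it is a parameter only to keep the
    helper testable.
    """
    env = os.environ if env is None else env
    key = env.get("ELEVENLABS_API_KEY", "").strip()
    if not key:
        sys.exit("ELEVENLABS_API_KEY is not set; export it before using this skill.")
    return key
```

Checking once at startup gives a clearer error than letting the first API call fail with an HTTP 401.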
Use Cases
This skill is designed for developers and creators who require programmatic audio generation. Common use cases include generating dynamic dialogue for NPCs or interactive storytellers, creating custom soundscapes for video projects, producing background music for presentations, and automating the cloning of voice profiles for consistent branding across multiple platforms. It is also an essential tool for monitoring your API usage, ensuring that production pipelines remain within your subscription limits.
Example Prompts
- "Generate a warm, professional narration of the provided mission briefing using the voice ID 'Rachel' and save it as mission_intro.mp3."
- "Create an upbeat 10-second background music track suitable for a software demo and output it at 192kbps."
- "Clone the voice from the uploaded files in the 'recordings' folder, label it 'Agent_Alpha', and apply denoising to ensure high quality."
Tips & Limitations
- Always check your remaining quota with the quota.py script before starting large batch generation jobs to avoid mid-task interruptions.
- When cloning voices, use high-quality, clear audio samples; background noise can degrade the realism of the final output even with the --denoise flag enabled.
- For telephony applications, use the ulaw_8000 output format to ensure compatibility with standard communications infrastructure.
- ElevenLabs character limits depend on your subscription tier; monitor your usage patterns regularly.
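The quota check above boils down to simple arithmetic on the subscription endpoint's response. The `character_count` and `character_limit` fields below match the ElevenLabs `/v1/user/subscription` response; the function itself is an illustrative sketch (the actual HTTP fetch is omitted so it stays offline):

```python
def remaining_characters(subscription: dict) -> int:
    """Compute remaining character quota from a subscription response.

    Expects the character_count (used) and character_limit (tier cap)
    fields returned by /v1/user/subscription.
    """
    used = subscription.get("character_count", 0)
    limit = subscription.get("character_limit", 0)
    return max(limit - used, 0)  # never report a negative balance


# Example with a fabricated response payload
sample = {"character_count": 4200, "character_limit": 10000}
print(remaining_characters(sample))  # 5800
```

Comparing this number against the total characters of a planned batch job before starting it is what prevents mid-task interruptions.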
Metadata
Paste this into your clawhub.json to enable this plugin.
{
"plugins": {
"official-odrobnik-elevenlabs-skill": {
"enabled": true,
"auto_update": true
}
}
}
Tags: AI
Flags: network-access, file-write, file-read, external-api
Related Skills
tesla-fleet-api
Use when integrating with Tesla's official Fleet API to read vehicle/energy device data or issue remote commands (e.g. start HVAC preconditioning, wake vehicle, charge controls). Covers onboarding (developer app registration, regions/base URLs), OAuth token flows (third-party + partner tokens, refresh rotation), required domain/public-key hosting, and using Tesla's official vehicle-command/tesla-http-proxy for signed vehicle commands.
unifi
Monitor UniFi network infrastructure via the UniFi Site Manager API. Use to list hosts/sites/devices/APs and get high-level client/device counts.
codexmonitor
List/inspect/watch local OpenAI Codex sessions (CLI + VS Code) using the CodexMonitor Homebrew formula. Reads sessions from ~/.codex/sessions by default (or via CODEX_SESSIONS_DIR / CODEX_HOME overrides). Requires the cocoanetics/tap Homebrew tap.
snapmaker-2
Control and monitor Snapmaker 2.0 3D printers via their HTTP API. Status, job management, progress watching, and event monitoring.
printer
Print images and PDFs to any CUPS printer. PPD-aware: reads paper sizes, margins, resolution, and duplex at runtime. Use when the user wants to print files (images like PNG/JPG or PDFs) or query printer capabilities.