ClawKit Reliability Toolkit
Community Verified · developer tools · Safety 5/5

sherpa-onnx-tts

Local text-to-speech via sherpa-onnx (offline, no cloud)


Install via CLI (Recommended)

clawhub install openclaw/openclaw/skills/sherpa-onnx-tts
Or install manually by following the steps under Installation below.

What This Skill Does

The sherpa-onnx-tts skill provides local, offline text-to-speech (TTS) capabilities powered by the sherpa-onnx library. This means you can convert written text into spoken audio without needing an internet connection or relying on cloud-based services. This is particularly useful for scenarios where privacy is paramount, internet access is unreliable, or for reducing latency in applications that require real-time voice output. The skill integrates with the OpenClaw environment, allowing for easy configuration and execution.

Installation

To install and configure the sherpa-onnx-tts skill, follow these steps:

  1. Download the Runtime: Obtain the sherpa-onnx runtime executable for your operating system. Extract its contents into the ~/.openclaw/tools/sherpa-onnx-tts/runtime directory.
  2. Download a Voice Model: Download a desired TTS voice model from the sherpa-onnx tts-models releases. Extract the model files into the ~/.openclaw/tools/sherpa-onnx-tts/models directory. Ensure the model is compatible with sherpa-onnx.
  3. Configure OpenClaw: Update your OpenClaw configuration file (~/.openclaw/openclaw.json) to point to the downloaded runtime and model. The env section should specify SHERPA_ONNX_RUNTIME_DIR and SHERPA_ONNX_MODEL_DIR. For example:
    {
      "skills": {
        "entries": {
          "sherpa-onnx-tts": {
            "env": {
              "SHERPA_ONNX_RUNTIME_DIR": "~/.openclaw/tools/sherpa-onnx-tts/runtime",
              "SHERPA_ONNX_MODEL_DIR": "~/.openclaw/tools/sherpa-onnx-tts/models/vits-piper-en_US-lessac-high"
            }
          }
        }
      }
    }
    
    Note that the SHERPA_ONNX_MODEL_DIR should point to the specific directory containing your chosen voice model.
  4. Add Wrapper to PATH (Optional): For easier access, you can add the skill's binary directory to your system's PATH. The wrapper script is located within the skill's directory. Run the following command in your terminal, replacing {baseDir} with the actual path to the skill's installation directory:
    export PATH="{baseDir}/bin:$PATH"
    
    For Windows users, you might need to run it via Node.js: node {baseDir}\bin\sherpa-onnx-tts ...
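The layout and configuration from steps 1–3 can be sanity-checked with a short shell sketch. This is a minimal illustration, not part of the installer: the directory paths and the "sherpa-onnx-tts" entry name are taken from this page, while the scratch-file name and the use of python3 for JSON validation are assumptions. The snippet writes the skill entry to a scratch file so you can inspect it before merging it into your real ~/.openclaw/openclaw.json.

```shell
# Sketch: create the expected directory layout, then write the skill's
# config entry to a scratch file and validate that it parses as JSON.
BASE="$HOME/.openclaw/tools/sherpa-onnx-tts"
MODEL="vits-piper-en_US-lessac-high"   # replace with your downloaded voice model
mkdir -p "$BASE/runtime" "$BASE/models/$MODEL"

# Scratch copy of the entry from step 3 (merge into ~/.openclaw/openclaw.json yourself).
cat > /tmp/openclaw-skill-entry.json <<EOF
{
  "skills": {
    "entries": {
      "sherpa-onnx-tts": {
        "env": {
          "SHERPA_ONNX_RUNTIME_DIR": "$BASE/runtime",
          "SHERPA_ONNX_MODEL_DIR": "$BASE/models/$MODEL"
        }
      }
    }
  }
}
EOF

# Fails loudly if the JSON above is malformed.
python3 -m json.tool /tmp/openclaw-skill-entry.json >/dev/null && echo "config OK"
```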

Alternatively, you can install the skill directly using the command: clawhub install openclaw/openclaw/skills/sherpa-onnx-tts.

Use Cases

This skill is ideal for a variety of applications where generating speech locally is beneficial:

  • Automated Notifications: Announce system events, alerts, or incoming messages audibly without relying on external services.
  • Personalized Voice Assistants: Integrate TTS into custom voice assistants for private or offline use.
  • Accessibility Tools: Create tools that read text aloud for users, ensuring privacy and offline functionality.
  • Content Creation: Generate audio snippets for videos, podcasts, or presentations locally, bypassing potential cloud service costs or restrictions.
  • Educational Software: Develop applications that read educational content aloud for students.

Example Prompts

Here are three example messages a user might send to OpenClaw to use this skill:

  1. "Convert the following text to speech and save it as audio.wav: 'The meeting has been rescheduled to 3 PM Eastern Time.'"
  2. "Read this article summary out loud for me: 'The latest research indicates a significant breakthrough in renewable energy storage technology.'"
  3. "Generate an audio file named 'welcome_message.wav' with the phrase 'Welcome to the secure local assistant.'"
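Prompt 3 roughly corresponds to invoking the wrapper directly. The wrapper name comes from this page, but the --output flag is an assumed spelling of its output option, so treat this as a sketch; the command -v guard keeps it safe to run even if the skill is not installed.

```shell
# Hypothetical direct invocation matching example prompt 3.
# --output is an assumption about the wrapper's CLI, not a documented flag.
if command -v sherpa-onnx-tts >/dev/null 2>&1; then
  sherpa-onnx-tts --output welcome_message.wav \
    "Welcome to the secure local assistant."
else
  echo "sherpa-onnx-tts not on PATH; see the Installation section"
fi
```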

Tips & Limitations

  • Model Variety: Experiment with different voice models from the sherpa-onnx tts-models releases to find a voice that best suits your needs. Each model may have different quality, speaking style, and language support.
  • Model Specifics: If your chosen model directory contains multiple .onnx files, you may need to explicitly specify which model file to use by setting the SHERPA_ONNX_MODEL_FILE environment variable or using the --model-file command-line argument.
  • Customization: The skill supports overriding default token files (--tokens-file) and data directories (--data-dir) if your model requires specific configurations.
  • Platform Differences: Be mindful of the command execution differences between operating systems. Windows users should use node {baseDir}\bin\sherpa-onnx-tts ....
  • Performance: The quality and speed of the TTS output will depend on the chosen model and the processing power of your local machine. Complex models or less powerful hardware may result in slower synthesis times.
  • Error Handling: Ensure you have correctly installed the runtime and downloaded compatible models. Incorrect paths or incompatible files will lead to execution errors.
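The override options from the tips above can be combined as follows. The flag names (--model-file, --tokens-file, --data-dir) and the SHERPA_ONNX_MODEL_FILE variable are quoted from this page; the specific file names (the .onnx file, tokens.txt, espeak-ng-data) and the output path are illustrative assumptions about a typical model directory, so adjust them to your model.

```shell
# Sketch: pick one .onnx file from a multi-model directory and pass the
# customization flags described in the tips. All file names are examples.
export SHERPA_ONNX_MODEL_FILE="en_US-lessac-high.onnx"
if command -v sherpa-onnx-tts >/dev/null 2>&1; then
  sherpa-onnx-tts \
    --model-file  "$SHERPA_ONNX_MODEL_FILE" \
    --tokens-file tokens.txt \
    --data-dir    espeak-ng-data \
    --output      note.wav \
    "Build finished successfully."
else
  echo "wrapper not installed; flags shown for illustration only"
fi
```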

Metadata

Author: @openclaw
Stars: 289479
Views: 27
Updated: 2026-03-09
Add to Configuration

Paste this into your clawhub.json to enable this plugin.

{
  "plugins": {
    "official-openclaw-sherpa-onnx-tts": {
      "enabled": true,
      "auto_update": true
    }
  }
}
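Before pasting the snippet above into clawhub.json, it can help to confirm it parses as strict JSON. This is a sketch under two assumptions not stated on this page: that python3 is available, and that a scratch file in /tmp is an acceptable place for the check; the plugin key is copied verbatim from the snippet.

```shell
# Quick check that the plugin snippet is valid JSON before merging it
# into your clawhub.json by hand.
cat > /tmp/clawkit-plugin-snippet.json <<'EOF'
{
  "plugins": {
    "official-openclaw-sherpa-onnx-tts": {
      "enabled": true,
      "auto_update": true
    }
  }
}
EOF
python3 -m json.tool /tmp/clawkit-plugin-snippet.json >/dev/null && echo "snippet OK"
```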

Tags (AI)

#tts #offline #local #speech-synthesis
Safety Score: 5/5

Flags: file-write, file-read, code-execution