ClawKit Reliability Toolkit
Official · Verified · Category: utilities · Safety: 4/5

voice-wake-say

Speak responses aloud on macOS using the built-in `say` command when user input indicates Voice Wake/voice recognition (for example, messages starting with "User talked via voice recognition on <device>").

Why use this skill?

Use the local macOS say command to enable voice-activated responses in OpenClaw. Improve your agent's accessibility and hands-free interaction with native, high-speed TTS.


Install via CLI (Recommended)

clawhub install openclaw/skills/skills/xadenryan/clawdbot-skill-voice-wake-say
Alternatively, add the plugin entry to your clawhub.json manually, as shown in the Add to Configuration section below.

What This Skill Does

The voice-wake-say skill lets macOS-based OpenClaw agents provide auditory feedback through the native system say command. Unlike cloud-based Text-to-Speech (TTS) tools, this skill performs all synthesis locally on your machine, giving fast, low-latency output and complete privacy. It is designed for voice-activated workflows: by detecting a specific trigger phrase at the start of a user prompt, it identifies when the user interacted with the agent through voice recognition software and responds aloud, bridging the gap between digital text processing and natural human conversation and making your agent feel more responsive and physically present in your workspace.
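The trigger-phrase detection described above can be sketched in POSIX shell. This is an illustrative sketch only, not the skill's actual source; the trigger string comes from the documentation, while the variable names and the exact prefix-stripping rule are assumptions:

```shell
#!/bin/sh
# Sketch: decide whether a message came in via Voice Wake by checking
# for the documented trigger phrase at the very start of the message.
TRIGGER="User talked via voice recognition on"
msg='User talked via voice recognition on m3: Check my calendar.'

case "$msg" in
  "$TRIGGER"*)
    mode=voice
    # Assumed convention: the spoken request follows the first ": ".
    text="${msg#*: }"
    ;;
  *)
    # No trigger prefix: stay in silent, text-only mode.
    mode=text
    text="$msg"
    ;;
esac
echo "$mode: $text"
```

Because the `case` pattern anchors the trigger at position zero, a message that merely mentions voice recognition mid-sentence would still be handled in text mode, which matches the behavior described under Tips & Limitations.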

Installation

To integrate this functionality into your OpenClaw environment, run the following command in your terminal:

clawhub install openclaw/skills/skills/xadenryan/clawdbot-skill-voice-wake-say

Ensure that the say binary is available on your system path; it ships with every standard macOS installation.
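Before enabling the skill, you can confirm that the say binary is actually on your PATH. A minimal preflight check (an assumption on my part, not part of the skill's installer):

```shell
#!/bin/sh
# Preflight: record whether the macOS `say` binary is reachable.
if command -v say >/dev/null 2>&1; then
  say_status=available
else
  say_status=missing
fi
echo "say is $say_status"
```

On a stock macOS installation this reports "available"; anywhere else, the skill's documented fallback to text-only output would apply.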

Use Cases

This skill is ideal for hands-free computing scenarios where the user is dictating instructions while away from the keyboard. It is particularly effective for:

  • Hands-free task confirmation: Confirming that an agent has received a command like "send an email" or "schedule a meeting" before execution begins.
  • Accessibility support: Providing auditory cues for users with visual impairments or those who prefer spoken confirmation over reading text.
  • Remote monitoring: Hearing status updates or error messages from an agent without needing to unlock the screen or switch windows.

Example Prompts

  1. "User talked via voice recognition on m3: Please check my calendar for tomorrow and summarize the meetings."
  2. "User talked via voice recognition on m3: Create a new text file named research-notes and add the current date to it."
  3. "User talked via voice recognition on m3: Repeat prompt first, then search the web for the latest weather in San Francisco."

Tips & Limitations

  • Context Handling: The trigger phrase must be the very first thing in the message. If the agent receives a follow-up message without the trigger, it will default to silent text-only mode to prevent annoying interruptions.
  • Formatting: Always strip markdown syntax or excessive code blocks before passing text to say. If your response is a large code snippet, simply say 'I have generated the code; please see the screen for details.'
  • Customization: You can adjust the SAY_VOICE and SAY_RATE environment variables to suit your preferences for tone and speed. If the say command fails, the agent will gracefully fall back to text output to ensure the core task is still completed.
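The customization and fallback behavior above can be sketched as a small wrapper. This is a hedged illustration, not the skill's real implementation: SAY_VOICE and SAY_RATE are the variables named in the tips, while the "Samantha" default voice, the 200 wpm default rate, and the speak helper are assumptions of this sketch. The -v (voice) and -r (words per minute) flags are standard options of the macOS say command:

```shell
#!/bin/sh
# Sketch: speak text with configurable voice/rate, falling back to
# plain text output when `say` is unavailable or fails.
SAY_VOICE="${SAY_VOICE:-Samantha}"   # default voice: assumed for this sketch
SAY_RATE="${SAY_RATE:-200}"          # speaking rate in words per minute: assumed default

speak() {
  if command -v say >/dev/null 2>&1; then
    # Speak aloud; if say errors out, still surface the text.
    say -v "$SAY_VOICE" -r "$SAY_RATE" "$1" || echo "$1"
  else
    # Non-macOS or missing binary: text-only fallback.
    echo "$1"
  fi
}

speak "Task complete."
```

Overriding the environment variables before launching the agent (for example, SAY_RATE=250) changes tone and speed without touching the skill itself.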

Metadata

Author: @xadenryan
Stars: 919
Views: 0
Updated: 2026-02-12
Add to Configuration

Paste this into your clawhub.json to enable this plugin.

{
  "plugins": {
    "official-xadenryan-clawdbot-skill-voice-wake-say": {
      "enabled": true,
      "auto_update": true
    }
  }
}

Tags

#macos #tts #voice #automation #accessibility
Safety Score: 4/5

Flags: code-execution