ClawKit Reliability Toolkit
Official · Verified · Utilities · Safety 4/5

voice-wake-say

Speak responses aloud on macOS using the built-in `say` command when user input indicates Voice Wake/voice recognition (for example, messages starting with "User talked via voice recognition on <device>").

Why use this skill?

Integrate macOS local TTS into OpenClaw. Get instant verbal feedback for voice-activated tasks using native system tools without cloud latency.


Install via CLI (Recommended)

clawhub install openclaw/skills/skills/xadenryan/voice-wake-say

What This Skill Does

The voice-wake-say skill provides verbal feedback for OpenClaw users interacting via voice-activated input on macOS. When an interaction originates from voice recognition, the skill intercepts the response and uses the native macOS say utility to read the answer aloud. Because it relies on the system's local text-to-speech (TTS) engine, responses are immediate and avoid the latency and privacy concerns of cloud-based TTS APIs.

The skill prioritizes user feedback: it triggers a spoken acknowledgment the moment a voice-based task is initiated, so the user knows the request was captured and is being processed.
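The flow described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the skill's actual implementation: the trigger pattern follows the phrase quoted earlier, and the function names are made up for this example.

```python
import re
import shutil
import subprocess

# Trigger format quoted in the description:
# "User talked via voice recognition on <device>: <message>"
TRIGGER = re.compile(
    r"^User talked via voice recognition on (?P<device>[^:]+):\s*(?P<message>.+)$",
    re.DOTALL,
)

def parse_voice_input(text: str):
    """Return (device, message) if the input matches the voice trigger, else None."""
    m = TRIGGER.match(text.strip())
    if m:
        return m.group("device").strip(), m.group("message").strip()
    return None

def speak(text: str) -> bool:
    """Read text aloud with the macOS `say` binary; return False if unavailable."""
    if shutil.which("say") is None:
        return False  # not macOS, or `say` missing -> caller falls back to text
    subprocess.run(["say", text], check=True)
    return True
```

A non-matching message simply yields `None`, so ordinary text interactions pass through untouched.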

Installation

To integrate this skill into your environment, use the OpenClaw command-line interface. Run the following command in your terminal:

clawhub install openclaw/skills/skills/xadenryan/voice-wake-say

Ensure you have the necessary system permissions for OpenClaw to execute shell commands, as the skill relies on the macOS say binary.
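Before enabling the skill, you can confirm the prerequisite mentioned above, namely that the `say` binary is reachable. A minimal sketch, assuming nothing beyond the standard library (the function names are illustrative, not part of the skill's API):

```python
import shutil
import subprocess

def say_available() -> bool:
    """Check that the macOS `say` binary is on PATH."""
    return shutil.which("say") is not None

def list_voices() -> list:
    """List installed TTS voices via `say -v ?` (macOS only)."""
    result = subprocess.run(
        ["say", "-v", "?"], capture_output=True, text=True, check=True
    )
    return [line.split()[0] for line in result.stdout.splitlines() if line.strip()]
```

On a non-macOS machine `say_available()` returns False, which is also the condition under which the skill's text fallback applies.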

Use Cases

This skill is perfect for hands-free workflows. Whether you are dictating coding tasks, requesting quick system status updates, or drafting documents while your hands are busy, voice-wake-say bridges the gap between text-based AI processing and natural language conversation. It is particularly useful in smart office environments or for users who rely on accessibility features to interact with their development tools.

Example Prompts

  1. "User talked via voice recognition on MacBook: Please check the system logs for recent error messages."
  2. "User talked via voice recognition on iMac: Summarize the contents of the last five emails I received."
  3. "User talked via voice recognition on MacBook: Write a short Python script to rename all files in the current folder."

Tips & Limitations

  • Contextual Awareness: The trigger relies strictly on the message content starting with the required phrase. It does not look at conversation history, so ensure your input stream maintains the specific trigger string.
  • Readability: Because the say command reads raw text, markdown syntax can sound awkward aloud. Keep spoken responses concise, and have the assistant summarize complex data into a speech-friendly form while displaying the full details on screen.
  • Error Handling: If the say command fails due to system configuration, the skill falls back to a standard text response, so you never miss information.
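The readability and error-handling tips above can be combined into a small sketch. This is an assumption-laden illustration: `strip_markdown` handles only the most common markdown syntax, and the fallback simply returns the text response unchanged.

```python
import re
import shutil
import subprocess

def strip_markdown(text: str) -> str:
    """Remove common markdown so `say` doesn't read asterisks and backticks aloud."""
    text = re.sub(r"[*_`#]+", "", text)                    # emphasis, code, headings
    text = re.sub(r"\[([^\]]*)\]\([^)]*\)", r"\1", text)   # [label](url) -> label
    return text.strip()

def respond(text: str) -> str:
    """Speak the response if possible; always return the text for on-screen display."""
    spoken = strip_markdown(text)
    if shutil.which("say") is not None:
        try:
            subprocess.run(["say", spoken], check=True)
        except subprocess.CalledProcessError:
            pass  # `say` failed -> fall back silently to the text response
    return text
```

Note that the full, unstripped text is always returned, matching the behavior described above: the screen shows the details while the voice delivers the summary.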

Metadata

Author: @xadenryan
Stars: 919
Views: 2
Updated: 2026-02-12
Add to Configuration

Paste this into your clawhub.json to enable this plugin.

{
  "plugins": {
    "official-xadenryan-voice-wake-say": {
      "enabled": true,
      "auto_update": true
    }
  }
}

Tags

#macos #accessibility #tts #voice-control #automation
Safety Score: 4/5

Flags: code-execution