ClawKit Reliability Toolkit

Official · Verified · Developer Tools · Safety 5/5

glin-profanity

Profanity detection and content moderation library with leetspeak, Unicode homoglyph, and ML-powered detection. Use when filtering user-generated content, moderating comments, checking text for profanity, censoring messages, or building content moderation into applications. Supports 24 languages.

Why use this skill?

Integrate Glin Profanity to detect leetspeak, Unicode tricks, and toxic language in 24 languages. Ensure safe, moderated communication.

skill-install — Terminal

Install via CLI (Recommended)

clawhub install openclaw/skills/skills/thegdsks/glin-profanity
Or install the underlying package manually with your project's package manager (see Installation below).

What This Skill Does

Glin-profanity is a robust, multi-layered content moderation library designed to detect profanity, hate speech, and obfuscated toxicity across 24 different languages. It is engineered to neutralize evasion tactics such as leetspeak (e.g., using '4' for 'a' or '1' for 'i'), Unicode homoglyph attacks (using Cyrillic characters to mimic Latin letters), and character-based obfuscation. By integrating this skill, developers gain a high-performance filtering mechanism that supports standard string matching, dictionary-based lookups, and optional ML-powered toxicity analysis via TensorFlow.js.
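The normalization step described above (mapping leetspeak digits and Unicode homoglyphs back to plain Latin letters before dictionary lookup) can be sketched in a few lines. This is an illustrative stand-in, not glin-profanity's actual implementation: the character maps and the `normalize` function are invented for this example, and the real library ships far larger per-language tables.

```javascript
// Illustrative normalization sketch for leetspeak and homoglyph evasion.
// These maps are tiny examples, not the library's actual tables.
const LEET_MAP = { '4': 'a', '1': 'i', '3': 'e', '0': 'o', '5': 's', '7': 't', '@': 'a', '$': 's' };
const HOMOGLYPH_MAP = { 'а': 'a', 'е': 'e', 'о': 'o', 'с': 'c', 'р': 'p' }; // Cyrillic → Latin

function normalize(text) {
  return [...text.toLowerCase()]                         // iterate code points, not bytes
    .map((ch) => LEET_MAP[ch] ?? HOMOGLYPH_MAP[ch] ?? ch) // fold each tricked character
    .join('');
}

console.log(normalize('Th1s is b4d')); // "this is bad"
console.log(normalize('сool'));        // Cyrillic 'с' folded → "cool"
```

A production normalizer would additionally collapse repeated characters and strip separators inserted between letters, which is what "character-based obfuscation" handling refers to.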

Installation

To integrate this skill into your OpenClaw agent or project, use the following command:

clawhub install openclaw/skills/skills/thegdsks/glin-profanity

Then install the library itself via your project's package manager:

  • JavaScript/TypeScript: npm install glin-profanity
  • Python: pip install glin-profanity

Use Cases

  • Chat Moderation: Automatically clean messages in real-time within gaming lobbies or community forums.
  • Social Media Tools: Filter incoming user comments or posts to maintain community safety guidelines.
  • Form Validation: Prevent toxic inputs during user registration or feedback form submission.
  • Content Filtering: Build safe-for-work (SFW) environments for educational or enterprise applications.
  • Domain-Specific Analysis: Utilize context whitelists to distinguish between offensive terms and clinical terminology in medical or technical writing.
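As a concrete illustration of the masking and context-whitelist ideas in the use cases above, here is a minimal self-contained censoring sketch. The word list, the `whitelist` option, and the `censor` function are all invented for this example; glin-profanity's real API and dictionaries differ, so consult the package documentation for actual usage.

```javascript
// Minimal censoring sketch: tokenize on word boundaries, mask blocklisted
// words with asterisks, and let a context whitelist exempt domain terms.
// BLOCKLIST and the whitelist option are stand-ins, not the library's API.
const BLOCKLIST = new Set(['damn', 'crap']);

function censor(text, { whitelist = new Set() } = {}) {
  return text
    .split(/\b/) // keeps the separators, so join('') restores original spacing
    .map((tok) => {
      const w = tok.toLowerCase();
      return BLOCKLIST.has(w) && !whitelist.has(w) ? '*'.repeat(tok.length) : tok;
    })
    .join('');
}

console.log(censor('What a damn mess')); // "What a **** mess"
console.log(censor('damn', { whitelist: new Set(['damn']) })); // unchanged: "damn"
```

The whitelist check is what lets the same blocklist coexist with clinical or technical writing: terms allowed in a given domain simply pass through unmasked.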

Example Prompts

  1. "Filter the user comment 'Th1s is a total f4ckin disgrace' and provide the cleaned output."
  2. "Check the following message for profanity in Spanish and replace all bad words with asterisks: 'Vete al demonio'."
  3. "Run a batch check on this list of chat logs [Insert Log File] and generate a report identifying any high-confidence toxicity violations."

Tips & Limitations

  • Performance: Use the cacheSize configuration option to optimize performance if you are filtering high volumes of messages in real-time.
  • Context Awareness: Always define a domain (e.g., 'medical' or 'gaming') when checking text to avoid false positives on words that might be offensive in general conversation but neutral in specific professional contexts.
  • Language Coverage: Although 24 languages are supported, list every target language explicitly in your configuration; relying on defaults can miss profanity written in a language you did not enable.
  • ML Usage: The ML-powered toxicity detection provides higher accuracy for complex sentiment but requires more memory and compute, so reserve it for messages that pass initial dictionary checks.
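The last two tips combine naturally into a tiered pipeline: a bounded cache in front, a cheap dictionary pass, and the expensive ML scorer only for text the dictionary clears. The sketch below is a pattern illustration with invented names (`dictionaryHit`, `mlToxicity`, `CACHE_SIZE`); it is not glin-profanity's API, and the ML stage is a synchronous placeholder where a TensorFlow.js model call would go.

```javascript
// Tiered moderation sketch: cache → dictionary → ML. All names invented.
const cache = new Map();
const CACHE_SIZE = 1000; // stand-in for a cacheSize-style option

function dictionaryHit(text) {
  return /\b(damn|crap)\b/i.test(text); // stand-in word list
}

function mlToxicity(text) {
  return 0; // placeholder: a real pipeline would score with a toxicity model here
}

function moderate(text) {
  if (cache.has(text)) return cache.get(text);    // fast path for repeated messages
  let flagged = dictionaryHit(text);              // stage 1: cheap dictionary check
  if (!flagged) flagged = mlToxicity(text) > 0.8; // stage 2: ML only if stage 1 passes
  if (cache.size >= CACHE_SIZE) {
    cache.delete(cache.keys().next().value);      // evict oldest entry (FIFO)
  }
  cache.set(text, flagged);
  return flagged;
}
```

Note that this eviction is FIFO for brevity; a real message cache would more likely be LRU, and the 0.8 toxicity threshold is arbitrary.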

Metadata

Author: @thegdsks
Stars: 946
Views: 0
Updated: 2026-02-13
Add to Configuration

Paste this into your clawhub.json to enable this plugin.

{
  "plugins": {
    "official-thegdsks-glin-profanity": {
      "enabled": true,
      "auto_update": true
    }
  }
}

Tags

#moderation #profanity #filtering #security #nlp