glin-profanity
Profanity detection and content moderation library with leetspeak, Unicode homoglyph, and ML-powered detection. Use when filtering user-generated content, moderating comments, checking text for profanity, censoring messages, or building content moderation into applications. Supports 24 languages.
Why use this skill?
Integrate Glin Profanity to detect leetspeak, Unicode tricks, and toxic language in 24 languages. Ensure safe, moderated communication.
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/thegdsks/glin-profanity
What This Skill Does
Glin-profanity is a robust, multi-layered content moderation library designed to detect profanity, hate speech, and obfuscated toxicity across 24 different languages. It is engineered to neutralize evasion tactics such as leetspeak (e.g., using '4' for 'a' or '1' for 'i'), Unicode homoglyph attacks (using Cyrillic characters to mimic Latin letters), and character-based obfuscation. By integrating this skill, developers gain a high-performance filtering mechanism that supports standard string matching, dictionary-based lookups, and optional ML-powered toxicity analysis via TensorFlow.js.
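The normalization step that defeats leetspeak and homoglyph evasion can be sketched in a few lines. This is a minimal illustration of the technique, not glin-profanity's actual internals; the mapping tables and function names below are assumptions for demonstration:

```python
# Sketch of leetspeak and Unicode-homoglyph normalization before dictionary
# lookup. The tables are illustrative, not glin-profanity's real data.

LEET_MAP = {"4": "a", "@": "a", "1": "i", "!": "i", "3": "e",
            "0": "o", "$": "s", "5": "s", "7": "t"}

# Cyrillic characters that visually mimic Latin letters (homoglyphs).
HOMOGLYPH_MAP = {"а": "a", "е": "e", "о": "o", "р": "p", "с": "c", "х": "x"}

def normalize(text: str) -> str:
    """Fold leetspeak digits/symbols and homoglyphs into plain Latin letters."""
    out = []
    for ch in text.lower():
        ch = HOMOGLYPH_MAP.get(ch, ch)
        ch = LEET_MAP.get(ch, ch)
        out.append(ch)
    return "".join(out)

def is_profane(text: str, banned: set[str]) -> bool:
    """Dictionary lookup against the normalized form of each token."""
    return any(normalize(tok) in banned for tok in text.split())
```

Because every token is folded to a canonical form first, a single dictionary entry like `damn` also catches `d4mn` or a Cyrillic-spoofed variant.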
Installation
To integrate this skill into your OpenClaw agent or project, use the following command:
clawhub install openclaw/skills/skills/thegdsks/glin-profanity
You can then install the supporting libraries directly via your project's package manager:
- JavaScript/TypeScript:
npm install glin-profanity
- Python:
pip install glin-profanity
Use Cases
- Chat Moderation: Automatically clean messages in real-time within gaming lobbies or community forums.
- Social Media Tools: Filter incoming user comments or posts to maintain community safety guidelines.
- Form Validation: Prevent toxic inputs during user registration or feedback form submission.
- Content Filtering: Build safe-for-work (SFW) environments for educational or enterprise applications.
- Domain-Specific Analysis: Utilize context whitelists to distinguish between offensive terms and clinical terminology in medical or technical writing.
Example Prompts
- "Filter the user comment 'Th1s is a total f4ckin disgrace' and provide the cleaned output."
- "Check the following message for profanity in Spanish and replace all bad words with asterisks: 'Vete al demonio'."
- "Run a batch check on this list of chat logs [Insert Log File] and generate a report identifying any high-confidence toxicity violations."
Tips & Limitations
- Performance: Use the cacheSize configuration option to optimize performance if you are filtering high volumes of messages in real time.
- Context Awareness: Always define a domain (e.g., 'medical' or 'gaming') when checking text to avoid false positives on words that might be offensive in general conversation but neutral in specific professional contexts.
- Language Coverage: While it supports 24 languages, ensure your configuration explicitly lists all target languages to improve detection accuracy and prevent linguistic leakage.
- ML Usage: The ML-powered toxicity detection provides higher accuracy for complex sentiment but requires more memory and compute, so reserve it for messages that pass initial dictionary checks.
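The caching and ML-usage tips combine naturally into a two-stage pipeline: a cheap, cached dictionary pass runs first, and the expensive ML scorer is reserved for messages that get past it. The ML call below is a stub standing in for a TensorFlow.js-style toxicity model; all names are illustrative:

```python
from functools import lru_cache

BANNED = frozenset({"damn", "hell"})

@lru_cache(maxsize=4096)  # plays the role of a cacheSize-style option
def dictionary_hit(message: str) -> bool:
    """Cheap stage 1: exact dictionary lookup over normalized tokens."""
    return any(tok in BANNED for tok in message.lower().split())

def ml_toxicity_score(message: str) -> float:
    """Stub standing in for an expensive ML toxicity model."""
    return 0.9 if "idiot" in message.lower() else 0.1

def moderate(message: str, threshold: float = 0.8) -> bool:
    """Stage 1 short-circuits; stage 2 runs only on dictionary misses."""
    if dictionary_hit(message):
        return True
    return ml_toxicity_score(message) >= threshold
```

Because identical messages are common in high-volume chat, the LRU cache means repeated spam costs one lookup instead of a full pipeline pass.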
Metadata
Paste this into your clawhub.json to enable this plugin.
{
"plugins": {
"official-thegdsks-glin-profanity": {
"enabled": true,
"auto_update": true
}
}
}
Related Skills
openclaw-profanity
Content moderation plugin for OpenClaw/Moltbot AI agents. Use when building chatbots that need profanity filtering, moderating user messages in Discord/Slack/Telegram bots, or adding content moderation to OpenClaw agents.
glin-profanity-mcp
MCP server providing profanity detection tools for AI assistants. Use when reviewing batches of user content, auditing comments for moderation reports, analyzing text for profanity before publishing, or when AI needs content moderation capabilities during workflows.