Install via CLI (Recommended)
clawhub install openclaw/skills/skills/wrynai/wrynai-skill
WrynAI Web Crawling Skill
Overview
This skill enables OpenClaw to perform advanced web crawling and content extraction using the WrynAI SDK. It supports multi-page crawling, content extraction, search engine results parsing, and intelligent data gathering from websites.
Core Capabilities
- Multi-page crawling with depth and breadth control
- Content extraction (text, markdown, structured data, links)
- Search engine results parsing (SERP data)
- Screenshot capture (viewport and full-page)
- Smart listing extraction (e-commerce, directory pages)
- Pattern-based URL filtering for targeted crawling
Prerequisites
Environment Setup
# Install the WrynAI SDK
pip install wrynai
# Set your API key as environment variable
export WRYNAI_API_KEY="your-api-key-here"
API Key
Sign up at https://wryn.ai to obtain an API key. The key must be set in the WRYNAI_API_KEY environment variable.
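A quick sanity check before running the examples below, confirming the variable is visible to Python (this only verifies that the key is set, not that it is valid):

import os

# Fail fast if the key is missing; every example in this skill requires it.
if not os.environ.get("WRYNAI_API_KEY"):
    raise SystemExit("WRYNAI_API_KEY is not set; export it before using this skill")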
Usage Patterns
1. Basic Website Crawling
Use this when the user wants to crawl an entire website or section of a website.
import os
from wrynai import WrynAI, WrynAIError

def crawl_website(url: str, max_pages: int = 10) -> dict:
    """
    Crawl a website starting from the given URL.

    Args:
        url: Starting URL for the crawl
        max_pages: Maximum number of pages to crawl (hard limit: 10)

    Returns:
        Dictionary containing crawl results with pages and their content
    """
    api_key = os.environ.get("WRYNAI_API_KEY")
    if not api_key:
        raise ValueError("WRYNAI_API_KEY environment variable required")
    try:
        with WrynAI(api_key=api_key) as client:
            result = client.crawl(
                url=url,
                max_pages=min(max_pages, 10),  # Hard limit enforced
                max_depth=3,
                return_urls=True,
            )
            return {
                "success": result.success,
                "total_pages": result.total_pages,
                "total_visited": result.total_visited,
                "pages": [
                    {
                        "url": page.page_url,
                        "content": page.content,
                        "urls_found": len(page.urls),
                        "discovered_urls": page.urls[:10],  # First 10 URLs
                    }
                    for page in result.pages
                ],
            }
    except WrynAIError as e:
        return {
            "success": False,
            "error": str(e),
            "status_code": getattr(e, "status_code", None),
        }
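For example, a minimal invocation that crawls a small site and summarizes the result (the URL is a placeholder; it assumes WRYNAI_API_KEY is exported as above):

if __name__ == "__main__":
    results = crawl_website("https://example.com", max_pages=5)
    if results["success"]:
        # total_pages / total_visited come from the dict built by crawl_website
        print(f"Crawled {results['total_pages']} pages ({results['total_visited']} visited)")
        for page in results["pages"]:
            print(f"- {page['url']}: {page['urls_found']} links discovered")
    else:
        print("Crawl failed:", results["error"])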
When to use:
- User asks to "crawl a website"
- User wants to gather content from multiple pages
- User needs to discover site structure
2. Documentation Crawling
Specialized crawling for documentation sites with pattern filtering.
from wrynai import WrynAI, Engine
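Building on that import, here is a minimal sketch of a documentation crawler. It reuses the client.crawl call from the basic example; the url_patterns parameter and Engine.BROWSER value are assumptions used for illustration (the full crawl signature is not shown here), so consult the WrynAI SDK reference for the exact names.

import os
from wrynai import WrynAI, Engine, WrynAIError

def crawl_docs(base_url: str, patterns: list[str], max_pages: int = 10) -> dict:
    """Crawl a documentation site, restricting the crawl to matching URLs.

    NOTE: url_patterns and Engine.BROWSER are hypothetical names used for
    illustration; check the WrynAI SDK reference for the exact API.
    """
    api_key = os.environ.get("WRYNAI_API_KEY")
    if not api_key:
        raise ValueError("WRYNAI_API_KEY environment variable required")
    try:
        with WrynAI(api_key=api_key) as client:
            result = client.crawl(
                url=base_url,
                max_pages=min(max_pages, 10),  # same hard limit as the basic example
                max_depth=3,
                url_patterns=patterns,  # hypothetical: e.g. ["/docs/*", "/guide/*"]
                engine=Engine.BROWSER,  # hypothetical: render JS-heavy doc pages
                return_urls=True,
            )
            return {
                "success": result.success,
                "pages": [
                    {"url": page.page_url, "content": page.content}
                    for page in result.pages
                ],
            }
    except WrynAIError as e:
        return {"success": False, "error": str(e)}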
Add to Configuration
Paste this into your clawhub.json to enable this plugin.
{
  "plugins": {
    "official-wrynai-wrynai-skill": {
      "enabled": true,
      "auto_update": true
    }
  }
}
Safety Note
ClawKit audits metadata but not runtime behavior. Use with caution.