human_test
Call real humans to test your product. Get structured usability feedback with NPS scores, step-by-step task reports, and AI-aggregated findings.
Install via CLI (Recommended)

```bash
clawhub install openclaw/skills/skills/avivahe326/human-test
```

human_test() — Real Human Feedback for AI Products
AI agents cannot judge human perception, emotion, or usability. This skill lets you call real humans to test any product URL and get structured feedback back.
What it does
- You call `human_test()` with a product URL
- AI auto-generates a structured test plan
- Real human testers claim the task on the web platform
- Each tester completes a 3-step guided feedback flow (first impression, task steps, NPS rating)
- AI aggregates all feedback into a structured report with severity-ranked findings
Quick start
You need an API key. Register at https://human-test.work/register to get one (free).
Create a test task
```bash
curl -X POST https://human-test.work/api/skill/human-test \
  -H "Authorization: Bearer <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://your-product.com",
    "focus": "Test the onboarding flow",
    "maxTesters": 5
  }'
```
Response:
```json
{
  "taskId": "cm...",
  "status": "OPEN",
  "testPlan": { "steps": [...], "nps": true, "estimatedMinutes": 10 }
}
```
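The create-task call above can be scripted rather than issued by hand. The sketch below is a minimal, unofficial Python client: the endpoint and field names come from the curl example in this README, and the injectable `post` hook exists only so the function can be exercised without network access.

```python
import json
import urllib.request

API_URL = "https://human-test.work/api/skill/human-test"

def create_task(api_key, product_url, focus=None, max_testers=5, post=None):
    """Create a human-test task and return the new taskId.

    `post` can be injected (e.g. for offline tests); by default it
    performs the same HTTPS POST as the curl example above.
    """
    payload = {"url": product_url, "maxTesters": max_testers}
    if focus:
        payload["focus"] = focus

    if post is None:
        def post(body):
            req = urllib.request.Request(
                API_URL,
                data=json.dumps(body).encode("utf-8"),
                headers={
                    "Authorization": f"Bearer {api_key}",
                    "Content-Type": "application/json",
                },
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())

    response = post(payload)
    return response["taskId"]
```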
Check progress and get the report
```bash
curl https://human-test.work/api/skill/status/<taskId> \
  -H "Authorization: Bearer <your-api-key>"
```
Response (when completed):
```json
{
  "taskId": "cm...",
  "status": "COMPLETED",
  "submittedCount": 5,
  "report": "## Executive Summary\n..."
}
```
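Since testing takes real humans real time, an agent will typically poll this endpoint on an interval. A minimal polling sketch, again using only the endpoint and status values shown in this README (the `fetch` hook is an assumption added so the loop can be tested offline):

```python
import json
import time
import urllib.request

STATUS_URL = "https://human-test.work/api/skill/status/"

def poll_report(task_id, api_key, interval=30, max_polls=40, fetch=None):
    """Poll the status endpoint until the task is COMPLETED, then return the report.

    `fetch` can be injected for offline tests; by default it performs
    the same authorized GET as the curl example above.
    """
    if fetch is None:
        def fetch(url):
            req = urllib.request.Request(
                url, headers={"Authorization": f"Bearer {api_key}"}
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())

    for _ in range(max_polls):
        data = fetch(STATUS_URL + task_id)
        if data["status"] == "COMPLETED":
            return data["report"]
        time.sleep(interval)  # task still open: wait before the next poll
    raise TimeoutError(f"task {task_id} not completed after {max_polls} polls")
```

For long-running tests, prefer the `webhookUrl` parameter below over polling.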
Parameters
| Parameter | Required | Default | Description |
|---|---|---|---|
| url | Yes | — | Product URL to test |
| title | No | Auto from hostname | Task title |
| focus | No | — | What testers should focus on |
| maxTesters | No | 5 | Number of testers (1-50) |
| estimatedMinutes | No | 10 | Expected test duration |
| webhookUrl | No | — | HTTPS URL to receive the report on completion |
| repoUrl | No | — | GitHub/Gitee repo URL for code-level fix suggestions |
| repoBranch | No | repo default | Branch to analyze (only used with repoUrl) |
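A request body can be assembled from these parameters programmatically. The helper below is a hypothetical convenience (not part of the skill itself) that enforces the constraints stated in the table: `maxTesters` must be 1-50, `webhookUrl` must be HTTPS, and `repoBranch` only makes sense alongside `repoUrl`.

```python
def build_task_payload(
    url,
    title=None,
    focus=None,
    max_testers=5,
    estimated_minutes=10,
    webhook_url=None,
    repo_url=None,
    repo_branch=None,
):
    """Build the JSON body for a create-task request, applying the
    constraints from the parameter table above."""
    if not 1 <= max_testers <= 50:
        raise ValueError("maxTesters must be between 1 and 50")
    if webhook_url is not None and not webhook_url.startswith("https://"):
        raise ValueError("webhookUrl must be an HTTPS URL")
    if repo_branch is not None and repo_url is None:
        raise ValueError("repoBranch is only used together with repoUrl")

    payload = {
        "url": url,
        "maxTesters": max_testers,
        "estimatedMinutes": estimated_minutes,
    }
    # Optional fields are omitted entirely so the server can apply its
    # own defaults (e.g. a title derived from the hostname).
    for key, value in (
        ("title", title),
        ("focus", focus),
        ("webhookUrl", webhook_url),
        ("repoUrl", repo_url),
        ("repoBranch", repo_branch),
    ):
        if value is not None:
            payload[key] = value
    return payload
```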
Async webhook
If you provide a webhookUrl, the platform will POST the full report to that URL when all testers have submitted:
```json
{
  "taskId": "...",
  "status": "COMPLETED",
  "title": "Test: example.com",
  "targetUrl": "https://example.com",
  "report": "## Executive Summary\n...",
  "codeFixPrUrl": "https://github.com/user/repo/pull/1",
  "completedAt": "2026-03-02T12:00:00Z"
}
```
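On the receiving side, any HTTPS endpoint that accepts a JSON POST will do. A minimal local sketch using Python's standard-library HTTP server (for illustration only; in production you would put this behind TLS, since the platform requires an HTTPS `webhookUrl`, and verify the payload before trusting it):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

reports = []  # completed webhook payloads land here

class HumanTestWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        if payload.get("status") == "COMPLETED":
            reports.append(payload)  # hand off to your own pipeline here
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

def start_receiver(port=0):
    """Start the webhook listener in a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), HumanTestWebhook)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```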
Report format (structured for AI agents)
The report is returned as a markdown string in the report field. It uses a consistent, machine-parseable structure designed for AI agents to read and act on directly — for example, to automatically file issues, create PRs, or prioritize a fix backlog.
Section structure
Every report contains these exact sections in order:
```markdown
## Metadata
| Field | Value |
|-------|-------|
| Product | ... |
| URL | ... |
| Testers | N |
| Avg NPS | X.X/10 |

## Executive Summary
(3-5 sentences, most critical finding first)
```
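Because the section layout is fixed, the report can be parsed with a few regular expressions. A sketch (assuming only the structure shown above; the function names are illustrative, not part of the skill's API):

```python
import re

def parse_report(report_md):
    """Split a report into its ## sections and pull Avg NPS out of the
    Metadata table. Relies only on the fixed structure described above."""
    sections = {}
    # Each section runs from its "## Heading" line to the next one (or EOF).
    for match in re.finditer(r"^## (.+?)\n(.*?)(?=^## |\Z)", report_md, re.M | re.S):
        sections[match.group(1).strip()] = match.group(2).strip()

    nps = None
    nps_match = re.search(r"\|\s*Avg NPS\s*\|\s*([\d.]+)/10", report_md)
    if nps_match:
        nps = float(nps_match.group(1))
    return sections, nps
```

An agent could feed `sections["Executive Summary"]` straight into an issue body, or gate a release on the parsed NPS value.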
Paste this into your clawhub.json to enable this plugin:
```json
{
  "plugins": {
    "official-avivahe326-human-test": {
      "enabled": true,
      "auto_update": true
    }
  }
}
```
Related Skills

- **goldenseed**: Deterministic entropy streams for reproducible testing and procedural generation. Perfect 50/50 statistical distribution with hash verification. Not cryptographically secure - use for testing, worldgen, and scenarios where reproducibility matters more than unpredictability.
- **qa-check**: Mandatory quality assurance for all dev work before publishing. Use BEFORE deploying any project to production. Validates build, tests browser functionality, checks mobile responsiveness, and ensures no broken links/images.
- **review-orchestrator**: Get multiple perspectives on your work by coordinating reviews across cognitive modes.
- **failure-memory**: Stop making the same mistakes by turning failures into patterns that prevent recurrence.
- **workflow-patterns**: Systematic task implementation using TDD, phase checkpoints, and structured commits. Ensures quality through red-green-refactor cycles, 80% coverage gates, and verification protocols before proceeding.