ugc-campaign-pipeline
Complete UGC video campaign pipeline: product → hero image → variations → videos → edited final.

✅ USE WHEN:
- User says "crear campaña UGC" ("create UGC campaign") or "pipeline completo" ("full pipeline")
- Need end-to-end UGC video production
- Starting from product image/URL → final edited video
- Want the full Doritos-style workflow

❌ DON'T USE WHEN:
- Just need one step (use individual skills)
- Already have final videos, just editing → use Remotion
- Only need images, no video → use Morpheus only

OUTPUT: Edited MP4 video with multiple scenes + subtitles + logo
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/pauldelavallaz/ugc-campaign-pipeline

UGC Campaign Pipeline
Complete workflow for creating UGC-style promotional videos from a product.
Pipeline Overview
┌─────────────────────────────────────────────────────────────────────┐
│ UGC CAMPAIGN PIPELINE │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ STEP 1: HERO IMAGE │
│ ├─ Input: Product image + model selection │
│ ├─ Tool: morpheus-fashion-design │
│ └─ Output: ~/clawd/outputs/{project}/morpheus/hero.png │
│ │
│ STEP 2: VARIATIONS │
│ ├─ Input: Hero image │
│ ├─ Tool: multishot-ugc │
│ └─ Output: ~/clawd/outputs/{project}/multishot/*.png (10 images) │
│ │
│ STEP 3: SELECTION │
│ ├─ Analyze all 11 images │
│ ├─ Criteria: variety, no errors, lip-sync friendly │
│ └─ Output: 4 best images selected │
│ │
│ STEP 4: SCRIPT │
│ ├─ Write 4-scene dialogue script │
│ ├─ Format: PURE DIALOGUE (no annotations) │
│ └─ Output: 4 lines of dialogue │
│ │
│ STEP 5: UGC VIDEOS │
│ ├─ Input: 4 images + 4 script lines │
│ ├─ Tool: veed-ugc (run 4 times) │
│ └─ Output: ~/clawd/outputs/{project}/ugc/*.mp4 (4 videos) │
│ │
│ STEP 6: FINAL EDIT │
│ ├─ Input: 4 videos + logo │
│ ├─ Tool: Remotion │
│ ├─ Add: subtitles, transitions, logo ending │
│ └─ Output: ~/clawd/outputs/{project}/final/video.mp4 │
│ │
└─────────────────────────────────────────────────────────────────────┘
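The six steps above can be sketched as a small orchestration script. This is a minimal sketch under stated assumptions: the actual skill invocations (morpheus-fashion-design, multishot-ugc, veed-ugc, Remotion) are left as placeholder comments since their call interfaces are not documented here; only the output-path layout follows the diagram.

```python
from pathlib import Path

def pipeline_paths(project: str, base: str = "~/clawd/outputs") -> dict:
    """Return the output directory layout used by each pipeline step."""
    root = Path(base).expanduser() / project
    return {
        "hero": root / "morpheus" / "hero.png",   # Step 1: hero image
        "variations": root / "multishot",         # Step 2: 10 variation PNGs
        "ugc": root / "ugc",                      # Step 5: 4 UGC MP4s
        "final": root / "final" / "video.mp4",    # Step 6: edited video
    }

def run_pipeline(project: str, script_lines: list[str]) -> Path:
    """Orchestrate the six steps. Tool calls are hypothetical placeholders."""
    paths = pipeline_paths(project)
    # Step 4 sanity check: the script must be exactly 4 lines of pure dialogue,
    # one per selected image/scene.
    assert len(script_lines) == 4, "script must have one dialogue line per scene"
    # Steps 1-3: generate hero, create variations, select 4 of the 11 images.
    # Step 5: run veed-ugc once per (image, dialogue line) pair.
    # Step 6: assemble with Remotion (subtitles, transitions, logo ending).
    return paths["final"]
```

The path helper makes the directory convention explicit, so each individual skill can be pointed at the same project root.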
Execution Checklist
Before Starting
- Product image received
- Brand/product name known
- Target audience understood
- Tone defined (casual, professional, energetic)
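The pre-flight checklist can be automated with a small validation helper. A minimal sketch, assuming the campaign brief is passed as a plain dict; the field names (`brand`, `audience`, `tone`) are illustrative, not a fixed schema.

```python
from pathlib import Path

REQUIRED_FIELDS = ("brand", "audience", "tone")
VALID_TONES = {"casual", "professional", "energetic"}

def preflight(product_image: str, brief: dict) -> list:
    """Return a list of problems; an empty list means ready to start."""
    problems = []
    # Checklist item 1: product image received
    if not Path(product_image).expanduser().is_file():
        problems.append(f"product image not found: {product_image}")
    # Checklist items 2-4: brand, audience, and tone are known
    for field in REQUIRED_FIELDS:
        if not brief.get(field):
            problems.append(f"missing brief field: {field}")
    if brief.get("tone") and brief["tone"] not in VALID_TONES:
        problems.append(f"unknown tone: {brief['tone']}")
    return problems
```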
Metadata
Paste this into your clawhub.json to enable this plugin.
{
  "plugins": {
    "official-pauldelavallaz-ugc-campaign-pipeline": {
      "enabled": true,
      "auto_update": true
    }
  }
}

Related Skills
morpheus-fashion-design
Generate professional advertising images with AI models holding/wearing products.

✅ USE WHEN:
- Need a person/model in the image WITH a product
- Creating fashion ads, product campaigns, commercial photography
- Want consistent model face across multiple shots
- Need professional lighting/camera simulation
- Input: product image + model reference (or catalog)

❌ DON'T USE WHEN:
- Just editing/modifying an existing image → use nano-banana-pro
- Product-only shot without a person → use nano-banana-pro
- Already have the hero image, need variations → use multishot-ugc
- Need video, not image → use veed-ugc after generating image
- URL-based product fetch with brand profile → use ad-ready instead

OUTPUT: Single high-quality PNG image (2K-4K resolution)
veed-ugc
Generate UGC-style promotional videos with AI lip-sync. Takes an image (person with product from Morpheus/Ad-Ready) and a script (pure dialogue), creates a video of the person speaking. Uses ElevenLabs for voice synthesis.
ugc-manual
Generate lip-sync video from image + user's own audio recording.

✅ USE WHEN:
- User provides their OWN audio file (voice recording)
- Want to sync image to specific audio/voice
- User recorded the script themselves
- Need exact audio timing preserved

❌ DON'T USE WHEN:
- User provides text script (not audio) → use veed-ugc
- Need AI to generate the voice → use veed-ugc
- Don't have audio file yet → use veed-ugc with script

INPUT: Image + audio file (user's recording)
OUTPUT: MP4 video with lip-sync to provided audio

KEY DIFFERENCE:
- veed-ugc = script → AI voice → video
- ugc-manual = user audio → video (no voice generation)
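The key difference between the two lip-sync skills reduces to one branch on the input type. A minimal illustrative sketch (the skill names are the real ones; the function itself is hypothetical, not part of any skill's API):

```python
def pick_lipsync_skill(has_audio_file: bool, has_text_script: bool) -> str:
    """Choose the lip-sync skill based on what the user provides."""
    if has_audio_file:
        return "ugc-manual"   # user audio → video, no voice generation
    if has_text_script:
        return "veed-ugc"     # script → AI voice (ElevenLabs) → video
    raise ValueError("need either an audio file or a text script")
```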
sora
Generate videos from text prompts or reference images using OpenAI Sora. Supports image-to-video generation with automatic resizing.

✅ USE WHEN:
- Need AI-generated video from text description
- Want image-to-video (animate a still image)
- Creating cinematic/artistic video content
- Need motion/animation without lip-sync

❌ DON'T USE WHEN:
- Need lip-sync (person speaking) → use veed-ugc or ugc-manual
- Just need image generation → use nano-banana-pro or morpheus
- Editing existing videos → use Remotion
- Need UGC-style talking head → use veed-ugc

INPUT: Text prompt + optional reference image
OUTPUT: MP4 video (various resolutions/durations)