Official · Verified

multishot-ugc

Generate 10 perspective/angle variations from a single image for multi-shot UGC videos.

✅ USE WHEN:

  • You have a hero image and need camera-angle variations
  • You are creating multi-scene UGC videos (need different shots)
  • You want close-ups, wide shots, or side angles from one source
  • You are building a video with scene changes

❌ DON'T USE WHEN:

  • You don't have a hero image yet → use morpheus-fashion-design first
  • You need completely different scenes/locations → run Morpheus multiple times
  • You just need one image → skip this step
  • You want to edit images manually → use nano-banana-pro

INPUT: Single image (person with product)
OUTPUT: 10 PNG variations with different perspectives
TYPICAL PIPELINE: Morpheus → multishot-ugc → select best 4 → veed-ugc each → Remotion edit


Install via CLI (Recommended)

clawhub install openclaw/skills/skills/pauldelavallaz/multishot-ugc

Multishot-UGC

Generate 10 perspective variations of an image using ComfyDeploy's MULTISHOT-UGC workflow.

Overview

Multishot-UGC takes a single image and generates 10 different variations exploring different perspectives, angles, and compositions. These variations are designed to be used in VEED lip-sync workflows to create dynamic UGC-style promotional videos with varied camera shots.

API Details

Endpoint: https://api.comfydeploy.com/api/run/deployment/queue
Deployment ID: 9ccbb29a-d982-48cc-a465-bae916f2c7fd

Inputs

Input          Description                       Default
------------   -------------------------------   -----------------------------------------------
input_image    URL or path to the source image   (required)
text           Description for exploration       "Explora distintas perspectivas de esta escena"
resolution     Output resolution                 "2K"
aspect_ratio   Output aspect ratio               "9:16"

Only input_image is required; the default text prompt is Spanish for "Explore different perspectives of this scene."
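If you prefer to queue a run over HTTP instead of using the bundled script, a minimal sketch follows. It assumes the queue endpoint accepts a bearer token and a JSON body with deployment_id and inputs fields whose keys match the table above — these field names and the auth scheme are assumptions, not confirmed by this page; check ComfyDeploy's API docs before relying on them.

```python
import json
import urllib.request

ENDPOINT = "https://api.comfydeploy.com/api/run/deployment/queue"
DEPLOYMENT_ID = "9ccbb29a-d982-48cc-a465-bae916f2c7fd"


def build_payload(image_url,
                  text="Explora distintas perspectivas de esta escena",
                  resolution="2K",
                  aspect_ratio="9:16"):
    """Assemble the queue request body. Key names mirror the inputs
    table above but are assumptions about the wire format."""
    return {
        "deployment_id": DEPLOYMENT_ID,
        "inputs": {
            "input_image": image_url,
            "text": text,
            "resolution": resolution,
            "aspect_ratio": aspect_ratio,
        },
    }


def queue_run(image_url, api_key):
    """POST the payload with a bearer token and return the parsed
    JSON response (expected to contain a run/queue identifier)."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_payload(image_url)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

build_payload is separated from the network call so the request body can be inspected or logged before anything is sent.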

Usage

uv run ~/.clawdbot/skills/multishot-ugc/scripts/generate.py \
  --image "./person-with-product.png" \
  --output-dir "./multishot-output" \
  [--text "Custom exploration prompt"] \
  [--resolution 1K|2K|4K] \
  [--aspect-ratio 9:16|16:9|1:1|4:3|3:4]

With URL:

uv run ~/.clawdbot/skills/multishot-ugc/scripts/generate.py \
  --image "https://example.com/image.png" \
  --output-dir "./variations"

Output

The workflow generates 10 PNG images with variations:

  • 1_00001_.png through 10_00001_.png

Each image explores a different perspective/angle of the original scene while maintaining subject identity and composition coherence.

Workflow Integration

Typical Pipeline

  1. Generate hero image with Morpheus/Ad-Ready

    uv run morpheus... --output hero.png
    
  2. Create 10 angle variations

    uv run multishot-ugc... --image hero.png --output-dir ./shots
    
  3. Select best variations for VEED lip-sync

    # Review shots, then generate videos for chosen ones
    uv run veed-ugc... --image ./shots/3_00001_.png --brief "..."
    

Notes

  • Source image should be high quality (at least 1K resolution)
  • Works best with images containing a clear subject/person
  • Generation takes ~2-3 minutes for 10 variations
  • All variations maintain the original aspect ratio unless specified

Metadata

  • Stars: 1217
  • Views: 0
  • Updated: 2026-02-20
Add to Configuration

Paste this into your clawhub.json to enable this plugin.

{
  "plugins": {
    "official-pauldelavallaz-multishot-ugc": {
      "enabled": true,
      "auto_update": true
    }
  }
}
Safety Note: ClawKit audits metadata but not runtime behavior. Use with caution.

Related Skills

morpheus-fashion-design

Generate professional advertising images with AI models holding/wearing products.

✅ USE WHEN:

  • You need a person/model in the image WITH a product
  • Creating fashion ads, product campaigns, or commercial photography
  • You want a consistent model face across multiple shots
  • You need professional lighting/camera simulation
  • Input: product image + model reference (or catalog)

❌ DON'T USE WHEN:

  • Just editing/modifying an existing image → use nano-banana-pro
  • Product-only shot without a person → use nano-banana-pro
  • You already have the hero image and need variations → use multishot-ugc
  • You need video, not an image → use veed-ugc after generating the image
  • URL-based product fetch with a brand profile → use ad-ready instead

OUTPUT: Single high-quality PNG image (2K-4K resolution)

pauldelavallaz 1217

veed-ugc

Generate UGC-style promotional videos with AI lip-sync. Takes an image (person with product from Morpheus/Ad-Ready) and a script (pure dialogue), creates a video of the person speaking. Uses ElevenLabs for voice synthesis.

pauldelavallaz 1217

ugc-manual

Generate a lip-sync video from an image plus the user's own audio recording.

✅ USE WHEN:

  • The user provides their OWN audio file (voice recording)
  • You want to sync an image to specific audio/voice
  • The user recorded the script themselves
  • Exact audio timing must be preserved

❌ DON'T USE WHEN:

  • The user provides a text script (not audio) → use veed-ugc
  • AI needs to generate the voice → use veed-ugc
  • There is no audio file yet → use veed-ugc with a script

INPUT: Image + audio file (user's recording)
OUTPUT: MP4 video lip-synced to the provided audio
KEY DIFFERENCE: veed-ugc = script → AI voice → video; ugc-manual = user audio → video (no voice generation)

pauldelavallaz 1217

sora

Generate videos using OpenAI's Sora API. Use when the user asks to generate, create, or make videos from text prompts or reference images. Supports image-to-video generation with automatic resizing.

pauldelavallaz 1217

sora

Generate videos from text prompts or reference images using OpenAI Sora.

✅ USE WHEN:

  • You need AI-generated video from a text description
  • You want image-to-video (animate a still image)
  • Creating cinematic/artistic video content
  • You need motion/animation without lip-sync

❌ DON'T USE WHEN:

  • You need lip-sync (a person speaking) → use veed-ugc or ugc-manual
  • You just need image generation → use nano-banana-pro or morpheus
  • Editing existing videos → use Remotion
  • You need a UGC-style talking head → use veed-ugc

INPUT: Text prompt + optional reference image
OUTPUT: MP4 video (various resolutions/durations)

pauldelavallaz 1217