multishot-ugc
Generate 10 perspective/angle variations from a single image for multi-shot UGC videos.
✅ USE WHEN:
- Have a hero image and need camera angle variations
- Creating multi-scene UGC videos (need different shots)
- Want close-ups, wide shots, side angles from one source
- Building a video with scene changes
❌ DON'T USE WHEN:
- Don't have a hero image yet → use morpheus-fashion-design first
- Need completely different scenes/locations → use Morpheus multiple times
- Just need one image → skip this step
- Want to edit images manually → use nano-banana-pro
INPUT: Single image (person with product)
OUTPUT: 10 PNG variations with different perspectives
TYPICAL PIPELINE: Morpheus → multishot-ugc → select best 4 → veed-ugc each → Remotion edit
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/pauldelavallaz/multishot-ugc
Multishot-UGC
Generate 10 perspective variations of an image using ComfyDeploy's MULTISHOT-UGC workflow.
Overview
Multishot-UGC takes a single image and generates 10 variations that explore different perspectives, angles, and compositions. These variations are designed to feed VEED lip-sync workflows, producing dynamic UGC-style promotional videos with varied camera shots.
API Details
Endpoint: https://api.comfydeploy.com/api/run/deployment/queue
Deployment ID: 9ccbb29a-d982-48cc-a465-bae916f2c7fd
Required Inputs
| Input | Description | Default |
|---|---|---|
| `input_image` | URL or path to the source image | Required |
| `text` | Prompt guiding the perspective exploration | `"Explora distintas perspectivas de esta escena"` ("Explore different perspectives of this scene") |
| `resolution` | Output resolution | `"2K"` |
| `aspect_ratio` | Output aspect ratio | `"9:16"` |
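The inputs above map to a JSON body sent to the queue endpoint. A minimal sketch in Python, assuming the common ComfyDeploy request shape (a `deployment_id` field plus an `inputs` object) and a bearer token in `COMFY_DEPLOY_API_KEY` — both are assumptions, so verify against the ComfyDeploy API docs:

```python
import json

API_URL = "https://api.comfydeploy.com/api/run/deployment/queue"
DEPLOYMENT_ID = "9ccbb29a-d982-48cc-a465-bae916f2c7fd"

def build_payload(image_url: str,
                  text: str = "Explora distintas perspectivas de esta escena",
                  resolution: str = "2K",
                  aspect_ratio: str = "9:16") -> dict:
    """Assemble the queue request body from the documented inputs."""
    return {
        "deployment_id": DEPLOYMENT_ID,
        "inputs": {
            "input_image": image_url,
            "text": text,
            "resolution": resolution,
            "aspect_ratio": aspect_ratio,
        },
    }

if __name__ == "__main__":
    payload = build_payload("https://example.com/image.png")
    print(json.dumps(payload, indent=2))
    # To actually queue a run (untested sketch; auth header shape assumed):
    # import os, requests
    # headers = {"Authorization": f"Bearer {os.environ['COMFY_DEPLOY_API_KEY']}"}
    # requests.post(API_URL, json=payload, headers=headers, timeout=30)
```

In practice the bundled `generate.py` script handles this for you; the sketch only illustrates how the documented inputs are packaged.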
Usage
uv run ~/.clawdbot/skills/multishot-ugc/scripts/generate.py \
--image "./person-with-product.png" \
--output-dir "./multishot-output" \
[--text "Custom exploration prompt"] \
[--resolution 1K|2K|4K] \
[--aspect-ratio 9:16|16:9|1:1|4:3|3:4]
With URL:
uv run ~/.clawdbot/skills/multishot-ugc/scripts/generate.py \
--image "https://example.com/image.png" \
--output-dir "./variations"
Output
The workflow generates 10 PNG images with variations:
`1_00001_.png` through `10_00001_.png`
Each image explores a different perspective/angle of the original scene while maintaining subject identity and composition coherence.
Workflow Integration
Typical Pipeline
1. Generate hero image with Morpheus/Ad-Ready:
   uv run morpheus... --output hero.png
2. Create 10 angle variations:
   uv run multishot-ugc... --image hero.png --output-dir ./shots
3. Select best variations for VEED lip-sync:
   # Review shots, then generate videos for chosen ones
   uv run veed-ugc... --image ./shots/3_00001_.png --brief "..."
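The pipeline steps can also be driven from a script. A hedged sketch that builds the multishot command from the documented usage; the script path comes from the Usage section above, and running it via `subprocess` is left commented out:

```python
# Path documented in the Usage section of this skill.
MULTISHOT_SCRIPT = "~/.clawdbot/skills/multishot-ugc/scripts/generate.py"

def multishot_cmd(image: str, output_dir: str) -> list[str]:
    """Build the `uv run` invocation for the multishot-ugc skill."""
    return [
        "uv", "run", MULTISHOT_SCRIPT,
        "--image", image,
        "--output-dir", output_dir,
    ]

if __name__ == "__main__":
    cmd = multishot_cmd("hero.png", "./shots")
    print(" ".join(cmd))
    # import subprocess
    # subprocess.run(cmd, check=True)  # uncomment to execute the step
```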
Notes
- Source image should be high quality (at least 1K resolution)
- Works best with images containing a clear subject/person
- Generation takes ~2-3 minutes for 10 variations
- All variations maintain the original aspect ratio unless specified
Metadata
Paste this into your clawhub.json to enable this plugin.
{
"plugins": {
"official-pauldelavallaz-multishot-ugc": {
"enabled": true,
"auto_update": true
}
}
}
Related Skills
morpheus-fashion-design
Generate professional advertising images with AI models holding/wearing products.
✅ USE WHEN:
- Need a person/model in the image WITH a product
- Creating fashion ads, product campaigns, commercial photography
- Want consistent model face across multiple shots
- Need professional lighting/camera simulation
- Input: product image + model reference (or catalog)
❌ DON'T USE WHEN:
- Just editing/modifying an existing image → use nano-banana-pro
- Product-only shot without a person → use nano-banana-pro
- Already have the hero image, need variations → use multishot-ugc
- Need video, not image → use veed-ugc after generating image
- URL-based product fetch with brand profile → use ad-ready instead
OUTPUT: Single high-quality PNG image (2K-4K resolution)
veed-ugc
Generate UGC-style promotional videos with AI lip-sync. Takes an image (person with product from Morpheus/Ad-Ready) and a script (pure dialogue), creates a video of the person speaking. Uses ElevenLabs for voice synthesis.
ugc-manual
Generate lip-sync video from image + user's own audio recording.
✅ USE WHEN:
- User provides their OWN audio file (voice recording)
- Want to sync image to specific audio/voice
- User recorded the script themselves
- Need exact audio timing preserved
❌ DON'T USE WHEN:
- User provides text script (not audio) → use veed-ugc
- Need AI to generate the voice → use veed-ugc
- Don't have audio file yet → use veed-ugc with script
INPUT: Image + audio file (user's recording)
OUTPUT: MP4 video with lip-sync to provided audio
KEY DIFFERENCE: veed-ugc = script → AI voice → video; ugc-manual = user audio → video (no voice generation)
sora
Generate videos from text prompts or reference images using OpenAI Sora.
✅ USE WHEN:
- Need AI-generated video from text description
- Want image-to-video (animate a still image)
- Creating cinematic/artistic video content
- Need motion/animation without lip-sync
❌ DON'T USE WHEN:
- Need lip-sync (person speaking) → use veed-ugc or ugc-manual
- Just need image generation → use nano-banana-pro or morpheus
- Editing existing videos → use Remotion
- Need UGC-style talking head → use veed-ugc
INPUT: Text prompt + optional reference image
OUTPUT: MP4 video (various resolutions/durations)