ad-ready
Generate advertising images automatically from a product URL + brand profile.

✅ USE WHEN:
- User provides a product URL (e-commerce link)
- Want automated product scraping + image generation
- Have a brand profile to apply (70+ brands available)
- Need funnel-stage targeting (awareness/consideration/conversion)
- Want AI to auto-select model, scene, lighting based on brand

❌ DON'T USE WHEN:
- User provides a local product image file → use morpheus-fashion-design
- Don't need a person in the image → use nano-banana-pro
- Want manual control over model, scene, packs → use morpheus-fashion-design
- Already have a hero image, need variations → use multishot-ugc
- Need video output → use veed-ugc after image generation

INPUT: Product URL + brand name (optional) + funnel stage (optional)
OUTPUT: PNG advertising image with product + model
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/pauldelavallaz/ad-ready

Ad-Ready: AI Advertising Image Generator
Generate professional advertising images from product URLs using a 4-phase AI pipeline on ComfyDeploy.
Source: github.com/PauldeLavallaz/ads_SV
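Since the pipeline is deployed on ComfyDeploy, a run would be queued through its HTTP API. The sketch below is illustrative only: the endpoint path, input key names, and response shape are assumptions, not taken from the source — check the deployment's actual exposed inputs before use.

```python
import json
import urllib.request


def build_run_payload(deployment_id: str, product_url: str,
                      brand: str, funnel_stage: str) -> str:
    # Input key names ("product_url", "brand_name", "funnel_stage") are
    # illustrative; match them to the inputs the deployment actually exposes.
    return json.dumps({
        "deployment_id": deployment_id,
        "inputs": {
            "product_url": product_url,
            "brand_name": brand,
            "funnel_stage": funnel_stage,
        },
    })


def queue_ad_run(api_key: str, payload: str) -> dict:
    # Hypothetical run endpoint; consult ComfyDeploy's API docs for the
    # real path and response schema.
    req = urllib.request.Request(
        "https://api.comfydeploy.com/api/run",
        data=payload.encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Separating payload construction from the network call keeps the request body easy to inspect and test before anything is sent.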
Pipeline Architecture
The pipeline runs as a ComfyUI custom node deployed on ComfyDeploy. A single ProductToAds_Manual node executes 4 phases internally:
┌───────────────────────────────────────────────────────────┐
│                 ProductToAds_Manual Node                  │
│                                                           │
│  PHASE 1: Product Scraping (Gemini Flash)                 │
│  ────────────────────────────────────────                 │
│  Scrapes product URL → extracts title, description,       │
│  features, price, materials, image URLs                   │
│  Also scrapes HTML for high-res product images (≥1000px)  │
│                                                           │
│  PHASE 2: Campaign Brief Generation (Gemini Flash)        │
│  ─────────────────────────────────────────────────        │
│  Brand Identity + Product Data + References →             │
│  10-point Campaign Brief (creative direction)             │
│                                                           │
│  PHASE 3: Blueprint Generation (Gemini Flash)             │
│  ────────────────────────────────────────────             │
│  Master Prompt (funnel stage) + Brief + Keywords →        │
│  Production-Ready JSON Blueprint                          │
│                                                           │
│  PHASE 4: Image Generation (Nano Banana Pro / Imagen 3)   │
│  ──────────────────────────────────────────────────────   │
│  Blueprint + all reference images → final ad image        │
│                                                           │
└───────────────────────────────────────────────────────────┘
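The four phases above can be sketched as a sequential pipeline. This is a minimal illustration, not the node's actual implementation: the function names, injected callables, and prompt wording are assumptions.

```python
from typing import Callable


def run_ad_pipeline(product_url: str, brand_identity: str, funnel_stage: str,
                    llm: Callable[[str], str],
                    image_model: Callable[[str], bytes]) -> bytes:
    # Phase 1: scrape the product page into structured data
    product_data = llm(
        f"Extract title, description, features, price, materials and "
        f"image URLs from the product page at {product_url}."
    )
    # Phase 2: turn brand identity + product data into a 10-point brief
    brief = llm(
        f"Acting as a senior art director, write a 10-point campaign "
        f"brief.\nBrand: {brand_identity}\nProduct: {product_data}"
    )
    # Phase 3: compile the brief into a production-ready JSON blueprint,
    # conditioned on the funnel stage (awareness/consideration/conversion)
    blueprint = llm(
        f"Convert this brief into a JSON image blueprint for the "
        f"{funnel_stage} stage:\n{brief}"
    )
    # Phase 4: render the final ad image from the blueprint
    return image_model(blueprint)
```

Injecting the `llm` and `image_model` callables keeps each phase's output feeding the next, which is the key property of the chain: the image model never sees raw product data, only the compiled blueprint.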
Phase 2: Campaign Brief (The Creative Brain)
The Brief Generator is the most critical intermediate step. It acts as a "Senior Art Director" that translates raw data into actionable creative direction using a 10-point framework:
- Strategic Objective — Why this campaign exists (awareness/positioning/launch)
- Central Message — One idea perceivable without text
- Visual Tone of Voice — Register: calm/energetic/intimate/monumental
- Product Role — Hero vs co-protagonist vs implicit presence
- Visual Language & Brand Coherence — Non-negotiable brand codes
- Photographer & Equipment — Photography as concept, not execution
- Extended Art Direction — Styling, casting, poses, hair/makeup, layout
- Environment & Context — Where and why (conceptual, never decorative)
- Texture, Material & Product Render — How surfaces are perceived
- Final Image Signature — Finish, grain, temporal positioning
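The 10-point framework lends itself to a fixed prompt template. The sketch below is a hedged illustration of how the brief prompt might be assembled — the section names mirror the list above, but the function name and surrounding wording are assumptions:

```python
BRIEF_SECTIONS = [
    "Strategic Objective",
    "Central Message",
    "Visual Tone of Voice",
    "Product Role",
    "Visual Language & Brand Coherence",
    "Photographer & Equipment",
    "Extended Art Direction",
    "Environment & Context",
    "Texture, Material & Product Render",
    "Final Image Signature",
]


def brief_prompt(brand_identity: str, product_data: str) -> str:
    # Number every section so the model fills the framework in order.
    numbered = "\n".join(
        f"{i}. {name}" for i, name in enumerate(BRIEF_SECTIONS, 1)
    )
    return (
        "You are a senior art director. Using the brand identity and "
        "product data below, write a campaign brief covering exactly "
        "these 10 points:\n"
        f"{numbered}\n\n"
        f"Brand identity:\n{brand_identity}\n\n"
        f"Product data:\n{product_data}"
    )
```

Pinning the section list in code (rather than free-form prompting) is what makes the brief a reliable intermediate artifact for Phase 3 to consume.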
Paste this into your clawhub.json to enable this plugin.
{
"plugins": {
"official-pauldelavallaz-ad-ready": {
"enabled": true,
"auto_update": true
}
}
}

Related Skills
morpheus-fashion-design
Generate professional advertising images with AI models holding/wearing products.

✅ USE WHEN:
- Need a person/model in the image WITH a product
- Creating fashion ads, product campaigns, commercial photography
- Want consistent model face across multiple shots
- Need professional lighting/camera simulation
- Input: product image + model reference (or catalog)

❌ DON'T USE WHEN:
- Just editing/modifying an existing image → use nano-banana-pro
- Product-only shot without a person → use nano-banana-pro
- Already have the hero image, need variations → use multishot-ugc
- Need video, not image → use veed-ugc after generating image
- URL-based product fetch with brand profile → use ad-ready instead

OUTPUT: Single high-quality PNG image (2K-4K resolution)
veed-ugc
Generate UGC-style promotional videos with AI lip-sync. Takes an image (person with product from Morpheus/Ad-Ready) and a script (pure dialogue), creates a video of the person speaking. Uses ElevenLabs for voice synthesis.
ugc-manual
Generate lip-sync video from image + user's own audio recording.

✅ USE WHEN:
- User provides their OWN audio file (voice recording)
- Want to sync image to specific audio/voice
- User recorded the script themselves
- Need exact audio timing preserved

❌ DON'T USE WHEN:
- User provides text script (not audio) → use veed-ugc
- Need AI to generate the voice → use veed-ugc
- Don't have audio file yet → use veed-ugc with script

INPUT: Image + audio file (user's recording)
OUTPUT: MP4 video with lip-sync to provided audio

KEY DIFFERENCE:
- veed-ugc = script → AI voice → video
- ugc-manual = user audio → video (no voice generation)
sora
Generate videos from text prompts or reference images using OpenAI Sora.

✅ USE WHEN:
- Need AI-generated video from text description
- Want image-to-video (animate a still image)
- Creating cinematic/artistic video content
- Need motion/animation without lip-sync

❌ DON'T USE WHEN:
- Need lip-sync (person speaking) → use veed-ugc or ugc-manual
- Just need image generation → use nano-banana-pro or morpheus
- Editing existing videos → use Remotion
- Need UGC-style talking head → use veed-ugc

INPUT: Text prompt + optional reference image
OUTPUT: MP4 video (various resolutions/durations)