ad-ready
Generate professional advertising images from product URLs using the Ad-Ready pipeline on ComfyDeploy. Use when the user wants to create ads for any product by providing a URL, optionally with a brand profile (70+ brands) and funnel stage targeting. Supports model/talent integration, brand-aware creative direction, and multi-format output. Differs from Morpheus (manual fashion photography) — Ad-Ready is URL-driven, brand-intelligent, and funnel-stage aware.
Why use this skill?
Automate high-conversion advertising image generation with the Ad-Ready skill for OpenClaw. Turn product URLs into branded ad campaigns instantly.
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/pauldelavallaz/ad-ready-pro
What This Skill Does
Ad-Ready is a sophisticated AI advertising image generation agent designed for the ComfyDeploy ecosystem. It streamlines the creation of high-conversion marketing assets by automating the multi-phase pipeline required for professional ad production. Unlike manual photography or generic generative tools, Ad-Ready acts as a creative director that understands brand identity, funnel stages, and visual consistency. By processing a product URL, the agent intelligently extracts key information and leverages brand-specific configurations to produce output that is tailored for specific marketing objectives—from top-of-funnel awareness to bottom-of-funnel conversion. It effectively bridges the gap between raw product data and polished advertising creatives.
Installation
To install this skill, run the following command in your terminal within the OpenClaw environment:
clawhub install openclaw/skills/skills/pauldelavallaz/ad-ready-pro
Ensure you have the required dependencies for image processing and network access enabled to allow the agent to fetch external assets like logos and product photos.
Use Cases
- E-commerce Marketing: Rapidly generate professional ad variants for individual products directly from store links.
- Campaign Scaling: Create consistent, branded visuals across different funnel stages (e.g., Awareness vs. Conversion) for A/B testing.
- Creative Direction: Provide a reference image to clone specific visual styles from successful competitor ads or previous high-performing brand assets.
- Brand Asset Management: Maintain visual identity consistency by utilizing stored brand profiles that define specific aesthetic requirements.
Example Prompts
- "Generate a conversion-focused ad for this product: [URL] from Nike. Target the 'Conversion' funnel stage and use our standard brand profile."
- "Create an awareness-stage ad for these running shoes [URL]. Use the minimalist aesthetic found in our previous ad, which you can find in the project folder."
- "Help me build a brand profile for 'EcoWear' first, then create an Instagram story ad (9:16) for their new bamboo t-shirt using [URL]."
Tips & Limitations
- Preparation is Key: The quality of the output is heavily dependent on the quality of inputs. Always ensure you have a clean logo and a high-resolution product image. If the scraper struggles with the URL, provide the image file manually.
- Brand Consistency: Never bypass the brand-analyzer step if you are working with a new brand. A defined profile is the secret to professional results.
- Reference Images: The pipeline works best when given a clear reference. If you don't have one, search for an ad that captures the desired mood or layout to guide the generative process.
- Resource Management: Large image generations may take time. Ensure your ComfyDeploy instance is configured for sufficient processing power.
Metadata
Paste this into your clawhub.json to enable this plugin.
{
"plugins": {
"official-pauldelavallaz-ad-ready-pro": {
"enabled": true,
"auto_update": true
}
}
}
Tags: AI
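If a clawhub.json already exists, the snippet above can be merged in without clobbering other plugin entries. A minimal sketch using jq (assuming jq is installed and clawhub.json sits in the current directory — both are assumptions, not documented requirements of the skill):

```shell
# Hypothetical sketch: add the ad-ready-pro plugin entry to an existing
# clawhub.json, preserving any sibling entries under "plugins".
jq '.plugins["official-pauldelavallaz-ad-ready-pro"] = {enabled: true, auto_update: true}' \
  clawhub.json > clawhub.json.tmp && mv clawhub.json.tmp clawhub.json
```

Editing the file by hand works just as well; the jq form simply avoids accidentally overwriting other plugins already configured in the same file.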
Flags: network-access, file-write, file-read, external-api
Related Skills
morpheus-fashion-design
Generate professional advertising images with AI models holding/wearing products. Output: single high-quality PNG image (2K–4K resolution).
✅ USE WHEN:
- Need a person/model in the image WITH a product
- Creating fashion ads, product campaigns, commercial photography
- Want a consistent model face across multiple shots
- Need professional lighting/camera simulation
- Input: product image + model reference (or catalog)
❌ DON'T USE WHEN:
- Just editing/modifying an existing image → use nano-banana-pro
- Product-only shot without a person → use nano-banana-pro
- Already have the hero image and need variations → use multishot-ugc
- Need video, not an image → use veed-ugc after generating the image
- URL-based product fetch with a brand profile → use ad-ready instead
veed-ugc
Generate UGC-style promotional videos with AI lip-sync. Takes an image (person with product from Morpheus/Ad-Ready) and a script (pure dialogue), creates a video of the person speaking. Uses ElevenLabs for voice synthesis.
ugc-manual
Generate a lip-sync video from an image plus the user's own audio recording.
✅ USE WHEN:
- User provides their OWN audio file (voice recording)
- Want to sync an image to specific audio/voice
- User recorded the script themselves
- Need exact audio timing preserved
❌ DON'T USE WHEN:
- User provides a text script (not audio) → use veed-ugc
- Need AI to generate the voice → use veed-ugc
- Don't have an audio file yet → use veed-ugc with a script
INPUT: Image + audio file (user's recording)
OUTPUT: MP4 video with lip-sync to the provided audio
KEY DIFFERENCE: veed-ugc = script → AI voice → video; ugc-manual = user audio → video (no voice generation)
sora
Generate videos from text prompts or reference images using OpenAI Sora.
✅ USE WHEN:
- Need AI-generated video from a text description
- Want image-to-video (animate a still image)
- Creating cinematic/artistic video content
- Need motion/animation without lip-sync
❌ DON'T USE WHEN:
- Need lip-sync (a person speaking) → use veed-ugc or ugc-manual
- Just need image generation → use nano-banana-pro or morpheus
- Editing existing videos → use Remotion
- Need a UGC-style talking head → use veed-ugc
INPUT: Text prompt + optional reference image
OUTPUT: MP4 video (various resolutions/durations)