morpheus-fashion-design
Generate professional advertising images with AI models holding or wearing products.

✅ USE WHEN:
- Need a person/model in the image WITH a product
- Creating fashion ads, product campaigns, commercial photography
- Want consistent model face across multiple shots
- Need professional lighting/camera simulation
- Input: product image + model reference (or catalog)

❌ DON'T USE WHEN:
- Just editing/modifying an existing image → use nano-banana-pro
- Product-only shot without a person → use nano-banana-pro
- Already have the hero image, need variations → use multishot-ugc
- Need video, not image → use veed-ugc after generating image
- URL-based product fetch with brand profile → use ad-ready instead

OUTPUT: Single high-quality PNG image (2K-4K resolution)
Why use this skill?
Generate professional fashion and product advertising images with consistent AI models. Expert-level control over camera, lighting, and style for commercial campaigns.
Install via CLI (Recommended)
clawhub install openclaw/skills/skills/pauldelavallaz/morpheus-fashion-design

What This Skill Does
Morpheus Fashion Design is a high-end AI imaging agent skill that bridges the gap between product photography and professional editorial fashion shoots. Built on the ComfyDeploy Morpheus workflow, it generates hyper-realistic, commercial-grade imagery in which a consistent AI model wears or holds a physical product. Unlike standard text-to-image generators, this skill prioritizes professional photography standards: the user explicitly defines configuration packs (camera bodies, lenses, lighting environments) so the output meets the specific aesthetic requirements of high-end advertising campaigns.
Installation
To integrate this skill into your OpenClaw environment, execute the following command in your terminal:
clawhub install openclaw/skills/skills/pauldelavallaz/morpheus-fashion-design
Ensure you have your ComfyDeploy credentials configured, as this skill interacts directly with the Morpheus production deployment.
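As a minimal sketch of the setup step, assuming the skill reads a ComfyDeploy API key from the environment (the variable name `COMFYDEPLOY_API_KEY` and the key format are illustrative assumptions, not documented by this listing; check your ComfyDeploy account settings for the actual credential name):

```shell
# Hypothetical credential setup — COMFYDEPLOY_API_KEY is an assumed name,
# shown here only to illustrate the pattern of environment-based credentials.
export COMFYDEPLOY_API_KEY="${COMFYDEPLOY_API_KEY:-cd-placeholder}"

# Sanity check: confirm the credential is visible before running the skill
[ -n "$COMFYDEPLOY_API_KEY" ] && echo "ComfyDeploy credentials configured"
```

If the check prints nothing, the skill will not be able to reach the Morpheus production deployment.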
Use Cases
This skill is ideal for:
- Marketing Managers: Generating rapid mockups for billboard or social media ad campaigns.
- Fashion Brands: Creating catalog-ready imagery featuring new apparel on consistent, AI-generated models.
- E-commerce Professionals: Visualizing how products interact with real-world environments before setting up expensive photoshoots.
- Agencies: Developing mood boards and test shots with specific photographic styles and lighting conditions.
Example Prompts
- "Morpheus, generate a hero shot for our summer eyewear campaign featuring a female model on a yacht. Use premium_restraint style, hasselblad_x2d camera, golden_hour lighting, and an environment of a luxury Mediterranean port."
- "I need an athletic ad for these sneakers. Set the style to cinematic_realism, use a sony_a1 camera with a 35mm lens, and place the model in a harsh urban skatepark setting at midday."
- "Create a winter fashion editorial. Use kodak_portra_400 film texture, commercial_front_facing pose, and professional studio lighting with a softbox setup to highlight the fabric texture of this cashmere coat."
Tips & Limitations
- CRITICAL: Never leave configuration packs on 'auto'. The model performs best with specific, descriptive creative direction; settings left on 'auto' degrade the output to generic, uninspired images.
- This skill is exclusively for image generation involving people. For product-only shots or simple image editing, use the nano-banana-pro skill to optimize performance and cost.
- Provide high-resolution input product images to achieve the best integration results with the AI model.
Metadata
Paste this into your clawhub.json to enable this plugin.
{
"plugins": {
"official-pauldelavallaz-morpheus-fashion-design": {
"enabled": true,
"auto_update": true
}
}
}

Tags: AI
Flags: external-api
Related Skills
veed-ugc
Generate UGC-style promotional videos with AI lip-sync. Takes an image (person with product from Morpheus/Ad-Ready) and a script (pure dialogue), creates a video of the person speaking. Uses ElevenLabs for voice synthesis.
ugc-manual
Generate lip-sync video from image + user's own audio recording.

✅ USE WHEN:
- User provides their OWN audio file (voice recording)
- Want to sync image to specific audio/voice
- User recorded the script themselves
- Need exact audio timing preserved

❌ DON'T USE WHEN:
- User provides text script (not audio) → use veed-ugc
- Need AI to generate the voice → use veed-ugc
- Don't have audio file yet → use veed-ugc with script

INPUT: Image + audio file (user's recording)
OUTPUT: MP4 video with lip-sync to provided audio
KEY DIFFERENCE: veed-ugc = script → AI voice → video; ugc-manual = user audio → video (no voice generation)
ad-ready
Generate advertising images automatically from a product URL + brand profile.

✅ USE WHEN:
- User provides a product URL (e-commerce link)
- Want automated product scraping + image generation
- Have a brand profile to apply (70+ brands available)
- Need funnel-stage targeting (awareness/consideration/conversion)
- Want AI to auto-select model, scene, lighting based on brand

❌ DON'T USE WHEN:
- User provides local product image file → use morpheus-fashion-design
- Don't need a person in the image → use nano-banana-pro
- Want manual control over model, scene, packs → use morpheus-fashion-design
- Already have hero image, need variations → use multishot-ugc
- Need video output → use veed-ugc after image generation

INPUT: Product URL + brand name (optional) + funnel stage (optional)
OUTPUT: PNG advertising image with product + model
sora
Generate videos from text prompts or reference images using OpenAI Sora.

✅ USE WHEN:
- Need AI-generated video from text description
- Want image-to-video (animate a still image)
- Creating cinematic/artistic video content
- Need motion/animation without lip-sync

❌ DON'T USE WHEN:
- Need lip-sync (person speaking) → use veed-ugc or ugc-manual
- Just need image generation → use nano-banana-pro or morpheus
- Editing existing videos → use Remotion
- Need UGC-style talking head → use veed-ugc

INPUT: Text prompt + optional reference image
OUTPUT: MP4 video (various resolutions/durations)