ClawKit Reliability Toolkit
Official Verified

brand-analyzer

Analyze brands to generate comprehensive brand identity profiles (JSON). Use when the user wants to analyze a brand, create a brand profile, or needs brand data for ad generation. Stores profiles for reuse across Ad-Ready, Morpheus, and other creative workflows. Can list existing profiles and update them.


Install via CLI (Recommended)

clawhub install openclaw/skills/skills/pauldelavallaz/ai-brand-analyzer

Brand Analyzer: AI Brand Identity Profiler

Analyze any brand to generate a comprehensive brand identity JSON profile using Gemini Flash with Google Search grounding.

Overview

Brand Analyzer creates structured brand identity profiles by:

  1. Researching the brand via Google Search (official data, campaigns, visual identity)
  2. Analyzing brand behavior, visual patterns, photography style, tone of voice
  3. Generating a complete JSON profile following the standard template
  4. Storing the profile for reuse across all creative workflows
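The four steps above can be sketched as a small pipeline. This is an illustrative assumption, not the skill's actual code: the prompt wording and the commented-out Gemini call (using the google-genai client with Search grounding) are hypothetical.

```python
# Sketch of the analyze pipeline. The prompt text and the Gemini call shown
# in comments are assumptions, not the skill's real implementation.

def build_prompt(brand: str) -> str:
    """Compose a research prompt covering steps 1-3 of the overview."""
    return (
        f"Research the brand '{brand}' using Google Search: official data, "
        "campaigns, and visual identity. Analyze its behavior, visual "
        "patterns, photography style, and tone of voice. Return a complete "
        "JSON brand identity profile following the standard template."
    )

# The grounded call would resemble (requires GEMINI_API_KEY to be set):
#   from google import genai
#   client = genai.Client()
#   response = client.models.generate_content(
#       model="gemini-flash-latest", contents=build_prompt("Nike"))
print(build_prompt("Nike")[:40])
```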

When to Use

  • User asks to "analyze a brand" or "create a brand profile"
  • Before running Ad-Ready when the brand isn't in the catalog
  • When the user mentions a brand that doesn't have a profile yet
  • To update/refresh an existing brand profile

Quick Commands

Analyze a brand and save to file

GEMINI_API_KEY="$KEY" uv run {baseDir}/scripts/analyze.py \
  --brand "Brand Name" \
  --output ./brands/Brand_Name.json

Analyze and auto-save to Ad-Ready brands catalog

GEMINI_API_KEY="$KEY" uv run {baseDir}/scripts/analyze.py \
  --brand "Heredero Gin" \
  --auto-save

The --auto-save flag automatically saves to ~/clawd/ad-ready/configs/Brands/{Brand_Name}.json
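Based on the {Brand_Name} placeholder in that path, the catalog filename is presumably derived by replacing spaces with underscores. A minimal sketch of that mapping (the substitution rule is an assumption):

```python
from pathlib import Path

def catalog_path(brand: str) -> Path:
    """Map a brand name to its Ad-Ready catalog file, assuming spaces
    become underscores, e.g. 'Heredero Gin' -> Heredero_Gin.json."""
    filename = brand.strip().replace(" ", "_") + ".json"
    return Path("~/clawd/ad-ready/configs/Brands").expanduser() / filename

print(catalog_path("Heredero Gin").name)  # Heredero_Gin.json
```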

Print to stdout

GEMINI_API_KEY="$KEY" uv run {baseDir}/scripts/analyze.py --brand "Nike"

Inputs

Input        Required   Description
--brand      Yes        Brand name to analyze
--output     No         Output file path (default: stdout)
--auto-save  No         Auto-save to the Ad-Ready brands catalog
--api-key    No         Gemini API key (or set the GEMINI_API_KEY env var)
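The same CLI surface can be reproduced with argparse. This sketch mirrors the inputs table; the defaults and help strings are assumptions, not the script's actual source:

```python
import argparse
import os

def make_parser() -> argparse.ArgumentParser:
    """Build a parser matching the documented analyze.py inputs."""
    p = argparse.ArgumentParser(prog="analyze.py")
    p.add_argument("--brand", required=True, help="Brand name to analyze")
    p.add_argument("--output", help="Output file path (default: stdout)")
    p.add_argument("--auto-save", action="store_true",
                   help="Auto-save to the Ad-Ready brands catalog")
    p.add_argument("--api-key", default=os.environ.get("GEMINI_API_KEY"),
                   help="Gemini API key (or set GEMINI_API_KEY)")
    return p

args = make_parser().parse_args(["--brand", "Nike", "--auto-save"])
print(args.brand, args.auto_save)  # Nike True
```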

Output Format

The generated JSON follows the standard brand identity template used by Ad-Ready:

{
  "brand_info": { "name", "tagline", "category", "positioning", "vision", "mission", "origin_story" },
  "brand_values": { "core_values", "brand_promise", "differentiators", "non_negotiables" },
  "target_audience": { "demographics", "psychographics" },
  "tone_of_voice": { "personality_traits", "communication_style", "language_register", ... },
  "visual_identity": { "logo", "color_system", "typography", "layout_principles" },
  "photography": { "style", "technical" },
  "campaign_guidelines": { "visual_tone", "model_casting", "product_presentation", ... },
  "brand_behavior": { "do_dont", "immutability" },
  "channel_expression": { "retail", "digital", "print" },
  "compliance": { ... }
}
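A consumer of these profiles could sanity-check a generated file against the template's top-level sections before using it. A minimal sketch (the key set simply mirrors the template above):

```python
# Top-level sections of the standard brand identity template.
REQUIRED_SECTIONS = {
    "brand_info", "brand_values", "target_audience", "tone_of_voice",
    "visual_identity", "photography", "campaign_guidelines",
    "brand_behavior", "channel_expression", "compliance",
}

def missing_sections(profile: dict) -> set:
    """Return the template sections absent from a generated profile."""
    return REQUIRED_SECTIONS - profile.keys()

print(missing_sections({"brand_info": {}, "compliance": {}}))
```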

Integration with Other Workflows

Ad-Ready

Brand profiles are automatically available as brand_profile options when generating ads.

Morpheus Fashion Design

Brand visual identity (colors, photography style, tone) can inform Morpheus campaigns.

Custom Workflows

Load any brand profile JSON to extract visual identity, tone of voice, or campaign guidelines for any creative task.
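A custom workflow might extract just the fields it needs from a loaded profile. In this sketch the field names come from the template in "Output Format"; the campaign_brief helper and the sample data are hypothetical:

```python
def campaign_brief(profile: dict) -> dict:
    """Pull the fields a creative task typically needs from a profile.
    Missing sections yield None rather than raising."""
    return {
        "colors": profile.get("visual_identity", {}).get("color_system"),
        "tone": profile.get("tone_of_voice", {}).get("communication_style"),
        "guidelines": profile.get("campaign_guidelines"),
    }

# Hypothetical minimal profile for illustration.
sample = {"visual_identity": {"color_system": ["#000"]},
          "tone_of_voice": {"communication_style": "direct"}}
print(campaign_brief(sample)["tone"])  # direct
```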

Analysis Methodology

The analyzer follows a 3-phase approach:

Metadata

Stars: 1217
Views: 1
Updated: 2026-02-20
Add to Configuration

Paste this into your clawhub.json to enable this plugin.

{
  "plugins": {
    "official-pauldelavallaz-ai-brand-analyzer": {
      "enabled": true,
      "auto_update": true
    }
  }
}
Safety Note: ClawKit audits metadata but not runtime behavior. Use with caution.

Related Skills

morpheus-fashion-design

Generate professional advertising images with AI models holding/wearing products.

✅ USE WHEN:
  • Need a person/model in the image WITH a product
  • Creating fashion ads, product campaigns, commercial photography
  • Want a consistent model face across multiple shots
  • Need professional lighting/camera simulation
  • Input: product image + model reference (or catalog)

❌ DON'T USE WHEN:
  • Just editing/modifying an existing image → use nano-banana-pro
  • Product-only shot without a person → use nano-banana-pro
  • Already have the hero image, need variations → use multishot-ugc
  • Need video, not image → use veed-ugc after generating the image
  • URL-based product fetch with brand profile → use ad-ready instead

OUTPUT: Single high-quality PNG image (2K-4K resolution)

by pauldelavallaz (1217 stars)

veed-ugc

Generate UGC-style promotional videos with AI lip-sync. Takes an image (person with product from Morpheus/Ad-Ready) and a script (pure dialogue), creates a video of the person speaking. Uses ElevenLabs for voice synthesis.

by pauldelavallaz (1217 stars)

ugc-manual

Generate lip-sync video from image + user's own audio recording.

✅ USE WHEN:
  • User provides their OWN audio file (voice recording)
  • Want to sync an image to specific audio/voice
  • User recorded the script themselves
  • Need exact audio timing preserved

❌ DON'T USE WHEN:
  • User provides a text script (not audio) → use veed-ugc
  • Need AI to generate the voice → use veed-ugc
  • Don't have an audio file yet → use veed-ugc with a script

INPUT: Image + audio file (user's recording)
OUTPUT: MP4 video with lip-sync to the provided audio
KEY DIFFERENCE: veed-ugc = script → AI voice → video; ugc-manual = user audio → video (no voice generation)

by pauldelavallaz (1217 stars)

sora

Generate videos using OpenAI's Sora API. Use when the user asks to generate, create, or make videos from text prompts or reference images. Supports image-to-video generation with automatic resizing.

by pauldelavallaz (1217 stars)

sora

Generate videos from text prompts or reference images using OpenAI Sora.

✅ USE WHEN:
  • Need AI-generated video from a text description
  • Want image-to-video (animate a still image)
  • Creating cinematic/artistic video content
  • Need motion/animation without lip-sync

❌ DON'T USE WHEN:
  • Need lip-sync (person speaking) → use veed-ugc or ugc-manual
  • Just need image generation → use nano-banana-pro or morpheus
  • Editing existing videos → use Remotion
  • Need a UGC-style talking head → use veed-ugc

INPUT: Text prompt + optional reference image
OUTPUT: MP4 video (various resolutions/durations)

by pauldelavallaz (1217 stars)