ClawKit Reliability Toolkit

k8s-autoscaling

Configure Kubernetes autoscaling with HPA, VPA, and KEDA. Use for horizontal/vertical pod autoscaling, event-driven scaling, and capacity management.

Why use this skill?

Optimize Kubernetes performance and cloud costs with OpenClaw's k8s-autoscaling skill. Automate HPA, VPA, and KEDA configurations for efficient container management.


Install via CLI (Recommended)

clawhub install openclaw/skills/skills/rohitg00/k8-autoscaling

What This Skill Does

The k8s-autoscaling skill for OpenClaw provides a comprehensive interface for managing Kubernetes resource scaling. By integrating HPA (Horizontal Pod Autoscaler), VPA (Vertical Pod Autoscaler), and KEDA (Kubernetes Event-Driven Autoscaling), this skill allows users to maintain optimal application performance while maximizing resource efficiency. It enables users to dynamically adjust cluster capacity based on CPU and memory utilization, custom business metrics, or external event sources such as AWS SQS queues and Prometheus queries.
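As a sketch of the kind of manifest this skill manages, a minimal `autoscaling/v2` HPA that scales a deployment on CPU utilization might look like the following (the `order-processor` target, replica bounds, and 70% threshold are illustrative assumptions, not part of the skill's output):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: order-processor-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-processor   # hypothetical deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```

The same `metrics` list can carry memory, custom, or external metric entries, which is how HPA covers the "custom business metrics" case mentioned above.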

Installation

To integrate this skill into your OpenClaw environment, execute the following command in your terminal:

clawhub install openclaw/skills/skills/rohitg00/k8-autoscaling

Use Cases

This skill is designed for DevOps and platform engineers. Primary use cases include:

  • Scaling to zero: Using KEDA to eliminate costs for idle background workers and cron jobs.
  • Traffic Spikes: Utilizing HPA for reactive scaling based on incoming request pressure.
  • Resource Optimization: Deploying VPA to analyze pod resource usage and generate precise container sizing recommendations, preventing over-provisioning.
  • Event-Driven Architecture: Automatically scaling consumer deployments based on the depth of message queues or custom Prometheus metrics.
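For the scale-to-zero and queue-depth cases, a KEDA `ScaledObject` is the usual vehicle. A minimal sketch for an SQS-driven worker might look like this (the deployment name, queue URL, and thresholds are placeholder assumptions):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: email-worker-scaler
spec:
  scaleTargetRef:
    name: email-worker      # hypothetical consumer deployment
  minReplicaCount: 0        # scale to zero when the queue is empty
  maxReplicaCount: 20
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/000000000000/email-jobs  # placeholder queue
        queueLength: "5"    # target messages per replica
        awsRegion: us-east-1
```

Setting `minReplicaCount: 0` is what eliminates idle-worker cost; KEDA manages the underlying HPA and activates the deployment only when messages arrive.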

Example Prompts

  1. "Check the current HPA configuration for the 'order-processor' deployment in the production namespace and report if any targets are exceeding their utilization thresholds."
  2. "Create a KEDA ScaledObject for the 'email-worker' deployment that scales based on the AWS SQS queue length and supports scaling down to zero replicas during off-peak hours."
  3. "Run a VPA analysis on all pods in the 'analytics' namespace and provide a summary of the recommended CPU and memory limits for each container."

Tips & Limitations

When using this skill, ensure that your cluster has metrics-server installed for HPA and the KEDA operator deployed for event-driven triggers. Be cautious with VPA's 'Auto' update mode in production, as applying new resource requests causes pod restarts. It is generally safer to start with recommendation-only mode (updateMode: "Off") to validate the suggested values before letting VPA act on them. For KEDA, verify that your trigger authentication has the required IAM permissions when connecting to external cloud services such as SQS.
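A recommendation-only VPA, as suggested above, is just a VerticalPodAutoscaler with `updateMode: "Off"`. A minimal sketch (the `analytics-worker` target name is an illustrative assumption):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: analytics-vpa
  namespace: analytics
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: analytics-worker   # hypothetical deployment
  updatePolicy:
    updateMode: "Off"        # produce recommendations only; never restarts pods
```

With this in place, `kubectl describe vpa analytics-vpa -n analytics` shows the recommended CPU and memory values, which you can review before switching to an active update mode.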

Metadata

Author: @rohitg00
Stars: 1133
Views: 0
Updated: 2026-02-18
Add to Configuration

Paste this into your clawhub.json to enable this plugin.

{
  "plugins": {
    "official-rohitg00-k8-autoscaling": {
      "enabled": true,
      "auto_update": true
    }
  }
}

Tags

#kubernetes #autoscaling #devops #cloud-native #keda
Safety Score: 4/5

Flags: external-api, code-execution