
dgr

Audit-ready decision artifacts for LLM outputs — assumptions, risks, recommendation, and review gating (schema-valid JSON).

Why use this skill?

DGR provides a machine-validated, auditable JSON artifact for your AI decisions. Improve transparency, surface risks, and ensure governance in high-stakes agent workflows.


Install via CLI (Recommended)

clawhub install openclaw/skills/skills/sapenov/dgr

What This Skill Does

The DGR (Decision-Grade Reasoning) skill acts as a governance protocol for OpenClaw AI agents. Its primary function is to transform ambiguous decision-making requests into structured, auditable JSON artifacts. By requiring the model to explicitly declare assumptions, assess risks, and provide a reasoned recommendation, DGR ensures that high-stakes outputs are not just answers but documented logical processes. This is critical for enterprise environments where traceability and human-in-the-loop validation are mandatory. The skill operates in three distinct modes: dgr_min for rapid, lightweight checks; dgr_full for comprehensive analytical decomposition; and dgr_strict for high-stakes scenarios requiring conservative, questioning logic. Because the output conforms to the DGR schema, organizations can programmatically ingest agent reasoning into ticketing systems, audit logs, or compliance dashboards.
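As an illustration, a minimal artifact might look like the sketch below. The field names are assumptions based on the fields this page mentions (assumptions, risks, recommendation, review_required, consistency_check); consult the schema shipped with the package for the authoritative shape:

```json
{
  "mode": "dgr_min",
  "decision": "Grant admin access to the new staging environment",
  "assumptions": ["All current security patches are applied"],
  "risks": [
    { "description": "Privilege escalation via stale service accounts", "severity": "medium" }
  ],
  "recommendation": "Grant access scoped to staging only, with a 30-day review",
  "review_required": true,
  "consistency_check": "pass"
}
```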

Installation

To integrate this skill into your OpenClaw environment, execute the following command via your terminal:

clawhub install openclaw/skills/skills/sapenov/dgr

Ensure that your agent is configured to handle the JSON output schema provided in the package, as the skill relies strictly on its internal validation logic to maintain consistency across various LLM backends.
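Before ingesting artifacts into downstream systems, a fail-fast check for required top-level fields can catch malformed output early. The sketch below uses assumed field names; in practice you would validate against the schema file provided in the package:

```python
import json

# Hypothetical required top-level fields of a DGR artifact.
# These names are assumptions, not the authoritative schema.
REQUIRED_FIELDS = {"mode", "assumptions", "risks", "recommendation", "review_required"}

def check_artifact(raw: str) -> dict:
    """Parse a DGR artifact and raise if required fields are missing."""
    artifact = json.loads(raw)
    missing = REQUIRED_FIELDS - artifact.keys()
    if missing:
        raise ValueError(f"DGR artifact missing fields: {sorted(missing)}")
    return artifact

artifact = check_artifact(json.dumps({
    "mode": "dgr_min",
    "assumptions": ["patches current"],
    "risks": [],
    "recommendation": "proceed",
    "review_required": True,
}))
print(artifact["mode"])
```

A schema-validation library would give stricter guarantees, but even this shallow check prevents silently forwarding incomplete reasoning traces.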

Use Cases

  • Incident Management: Generate automated incident response plans with pre-audited risk assessments and clear decision pathways for post-mortem review.
  • Policy Compliance: Evaluate whether specific actions align with defined corporate policies, forcing the model to cite assumptions and potential risks.
  • Strategic Planning: Use the 'dgr_strict' mode to analyze business proposals, surfacing hidden risks and challenging initial biases before committing to a course of action.
  • HR & Legal Workflows: Document reasoning for sensitive inquiries, ensuring that every recommendation is backed by a verifiable logic trace.

Example Prompts

  1. "dgr_strict: Determine if we should pivot our current Q4 cloud infrastructure to a serverless model based on the attached technical limitations document. Focus heavily on security risks."
  2. "dgr_full: Recommend a strategy for internal data migration. Explicitly outline the assumptions about current bandwidth and the risks of downtime during the cutover."
  3. "dgr_min: Is it safe to grant administrative access to the new staging environment given the current security patches? Provide an immediate decision artifact."

Tips & Limitations

  • Consistency Check: Always review the consistency_check field in the final JSON output; this field flags potential contradictions in the agent's own reasoning.
  • Human Review: DGR does not replace human judgment. For high-stakes decisions, ensure the review_required field is acknowledged by your team.
  • Scope: DGR is designed for process quality. It cannot guarantee truth or eliminate inherent model hallucinations. Treat it as a structural scaffolding for your existing expertise.
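The review gating described in the tips above can be enforced mechanically. This sketch assumes consistency_check carries a pass/fail string and review_required a boolean, which is an assumption about the schema rather than its documented shape:

```python
def gate(artifact: dict) -> str:
    """Route a DGR artifact: block on failed self-consistency, queue for a
    human when review is required, otherwise allow automated handling.
    Field names and values are assumed, not taken from the official schema."""
    if artifact.get("consistency_check") != "pass":
        return "blocked"       # contradictions in the agent's own reasoning
    if artifact.get("review_required"):
        return "human_review"  # high-stakes: a person must acknowledge
    return "auto_approved"

print(gate({"consistency_check": "pass", "review_required": True}))
```

Routing "blocked" and "human_review" outcomes into a ticketing queue keeps the human-in-the-loop requirement auditable rather than advisory.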

Metadata

Author: @sapenov
Stars: 1133
Views: 1
Updated: 2026-02-18
Add to Configuration

Paste this into your clawhub.json to enable this plugin.

{
  "plugins": {
    "official-sapenov-dgr": {
      "enabled": true,
      "auto_update": true
    }
  }
}

Tags

#governance #auditing #decision-support #compliance #reasoning
Safety Score: 5/5