DORA change failure rate measures the share of production changes that cause incidents, and this guide shows how to calculate it correctly while avoiding common mistakes that skew the metric.
DORA change failure rate is defined as the percentage of deployed changes that result in degraded service and require remediation. It is a core quality metric in the DORA suite, letting leaders see when speed is hurting reliability. By tracking this rate you can tie engineering quality directly to business outcomes.
To calculate the metric you need three counts: total production deployments, fix-only deployments, and failed changes. Subtract fix-only deployments from total deployments to get the number of true changes, then divide failed changes by that number and express the result as a percentage. The guide warns against common pitfalls: counting every incident regardless of whether a change caused it, including fix-only releases in the change count, confusing deployment failures (the rollout itself broke) with change failures (the deployed change degraded service), and loosening the definition of "degraded service" to game the number. Each mistake either masks real quality problems or inflates perceived performance.
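As a concrete illustration of that arithmetic, here is a minimal Python sketch assuming you already have the three counts for a reporting period; the function name and parameters are illustrative, not part of any standard DORA tooling.

```python
def change_failure_rate(total_deployments: int,
                        fix_only_deployments: int,
                        failed_changes: int) -> float:
    """Return the change failure rate as a percentage.

    total_deployments    -- every production deployment in the period
    fix_only_deployments -- deployments that only remediate earlier failures
    failed_changes       -- deployed changes that degraded service and
                            required remediation
    """
    # True changes exclude fix-only releases, as described above.
    true_changes = total_deployments - fix_only_deployments
    if true_changes <= 0:
        raise ValueError("No true changes in this period")
    return failed_changes / true_changes * 100


# Example with made-up numbers: 120 deployments, 20 of them fix-only,
# 12 failed changes -> 12 / (120 - 20) = 12%.
print(change_failure_rate(120, 20, 12))  # 12.0
```

The division by true changes rather than total deployments is what keeps fix-only releases from diluting the rate.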
Technical leaders should use the metric to prioritize quality work when the rate exceeds 15% and to monitor trends over time. Understanding its limits, such as treating a multi-day outage the same as a brief, minor incident, helps avoid over-interpretation. When used correctly, change failure rate becomes a signal that guides investment in testing, automation, and incident response, ultimately supporting faster, safer delivery.
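A short sketch of that monitoring guidance, assuming quarterly aggregation and purely illustrative numbers: flag any period where the rate crosses the 15% threshold so the trend, not a single data point, drives the decision.

```python
QUALITY_THRESHOLD = 15.0  # percent, per the guidance above

# Hypothetical per-quarter rates, for illustration only.
quarterly_rates = {"Q1": 9.5, "Q2": 12.0, "Q3": 17.5, "Q4": 14.0}

for quarter, rate in quarterly_rates.items():
    action = "prioritize quality work" if rate > QUALITY_THRESHOLD else "ok"
    print(f"{quarter}: {rate:.1f}% ({action})")
```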
Check out the full stdlib collection for more frameworks, templates, and guides to accelerate your technical leadership journey.