DORA Metrics in Practice: Measuring Developer Productivity Without the BS
Deployment frequency, lead time, change failure rate, MTTR — the four DORA metrics that actually predict software delivery performance. How to measure them without enterprise tooling.
Every engineering leader wants to measure developer productivity. Most attempts fail because they measure the wrong things: lines of code, story points completed, PRs merged. These metrics incentivize gaming, not genuine improvement.
DORA metrics (from the DevOps Research and Assessment team at Google) are different. They measure outcomes — how fast you deliver software and how reliable it is — not activity.
Four metrics. No vanity numbers. Here is how to implement them without enterprise tooling.
The Four DORA Metrics
1. Deployment Frequency
How often your team deploys to production.
| Performance Level | Frequency |
|---|---|
| Elite | On-demand (multiple times per day) |
| High | Between once per week and once per month |
| Medium | Between once per month and once every 6 months |
| Low | Less than once every 6 months |
How to measure:
```shell
# Count deployments from Git tags
git log --oneline --format="%H %ai" --tags --simplify-by-decoration \
  --after="2026-01-01" | wc -l

# Or count production deployments from the Gitea releases API
# (this counts all releases on the first page — filter by published_at
# and page through results to scope to a period)
curl -s "https://gitea.example.com/api/v1/repos/org/app/releases" \
  -H "Authorization: token $TOKEN" | jq 'length'
```
What it actually tells you: High deployment frequency correlates with smaller batch sizes, which reduce risk. Teams that deploy daily find and fix bugs faster than teams that deploy monthly.
2. Lead Time for Changes
Time from code commit to running in production.
| Performance Level | Lead Time |
|---|---|
| Elite | Less than one hour |
| High | Between one day and one week |
| Medium | Between one week and one month |
| Low | More than one month |
How to measure:
```shell
# For each deployment, find the oldest commit included
DEPLOY_TIME=$(git log -1 --format="%ai" v2.1.0)
FIRST_COMMIT=$(git log v2.0.0..v2.1.0 --format="%ai" | tail -1)
# Lead time = DEPLOY_TIME - FIRST_COMMIT, in hours (GNU date)
echo $(( ( $(date -d "$DEPLOY_TIME" +%s) - $(date -d "$FIRST_COMMIT" +%s) ) / 3600 )) hours
```
For more accuracy, measure from PR merge to production deployment:
```shell
# Query your CI system for deployment timestamps
# Compare with PR merge timestamps from Git
git log --merges --format="%H %ai" --after="2026-01-01" | while read -r hash day time tz; do
  # Cross-reference with deployment logs
  echo "$hash merged at $day $time, deployed at ..."
done
```
What it actually tells you: Long lead times mean large batch sizes, complex deployments, and slow feedback loops. If it takes a week to get a one-line fix to production, your pipeline has unnecessary friction.
3. Change Failure Rate
Percentage of deployments that cause a failure in production.
| Performance Level | Failure Rate |
|---|---|
| Elite | 0-15% |
| High | 16-30% |
| Medium | 16-30% |
| Low | 46-60% |
(The identical band for High and Medium is not a typo — the published DORA reports group the non-elite clusters into overlapping ranges for this metric.)
How to measure:
```shell
# Count deployments that required a rollback or hotfix.
# This is a crude proxy: it assumes your team consistently labels
# rollbacks and hotfixes in commit messages.
TOTAL_DEPLOYS=$(git tag --list "v*" | wc -l)
ROLLBACKS=$(git log --oneline --grep="rollback\|revert\|hotfix" | wc -l)
FAILURE_RATE=$(echo "scale=2; $ROLLBACKS / $TOTAL_DEPLOYS * 100" | bc)
echo "Change failure rate: ${FAILURE_RATE}%"
```
What it actually tells you: High change failure rates mean your testing, code review, or staging environment is not catching problems before production. It is the quality metric of the four.
4. Mean Time to Restore (MTTR)
How long it takes to recover from a failure in production.
| Performance Level | MTTR |
|---|---|
| Elite | Less than one hour |
| High | Less than one day |
| Medium | Between one day and one week |
| Low | More than one week |
How to measure:
```shell
# From your incident tracking system:
#   MTTR = average over all incidents in the period of
#          (time_resolved - time_detected)
```
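If your tracker can export incidents, the averaging itself is a few lines of shell. A minimal sketch, assuming a hypothetical `incidents.csv` with `detected_at,resolved_at` columns in ISO 8601 — adjust the file name and column layout to whatever your tracker actually exports:

```shell
# Hypothetical export format: detected_at,resolved_at (ISO 8601).
cat > incidents.csv <<'EOF'
2026-03-02T10:00:00,2026-03-02T11:30:00
2026-03-10T22:15:00,2026-03-11T01:45:00
EOF

# Convert each timestamp to epoch seconds with GNU date, emit the
# per-incident restore time, then let awk average and print hours.
MTTR_HOURS=$(while IFS=, read -r detected resolved; do
  echo $(( $(date -d "$resolved" +%s) - $(date -d "$detected" +%s) ))
done < incidents.csv | awk '{sum+=$1; n++} END {printf "%.1f", sum/n/3600}')
echo "MTTR: ${MTTR_HOURS} hours"
# → MTTR: 2.5 hours
```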
If you do not have an incident tracking system, use your Git history:
```shell
# Time between "incident" and "fix" commits
git log --oneline --grep="fix\|resolve\|incident" --format="%ai %s" \
  --after="2026-01-01"
```
What it actually tells you: Low MTTR means your team can detect, diagnose, and fix problems quickly. It is a measure of operational maturity, not just code quality.
Implementing DORA Metrics Without Enterprise Tools
You do not need Sleuth, LinearB, or Jellyfish. A shell script and a cron job get you 80% of the value.
The Simple Dashboard
```shell
#!/bin/bash
# dora-metrics.sh — run monthly
REPO="/path/to/repo"
PERIOD_START="2026-03-01"
PERIOD_END="2026-03-31"
cd "$REPO" || exit 1

# 1. Deployment Frequency
DEPLOYS=$(git tag --list "v*" --sort=-creatordate | while read -r tag; do
  DATE=$(git log -1 --format="%ai" "$tag")
  if [[ "$DATE" > "$PERIOD_START" && "$DATE" < "$PERIOD_END" ]]; then
    echo "$tag"
  fi
done | wc -l)
echo "Deployment Frequency: $DEPLOYS deploys this month"

# 2. Lead Time (average days from first commit to deploy)
# Simplified: time from first branch commit to merge
git log --merges --format="%H %ai" --after="$PERIOD_START" --before="$PERIOD_END" | while read -r hash day time tz; do
  MERGE_EPOCH=$(date -d "$day $time" +%s)
  # Oldest commit on the merged branch that is not on the target branch
  BRANCH_START=$(git log "$hash^1..$hash^2" --format="%ai" | tail -1)
  if [ -n "$BRANCH_START" ]; then
    START_EPOCH=$(date -d "$BRANCH_START" +%s)
    echo $(( (MERGE_EPOCH - START_EPOCH) / 86400 ))
  fi
done | awk '{sum+=$1; n++} END {if(n>0) printf "Average Lead Time: %.1f days\n", sum/n}'

# 3. Change Failure Rate
# Note: matching "fix:" counts every conventional-commit fix as a failure,
# which overstates the rate — tune the grep to how your team labels incidents
FAILURES=$(git log --oneline --grep="rollback\|revert\|hotfix\|fix:" \
  --after="$PERIOD_START" --before="$PERIOD_END" | wc -l)
if [ "$DEPLOYS" -gt 0 ]; then
  RATE=$(echo "scale=1; $FAILURES / $DEPLOYS * 100" | bc)
  echo "Change Failure Rate: ${RATE}%"
fi

# 4. MTTR (requires incident data)
echo "MTTR: Check incident log"
```
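To run this on a schedule without CI, a single crontab entry is enough. The paths below are assumptions — point them at wherever you keep the script and want the log:

```
# m h dom mon dow  command — 09:00 on the 1st of every month
0 9 1 * * /opt/scripts/dora-metrics.sh >> /var/log/dora-metrics.log 2>&1
```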
Automated Collection with CI
```yaml
# .gitea/workflows/dora-metrics.yml
name: DORA Metrics

on:
  schedule:
    - cron: '0 9 1 * *'  # First of every month

jobs:
  collect:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Full history needed
      - name: Calculate DORA metrics
        run: ./scripts/dora-metrics.sh > metrics.txt
      - name: Post to dashboard
        run: |
          # The script emits plain text, so post it as such
          curl -X POST "$DASHBOARD_URL/api/metrics" \
            -H "Content-Type: text/plain" \
            --data-binary @metrics.txt
```
What DORA Metrics Do NOT Measure
DORA metrics measure software delivery performance. They do not measure:
- Individual developer productivity — do not use DORA to evaluate people
- Feature value — deploying frequently does not mean you are building the right things
- Code quality — low change failure rate does not mean the code is well-written
- Team happiness — a team can have elite DORA metrics and be miserable
Use DORA metrics to identify systemic bottlenecks in your delivery pipeline, not to judge individuals.
Common Patterns and Anti-Patterns
Pattern: Elite Deployment Frequency but High Failure Rate
You are deploying fast but breaking things. Your testing is insufficient for your velocity. Slow down and invest in test automation, staging environments, or feature flags.
Pattern: Low Lead Time but Low Deployment Frequency
Your code changes are small and fast, but deployments are batched. You probably have a manual release process or a change approval board. Automate the deployment pipeline.
Pattern: High MTTR
You are slow to detect or fix problems. Invest in:
- Better monitoring and alerting (reduce time to detect)
- Runbooks and incident playbooks (reduce time to diagnose)
- Feature flags and quick rollback mechanisms (reduce time to mitigate)
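The last item can be as simple as redeploying the previous release tag. A minimal sketch, assuming deploys are cut from Git tags; the throwaway repo here exists only to demonstrate, and `deploy.sh` is a hypothetical stand-in for your pipeline:

```shell
# Set up a throwaway repo with two tagged "releases" to demonstrate.
DEMO=$(mktemp -d) && cd "$DEMO" && git init -q .
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "release 1"
git tag v1.0.0
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "release 2"
git tag v1.1.0

# Find the currently deployed tag and the one before it, then redeploy
# the older tag through your real pipeline.
CURRENT=$(git describe --tags --abbrev=0)
PREVIOUS=$(git describe --tags --abbrev=0 "${CURRENT}^")
echo "Rolling back from ${CURRENT} to ${PREVIOUS}"
# ./deploy.sh "${PREVIOUS}"
```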
Anti-Pattern: Optimizing One Metric at the Expense of Others
Do not game deployment frequency by splitting every change into a separate deploy. Do not reduce change failure rate by avoiding deployments. The four metrics are designed to be measured together — improving one should not degrade another.
The Bottom Line
DORA metrics work because they measure what matters: how fast you deliver value and how reliably your software runs. They are not perfect — no metrics are — but they correlate strongly with high-performing engineering teams.
Start measuring this month. You need three things: your Git history, your deployment logs, and an honest count of production incidents. No enterprise tools required.
The numbers might be uncomfortable at first. That is the point. You cannot improve what you do not measure, and DORA metrics tell you exactly where your delivery pipeline is bottlenecked.