# DORA Metrics in Practice: Measuring Developer Productivity Without the BS

Deployment frequency, lead time, change failure rate, MTTR — the four DORA metrics that actually predict software delivery performance. How to measure them without enterprise tooling.
Every engineering leader wants to measure developer productivity. Most attempts fail because they measure the wrong things: lines of code, story points completed, PRs merged. These metrics incentivize gaming, not genuine improvement.
DORA metrics (from the DevOps Research and Assessment team at Google) are different. They measure outcomes — how fast you deliver software and how reliable it is — not activity.
Four metrics. No vanity numbers. Here is how to implement them without enterprise tooling.
## The Four DORA Metrics

### 1. Deployment Frequency
How often your team deploys to production.
**How to measure:**

```shell
# Count production release tags in the period
git log --oneline --format="%H %ai" --tags --simplify-by-decoration \
    --after="2026-01-01" | wc -l

# Or count production deployments from CI
curl -s "https://gitea.example.com/api/v1/repos/org/app/releases" \
    -H "Authorization: token $TOKEN" | jq 'length'
```

**What it actually tells you:** High deployment frequency correlates with smaller batch sizes, which reduce risk. Teams that deploy daily find and fix bugs faster than teams that deploy monthly.
### 2. Lead Time for Changes
Time from code commit to running in production.
**How to measure:**

```shell
# For each deployment, find the oldest commit included
DEPLOY_TIME=$(git log -1 --format="%ai" v2.1.0)
FIRST_COMMIT=$(git log v2.0.0..v2.1.0 --format="%ai" | tail -1)
# Lead time = DEPLOY_TIME - FIRST_COMMIT
```

For more accuracy, measure from PR merge to production deployment:

```shell
# Compare PR merge timestamps from Git with your CI deployment logs
# (%ai contains spaces, so read the date, time, and timezone separately)
git log --merges --format="%H %ai" --after="2026-01-01" | while read -r hash date time tz; do
    # Cross-reference with deployment logs
    echo "$hash merged at $date $time, deployed at ..."
done
```

**What it actually tells you:** Long lead times mean large batch sizes, complex deployments, and slow feedback loops. If it takes a week to get a one-line fix to production, your pipeline has unnecessary friction.
### 3. Change Failure Rate
Percentage of deployments that cause a failure in production.
**How to measure:**

```shell
# Approximate: count deployments that required a rollback or hotfix.
# Grepping commit messages over-counts if routine bug-fix commits match too,
# so restrict the pattern to remediation work.
TOTAL_DEPLOYS=$(git tag --list "v*" | wc -l)
ROLLBACKS=$(git log --oneline --grep="rollback\|revert\|hotfix" | wc -l)
FAILURE_RATE=$(echo "scale=2; $ROLLBACKS / $TOTAL_DEPLOYS * 100" | bc)
echo "Change failure rate: ${FAILURE_RATE}%"
```

**What it actually tells you:** High change failure rates mean your testing, code review, or staging environment is not catching problems before production. It is the quality metric of the four.
### 4. Mean Time to Restore (MTTR)
How long it takes to recover from a failure in production.
**How to measure:**

```shell
# From your incident tracking system:
# MTTR = (time_resolved - time_detected) for each incident,
# averaged across all incidents in the period
```

If you do not have an incident tracking system, use your Git history:

```shell
# Time between "incident" and "fix" commits
git log --oneline --grep="fix\|resolve\|incident" --format="%ai %s" \
    --after="2026-01-01"
```

**What it actually tells you:** Low MTTR means your team can detect, diagnose, and fix problems quickly. It is a measure of operational maturity, not just code quality.
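Even without a tracking system, a plain CSV of incident timestamps beats mining commit messages. A minimal sketch, assuming a hypothetical `incidents.csv` with one `detected_epoch,resolved_epoch` pair per line (the timestamps here are made up for illustration):

```shell
# Hypothetical incident log: one "detected_epoch,resolved_epoch" per line,
# recorded with `date +%s` at detection and resolution time
cat > incidents.csv <<'EOF'
1767600000,1767603600
1767700000,1767701800
EOF

# MTTR = average of (resolved - detected), printed in minutes
awk -F, '{ sum += $2 - $1; n++ } END { if (n) printf "MTTR: %.0f minutes\n", sum / n / 60 }' incidents.csv
# → MTTR: 45 minutes
```

Two timestamps per incident is the entire data model; the discipline of logging them is the hard part, not the math.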
## Implementing DORA Metrics Without Enterprise Tools
You do not need Sleuth, LinearB, or Jellyfish. A shell script and a cron job get you 80% of the value.
### The Simple Dashboard
```shell
#!/bin/bash
# dora-metrics.sh — run monthly

REPO="/path/to/repo"
PERIOD_START="2026-03-01"
PERIOD_END="2026-03-31"

cd "$REPO" || exit 1

# 1. Deployment Frequency
DEPLOYS=$(git tag --list "v*" --sort=-creatordate | while read -r tag; do
    DATE=$(git log -1 --format="%ai" "$tag")
    if [[ "$DATE" > "$PERIOD_START" && "$DATE" < "$PERIOD_END" ]]; then
        echo "$tag"
    fi
done | wc -l)
echo "Deployment Frequency: $DEPLOYS deploys this month"

# 2. Lead Time (average days from first commit to deploy)
# Simplified: time from branch creation to merge
git log --merges --format="%H %ai" --after="$PERIOD_START" --before="$PERIOD_END" | while read -r hash date time _; do
    MERGE_EPOCH=$(date -d "$date $time" +%s)   # GNU date; use gdate on macOS
    BRANCH_START=$(git log "$hash^2" --format="%ai" | tail -1)
    if [ -n "$BRANCH_START" ]; then
        START_EPOCH=$(date -d "$BRANCH_START" +%s)
        echo $(( (MERGE_EPOCH - START_EPOCH) / 86400 ))
    fi
done | awk '{sum+=$1; n++} END {if(n>0) printf "Average Lead Time: %.1f days\n", sum/n}'

# 3. Change Failure Rate
# Note: "fix:" also matches routine bug-fix commits, so treat this as an upper bound
FAILURES=$(git log --oneline --grep="rollback\|revert\|hotfix\|fix:" \
    --after="$PERIOD_START" --before="$PERIOD_END" | wc -l)
if [ "$DEPLOYS" -gt 0 ]; then
    RATE=$(echo "scale=1; $FAILURES / $DEPLOYS * 100" | bc)
    echo "Change Failure Rate: ${RATE}%"
fi

# 4. MTTR (requires incident data)
echo "MTTR: Check incident log"
```

### Automated Collection with CI
```yaml
# .gitea/workflows/dora-metrics.yml
name: DORA Metrics
on:
  schedule:
    - cron: '0 9 1 * *'  # First of every month
jobs:
  collect:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Full history needed
      - name: Calculate DORA metrics
        # the script above prints plain text; emit JSON here if your dashboard expects it
        run: ./scripts/dora-metrics.sh > metrics.json
      - name: Post to dashboard
        run: |
          curl -X POST "$DASHBOARD_URL/api/metrics" \
            -H "Content-Type: application/json" \
            -d @metrics.json
```

## What DORA Metrics Do NOT Measure
DORA metrics measure software delivery performance. They do not measure individual developer output, code quality, or whether you are building the right product.
Use DORA metrics to identify systemic bottlenecks in your delivery pipeline, not to judge individuals.
## Common Patterns and Anti-Patterns
### Pattern: Elite Deployment Frequency but High Failure Rate
You are deploying fast but breaking things. Your testing is insufficient for your velocity. Slow down and invest in test automation, staging environments, or feature flags.
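A cheap place to start is gating the release tag behind a smoke test. A minimal sketch, with a placeholder test command rather than a real suite (`gate_deploy` and the commented-out tag step are illustrative, not a drop-in script):

```shell
#!/bin/sh
# Hypothetical deploy gate: only tag a release when smoke tests pass.
# The command passed to gate_deploy stands in for your real test suite.
gate_deploy() {
    if "$@"; then
        echo "smoke tests passed, tagging release"
        # git tag "v$NEW_VERSION" && git push origin "v$NEW_VERSION"
    else
        echo "smoke tests failed, aborting deploy" >&2
        return 1
    fi
}

gate_deploy true   # replace `true` with e.g. ./run-smoke-tests.sh
```

Even a five-second health check run before tagging catches the deploys that would otherwise become rollbacks.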
### Pattern: Low Lead Time but Low Deployment Frequency
Your code changes are small and fast, but deployments are batched. You probably have a manual release process or a change approval board. Automate the deployment pipeline.
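The usual fix is making the tag push itself trigger the deploy, so releasing costs nothing. A sketch in the same Gitea Actions style as the metrics workflow above; the deploy script path is a placeholder for your own process:

```yaml
# .gitea/workflows/deploy.yml — sketch; the deploy step is a placeholder
name: Deploy on Tag
on:
  push:
    tags:
      - 'v*'
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to production
        # hypothetical script; ref_name resolves to the pushed tag
        run: ./scripts/deploy.sh "${{ github.ref_name }}"
```

Once deploys are tag-triggered, "cutting a release" shrinks to `git tag && git push`, and batching loses its appeal.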
### Pattern: High MTTR
You are slow to detect or fix problems. Invest in monitoring and alerting so failures are detected quickly, runbooks for common failure modes, and automated rollbacks so recovery does not depend on someone typing commands under pressure.
### Anti-Pattern: Optimizing One Metric at the Expense of Others
Do not game deployment frequency by splitting every change into a separate deploy. Do not reduce change failure rate by avoiding deployments. The four metrics are designed to be measured together — improving one should not degrade another.
## The Bottom Line
DORA metrics work because they measure what matters: how fast you deliver value and how reliably your software runs. They are not perfect — no metrics are — but they correlate strongly with high-performing engineering teams.
Start measuring this month. You need three things: your Git history, your deployment logs, and an honest count of production incidents. No enterprise tools required.
The numbers might be uncomfortable at first. That is the point. You cannot improve what you do not measure, and DORA metrics tell you exactly where your delivery pipeline is bottlenecked.