Docker Multi-Stage Builds for Node.js: 90% Smaller Images
Your Node.js Docker image is probably 1 GB. Ours ships at ~40 MB and runs in 7 MiB of memory in production.
At TechSaaS, we run 90+ containers on a single host with 13 GiB of RAM. Every megabyte of image bloat costs us — in pull time, registry storage, attack surface, and the 19 GiB of swap we're already leaning on. Multi-stage builds aren't a nice-to-have; they're how we fit the entire stack on one machine.
This guide walks through the full optimization path for Node.js: from the default 1.1 GB image down to under 120 MB — a 90%+ reduction. Every Dockerfile is copy-paste ready. Every number is from our production environment.
Why Your Node.js Image Is 1 GB
The default node:22 image (Debian Bookworm) ships with everything the Node.js build process could possibly need: GCC, G++, Make, Python 3, Git, OpenSSL development headers, and 400+ OS packages. Your 200-line Express API doesn't use any of them at runtime.
```
$ docker images
REPOSITORY   TAG           SIZE
node         22-bookworm   1.10 GB
node         22-slim       243 MB
node         22-alpine     181 MB
```
That 1.1 GB image contains:
- Build toolchain (~300 MB): gcc, g++, make, python3 — needed only during `npm install` for native modules
- OS packages (~400 MB): man pages, documentation, utilities you'll never use
- npm cache (~50-100 MB): cached tarballs from every `npm install`
- Dev dependencies: TypeScript, ESLint, testing frameworks, bundlers — none needed at runtime
Every one of these packages is a CVE waiting to happen. A Trivy scan on node:22-bookworm typically finds 150+ vulnerabilities. On node:22-alpine, that drops to 5-15. We compare scanning tools in our container security guide: Falco, Trivy, and Snyk.
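You can reproduce the comparison yourself. A quick sketch, assuming Trivy is installed locally — exact counts will vary with the scan date and vulnerability database version:

```shell
# Scan each candidate base image; compare the vulnerability summaries
trivy image --severity HIGH,CRITICAL node:22-bookworm
trivy image --severity HIGH,CRITICAL node:22-slim
trivy image --severity HIGH,CRITICAL node:22-alpine
```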
The Two-Stage Dockerfile That Actually Works
Most multi-stage tutorials show three stages. In practice, two stages are enough for the vast majority of Node.js applications: one to build, one to run.
Here's what we actually use in production:
```dockerfile
# ============ Stage 1: Build ============
FROM node:22-alpine AS builder
WORKDIR /app

# Copy dependency manifests FIRST for cache hits
COPY package.json package-lock.json ./

# Install ALL dependencies (including devDeps for build)
RUN npm ci

# Copy source and build
COPY . .
RUN npm run build

# Remove dev dependencies after build
RUN npm prune --omit=dev

# ============ Stage 2: Production ============
FROM node:22-alpine AS production
WORKDIR /app

# Non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Copy only what's needed from builder
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./

USER appuser
EXPOSE 3000
CMD ["node", "dist/server.js"]
```
Key decisions:
- `npm ci` over `npm install`: `ci` deletes `node_modules` first and installs exactly what `package-lock.json` specifies — reproducible builds, faster in CI
- `npm prune --omit=dev`: strips devDependencies after the build. TypeScript, ESLint, and test frameworks don't ship to production
- Layer ordering: `package.json` and `package-lock.json` are copied before source code. Docker caches the `npm ci` layer as long as dependencies don't change — source code changes don't trigger a reinstall
- Non-root `USER`: mandatory. Never run containers as root. See our Docker container security best practices for the full hardening checklist.
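To sanity-check the result, a minimal build-and-inspect sequence — the `myapp` tag is a placeholder for your own image name:

```shell
# Build (BuildKit is the default builder on modern Docker)
docker build -t myapp:latest .

# Confirm the final image size
docker images myapp:latest --format '{{.Size}}'

# Smoke-test; the process runs as the non-root appuser
docker run --rm -p 3000:3000 myapp:latest
```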
For Static Sites: Two-Stage Into nginx:alpine
Our company website is a Next.js static export. The final image doesn't need Node.js at all — just nginx serving HTML/CSS/JS:
```dockerfile
# ============ Stage 1: Build ============
FROM node:22-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# ============ Stage 2: Serve ============
FROM nginx:alpine
COPY --from=builder /app/out /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
```
Result: the final image is ~40 MB (nginx:alpine base + static assets). At runtime, our company-website container uses 7.4 MiB of memory on a 32 MB limit. That's what "optimized" actually looks like.
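The `nginx.conf` copied in stage 2 can stay minimal. A sketch of a `default.conf` for a Next.js static export — the cache policy and the `/_next/` path are assumptions, adjust for your build output:

```nginx
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    # Hashed build assets are immutable; cache them aggressively
    location /_next/ {
        add_header Cache-Control "public, max-age=31536000, immutable";
    }

    # Static exports map /about to about.html
    location / {
        try_files $uri $uri/ $uri.html =404;
    }
}
```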
Choosing Your Final Stage: The Real Comparison
Every tutorial says "use distroless." The reality is more nuanced.
| Base Image | Size | Shell? | CVEs (typical) | Best For |
|---|---|---|---|---|
| node:22-bookworm | 1.10 GB | Yes | 150+ | Never use in production |
| node:22-slim | 243 MB | Yes | 30-60 | When you need apt-get |
| node:22-alpine | 181 MB | Yes | 5-15 | Most Node.js apps |
| gcr.io/distroless/nodejs22 | ~141 MB | No | 0-5 | Max security, no debugging |
| nginx:alpine | ~40 MB | Yes | 3-8 | Static sites |
| Docker Hardened Images | Varies | Some | 0 CVEs | Enterprise compliance |
The Distroless Surprise
Here's the counterintuitive part: distroless Node.js images (~141 MB) aren't dramatically smaller than Alpine (~181 MB). The size saving is only ~40 MB. Distroless wins on security (no shell, no package manager, nothing for attackers), not on size.
Use distroless when: security is paramount and you have alternative debugging tools (Docker Engine 29's docker debug for ephemeral containers).
Use Alpine when: you need to docker exec into containers for debugging, your team isn't ready for the distroless workflow, or you have native modules that need musl-compatible builds.
Docker Hardened Images (DHI)
In December 2025, Docker released over 1,000 Hardened Images under Apache 2.0 — purpose-built for zero CVEs, continuously updated, available on Docker Hub. If you need enterprise compliance (SOC 2, ISO 27001) with zero known vulnerabilities, DHI is the strongest option. Check Docker Hub for docker.io/library/node hardened variants.
The musl libc Gotcha
Alpine uses musl instead of glibc. This breaks specific Node.js packages:
- `canvas` (node-canvas): won't compile on Alpine without manually installing cairo/pango
- `sharp`: works on Alpine but requires the `--platform=linuxmusl` flag and has reported edge-case differences in image processing output
- Native C++ addons: any package using `node-gyp` with glibc-specific system calls may fail silently
If you depend on these packages, use node:22-slim (Debian-based, 243 MB) as your final stage instead of Alpine. The 60 MB size difference isn't worth broken image processing.
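Before committing to Alpine, check whether your dependency tree actually contains native addons. A quick sketch, run from your project root after a regular `npm ci` (note: scoped packages under `node_modules/@scope/` need an extra glob):

```shell
# List compiled native addons (.node binaries) in the dependency tree
find node_modules -type f -name '*.node' | sort

# Packages that invoke node-gyp at install time are also suspects
grep -l node-gyp node_modules/*/package.json 2>/dev/null
```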
.dockerignore: The File Nobody Writes
Without a .dockerignore, docker build sends your entire project directory as build context — including .git (which can be hundreds of MB), node_modules (redundant since npm ci installs fresh), test files, documentation, and potentially .env files with secrets.
```
# .dockerignore
node_modules
.git
.github
*.md
README*
LICENSE
tests/
__tests__/
coverage/
.env*
.vscode/
.idea/
*.log
dist/
build/
.next/
.turbo/
```
Impact: On our projects, adding a proper .dockerignore reduced build context from 340 MB to 12 MB. Docker sends this entire context to the daemon before building starts — smaller context = faster build start.
BuildKit Optimizations for 2026
If you're not using BuildKit features, you're leaving performance on the table.
Cache Mounts
BuildKit cache mounts persist the npm cache across builds without bloating the image:
```dockerfile
RUN --mount=type=cache,target=/root/.npm \
    npm ci
```
This caches downloaded packages between builds. On subsequent builds with minor dependency changes, npm ci downloads only the diff instead of re-fetching everything. Our CI build times dropped from 90 seconds to 35 seconds.
COPY --link
```dockerfile
COPY --link --from=builder /app/dist ./dist
```
--link creates a new independent layer instead of modifying the parent layer's filesystem. Benefits: layers can be copied in parallel, and changes to earlier layers don't invalidate this copy. This is a free optimization — add --link to every COPY --from instruction.
Build Arguments for Conditional Stages
```dockerfile
ARG NODE_ENV=production
RUN if [ "$NODE_ENV" = "development" ]; then npm install; else npm ci --omit=dev; fi
```
One Dockerfile for both development (with hot reload, dev deps) and production (minimal). Controlled by build arg.
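Selecting the variant happens at build time; the tags here are illustrative:

```shell
# Production build (default): npm ci --omit=dev
docker build -t myapp:prod .

# Development build: full install including devDependencies
docker build --build-arg NODE_ENV=development -t myapp:dev .
```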
Real Production Numbers From Our Stack
| Container | Base Image | Image Size | Runtime Memory | Mem Limit |
|---|---|---|---|---|
| company-website | nginx:alpine | ~40 MB | 7.4 MiB | 32 MB |
| contact-api | node:22-alpine | ~120 MB | 4.6 MiB | 64 MB |
| umami (analytics) | node:22-alpine | ~150 MB | 53.5 MiB | 256 MB |
| n8n (automation) | node:22-alpine | ~180 MB | 34.9 MiB | 1 GB |
Every one of these started as a 1 GB+ image before optimization. For the full architecture behind these containers, see how we built self-healing infrastructure with 90+ Docker containers. The company website went from 1.1 GB → 40 MB — a 96% reduction.
The Complete Optimization Checklist
- Multi-stage Dockerfile (build stage + production stage)
- `npm ci` instead of `npm install`
- `npm prune --omit=dev`, or install production-only deps in the final stage
- Alpine or distroless as the final base image
- `.dockerignore` excluding node_modules, .git, tests, docs, .env
- Non-root `USER` in the final stage
- `COPY --link` on all `COPY --from` instructions
- BuildKit cache mounts for the npm cache
- Layer ordering: package.json before source code
- Trivy scan in the CI pipeline
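The last checklist item works best as a hard gate rather than a report. A sketch of a CI step that fails the pipeline on serious findings — the image name is a placeholder:

```shell
# Non-zero exit code fails the CI job when HIGH/CRITICAL CVEs are found
trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:latest
```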
Conclusion
A 1 GB Node.js Docker image is not a Node.js problem — it's a Dockerfile problem. Two-stage builds with Alpine or distroless get you to 120-180 MB. Static site builds into nginx:alpine hit 40 MB. Proper .dockerignore and BuildKit cache mounts make the build fast.
Our production containers prove it works at scale: 7 MiB runtime memory, 32 MB limit, serving real traffic. The optimization took 30 minutes per service. The savings compound across 90 containers sharing 13 GiB of RAM.
Stop shipping build tools to production. Your containers will thank you.
Related reading:
- How Multi-Stage Docker Builds Reduce Image Size by 80%
- Docker Compose for Production: Managing 89 Containers Without Kubernetes
- Building CI/CD Pipelines with Gitea Actions
Need help optimizing your container infrastructure? Explore our cloud infrastructure and DevOps services or full-stack web development.