# Docker Multi-Stage Builds for Node.js: 90% Smaller Images
Your Node.js Docker image is probably 1 GB. Ours runs in 7 MiB of memory in production.
At TechSaaS, we run 90+ containers on a single host with 13 GiB of RAM. Every megabyte of image bloat costs us — in pull time, registry storage, attack surface, and the 19 GiB of swap we're already leaning on. Multi-stage builds aren't a nice-to-have; they're how we fit the entire stack on one machine.
This guide walks through the full optimization path for Node.js: from the default 1.1 GB image down to under 120 MB — a 90%+ reduction. Every Dockerfile is copy-paste ready. Every number is from our production environment.
## Why Your Node.js Image Is 1 GB
The default node:22 image (Debian Bookworm) ships with everything the Node.js build process could possibly need: GCC, G++, Make, Python 3, Git, OpenSSL development headers, and 400+ OS packages. Your 200-line Express API doesn't use any of them at runtime.
```
$ docker images
REPOSITORY   TAG           SIZE
node         22-bookworm   1.10 GB
node         22-slim       243 MB
node         22-alpine     181 MB
```

That 1.1 GB image contains:

- A full C/C++ toolchain (GCC, G++, Make), used only when `npm install` compiles native modules
- Python 3, required by node-gyp during `npm install` and never at runtime
- Git, OpenSSL development headers, and 400+ other OS packages

Every one of these packages is a CVE waiting to happen. A Trivy scan on node:22-bookworm typically finds 150+ vulnerabilities. On node:22-alpine, that drops to 5-15. We compare scanning tools in our [container security guide: Falco, Trivy, and Snyk](https://www.techsaas.cloud/blog/container-security-falco-trivy-snyk/).
## The Two-Stage Dockerfile That Actually Works
Most multi-stage tutorials show three stages. In practice, two stages are enough for the vast majority of Node.js applications: one to build, one to run.
Here's what we actually use in production:
```dockerfile
# ============ Stage 1: Build ============
FROM node:22-alpine AS builder
WORKDIR /app
# Copy dependency manifests FIRST for cache hits
COPY package.json package-lock.json ./
# Install ALL dependencies (including devDeps for build)
RUN npm ci
# Copy source and build
COPY . .
RUN npm run build
# Remove dev dependencies after build
RUN npm prune --omit=dev
# ============ Stage 2: Production ============
FROM node:22-alpine AS production
WORKDIR /app
# Non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Copy only what's needed from builder
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./
USER appuser
EXPOSE 3000
CMD ["node", "dist/server.js"]
```

**Key decisions:**

- `npm ci` deletes node_modules first and installs exactly what package-lock.json specifies — reproducible builds, faster in CI
- `package.json` + `package-lock.json` are copied before the source code, so Docker caches the `npm ci` layer as long as dependencies don't change — source code changes don't trigger a reinstall

## For Static Sites: Two-Stage Into nginx:alpine
Our company website is a Next.js static export. The final image doesn't need Node.js at all — just nginx serving HTML/CSS/JS:
```dockerfile
# ============ Stage 1: Build ============
FROM node:22-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build
# ============ Stage 2: Serve ============
FROM nginx:alpine
COPY --from=builder /app/out /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
```

**Result:** the final image is ~40 MB (nginx:alpine base + static assets). At runtime, our company-website container uses 7.4 MiB of memory on a 32 MB limit. That's what "optimized" actually looks like.
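The `nginx.conf` copied in stage 2 isn't shown above; here's a minimal sketch for a Next.js static export (paths and cache policy are assumptions, adjust them to your site):

```nginx
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    # Hashed build assets are immutable; cache them aggressively
    location /_next/static/ {
        add_header Cache-Control "public, max-age=31536000, immutable";
    }

    # Static export: try the exact file, then the pretty-URL variants
    location / {
        try_files $uri $uri.html $uri/index.html =404;
    }
}
```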
## Choosing Your Final Stage: The Real Comparison
Every tutorial says "use distroless." The reality is more nuanced.
| Final stage base | Size | Notes |
|---|---|---|
| node:22-bookworm | 1.10 GB | Full Debian, 400+ OS packages |
| node:22-slim | 243 MB | Debian, glibc — safest for native modules |
| node:22-alpine | 181 MB | musl libc, shell available for debugging |
| distroless (Node.js 22) | ~141 MB | No shell, no package manager |
### The Distroless Surprise
Here's the counterintuitive part: distroless Node.js images (~141 MB) aren't dramatically smaller than Alpine (~181 MB). The size saving is only ~40 MB. Distroless wins on security (no shell, no package manager, nothing for attackers), not on size.
**Use distroless when:** security is paramount and you have alternative debugging tools (for example, Docker Engine 29's `docker debug` for ephemeral debugging containers).
**Use Alpine when:** you need to `docker exec` into containers for debugging, your team isn't ready for the distroless workflow, or you have native modules that need musl-compatible builds.
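Here's a hedged sketch of the same two-stage pattern with a distroless final stage (the exact image tag is an assumption; check the distroless project for current tags). The builder uses a Debian-based image so any native modules are compiled against glibc, matching the distroless runtime:

```dockerfile
FROM node:22-slim AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build && npm prune --omit=dev

# Distroless: no shell, so no `docker exec` debugging.
# The image's entrypoint already invokes node, so CMD is just the script.
FROM gcr.io/distroless/nodejs22-debian12 AS production
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
CMD ["dist/server.js"]
```

The distroless project also publishes `:nonroot` tag variants if you want the unprivileged-user behavior of the Alpine example above.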
### Docker Hardened Images (DHI)
In December 2025, Docker released over 1,000 Hardened Images under Apache 2.0 — purpose-built for zero CVEs, continuously updated, available on Docker Hub. If you need enterprise compliance (SOC 2, ISO 27001) with zero known vulnerabilities, DHI is the strongest option. Check Docker Hub for docker.io/library/node hardened variants.
### The musl libc Gotcha
Alpine uses musl instead of glibc. This breaks specific Node.js packages:
- Packages that require the `--platform=linuxmusl` flag have reported edge-case differences in image processing output
- Native modules built with node-gyp that rely on glibc-specific system calls may fail silently

If you depend on these packages, use node:22-slim (Debian-based, 243 MB) as your final stage instead of Alpine. The 60 MB size difference isn't worth broken image processing.
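When a dependency falls into that category, the fix is to swap only the final stage base. A sketch of the production stage on node:22-slim — note that Debian uses `groupadd`/`useradd` rather than Alpine's `addgroup`/`adduser`:

```dockerfile
# Final stage on Debian slim when native modules need glibc
FROM node:22-slim AS production
WORKDIR /app
RUN groupadd -r appgroup && useradd -r -g appgroup appuser
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
USER appuser
EXPOSE 3000
CMD ["node", "dist/server.js"]
```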
## .dockerignore: The File Nobody Writes
Without a .dockerignore, docker build sends your entire project directory as build context — including .git (which can be hundreds of MB), node_modules (redundant since npm ci installs fresh), test files, documentation, and potentially .env files with secrets.
```
# .dockerignore
node_modules
.git
.github
*.md
README*
LICENSE
tests/
__tests__/
coverage/
.env*
.vscode/
.idea/
*.log
dist/
build/
.next/
.turbo/
```

**Impact:** On our projects, adding a proper `.dockerignore` reduced build context from 340 MB to 12 MB. Docker sends this entire context to the daemon before building starts — smaller context = faster build start.
## BuildKit Optimizations for 2026
If you're not using BuildKit features, you're leaving performance on the table.
### Cache Mounts
BuildKit cache mounts persist the npm cache across builds without bloating the image:
```dockerfile
RUN --mount=type=cache,target=/root/.npm \
    npm ci
```

This caches downloaded packages between builds. On subsequent builds with minor dependency changes, `npm ci` downloads only the diff instead of re-fetching everything. Our CI build times dropped from 90 seconds to 35 seconds.
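Cache mounts need BuildKit's extended Dockerfile frontend. Here's a sketch of the builder stage with the `# syntax` directive that pins that frontend (BuildKit itself is the default builder in current Docker releases):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:22-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
# The npm download cache persists across builds via the cache mount,
# without ever being baked into an image layer
RUN --mount=type=cache,target=/root/.npm \
    npm ci
COPY . .
RUN npm run build
```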
### COPY --link
```dockerfile
COPY --link --from=builder /app/dist ./dist
```

`--link` creates a new independent layer instead of modifying the parent layer's filesystem. Benefits: layers can be copied in parallel, and changes to earlier layers don't invalidate this copy. This is a free optimization — add `--link` to every `COPY --from` instruction.
### Build Arguments for Conditional Stages
```dockerfile
ARG NODE_ENV=production
RUN if [ "$NODE_ENV" = "development" ]; then npm install; else npm ci --omit=dev; fi
```

One Dockerfile for both development (with hot reload, dev deps) and production (minimal), controlled by a build arg.
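One subtlety worth sketching: an `ARG` declared before the first `FROM` is global, but each stage that reads it must re-declare it (stage names here are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
ARG NODE_ENV=production

FROM node:22-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
# Re-declare to bring the global ARG into this stage's scope
ARG NODE_ENV
RUN if [ "$NODE_ENV" = "development" ]; then npm install; else npm ci --omit=dev; fi
```

Build the dev variant with `docker build --build-arg NODE_ENV=development .`; omitting the flag gives the production default.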
## Real Production Numbers From Our Stack
| Service | Image size | Runtime memory | Memory limit |
|---|---|---|---|
| company-website (Next.js static → nginx:alpine) | ~40 MB | 7.4 MiB | 32 MB |
Every one of these started as a 1 GB+ image before optimization. For the full architecture behind these containers, see [how we built self-healing infrastructure with 90+ Docker containers](https://www.techsaas.cloud/blog/self-healing-infrastructure-90-docker-containers/). The company website went from 1.1 GB → 40 MB — a 96% reduction.
## The Complete Optimization Checklist
- Two-stage build: build in one stage, copy only the artifacts into a minimal final stage
- `npm ci` instead of `npm install`
- `npm prune --omit=dev` after the build, or install production-only deps in the final stage
- `.dockerignore` excluding node_modules, .git, tests, docs, .env
- BuildKit cache mount for the npm cache
- `COPY --link` on all `COPY --from` instructions
- Non-root user in the final stage

## Conclusion
A 1 GB Node.js Docker image is not a Node.js problem — it's a Dockerfile problem. Two-stage builds with Alpine or distroless get you to 120-180 MB. Static site builds into nginx:alpine hit 40 MB. Proper .dockerignore and BuildKit cache mounts make the build fast.
Our production containers prove it works at scale: 7 MiB runtime memory, 32 MB limit, serving real traffic. The optimization took 30 minutes per service. The savings compound across 90 containers sharing 13 GiB of RAM.
Stop shipping build tools to production. Your containers will thank you.
---
Need help optimizing your container infrastructure? Explore our [cloud infrastructure and DevOps services](https://www.techsaas.cloud/services/cloud-infrastructure-devops/) or [full-stack web development](https://www.techsaas.cloud/services/full-stack-web-development/).
Need help with DevOps?
TechSaaS provides expert consulting and managed services for cloud infrastructure, DevOps, and AI/ML operations.