# Docker Container Security Best Practices in 2026
Containers are the backbone of modern infrastructure. At TechSaaS, we run 90+ Docker containers on a single host — and every one of them is a potential attack surface if left unchecked. Container security in 2026 isn't optional; it's table stakes.
This guide covers the practices we actually use in production, backed by real scan data and real incidents. If you're running containers in any environment — dev, staging, or production — these are the things that matter.
## 1. Start With Minimal Base Images
The single most impactful security decision happens at the top of your Dockerfile. Every package in your base image is a package that could have a CVE.
Use distroless, Alpine, or Docker Hardened Images:
```dockerfile
# Bad: Full Debian with hundreds of packages
FROM node:22-bookworm

# Better: Alpine with minimal footprint
FROM node:22-alpine

# Best: Distroless — no shell, no package manager, nothing extra
FROM gcr.io/distroless/nodejs22-debian12
```

A major development in late 2025: Docker released over 1,000 Hardened Images (DHI) under the Apache 2.0 license — purpose-built for security, available on Docker Hub. These images are stripped of unnecessary packages, pre-scanned, and continuously updated. If distroless is too restrictive for your use case, DHI is the next best option.
Distroless images contain only your application and its runtime dependencies. No shell, no package manager, no curl, no wget. An attacker who gains code execution inside a distroless container has almost nothing to work with.
**The debugging trade-off:** Distroless has no shell, so you can't `docker exec -it` into a crashing production container at 3 AM. Solutions: use the `:debug` variant (`gcr.io/distroless/base-nossl:debug`) as a temporary sidecar, or use ephemeral containers (`docker debug` in Docker Engine 29+).
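When neither option is available, a generic fallback is to attach a throwaway toolbox container that shares the target's namespaces. A sketch, assuming the crashing container is named `myapp` (`busybox` is an arbitrary choice of toolbox image):

```shell
# Join the target's process and network namespaces with a shell-equipped image
docker run --rm -it \
  --pid container:myapp \
  --network container:myapp \
  busybox sh
# Inside the shell, ps and netstat now see myapp's processes and sockets
```

The distroless container itself stays untouched; the tooling lives and dies with the sidecar.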
Use multi-stage builds to strip build tools from the final image:
```dockerfile
# Build stage: needs the full dependency tree (including dev deps) to compile
FROM node:22-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Final stage: only the built output ships
FROM gcr.io/distroless/nodejs22-debian12
COPY --from=builder /app/dist /app
CMD ["app/server.js"]
```

This pattern keeps your final image minimal — build tools, dev dependencies, and source code never ship to production. For a deeper dive, see our guide on [how multi-stage Docker builds reduce image size by 80%](https://www.techsaas.cloud/blog/multi-stage-docker-builds-reduce-image-size-80-percent/).
**Watch out for Alpine's musl libc:** Alpine uses musl instead of glibc. Some Python packages with C extensions (`numpy`, `pandas`, `cryptography`) fail to install or have subtle behavioral differences. Test thoroughly if switching from Debian to Alpine.
## 2. Never Run as Root
By default, Docker containers run as root. This means if an attacker exploits your application, they're root inside the container — and potentially one kernel exploit away from root on the host.
```dockerfile
# Create a non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Copy application files with correct ownership
COPY --chown=appuser:appgroup ./dist /app

# Switch to non-root user
USER appuser
CMD ["node", "server.js"]
```

This applies to every container. No exceptions. Even your "internal-only" services should run as non-root, because lateral movement is how breaches escalate.
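For third-party images that lack a `USER` instruction, you can still force a non-root identity at runtime. A Compose sketch (the `1000:1000` uid/gid is an assumption; pick an id that owns the paths your app writes to):

```yaml
services:
  webapp:
    image: myapp:latest
    # Overrides whatever user the image defaults to
    user: "1000:1000"
```

The same override exists on the CLI as `docker run --user 1000:1000`.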
For an extra layer, enable rootless Docker on the host:
```shell
dockerd-rootless-setuptool.sh install

# Verify rootless mode
docker info --format '{{.SecurityOptions}}'
# Output includes: name=rootless
```

Rootless Docker maps the container's root user to an unprivileged user on the host, making container escapes significantly harder. Know the trade-offs: rootless mode doesn't support privileged containers, ports below 1024 (without `setcap`), ICMP (ping), AppArmor, or cgroup resource limits without systemd delegation. These are real constraints in production — evaluate whether they affect your workload.
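If a rootless workload does need a privileged port, the documented workaround is to grant the capability to the rootlesskit binary rather than abandoning rootless mode (the binary path varies by install; verify with `which rootlesskit`):

```shell
# Allow rootlesskit to bind ports below 1024
sudo setcap cap_net_bind_service=ep "$(which rootlesskit)"
# Restart the user-level daemon to pick up the change
systemctl --user restart docker
```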
## 3. Scan Images in Your CI/CD Pipeline
Building secure images means nothing if you don't verify them continuously. Vulnerabilities are discovered daily.
Integrate scanning into every build:
```yaml
# Gitea Actions / GitHub Actions workflow
name: Container Security Scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build with SBOM and provenance attestations
        run: docker buildx build --sbom=true --provenance=true -t myapp:${{ github.sha }} .
      - name: Scan with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          severity: CRITICAL,HIGH
          exit-code: 1
```

Key scanning tools (2026 versions):
- `docker scout cves myapp:latest` for native scanning
- Trivy, as shown in the workflow above
- Grype as an alternative open-source scanner

Docker Engine 25+ automatically generates provenance attestations (`mode=min`) on every `buildx` build. Adding `--sbom=true` generates a full software bill of materials as a build attestation — this is critical for supply chain verification.
**The critical rule:** fail the build on HIGH and CRITICAL vulnerabilities. Don't just generate reports that nobody reads. We compare Falco, Trivy, and Snyk Container in detail in our [container security tools guide](https://www.techsaas.cloud/blog/container-security-falco-trivy-snyk/).
**Important caveat:** Trivy and Grype scan OS packages and language dependencies, but they don't detect application-level misconfigurations, hardcoded secrets in code, or malicious custom binaries. A "zero CVE" scan result does not mean the image is secure.
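Trivy can at least cover the hardcoded-secret part of that gap with its built-in secret scanner, which is a separate pass from the CVE scan:

```shell
# Look for credentials (cloud keys, tokens, private keys) baked into image layers
trivy image --scanners secret myapp:latest

# Or run vulnerability and secret scanning together
trivy image --scanners vuln,secret myapp:latest
```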
## 4. Verify the Supply Chain
Supply chain attacks are the defining threat of this era.
**The real-world stakes:** In August 2025, Binarly researchers discovered that dozens of official Debian-based Docker Hub images still contained the XZ Utils backdoor (CVE-2024-3094, CVSS 10.0) — months after public disclosure. Teams that pinned mutable tags like `debian:bookworm` unknowingly inherited the compromised library. Organizations using digest pinning and automated scanning caught it immediately; everyone else was silently shipping backdoored containers.
Around the same time, researchers found over 10,000 Docker Hub images leaking production credentials — API keys, database passwords, AI model tokens — from more than 100 organizations including a Fortune 500 company. These weren't obscure images; they were built by teams who hardcoded secrets in Dockerfiles.
Pin image digests, not just tags:
```dockerfile
# Tags can be overwritten — this could point to anything tomorrow
FROM node:22-alpine

# Digests are immutable — this always points to the exact same image
FROM node:22-alpine@sha256:a1b2c3d4e5f6...
```

Verify image signatures with Cosign v3 (keyless by default):
```shell
# Keyless verification via Sigstore Fulcio CA + Rekor transparency log
# (the identity below is a placeholder — use the signer's actual OIDC identity)
cosign verify \
  --certificate-identity=signer@example.com \
  --certificate-oidc-issuer=https://accounts.google.com \
  myregistry/myapp:latest

# Key-based verification (alternative)
cosign verify --key cosign.pub myregistry/myapp:latest
```

Cosign v3 (current: v3.0.5) defaults to keyless verification through Sigstore's certificate authority and transparency log. This is simpler and more secure than managing signing keys yourself.
Generate and track SBOMs:
```shell
trivy image --format spdx-json -o sbom.json myapp:latest
```

An SBOM gives you a complete inventory of everything inside your container. When the next zero-day drops, you can instantly check whether you're affected. Note: the EU Cyber Resilience Act (September 2026) will mandate SBOM generation for all software sold in the EU market — this is no longer a nice-to-have.
## 5. Drop Capabilities, Read-Only Filesystem, and Resource Limits
Linux capabilities are fine-grained permissions that replace the old root/non-root binary. Docker gives containers a default set that most applications don't need.
```yaml
# docker-compose.yml — standalone compose (NOT Swarm)
services:
  webapp:
    image: myapp:latest
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    read_only: true
    tmpfs:
      - /tmp
      - /var/run
    # Resource limits for standalone compose
    mem_limit: 512m
    cpus: 1.0
    pids_limit: 100
```

**Critical note:** The `deploy.resources` block that many articles recommend is silently ignored by `docker compose up`. It only works with `docker stack deploy` (Swarm mode). For standalone Compose — which is what the vast majority of deployments use — set `mem_limit`, `cpus`, and `pids_limit` directly at the service level as shown above.
What this configuration does:

- `cap_drop: ALL` removes every Linux capability
- `cap_add` grants back only what's strictly needed
- `no-new-privileges` prevents privilege escalation via setuid binaries
- `read_only: true` makes the filesystem immutable — malware can't write to disk
- `tmpfs` provides writable scratch space in memory only
- `pids_limit` prevents fork bombs

Docker ships a default seccomp profile that blocks approximately 44 dangerous syscalls. Custom seccomp profiles can tighten this further for specific workloads.
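A custom profile can, for example, explicitly deny syscalls your workload never uses. This minimal sketch is illustrative, not a vetted policy; note also that passing a custom profile replaces Docker's default profile rather than extending it:

```json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["keyctl", "add_key", "request_key", "ptrace"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
```

Applied with `docker run --security-opt seccomp=./custom-profile.json myapp:latest`.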
Gotchas to watch for:

- `read_only: true` breaks more than you'd expect. Many apps write to `/var/log`, `/var/cache`, or generate temp files in unexpected locations. Redis needs tmpfs for RDB dumps; Nginx needs `/var/cache/nginx` and `/var/run`. Audit your app's write paths.
- `cap_drop: ALL` can break DNS resolution in some images. Removing CAP_NET_RAW prevents certain DNS strategies. If hostname resolution fails, add `cap_add: NET_RAW` back.

## 6. Manage Secrets Properly
Hardcoded secrets in images are the most common container security failure. Credentials baked into ENV instructions in a Dockerfile end up in image layers that anyone with docker history can read — as the 10,000+ leaked Docker Hub images demonstrate.
Never do this:
```dockerfile
# Secret is baked into the image layer forever
ENV DATABASE_URL=postgres://admin:password@db:5432/prod
```

For Docker Compose (standalone):
```yaml
services:
  webapp:
    image: myapp:latest
    secrets:
      - db_password
    environment:
      DB_HOST: postgres
      DB_USER: app

secrets:
  db_password:
    file: ./secrets/db_password.txt
```

**Important distinction:** Docker secrets (`/run/secrets/`) with full lifecycle management only work in Swarm mode. In standalone Compose, the `secrets:` directive mounts files but without Swarm's encryption-at-rest and access control. For non-Swarm environments, use a dedicated secrets manager such as Vault, Infisical, or Doppler.
We compare these tools in depth in our [secrets management comparison: Vault vs Infisical vs Doppler](https://www.techsaas.cloud/blog/secrets-management-vault-vs-infisical-vs-doppler/).
```shell
# Infisical CLI injection — works in any Docker environment
infisical run -- docker compose up
```

**Permissions gotcha:** Swarm mounts secrets at `/run/secrets/` as root-owned (mode 0444 by default), while standalone Compose carries over the host file's permissions. If your container runs as non-root and the secret file isn't world-readable, the process can't read it. Set an explicit `mode` in the long-form secrets syntax, or fix ownership in an entrypoint script.
## 7. Network Segmentation
By default, all containers on the same Docker network can talk to each other. Your frontend does not need direct access to your database.
```yaml
services:
  frontend:
    networks: [public]
  api:
    networks: [public, backend]
  postgres:
    networks: [backend]

networks:
  public:
    driver: bridge
  backend:
    driver: bridge
    internal: true
```

The `internal: true` flag means containers on that network have zero access to the outside internet. Your database can talk to the API, but it can't phone home to a C2 server.
**Edge case:** Containers on an `internal: true` network can still reach the Docker host's IP. If the host runs services (metadata endpoints, local databases), the "isolated" container can still reach them. Use iptables rules on the host to close this gap. Docker Engine 28 addressed a related issue: unpublished container ports are now blocked from LAN access by default.
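Closing that gap is host-level firewall work rather than Docker configuration. A hedged sketch, assuming the internal network uses the subnet 172.28.0.0/16 (check yours with `docker network inspect backend`):

```shell
# Container-to-host traffic hits the INPUT chain, not DOCKER-USER
# (DOCKER-USER only filters forwarded traffic)
sudo iptables -I INPUT -s 172.28.0.0/16 -j DROP
```

Scope the rule tighter if containers legitimately need a host service such as a node-local DNS cache.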
## 8. Monitor and Audit at Runtime
Static security catches problems before deployment. Runtime security catches what happens after.
At TechSaaS, we pipe container logs through Promtail into Loki and visualize with Grafana. Unexpected process execution, network connections, or file modifications trigger immediate alerts.
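With a Loki stack like that, the alerting side can live in the Loki ruler. A sketch of one such rule (the `job` label and match string depend entirely on your Promtail config, so treat these as placeholders):

```yaml
groups:
  - name: container-runtime
    rules:
      - alert: ShellSpawnedInContainer
        # Fires if any container log line mentions an interactive shell
        # within the last five minutes
        expr: count_over_time({job="docker"} |= "/bin/sh" [5m]) > 0
        labels:
          severity: critical
```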
## 9. Keep Everything Updated
This sounds obvious, but it's where most teams fail. Running docker pull once and forgetting about it means shipping images with months-old vulnerabilities.
Automate updates with Renovate Bot's container image datasource — it creates PRs when base images have updates and pairs with your CI scanning pipeline for automatic remediation.
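A minimal `renovate.json` sketch for this setup; `pinDigests` also automates the digest-pinning advice from section 4 by rewriting tags to digests and bumping them via PR:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchDatasources": ["docker"],
      "pinDigests": true
    }
  ]
}
```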
For quick checks:
```shell
docker scout cves myapp:latest
```

## The Security Checklist

- Use minimal base images (distroless, Alpine, or Docker Hardened Images)
- Run every container as a non-root user
- Scan images in CI and fail builds on HIGH/CRITICAL findings
- Pin base images by digest and verify signatures with Cosign
- Generate SBOMs with every build
- Drop all capabilities, enable `read_only`, and set resource limits
- Keep secrets out of images; inject them at runtime
- Segment networks and mark backend networks `internal: true`
- Monitor runtime behavior and alert on anomalies
- Automate base-image updates
## Conclusion
Container security isn't a single tool or a one-time audit. It's a set of practices layered across your entire pipeline — from the Dockerfile you write, to the CI that builds it, to the runtime that executes it.
The incidents of 2025 — backdoored base images shipping for months, thousands of production credentials leaked through Dockerfiles — prove that the basics still matter. Start with the high-impact items: minimal images, non-root users, CI scanning, and digest pinning. Layer in runtime monitoring, network segmentation, and secrets management. Every hardened container is one less thing an attacker can exploit.
Your containers are only as secure as the weakest link in your pipeline. Make every link count.
---
Related reading:

- [How multi-stage Docker builds reduce image size by 80%](https://www.techsaas.cloud/blog/multi-stage-docker-builds-reduce-image-size-80-percent/)
- [Container security tools guide: Falco vs Trivy vs Snyk](https://www.techsaas.cloud/blog/container-security-falco-trivy-snyk/)
- [Secrets management comparison: Vault vs Infisical vs Doppler](https://www.techsaas.cloud/blog/secrets-management-vault-vs-infisical-vs-doppler/)
Need help securing your container infrastructure? Explore our [cybersecurity and compliance services](https://www.techsaas.cloud/services/cybersecurity-compliance/) or [cloud infrastructure and DevOps consulting](https://www.techsaas.cloud/services/cloud-infrastructure-devops/).
Need help with security?
TechSaaS provides expert consulting and managed services for cloud infrastructure, DevOps, and AI/ML operations.