Docker Security Best Practices: Production Hardening 2026
I ran a security audit on my production Docker setup three months ago. The scanner found 147 vulnerabilities across my container images. Some were critical.
I wasn't hacked. The app was working fine. But that Trivy report, eight pages of CVEs in packages I didn't even know were installed, made it clear: just because Docker runs doesn't mean it's secure.
I spent the next two weeks hardening everything. The second scan came back with 11 vulnerabilities. All low severity. The changes I made didn't break anything. They just closed the doors I didn't know I'd left open.
If you're running Docker in production, here's what I learned about making it actually secure.
Docker Security Threat Model
Before you can fix security problems, you need to know what you're defending against.
Container breakout: A container escape lets an attacker break out of the isolated environment and access the host system. This is rare but catastrophic. If someone gets root inside your container and your container runs as root, they're one kernel exploit away from owning your host.
Image vulnerabilities: Your base image and dependencies carry known CVEs. Most attacks don't need fancy zero-days; they exploit six-month-old unpatched bugs in libraries you shipped without checking.
Secrets leaking: Hardcoded API keys, database passwords in environment variables, .env files copied into images: these get committed to registries, logged to stdout, or exposed in image layers.
Network exposure: The default Docker bridge exposes ports that shouldn't be public. Container-to-container traffic on the same host often flows unencrypted. And if you bind 0.0.0.0:5432 thinking it's just localhost, you've just opened Postgres to the internet.
Resource abuse: A compromised container can fork-bomb your host, fill your disk, or consume all CPU. Without limits, one bad actor container takes down the entire box.
The good news: every one of these has a fix. And none of them require rearchitecting your app.
Secure Image Building
Security starts at build time. If your image ships with vulnerabilities, runtime defenses won't save you.
Use Minimal Base Images
I used to build everything on node:20. Full Debian base, 900MB, hundreds of packages. Switching to node:20-alpine cut my images to 150MB and dropped the CVE count by 60%.
Alpine Linux ships with almost nothing: a minimal C library, a shell, and that's it. Smaller surface = fewer vulnerabilities.
For apps that don't need a package manager at all, distroless images are even better. Google's distroless images have no shell, no package manager, nothing except your app and its runtime dependencies:
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
# install everything, including dev dependencies needed for the build
RUN npm ci
COPY . .
RUN npm run build
# then strip dev dependencies so only runtime packages ship
RUN npm prune --omit=dev

FROM gcr.io/distroless/nodejs20-debian12
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["dist/app.js"]
No shell means an attacker who compromises your app can't curl, wget, or spawn a reverse shell. They're stuck.
Multi-Stage Builds to Reduce Attack Surface
Multi-stage builds aren't just about size; they're a security feature.
Your build stage needs compilers, npm, git. Your runtime stage doesn't. By copying only the final artifacts into a clean second stage, you ensure dev tools never reach production.
Here's the security win: if your builder stage has a vulnerability in npm or gcc, it doesn't matter. That stage is discarded. The final image has neither.
Don't Leak Secrets with .dockerignore
Before Docker builds, it copies your entire project directory into the build context. If you have a .env file sitting there, it gets sent to the Docker daemon.
Even if your Dockerfile doesn't COPY .env, the file is in the build cache. And build caches leak.
Create a .dockerignore at your project root:
.env
.env.*
*.pem
*.key
node_modules
.git
npm-debug.log
coverage/
.vscode/
Think of it like .gitignore for Docker. If it shouldn't be in an image, it shouldn't be in the build context.
Run as a Non-Root User
By default, containers run as root (UID 0). If an attacker escapes your container, they land on the host as root.
Fix: create a non-privileged user inside the Dockerfile and run your app as that user.
FROM node:20-alpine
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
WORKDIR /app
COPY --chown=appuser:appgroup . .
USER appuser
CMD ["node", "app.js"]
The USER directive switches the runtime identity. Now if someone gets shell access, they're appuser, not root.
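If you can't modify the Dockerfile, Compose can set the runtime user too. A minimal sketch, assuming a UID/GID of 1000 exists (or is acceptable as an anonymous user) in the image:

```yaml
services:
  app:
    image: myapp:latest
    # run as UID 1000 / GID 1000 instead of root
    user: "1000:1000"
```

The Dockerfile approach is still preferable because it's baked into the image everywhere it runs; the `user:` key is a backstop for images you don't control.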
Scan Images Before You Ship
I scan every image before it reaches production. My CI pipeline runs Trivy on every push:
docker build -t myapp:latest .
trivy image --severity HIGH,CRITICAL myapp:latest
Trivy scans the image against known CVE databases and prints vulnerabilities grouped by severity. If it finds a CRITICAL CVE, the pipeline fails.
Here's the before/after from my audit:
Before hardening (node:20 base, single-stage build):
Total: 147 (HIGH: 34, CRITICAL: 12)
After hardening (node:20-alpine, multi-stage, distroless where possible):
Total: 11 (HIGH: 0, CRITICAL: 0)
Other scanners worth using: Grype (faster than Trivy), Snyk (better for Node.js), Docker Scout (built into Docker Desktop). Pick one and automate it.
Sign and Verify Images
Docker Content Trust (DCT) lets you sign images so you know they haven't been tampered with between build and deploy.
Enable it:
export DOCKER_CONTENT_TRUST=1
docker push myregistry.com/myapp:latest
Docker signs the image with your private key. On pull, it verifies the signature.
For more control, use Sigstore Cosign:
cosign sign --key cosign.key myregistry.com/myapp:latest
cosign verify --key cosign.pub myregistry.com/myapp:latest
If you're running on a budget VPS without a full CI/CD pipeline, DCT is built in and costs nothing. Turn it on.
Runtime Security
A secure image is only half the job. You also need to lock down how containers run.
Read-Only Root Filesystem
Most apps don't need to write to their own filesystem. They write to /tmp, log to stdout, persist data to volumesâbut they don't modify /app.
Force this with --read-only:
docker run --read-only --tmpfs /tmp myapp:latest
If an attacker breaks in and tries to drop a malicious binary, they get read-only filesystem errors.
For Docker Compose:
services:
  app:
    image: myapp:latest
    read_only: true
    tmpfs:
      - /tmp
I use this on every container that doesn't explicitly need write access.
Drop Unnecessary Capabilities
Linux capabilities break root privileges into granular permissions. Docker containers start with a default set that includes things like CAP_NET_RAW (craft raw packets) and CAP_SYS_CHROOT (change root directory).
Most apps don't need these. Drop them:
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp:latest
This strips all capabilities except NET_BIND_SERVICE (needed to bind ports below 1024).
For Node.js apps running on port 3000 or higher, you can drop everything:
services:
  app:
    image: myapp:latest
    cap_drop:
      - ALL
Less privilege = smaller blast radius.
Apply AppArmor or SELinux Profiles
AppArmor and SELinux are mandatory access control systems. They enforce what programs can and can't do, even if they're running as root.
Docker includes a default AppArmor profile that blocks things like mounting filesystems, loading kernel modules, and accessing raw sockets.
Check if it's active:
docker inspect mycontainer | grep AppArmorProfile
If you see docker-default, you're protected. If the field is empty or shows unconfined, enable it explicitly:
docker run --security-opt apparmor=docker-default myapp:latest
For custom restrictions, write your own AppArmor profile. But the default is strong enough for most apps.
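If you do go custom, a minimal profile looks roughly like this. This is a sketch only; the profile name and rules are assumptions you'd adapt to your app:

```
#include <tunables/global>

profile docker-myapp flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  # allow outbound TCP, deny mount and kernel tunables
  network inet tcp,
  deny mount,
  deny /proc/sys/** w,

  # the app may read its own files but not write them
  /app/** r,
}
```

Load it with apparmor_parser -r, then run the container with --security-opt apparmor=docker-myapp.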
Set Resource Limits
A compromised container shouldn't be able to eat all your CPU or memory. Set hard limits to prevent resource exhaustion and maintain application performance:
services:
  app:
    image: myapp:latest
    pids_limit: 100
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
        reservations:
          memory: 256M
The pids_limit prevents fork bombs. Without it, a container can spawn unlimited processes and lock up the host.
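Outside Compose, the same limits map directly to docker run flags (values mirror the Compose example above):

```sh
docker run -d \
  --cpus="1.0" \
  --memory="512m" \
  --memory-reservation="256m" \
  --pids-limit=100 \
  myapp:latest
```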
Use Seccomp Profiles for Syscall Filtering
Seccomp (secure computing mode) blocks dangerous syscalls at the kernel level. Docker's default seccomp profile disables about 44 syscalls that containers almost never need, including:
reboot, swapon, mount, and pivot_root.
Check that the daemon has seccomp enabled:
docker info --format '{{.SecurityOptions}}'
You should see name=seccomp in the output. The default profile is compiled into Docker and applies automatically unless a container was started with --security-opt seccomp=unconfined; drop that flag to restore it. To use a custom profile, pass its path explicitly:
docker run --security-opt seccomp=/path/to/profile.json myapp:latest
Run Docker in Rootless Mode
Rootless Docker runs the Docker daemon as a non-root user. Even if someone escapes the container, they land in an unprivileged process.
It's the single biggest security upgrade you can make if you're on a single-tenant VPS.
Install rootless Docker:
curl -fsSL https://get.docker.com/rootless | sh
Then set:
export PATH=/home/youruser/bin:$PATH
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
systemctl --user enable docker
systemctl --user start docker
Now docker ps runs as your user, not root. Container escapes stay contained to your user's permissions.
The tradeoff: rootless mode can't bind privileged ports (<1024) without extra config. For most Node.js/Next.js apps behind nginx, that's fine: nginx binds 80/443, containers bind 3000+.
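If you genuinely need a low port in rootless mode, one documented workaround is lowering the kernel's unprivileged-port threshold. Note this applies host-wide, not just to Docker, so weigh it first:

```sh
sudo sysctl net.ipv4.ip_unprivileged_port_start=80
```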
Secrets Management
Secrets are the most common way developers accidentally leak credentials.
Docker Secrets vs Environment Variables
Don't pass secrets via -e flags:
# BAD
docker run -e DB_PASSWORD=hunter2 myapp:latest
Environment variables show up in docker inspect, process lists, and logs. Anyone with access to the Docker socket can read them.
Use Docker secrets instead (requires Swarm mode or Compose):
echo "hunter2" | docker secret create db_password -
docker service create --secret db_password myapp:latest
Inside the container, the secret appears as a file at /run/secrets/db_password. Your app reads it from there:
const fs = require('fs');
const dbPassword = fs.readFileSync('/run/secrets/db_password', 'utf8').trim();
Secrets are never logged, never in environment variables, never in docker inspect.
For Compose:
secrets:
  db_password:
    file: ./secrets/db_password.txt

services:
  app:
    image: myapp:latest
    secrets:
      - db_password
Integrate with Vault or AWS Secrets Manager
For multi-host setups, centralize secrets in HashiCorp Vault or AWS Secrets Manager.
Your container fetches secrets at runtime:
const AWS = require('aws-sdk');
const secretsManager = new AWS.SecretsManager({ region: 'us-east-1' });
async function getSecret(secretName) {
const data = await secretsManager.getSecretValue({ SecretId: secretName }).promise();
return JSON.parse(data.SecretString);
}
This way secrets live in one place, rotate automatically, and audit logs track every access.
Never Bake Secrets into Images
I've seen this too many times:
COPY .env /app/.env
That .env file is now in the image layer. Forever. Even if you delete it in a later layer, it's still in the build cache.
Rule: Secrets go in at runtime, not build time. Use secrets, environment injection at deploy, or a fetch-at-startup pattern.
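The exception is a secret needed only during the build itself (say, a token for a private npm registry). BuildKit secret mounts expose it to a single RUN step without ever writing it to a layer. A sketch; the secret id and file name are assumptions:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# the token is mounted only for the duration of this RUN step
# and never appears in the image layers or build cache
RUN --mount=type=secret,id=npm_token \
    NPM_TOKEN="$(cat /run/secrets/npm_token)" npm ci
```

Build with: docker build --secret id=npm_token,src=./npm_token.txt .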
Rotate Secrets Regularly
Set a 90-day rotation policy for database passwords, API keys, and certificates.
Vault and Secrets Manager can automate this. For manual setups, put a reminder in your calendar and rotate by:
- Generate new secret
- Update secret store
- Rolling restart containers (they fetch the new value)
- Revoke old secret after 24 hours
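For a Swarm service, the steps above can be sketched like this; the names and the versioned-secret scheme are hypothetical:

```sh
# 1. generate and store the new secret under a versioned name
openssl rand -base64 32 | docker secret create db_password_v2 -

# 2. rolling-restart the service onto the new secret
#    (the target path inside the container stays the same)
docker service update \
  --secret-rm db_password_v1 \
  --secret-add source=db_password_v2,target=db_password \
  myapp

# 3. after the grace period, remove the old secret
docker secret rm db_password_v1
```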
Rotation limits the blast radius if a secret leaks.
Network Security
Docker's default networking is convenient but not secure.
Use Custom Bridge Networks
The default bridge network (docker0) has no DNS resolution, no network isolation between containers, and no encryption.
Create a custom bridge:
docker network create --driver bridge secure-net
docker run --network secure-net myapp:latest
Custom networks give you:
- Automatic DNS (containers resolve each other by name)
- Isolation (containers on different networks can't talk)
- Better performance
For production, every service should be on its own network or a shared network per application stack.
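In Compose, that separation might look like this; the service names are hypothetical. The database sits on an internal network nginx can't reach:

```yaml
services:
  nginx:
    image: nginx:alpine
    networks: [frontend]
  app:
    image: myapp:latest
    networks: [frontend, backend]
  db:
    image: postgres:16-alpine
    networks: [backend]

networks:
  frontend:
  backend:
    internal: true   # no traffic in or out of Docker from this network
```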
Minimize Port Exposure
Only expose ports you need. If your app is behind a reverse proxy, don't publish the app port:
services:
  app:
    image: myapp:latest
    # No 'ports' directive = not accessible from outside Docker
    networks:
      - backend

  nginx:
    image: nginx:alpine
    ports:
      - "443:443"
    networks:
      - backend

networks:
  backend:
Nginx can still reach the app via the internal network, but the app isn't reachable from the internet.
Firewall Rules and iptables
Docker manipulates iptables directly. If you have UFW or firewalld rules, Docker bypasses them.
Lock down the Docker chain:
iptables -I DOCKER-USER -i eth0 ! -s 10.0.0.0/8 -j DROP
This blocks all external traffic to Docker containers except from your internal network (10.0.0.0/8).
For VPS deployments, I pair this with UFW:
ufw allow from 10.0.0.0/8 to any port 3000
ufw deny 3000
TLS Termination at the Reverse Proxy
Don't handle TLS inside your containers. Let nginx or Caddy terminate TLS and proxy plain HTTP to the backend.
This centralizes certificate management and makes renewals easier. Plus, your containers don't need root privileges or port 443.
My nginx config:
server {
    listen 443 ssl;
    server_name myapp.com;

    ssl_certificate /etc/letsencrypt/live/myapp.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.com/privkey.pem;

    location / {
        proxy_pass http://app:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
The app container runs on port 3000, no TLS config needed.
Never Expose the Docker Socket
Never, ever bind-mount the Docker socket into a container:
# NEVER DO THIS
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
The Docker socket gives full control over the host. A compromised container with socket access can start privileged containers, bind-mount the host filesystem, and own the machine.
If you need Docker-in-Docker for CI/CD, use the docker:dind image in a separate, isolated environmentânot on your production host.
Container Registry Security
Your private registry is where built images live. If it's not locked down, attackers can push malicious images or pull your code.
Use a Private Registry
Don't put proprietary images on Docker Hub's public registry. Use a private registry:
- Harbor: Open-source, self-hosted, full RBAC and vulnerability scanning
- AWS ECR: Managed, integrates with IAM
- Google Artifact Registry: Managed, supports multi-region replication
- GitLab/GitHub Container Registry: Built into your CI/CD
For a budget VPS setup, Harbor runs in Docker and costs nothing except storage.
Access Control and RBAC
Restrict who can push and pull images.
In Harbor:
- Create project-level robot accounts with read-only tokens for deployments
- Use per-user credentials for CI pushes
- Enable content trust to enforce signed images
In AWS ECR:
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::123456789012:role/ECS-Deploy"
  },
  "Action": [
    "ecr:GetDownloadUrlForLayer",
    "ecr:BatchGetImage"
  ]
}
Least-privilege access: CI can push, production can only pull.
Scan at Push Time
Configure your registry to scan images on push.
Harbor ships with Trivy as its built-in scanner. Scan-on-push is toggled per project in Harbor's UI, and Trivy itself is tuned in harbor.yml:
# harbor.yml
trivy:
  ignore_unfixed: true
  skip_update: false
Pair it with the project-level "Prevent vulnerable images from running" policy so images with critical vulnerabilities can't be pulled for deployment.
AWS ECR has scan-on-push too:
aws ecr put-image-scanning-configuration \
--repository-name myapp \
--image-scanning-configuration scanOnPush=true
Sign Images with Notary or Cosign
Docker Content Trust uses Notary under the hood for image signing.
Enable it in your registry:
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_SERVER=https://notary.myregistry.com
docker push myregistry.com/myapp:latest
For a more modern approach, use Cosign:
cosign generate-key-pair
cosign sign --key cosign.key myregistry.com/myapp:latest
# On deploy:
cosign verify --key cosign.pub myregistry.com/myapp:latest
Unsigned images get rejected at pull time.
Monitoring and Incident Response
Security doesn't end at deployment. You need to know when something's wrong.
Log Aggregation
Ship container logs to a central system.
I use Promtail + Loki:
services:
  promtail:
    image: grafana/promtail:latest
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      # read-only socket for container discovery; an exception to the
      # socket rule above, so scope this container tightly
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./promtail-config.yml:/etc/promtail/config.yml
    command: -config.file=/etc/promtail/config.yml
Promtail scrapes container logs and sends them to Loki. Grafana queries Loki for dashboards and alerts.
For budget setups, this runs on the same VPS. For larger deployments, run Loki on a separate instance.
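The promtail-config.yml referenced above might look like this; a sketch using Promtail's Docker service discovery, with the Loki URL as an assumption:

```yaml
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        target_label: container
```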
Runtime Threat Detection with Falco
Falco watches syscalls and alerts on suspicious behavior: unexpected shell spawns, privilege escalations, sensitive file access.
Install Falco:
docker run -d --name falco --privileged \
-v /var/run/docker.sock:/host/var/run/docker.sock \
-v /dev:/host/dev \
-v /proc:/host/proc:ro \
falcosecurity/falco:latest
Default rules catch:
- Shell spawned in a container
- Reads from sensitive files like /etc/shadow
- Unexpected network connections
- Privilege escalation attempts
Falco logs alerts to stdout. Pipe them to your log aggregator.
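Custom rules extend the defaults. For example, a rule flagging writes under /etc inside any container might look like this; a sketch to adapt, not a production-ready ruleset:

```yaml
- rule: Write below etc in container
  desc: Detect a file opened for writing under /etc inside a container
  condition: >
    container and evt.type in (open, openat, openat2)
    and evt.is_open_write=true and fd.name startswith /etc
  output: "File opened for writing under /etc (user=%user.name file=%fd.name container=%container.name)"
  priority: WARNING
```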
Audit Logs for Docker Daemon
Docker's daemon.json has no audit option; daemon-level audit logging comes from the Linux audit framework (auditd). The CIS Docker Benchmark recommends watching the Docker binaries and directories:
# /etc/audit/rules.d/docker.rules
-w /usr/bin/dockerd -k docker
-w /var/lib/docker -k docker
-w /etc/docker -k docker
Reload the rules with augenrules --load, then query events with ausearch -k docker. This records who touched the daemon, its data directory, and its config, and when.
Separately, cap container log growth in /etc/docker/daemon.json so logs can't fill the disk:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
Security Update Workflow
Set up automated security updates for base images:
- Dependabot (GitHub) or Renovate (self-hosted) opens PRs for new base image versions
- CI runs Trivy scan on the new build
- If vulnerabilities drop or stay low, auto-merge
- Rolling deploy
For manual workflows, check for updates weekly:
docker pull node:20-alpine
docker build -t myapp:latest .
trivy image myapp:latest
If Trivy shows new critical CVEs in your base, rebuild immediately.
Production Security Checklist
Here's the checklist I run before every deploy.
Pre-Deployment:
- Base image is minimal (Alpine or distroless)
- Multi-stage build separates build and runtime
- No secrets in image layers or environment variables
- .dockerignore excludes sensitive files
- Container runs as non-root user
- Image signed with DCT or Cosign
- Resource limits set (CPU, memory, PIDs)
- Filesystem set to read-only where possible
- Capabilities dropped (keep only what's needed)
- AppArmor/SELinux profile applied
- Seccomp profile active
- Custom bridge network (not default)
- Only necessary ports exposed
- Secrets injected via Docker secrets or external store
- TLS terminated at reverse proxy (nginx/Caddy)
- Docker socket NOT mounted into containers
Post-Deployment:
- Logs aggregated to central system
- Falco or equivalent runtime monitoring active
- Periodic vulnerability rescans (weekly)
- Secrets rotation policy enforced (90 days)
- Firewall rules tested (no unexpected open ports)
- Incident response plan documented
GitHub Actions CI/CD Security Gate:
Here's the automation I use to enforce this checklist:
name: Docker Security Scan
on:
  push:
    branches: [main]
  pull_request:

jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          exit-code: 1          # fail the job if vulnerabilities are found
          severity: CRITICAL,HIGH

      - name: Check for root user
        run: |
          IMG_USER=$(docker inspect myapp:${{ github.sha }} -f '{{.Config.User}}')
          if [ -z "$IMG_USER" ] || [ "$IMG_USER" = "root" ] || [ "$IMG_USER" = "0" ]; then
            echo "ERROR: Container runs as root"
            exit 1
          fi

      - name: Check for secrets in image
        run: |
          docker history myapp:${{ github.sha }} --no-trunc | grep -iE '(password|secret|key|token)' && exit 1 || true

      - name: Sign image with Cosign
        if: github.ref == 'refs/heads/main'
        env:
          COSIGN_PRIVATE_KEY: ${{ secrets.COSIGN_KEY }}
        run: |
          cosign sign --key env://COSIGN_PRIVATE_KEY myregistry.com/myapp:${{ github.sha }}
If any check fails, the pipeline blocks the merge.
Periodic Audit:
Every quarter, re-run Trivy on all production images and check for:
- New CVEs in base images
- Stale secrets (rotate anything >90 days old)
- Unused containers (remove them)
- Firewall drift (re-verify iptables rules)
Wrapping Up
Security isn't a feature you add at the end. It's a set of habits you build into every step: building images, configuring runtimes, managing secrets, monitoring behavior.
The work I did three months ago, switching to Alpine bases, enabling rootless mode, setting up Trivy scans, took two weeks. The peace of mind it bought me is worth every hour.
Start with the low-hanging fruit: scan your images, run as non-root, drop capabilities. Then layer in the deeper changes: rootless Docker, Falco, secret rotation.
You don't need enterprise tools or a security team. You just need a checklist and the discipline to follow it.
Happy hardening!
Tested environment: Docker 26.1, Trivy 0.50, Node.js 20 LTS, Ubuntu 22.04