
Docker for Next.js: A Practical Production Guide

Practical Docker and Next.js production tips: standalone builds, reusable container images, runtime environment config, and Kubernetes scaling without rebuilding per environment.

April 09, 2026 ☕ 5 min read
  • CI/CD
  • Containerization
  • Deployment
  • DevOps
  • Docker
  • Kubernetes
  • NextJS
  • NodeJS
  • Production
  • SSR
  • WebDevelopment

Getting a Next.js app into a container is straightforward. Where it usually gets messy is production: multiple environments, SSR under load, horizontal scaling—and suddenly you’re fighting env vars that don’t match, images that are effectively “stuck” to one environment, and deploys that crawl because you rebuild for every promotion.

I’ve leaned on a few rules that stay boring on purpose: build once and ship the same artifact, reuse one image everywhere, inject config at runtime (not in the image), and keep pods stateless so they can die and come back without drama.

Why Dockerize Next.js at all?

Docker gives you the same runtime shape on your machine, in CI, in staging, and in prod. That matters most when you’re not purely static—if the server renders or mixes server and client work, you want what you tested to be what runs.

Pure static export? You might skip containers. Anything with SSR or hybrid rendering, I’d default to Docker.

A repo I’m comfortable shipping usually looks like this:

.
├─ Dockerfile
├─ .dockerignore
├─ next.config.ts
├─ package.json
├─ package-lock.json
├─ public/
├─ src/
└─ scripts/
   └─ start.sh

Dockerfile and .dockerignore live next to the app. I put runtime wiring—especially anything that has to happen before node server.js runs—in something like scripts/start.sh (more on that below).

Use standalone output

Next.js can emit a self-contained server bundle. Turn on output: 'standalone' so the image only needs Node plus the built output, not the whole repo or dev dependency tree.

import type { NextConfig } from 'next'

const nextConfig: NextConfig = {
  output: 'standalone',
}

export default nextConfig

A production-oriented Dockerfile

Multi-stage: install once, build once, copy only what the runtime needs. Non-root user, port 3000 exposed—nothing exotic, just habits that save pain later.

# syntax=docker/dockerfile:1.7

FROM node:22-bookworm-slim AS base
WORKDIR /app
ENV NEXT_TELEMETRY_DISABLED=1

FROM base AS deps
COPY package.json package-lock.json ./
RUN npm ci

FROM base AS builder
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

FROM node:22-bookworm-slim AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV PORT=3000
ENV HOSTNAME=0.0.0.0

RUN groupadd --system --gid 1001 nodejs \
  && useradd --system --uid 1001 --gid nodejs nextjs

COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
COPY --from=builder --chown=nextjs:nodejs /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/scripts/start.sh ./scripts/start.sh

RUN chmod +x ./scripts/start.sh

USER nextjs
EXPOSE 3000
ENTRYPOINT ["./scripts/start.sh"]

The final stage stays slim: no source tree, no fat node_modules from dev—just the standalone server, static assets, public, and the entry script.

.dockerignore

Small context, fewer surprises in the layer history, less chance of baking local junk into the build.

node_modules
.next
.git
.env*

Environment variables (the bit people mix up)

Server-only variables are where secrets and internal URLs belong—they’re read on the server at runtime as you’d expect.
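A sketch of the server side (the route path and variable name are hypothetical):

// app/api/health/route.ts (hypothetical): process.env is read at
// request time on the server, so the same image picks up whatever
// the container was started with.
export async function GET() {
  const key = process.env.INTERNAL_API_KEY ?? ''
  // Report presence only; the value itself never leaves the server.
  return Response.json({ configured: key.length > 0 })
}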

NEXT_PUBLIC_* is different: those get inlined at build time into client bundles. Treat them as public. Don’t put secrets there, and don’t expect to “just change” them for the browser without a rebuild unless you use another pattern (like the runtime script below).
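To make the inlining concrete, here's a sketch with a made-up variable; the compiler replaces the expression with a string literal during next build, so changing the env var on a running container does nothing for the browser:

// Client component sketch; NEXT_PUBLIC_API_BASE_URL is a made-up name.
'use client'

export function ApiBadge() {
  // After next build this is a string literal baked into the bundle,
  // not a runtime lookup.
  return <span>{process.env.NEXT_PUBLIC_API_BASE_URL}</span>
}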

Runtime config for the browser

When public values (API base URL, env label, whatever) need to change per deploy without a rebuild, I've used a small script at container start that writes something like public/__ENV.js (window.__ENV = …); the app then loads that file before any other client code.

Example scripts/start.sh:

#!/bin/sh
set -eu

# Generate the browser-facing runtime config before the server starts.
node <<'EOF'
const fs = require('fs')

const config = {
  PUBLIC_API_BASE_URL: process.env.PUBLIC_API_BASE_URL || '',
  PUBLIC_APP_ENV: process.env.PUBLIC_APP_ENV || '',
}

fs.writeFileSync(
  '/app/public/__ENV.js',
  `window.__ENV = ${JSON.stringify(config)};`
)
EOF

# Replace the shell with the standalone server so it runs as PID 1
# and receives signals directly.
exec node server.js

Wire __ENV.js in before your bundle and read window.__ENV where you need it. The image stays identical; only what you pass to docker run or the orchestrator changes.
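In the App Router, that wiring can be as small as a script tag in the root layout plus a typed accessor; a sketch assuming the two PUBLIC_* keys from start.sh above (file paths and helper name are mine):

import type { ReactNode } from 'react'

// app/layout.tsx: load the generated file before any client code runs.
export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <head>
        {/* Served from public/, rewritten by start.sh on each container start */}
        <script src="/__ENV.js" />
      </head>
      <body>{children}</body>
    </html>
  )
}

Then read it through a small helper so the SSR pass doesn't trip over a missing window:

// lib/public-env.ts (hypothetical path): typed access to window.__ENV.
declare global {
  interface Window {
    __ENV?: Record<string, string>
  }
}

export function publicEnv(key: string): string {
  if (typeof window === 'undefined') return '' // server render: no window yet
  return window.__ENV?.[key] ?? ''
}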

Build and run locally

docker build -t nextjs-app .
docker run -p 3000:3000 nextjs-app

Use -e or an env file for the PUBLIC_* values you expect.
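For example, pointing the same image at a staging backend (values are placeholders):

docker run -p 3000:3000 \
  -e PUBLIC_API_BASE_URL=https://api.staging.example.com \
  -e PUBLIC_APP_ENV=staging \
  nextjs-app

# or keep the values in a file
docker run -p 3000:3000 --env-file ./staging.env nextjs-app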

Mistakes I still see in the wild

  • Running npm run build at container startup (build belongs in CI or a build stage).
  • Building a different image per environment instead of one image plus runtime config.
  • Treating NEXT_PUBLIC_* like server secrets.
  • Storing uploads or session state on the container disk.
  • Shipping the whole repo and full node_modules in the final image when standalone is enough.

Multi-pod deployment (Kubernetes or similar)

Rough flow:

            Developer
                ↓
         CI/CD Pipeline
                ↓
       Container Registry
                ↓
    ┌───────────────────────┐
    │   Kubernetes Cluster  │
    │                       │
    │   Pod A (SSR)         │
    │   Pod B (SSR)         │
    │   Pod C (CSR)         │
    │                       │
    └───────────┬───────────┘
                ↓
              Users

Pods should be interchangeable. Shared state lives in your database, cache, object storage, queue—not on the container filesystem.

Pod A ─┐
Pod B ─┼──> DB / Cache / Storage
Pod C ─┘

Config still flows from the platform into start.sh, then into __ENV.js for the browser:

Environment Variables
          ↓
       start.sh
          ↓
       __ENV.js
          ↓
       Browser
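In Kubernetes terms that usually means env entries on the Deployment (or a ConfigMap behind them) rather than anything baked into the image. A minimal sketch; the image name, tag, and values are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextjs-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nextjs-app
  template:
    metadata:
      labels:
        app: nextjs-app
    spec:
      containers:
        - name: web
          # The same tag every environment runs; only env below differs.
          image: registry.example.com/nextjs-app:1.4.2
          ports:
            - containerPort: 3000
          env:
            # Read by start.sh at boot and written into __ENV.js.
            - name: PUBLIC_API_BASE_URL
              value: https://api.example.com
            - name: PUBLIC_APP_ENV
              value: production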

SSR vs CSR when you scale

Concern             SSR             CSR
Where it runs       Server          Browser
Scaling pressure    Often higher    Lower on your API
Runtime config      Server-driven   Client-driven

SSR pods do more render work; CSR pushes work to the client. Both can scale horizontally as long as the app stays stateless at the pod level.
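With a Deployment like the sketch above, scaling out is a replica count rather than a rebuild (assuming the nextjs-app name from that sketch):

kubectl scale deployment/nextjs-app --replicas=5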

Practices that actually pay off

  • Build in CI, push a tagged image to a registry (sketch after this list).
  • Keep images small: slim base, multi-stage builds, a real .dockerignore.
  • Prefer runtime config for anything that differs by environment.
  • Avoid local state inside containers.
  • Run as non-root (as in the Dockerfile above).
  • Treat build output as immutable; deploys change env and replicas, not the artifact you already tested.
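For the first point, the CI step can stay tiny; a sketch where the registry name and ${GIT_SHA} stand in for whatever your pipeline provides:

# Build once per commit; promotion reuses the tag instead of rebuilding.
docker build -t registry.example.com/nextjs-app:${GIT_SHA} .
docker push registry.example.com/nextjs-app:${GIT_SHA}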

End-to-end mental model

CI → Registry → Deployment → Pods → External Services → Users

One image travels that line. Config and secrets attach when you deploy, not when you bake the image.

Do vs do not

Do                          Do not
Build once, deploy many     Rebuild per environment
Stateless pods              Rely on local disk for data
Runtime config for env      Hardcode environment URLs

Closing thought

Build once. Deploy many times. Keep containers stateless, keep configuration outside the image, and let your orchestrator worry about replicas. At production scale, what saves you isn’t a cleverer Dockerfile—it’s a clean split between artifact, configuration, and shared services.