Your Dockerfiles Are Lying to You About Speed
You’ve got a Dockerfile. It works. It builds. You’ve accepted that building your Node.js app takes 6 minutes and that’s just the cost of doing business. Maybe you’ve even written it off as “CI build time” and gone to make a coffee.
Here’s the thing: you’ve been rebuilding node_modules from scratch every single time like some kind of animal. Every apt update, every pip install -r requirements.txt, every Maven dependency download — all of it, from zero, every build. Docker was just quietly letting you suffer.
BuildKit is the answer to most of this. It’s been the default builder since Docker 23.0, so if you’ve updated Docker in the last couple of years, you’re technically already using it. But “using it” and “actually using it” are very different things.
Let’s fix that.
What BuildKit Actually Is
BuildKit is Docker’s build engine — the thing that takes your Dockerfile and turns it into an image. The old builder was sequential, naive, and had the caching behavior of a goldfish. BuildKit is a complete rewrite that ships with:
- Parallel stage execution — multi-stage builds run concurrently where possible
- Better cache invalidation — actually understands what changed
- Cache mounts — persistent build-time caches that survive between builds
- Build secrets — inject credentials without baking them into layers
- SSH forwarding — for private Git repos during build
- Inline cache — push and pull cache from a registry
The old builder is still there if you explicitly ask for it, but there’s basically no reason to. If you’re on Docker 23.0+, BuildKit is your default. If you’re on something older, add DOCKER_BUILDKIT=1 to your environment.
# Verify you're on BuildKit
docker buildx version
# docker buildx 0.12.0 ...
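If that command fails, you're on an older Docker where BuildKit is opt-in. A quick sketch of the two ways to turn it on (the daemon.json snippet is Docker's documented setting):

```shell
# Per-shell or per-CI-job: opt in via environment variable
export DOCKER_BUILDKIT=1

# Or permanently, in /etc/docker/daemon.json (then restart the daemon):
#   { "features": { "buildkit": true } }
```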
Parallel Stage Execution: Free Speed
Most people write multi-stage Dockerfiles as a single linear chain. BuildKit will automatically parallelize stages that don’t depend on each other.
# These two stages have no dependency on each other
# BuildKit runs them at the same time
FROM node:20-alpine AS frontend-builder
WORKDIR /app
COPY frontend/package*.json ./
RUN npm ci
COPY frontend/ ./
RUN npm run build
FROM golang:1.22-alpine AS backend-builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o server ./cmd/server
# Final stage pulls from both
FROM alpine:3.19
COPY --from=frontend-builder /app/dist ./static
COPY --from=backend-builder /app/server .
CMD ["./server"]
If your frontend build takes 3 minutes and your backend takes 2 minutes, you’re now waiting 3 minutes instead of 5. That’s not a trick — that’s just BuildKit being smarter than the old builder.
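If you want to see the parallelism for yourself, plain progress output interleaves timestamped log lines from every stage as they run concurrently:

```shell
# Plain progress prints each stage's steps as they execute,
# so you can watch frontend-builder and backend-builder overlap
docker buildx build --progress=plain .
```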
Cache Mounts: The Actual Game Changer
This is the feature that will save your sanity. Cache mounts let you keep a persistent cache directory between builds that doesn’t get committed to any image layer. Package manager caches, compiler caches, whatever you want.
apt / apt-get
FROM ubuntu:22.04
# The Ubuntu image ships a docker-clean hook that deletes apt's cache after
# every install. Disable it, or the cache mount stays empty forever.
RUN rm -f /etc/apt/apt.conf.d/docker-clean && \
    echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && apt-get install -y \
    curl \
    git \
    build-essential
# No `rm -rf /var/lib/apt/lists/*` here: that directory is a cache mount now,
# so wiping it defeats the cache and never shrinks the image anyway.
First build: normal. From the second build on, apt’s package cache is warm and you’re not re-downloading packages you already have. On a slow connection this alone can cut minutes off your build. The sharing=locked option matters here: it serializes concurrent builds that share the cache, which apt needs because its database can’t handle parallel access.
npm / Node.js
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
npm ci --prefer-offline
COPY . .
RUN npm run build
The --prefer-offline flag tells npm to use the cache when possible, so combined with the cache mount you only download packages that changed since the last build. A build dropping from 8 minutes to 45 seconds is realistic. One caveat: cache mounts live on the builder itself, so an ephemeral CI runner starts cold every time unless the builder persists between jobs; registry caches (covered below) fill that gap.
pip / Python
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt ./
RUN --mount=type=cache,target=/root/.cache/pip \
pip install -r requirements.txt
COPY . .
Go modules
FROM golang:1.22-alpine
WORKDIR /app
COPY go.mod go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build \
go mod download
COPY . .
RUN --mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build \
go build -o /app/server ./cmd/server
Two cache mounts for Go: one for the module download cache, one for the build cache. The build cache is the big one — incremental Go compilation is fast. Full recompilation every time because you changed one file is not.
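One housekeeping note: cache mounts live in BuildKit's local store, not in any image, so they grow until you prune them. A sketch of inspecting and clearing them with buildx (type=exec.cachemount is the record type BuildKit uses for RUN --mount caches):

```shell
# See what BuildKit's cache is holding, including cache mounts
docker buildx du

# Drop only the RUN --mount caches, leaving the layer cache alone
docker buildx prune --filter type=exec.cachemount
```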
Build Secrets: Stop Putting Passwords in Your Layers
Here’s a thing that happens constantly: someone needs to pull a private package during build. They add ARG NPM_TOKEN to their Dockerfile, pass it in, and call it a day. The problem is that ARG values end up in the layer history. Anyone who can pull that image can extract your token.
# DON'T do this
docker build -t yourrepo/yourimage --build-arg NPM_TOKEN=my_secret_token .
# The token is now part of the image metadata:
docker history --no-trunc yourrepo/yourimage | grep NPM_TOKEN
BuildKit secrets fix this properly. The secret is mounted as a file during the specific RUN step that needs it, and it never appears in any layer.
# .npmrc is injected at build time, not baked in
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc \
npm ci
COPY . .
RUN npm run build
# Pass the secret at build time
docker build --secret id=npmrc,src=$HOME/.npmrc .
Your .npmrc with the token is available during that RUN step and nowhere else. The final image has zero trace of it.
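Don't take that on faith; check it. Assuming you tagged the build as yourrepo/yourimage (a placeholder), both of these should come back empty-handed:

```shell
# The secret should not appear anywhere in the image metadata...
docker history --no-trunc yourrepo/yourimage | grep -i npmrc

# ...and the file should not exist in the final filesystem
docker run --rm yourrepo/yourimage cat /root/.npmrc   # errors: no such file
```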
Private registries, API keys, anything
RUN --mount=type=secret,id=pypi_token \
pip install \
--extra-index-url https://$(cat /run/secrets/pypi_token)@your-private-pypi.example.com/simple \
your-private-package
SSH Agent Forwarding for Private Git Repos
If you’re cloning private repos during build (which you probably shouldn’t be doing, but here we are), don’t bake in SSH keys. Forward your SSH agent instead.
FROM alpine:3.19
RUN apk add --no-cache git openssh-client
# Without a known_hosts entry, git bails on host key verification
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# Mount the SSH agent socket for this step only
RUN --mount=type=ssh \
    git clone git@github.com:your-org/private-repo.git /app
# Start your SSH agent and add your key
eval $(ssh-agent)
ssh-add ~/.ssh/id_ed25519
# Build with SSH forwarding
docker build --ssh default .
Multi-Platform Builds with buildx
BuildKit is also the engine behind docker buildx, which lets you build for multiple architectures from one machine. Useful if you’re building for ARM (Raspberry Pi, Apple Silicon, AWS Graviton).
# Create a builder that supports multi-platform
docker buildx create --name multibuilder --use
docker buildx inspect --bootstrap
# Build for amd64 and arm64
docker buildx build \
--platform linux/amd64,linux/arm64 \
--tag yourrepo/yourimage:latest \
--push \
.
The --push flag matters here: unless your daemon runs the containerd image store, it can't hold a multi-platform image locally, so the result goes straight to the registry.
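You can confirm what actually landed in the registry with imagetools, which prints the manifest list and each platform variant behind the tag:

```shell
# Show every architecture variant published under the tag
docker buildx imagetools inspect yourrepo/yourimage:latest
```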
Bake Files for Complex Build Matrices
If you’ve got multiple images that need to be built together — maybe a monorepo with several services — docker buildx bake lets you define the whole matrix in a config file instead of a wall of shell script.
# docker-bake.hcl
group "default" {
targets = ["api", "worker", "frontend"]
}
target "api" {
context = "./services/api"
dockerfile = "Dockerfile"
tags = ["yourrepo/api:latest"]
platforms = ["linux/amd64", "linux/arm64"]
cache-from = ["type=registry,ref=yourrepo/api:buildcache"]
cache-to = ["type=registry,ref=yourrepo/api:buildcache,mode=max"]
}
target "worker" {
context = "./services/worker"
dockerfile = "Dockerfile"
tags = ["yourrepo/worker:latest"]
cache-from = ["type=registry,ref=yourrepo/worker:buildcache"]
cache-to = ["type=registry,ref=yourrepo/worker:buildcache,mode=max"]
}
target "frontend" {
context = "./services/frontend"
dockerfile = "Dockerfile"
tags = ["yourrepo/frontend:latest"]
}
# Build everything in the default group, in parallel
docker buildx bake --push
All three services build in parallel. Cache is pulled from the registry at the start and pushed back at the end. Your CI pipeline just got a lot simpler.
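Two bake subcommands worth knowing: --print dry-runs the file and shows the fully resolved configuration as JSON, and naming a target builds just that one:

```shell
# Preview the resolved build matrix without building anything
docker buildx bake --print

# Build a single target from the file
docker buildx bake api
```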
.dockerignore: The Free Optimization Everyone Forgets
Before BuildKit even runs, Docker sends your build context to the build daemon. If you don’t have a .dockerignore, you’re sending node_modules, .git, build artifacts, test fixtures, and whatever else lives in your project directory. This can be hundreds of megabytes.
# .dockerignore
.git
.gitignore
node_modules
dist
build
.env
.env.*
*.log
.DS_Store
README.md
docs/
tests/
coverage/
.github/
Small .dockerignore, faster context transfer, better cache behavior. It takes two minutes to write one and you get time back on every build forever.
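If you're not sure what's sneaking into your context, a throwaway Dockerfile can audit it. This is a debugging sketch, not part of your real build; Dockerfile.ctx and the /ctx path are arbitrary names:

```shell
# Write a one-off Dockerfile that copies the whole context and summarizes it
cat > Dockerfile.ctx <<'EOF'
FROM busybox
COPY . /ctx
RUN echo "files: $(find /ctx -type f | wc -l)" && du -sh /ctx
EOF

# --no-cache plus plain progress so the RUN output always prints
docker build -f Dockerfile.ctx --no-cache --progress=plain .
```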
Inline Cache: CI Builds That Actually Reuse Work
BuildKit can store its cache in a registry, either embedded in the image itself (inline) or as a separate cache tag, and pull it back on the next build. That means your CI runners, which are usually ephemeral and start with no local cache, still get cache hits.
# Push with inline cache metadata
docker buildx build \
--cache-to type=inline \
--tag yourrepo/yourimage:latest \
--push \
.
# Next build, pull the cache
docker buildx build \
--cache-from yourrepo/yourimage:latest \
--tag yourrepo/yourimage:latest \
--push \
.
Inline cache only records the final stage's layers (min mode). For more control, use the registry cache type with mode=max, which caches intermediate stages too:
docker buildx build \
--cache-from type=registry,ref=yourrepo/yourimage:buildcache \
--cache-to type=registry,ref=yourrepo/yourimage:buildcache,mode=max \
--tag yourrepo/yourimage:latest \
--push \
.
Putting It All Together
Here’s what a reasonably optimized Dockerfile looks like with everything applied:
# syntax=docker/dockerfile:1
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
--mount=type=secret,id=npmrc,target=/root/.npmrc \
npm ci --prefer-offline
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
FROM node:20-alpine AS runner
ENV NODE_ENV=production
WORKDIR /app
RUN addgroup --system --gid 1001 nodejs && \
adduser --system --uid 1001 nextjs
COPY --from=builder --chown=nextjs:nodejs /app/dist ./dist
COPY --from=deps --chown=nextjs:nodejs /app/node_modules ./node_modules
USER nextjs
EXPOSE 3000
CMD ["node", "dist/server.js"]
docker buildx build \
--secret id=npmrc,src=$HOME/.npmrc \
--cache-from type=registry,ref=yourrepo/myapp:buildcache \
--cache-to type=registry,ref=yourrepo/myapp:buildcache,mode=max \
--tag yourrepo/myapp:latest \
--push \
.
The # syntax=docker/dockerfile:1 comment at the top is optional but good practice: it tells BuildKit to pull the latest stable version-1 Dockerfile frontend, so newer syntax features work even on an older Docker daemon.
The Before and After
Before: fresh CI runner, no cache, 8-minute build, existential dread.
After: registry cache pulled, npm cache warm, parallel stages running, 90-second build, smug satisfaction.
The work to get there is maybe an afternoon. Most of it is copy-pasting the cache mount patterns for whatever package manager you’re using and adding a .dockerignore that you should have written months ago.
BuildKit has been sitting there doing nothing because nobody told your Dockerfiles to use it properly. Now you know. Go make your builds fast.