You built an image on your M3 Mac. Works great locally. Ship it to your x86_64 server and it crashes with exec format error. Your binary is ARM64, but the server is Intel.
Enter buildx. It’s Docker’s multi-platform build tool, and it has shipped with Docker since 19.03. You probably just haven’t used it yet.
What Is buildx?
buildx is a Docker CLI plugin built on BuildKit that lets you build images for multiple architectures (arm64, amd64, arm, 386, etc.) from a single machine. It uses QEMU emulation or actual hardware builders to produce images for platforms other than your own.
Most of the time you don’t need it. Your CI/CD system can build on the actual hardware. But when you’re developing locally or need to build everything at once, buildx is a lifesaver.
Check If You Have It
```shell
docker buildx version
# github.com/docker/buildx v0.12.1
```

It’s been the default in Docker Desktop for a couple of years. If you don’t have it, upgrade Docker.
Enable BuildKit
Make sure BuildKit is enabled:
```shell
export DOCKER_BUILDKIT=1
docker build --help | grep platform
#   --platform value    Set the target platform for the build
```

Or set it permanently in Docker’s `daemon.json`:
```json
{
  "features": {
    "buildkit": true
  }
}
```

The Basic Workflow
- Create a builder instance (one-time setup)
- Build for multiple platforms
- Push the multi-arch manifest to a registry
Step 1: Create a builder
```shell
docker buildx create --name multi-platform-builder --use
docker buildx inspect --bootstrap
```

The `--use` flag sets it as the default builder; `--bootstrap` starts it immediately.
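To confirm the builder exists and see which platforms it can target, list your builders. A quick check (the output shape varies by buildx version; the platform list below is illustrative):

```shell
docker buildx ls
# NAME/NODE                  DRIVER/ENDPOINT    STATUS    PLATFORMS
# multi-platform-builder *   docker-container   running   linux/amd64, linux/arm64, ...
# default                    docker             running   linux/arm64
```

The `*` marks the builder that `--use` made the default.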
Step 2: Build and push
```shell
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --push \
  --tag myregistry/myimage:latest \
  .
```

That’s it. Docker builds for both architectures and pushes a multi-arch manifest to the registry. On pull, Docker automatically picks the right image for the host.
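You can verify that the pushed manifest actually covers both platforms with `docker buildx imagetools inspect` (the image name matches the build above):

```shell
docker buildx imagetools inspect myregistry/myimage:latest
# Prints the manifest list with one entry per platform,
# e.g. linux/amd64 and linux/arm64
```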
Key Flags
--platform linux/amd64,linux/arm64
Comma-separated list of architectures to build. Common combinations:
- `linux/amd64`: Intel/AMD servers
- `linux/arm64`: Apple Silicon, newer AWS Graviton, newer Raspberry Pi
- `linux/amd64,linux/arm64`: covers 95% of use cases
- `linux/amd64,linux/arm64,linux/arm/v7`: adds 32-bit ARM for older Pis
Full list: `docker buildx ls` shows the platforms each builder supports.
--push
Push to the registry after the build. Without it, a multi-platform build made with the `docker-container` driver stays in the build cache and never reaches your local image store; only the default `docker` driver keeps (single-platform) results local.
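For local testing of a single platform, there’s also `--load`, which imports the result into the local image store. It only works for one platform at a time (the tag here is illustrative):

```shell
docker buildx build \
  --platform linux/arm64 \
  --load \
  --tag myimage:arm64-test \
  .

docker image ls myimage
```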
--output type=oci,dest=./image.tar
Export to a local file instead of pushing. Useful for local testing:
```shell
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --output type=oci,dest=./image.tar \
  .

# The result is an OCI layout tarball; recent Docker versions can load it with:
docker load < image.tar
```

--cache-from
Reuse cache from a registry image to speed up builds:
```shell
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --push \
  --cache-from type=registry,ref=myregistry/myimage:latest \
  --tag myregistry/myimage:latest \
  .
```

For this to help, an earlier build must have exported cache there: add `--cache-to type=inline` to the build that pushes the image.

Dockerfile Considerations
Most Dockerfiles work as-is. But some things break with cross-compilation:
Alpine as base: Generally works
```dockerfile
FROM alpine:latest
RUN apk add --no-cache curl
```

Ubuntu with compiled binaries: Use multiarch base images
```dockerfile
# Good for multi-arch
FROM ubuntu:22.04

# If you're installing pre-compiled binaries, make sure they exist
# for every target architecture; ubuntu:22.04 itself is published
# for amd64, arm64, and others.
```

Your own compiled binaries: Can be tricky
If your Dockerfile compiles a binary with `RUN go build ...`, that step runs under QEMU emulation for each target platform, which is painfully slow. Use `FROM --platform=$BUILDPLATFORM` to compile natively and cross-compile for the target instead:
```dockerfile
# Build stage runs on your native architecture
FROM --platform=$BUILDPLATFORM golang:1.21-alpine AS builder
ARG TARGETPLATFORM
ARG BUILDPLATFORM
ARG TARGETOS
ARG TARGETARCH
RUN echo "Building for $TARGETPLATFORM on $BUILDPLATFORM"

WORKDIR /app
COPY . .

# CGO_ENABLED=0 for static binaries that work everywhere;
# GOOS/GOARCH make the compiler target the right platform
RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o server .

# Final stage runs on the target architecture
FROM alpine:latest
COPY --from=builder /app/server /app/server
ENTRYPOINT ["/app/server"]
```

The `TARGETPLATFORM`, `BUILDPLATFORM`, `TARGETOS`, and `TARGETARCH` build arguments are set automatically by buildx.
Python/Node/Java: Usually fine as-is
These languages are interpreted (or JIT-compiled) at runtime on the target platform, so there’s nothing to cross-compile:
```dockerfile
FROM python:3.11-slim
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
```

This works across architectures without change, as long as your dependencies publish wheels for each platform; packages with C extensions may fall back to compiling from source under emulation, which is slow.
Local Testing
You can’t natively run an image built for a foreign architecture on your machine, but you can push a test tag and try it on real hardware:
```shell
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --push \
  --tag myregistry/myimage:test-build \
  .
```
```shell
# Then on your server:
docker pull myregistry/myimage:test-build
docker run myregistry/myimage:test-build
```

Or use QEMU emulation (slow, but it works):
```shell
docker run --rm --privileged tonistiigi/binfmt --install all

# Now you can run ARM images on x86:
docker run --platform linux/arm64 --rm myimage:test uname -m
# aarch64
```

When You Actually Need buildx
You need it if:
- You’re shipping images for both Intel servers and M-series Macs
- You support newer AWS instances (Graviton uses ARM64)
- You want to build images in CI once and push a multi-arch manifest
- You’re shipping to IoT devices or newer Raspberry Pis
You don’t need it if:
- You only run on one architecture (most teams)
- Your CI/CD builds on the actual hardware (x86 builds on x86 runners, ARM builds on ARM runners)
- You’re comfortable with separate builds per architecture
A Real Workflow
GitHub Actions:
```yaml
name: Build Multi-Arch
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # QEMU lets the x86 runner build ARM images
      - uses: docker/setup-qemu-action@v2
      - uses: docker/setup-buildx-action@v2

      - uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: ghcr.io/${{ github.repository }}:latest
```

One pipeline, two architectures, one manifest. Done.
buildx is one of those tools that sits unused until you suddenly need it at 2 AM. But when that day comes, you’ll be very glad it exists.