SumGuy's Ramblings

Docker Compose vs Docker Swarm: When "Good Enough" Beats "Enterprise"

The “Do I Really Need This?” Problem

Here’s a scene that plays out roughly fourteen thousand times a day across the tech world: a developer has a web app, a database, and maybe Redis. They Google “how to run multiple Docker containers” and suddenly they’re three tabs deep into Kubernetes documentation, questioning their career choices and wondering if they need a “service mesh.”

Take a breath. Close those tabs.

For the vast majority of us — the ones who aren’t running Netflix’s backend or orchestrating fleets of microservices across seventeen availability zones — the answer to your container orchestration needs probably lives inside two tools you already have installed: Docker Compose and Docker Swarm.

But which one? That’s what we’re here to figure out.

Docker Compose: Your Trusty Swiss Army Knife

Docker Compose is the tool that makes you feel like you actually know what you’re doing. You write a YAML file, you type docker compose up, and suddenly your entire application stack springs to life like a well-rehearsed orchestra. Except instead of violins, it’s Postgres, and instead of cellos, it’s your Node.js API that crashes every third Tuesday.

What It Actually Does

Docker Compose lets you define and run multi-container Docker applications using a single docker-compose.yml (or compose.yml if you’re hip) file. That’s it. That’s the whole pitch. And honestly? For most projects, that’s all you need.

Here’s what a typical Compose file looks like for a web app with a database and cache:

version: "3.8"

services:
  web:
    build: .
    ports:
      - "8080:8080"
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    restart: unless-stopped

  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 5s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    volumes:
      - redisdata:/data

volumes:
  pgdata:
  redisdata:

You run docker compose up -d and boom — your whole stack is running. Need to tear it down? docker compose down. Want to rebuild after code changes? docker compose up --build. It’s almost suspiciously simple.
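The whole day-to-day loop fits in a handful of standard Compose subcommands:

```shell
# start (or update) the stack in the background
docker compose up -d

# tail logs from every service at once
docker compose logs -f

# see what's running and which ports are published
docker compose ps

# rebuild images after code changes, then restart
docker compose up -d --build

# stop and remove containers and networks (named volumes survive)
docker compose down
```

Note that `docker compose down` leaves named volumes alone unless you add `-v`, which is exactly what you want for your database data.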

The Compose Superpowers You Might Not Know About

Compose has quietly gotten really good over the years. Here are some features that don’t get enough love:

Profiles let you group services so you only start what you need:

services:
  web:
    build: .
    ports:
      - "8080:8080"

  db:
    image: postgres:16

  monitoring:
    image: grafana/grafana
    profiles:
      - observability

  prometheus:
    image: prom/prometheus
    profiles:
      - observability

Now docker compose up only starts web and db. Want monitoring too? docker compose --profile observability up. Your dev environment stays lean until you explicitly ask for the extras.
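Profiles can also be toggled with the `COMPOSE_PROFILES` environment variable, which is handy in scripts and CI (the `debug` profile below is a hypothetical second profile, just to show the comma syntax):

```shell
# default: only web and db start
docker compose up -d

# opt in to the observability profile via flag
docker compose --profile observability up -d

# ...or via environment variable (equivalent)
COMPOSE_PROFILES=observability docker compose up -d

# multiple profiles are comma-separated
COMPOSE_PROFILES=observability,debug docker compose up -d
```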

Watch mode (Compose 2.22+) gives you hot-reloading without hacks:

services:
  web:
    build: .
    develop:
      watch:
        - action: sync
          path: ./src
          target: /app/src
        - action: rebuild
          path: ./package.json

Run docker compose watch and your source files sync into the container automatically. Change package.json? The whole image rebuilds. It’s like nodemon but for your entire container.

Override files let you layer configurations. Have a compose.yml for your base config and a compose.override.yml that Compose automatically merges in for local development:

# compose.override.yml - automatically loaded in dev
services:
  web:
    build:
      target: development
    volumes:
      - .:/app
    environment:
      - DEBUG=true

No more “oh I accidentally committed the debug flag” incidents. (Okay, fewer of them.)
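The override mechanism generalizes: you can stack any number of files explicitly with `-f`, and `docker compose config` prints the merged result so you can see exactly what will run. A sketch, assuming a `compose.prod.yml` you maintain alongside the base file:

```shell
# dev: compose.yml + compose.override.yml are merged automatically
docker compose up -d

# prod: pick files explicitly; the override file is NOT auto-loaded here
docker compose -f compose.yml -f compose.prod.yml up -d

# inspect the fully merged configuration before deploying
docker compose -f compose.yml -f compose.prod.yml config
```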

When Compose Shines

Compose is at its best when everything lives on one machine: local development environments, side projects, CI test stacks, and single-server production deployments. If the answer to “where does this run?” is the name of one box, Compose is almost certainly enough.

Docker Swarm: The Middle Child of Orchestration

Docker Swarm is what happens when Docker looked at Kubernetes and said, “What if we made that, but you could actually set it up before your coffee gets cold?”

Swarm is Docker’s built-in clustering and orchestration tool. It turns a pool of Docker hosts into a single virtual host. It handles service discovery, load balancing, rolling updates, and secret management. And here’s the kicker: it’s built right into Docker. No extra installation. No separate control plane. No YAML files that make you question whether indentation is a form of psychological warfare.

Getting Swarm Running

Initializing a Swarm is almost offensively simple:

# On your manager node
docker swarm init --advertise-addr 192.168.1.100

# It spits out a join command. Run it on your worker nodes:
docker swarm join --token SWMTKN-1-xxxxx 192.168.1.100:2377

# Check your nodes
docker node ls

That’s it. You now have a cluster. Three commands. Try doing that with Kubernetes and report back in about six to eight business days.

Deploying Services to Swarm

Swarm uses “stacks,” which are basically Compose files with some extra orchestration sauce:

version: "3.8"

services:
  web:
    image: myapp:latest
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
      rollback_config:
        parallelism: 0
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      resources:
        limits:
          cpus: "0.5"
          memory: 512M
        reservations:
          cpus: "0.25"
          memory: 256M
    ports:
      - "8080:8080"
    networks:
      - frontend
      - backend

  db:
    image: postgres:16
    deploy:
      placement:
        constraints:
          - node.role == manager
    volumes:
      - pgdata:/var/lib/postgresql/data
    secrets:
      - db_password
    networks:
      - backend

networks:
  frontend:
    driver: overlay
  backend:
    driver: overlay

volumes:
  pgdata:

secrets:
  db_password:
    external: true

Deploy it:

# Create the secret first
echo "supersecretpassword" | docker secret create db_password -

# Deploy the stack
docker stack deploy -c docker-compose.yml myapp

# Check on things
docker stack services myapp
docker service logs myapp_web

Notice how the Compose file format is almost identical to what you already know. The deploy key is the main addition, and it’s where all the orchestration magic lives.

Swarm’s Secret Weapons

Built-in load balancing: Swarm automatically distributes traffic across your service replicas using an internal load balancer. Hit any node in the swarm on the published port, and traffic routes to a healthy container. No nginx config. No HAProxy setup. It just works.
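You can see the routing mesh in action with a throwaway service (nginx here is just a stand-in, and the IPs are placeholders for your own nodes):

```shell
# three replicas, port 8080 published on the ingress routing mesh
docker service create --name demo --replicas 3 -p 8080:80 nginx:alpine

# hitting ANY node's IP on 8080 reaches a healthy replica,
# even on nodes that aren't running a demo container
curl http://192.168.1.100:8080
curl http://192.168.1.101:8080

# clean up
docker service rm demo
```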

Rolling updates with rollback: Deploy a bad image? Swarm can automatically roll back:

# Update a service
docker service update --image myapp:v2 myapp_web

# Oh no, v2 is broken? Roll back!
docker service rollback myapp_web

Secrets management: Unlike Compose where you’re stuffing passwords in .env files and praying nobody commits them (they will), Swarm has actual encrypted secrets:

# Create a secret
echo "my-database-password" | docker secret create db_pass -

# Use it in a service
docker service create --secret db_pass --name myapp myapp:latest

Inside the container, the secret appears as a file at /run/secrets/db_pass. No environment variables floating around in docker inspect output for the whole world to see.

Overlay networks: Services can communicate across nodes seamlessly. Your web container on Node 1 talks to your database on Node 3 like they’re sitting on the same machine. Swarm handles all the networking voodoo behind the scenes.

The Showdown: Compose vs Swarm

Let’s get into the comparison everyone’s here for. Here’s how these two stack up across the dimensions that actually matter:

| Feature | Docker Compose | Docker Swarm |
| --- | --- | --- |
| Setup complexity | Zero. Write YAML, run command. | Minimal. swarm init + join tokens. |
| Learning curve | Gentle slope | Moderate hill |
| Multi-host | No (single host only) | Yes (that’s the whole point) |
| Load balancing | DIY (add nginx/traefik) | Built-in ingress routing mesh |
| Scaling | docker compose up --scale web=3 (same host) | docker service scale web=10 (across nodes) |
| Rolling updates | Nope. Stop and start. | Yes, with configurable parallelism and rollback |
| Secrets management | .env files (not great) | Encrypted secrets in Raft store |
| Health checks | Supported, affects depends_on | Supported, affects scheduling and routing |
| Resource limits | Supported but no enforcement across hosts | Enforced cluster-wide |
| Service discovery | DNS by service name (same network) | DNS + VIP-based across all nodes |
| Production readiness | Suitable for small/single-server deployments | Built for multi-node production |
| Community support in 2026 | Thriving, actively developed | Stable but quiet. Docker maintains it but isn’t adding flashy features. |

Real-World Scenarios: Which One Do You Pick?

Scenario 1: “I’m building a side project”

Use Compose.

You’ve got a Next.js app, a Postgres database, and maybe Minio for file storage. It runs on a single $10/month VPS. Compose is your best friend here. One docker-compose.yml, one docker compose up -d, and you’re in business.

Adding Swarm to this would be like hiring a forklift to move a couch. Technically it works, but your neighbors will have questions.

Scenario 2: “I run a SaaS product with a small team”

It depends on your traffic.

If you can vertically scale (bigger server) and you’re comfortable with a few minutes of downtime during deployments, Compose on a single beefy server is perfectly fine. Millions of dollars of revenue have been generated by apps running on a single well-configured server.

If you need zero-downtime deployments, redundancy, or you’re hitting the limits of a single machine, Swarm starts making a lot of sense. You can start with three nodes (one manager, two workers) and get real high-availability without bringing in the Kubernetes complexity tax.

Scenario 3: “I need to run the same app across multiple servers”

Use Swarm.

This is literally what it was built for. Compose can’t cross the host boundary. Swarm treats your fleet of servers as one big Docker engine. Define your desired state, and Swarm figures out where to put everything.

Scenario 4: “My boss said we need Kubernetes”

Ask your boss why.

If the answer is “because everyone uses it” or “it’s industry standard,” push back gently. Kubernetes is a phenomenal tool for organizations with the team size and complexity to justify it. For a team of five running twenty services, Swarm gives you 80% of the value at 20% of the complexity.

If the answer involves multi-cloud, hundreds of microservices, or custom operators — okay, fair enough, go with Kubernetes. But consider Swarm as a stepping stone. Learning Swarm’s concepts (services, replicas, overlay networks, rolling updates) maps almost directly to Kubernetes concepts, making the eventual migration smoother.

Scenario 5: “I want to self-host a bunch of apps on my homelab”

Start with Compose. Graduate to Swarm if you add more nodes.

This is the classic path. You get a Raspberry Pi or an old laptop, start running Compose stacks for Nextcloud, Home Assistant, Plex, whatever. When you inevitably accumulate more hardware (it’s a sickness, we know), Swarm lets you spread the load without rewriting all your configs.

The “But Swarm Is Dead” Myth

Let’s address the elephant in the room. You’ll see people online claiming Docker Swarm is dead, deprecated, or abandoned. Here’s the truth as of 2026:

Swarm is not dead. It’s in maintenance mode. Docker continues to ship it as part of Docker Engine. Security patches land. It works. What it isn’t getting is new features at a rapid pace. Docker’s commercial efforts are focused on Docker Desktop and Docker Build Cloud, not on competing with Kubernetes for enterprise orchestration.

For many teams, this is actually a feature. A stable, boring, “it just works” tool that doesn’t change its API every six months is exactly what you want underpinning your production infrastructure. Innovation in your orchestration layer is only exciting if you enjoy re-reading migration guides on Friday afternoons.

That said, if you’re starting a brand new project in 2026 and you know you’ll need serious orchestration at scale, going straight to Kubernetes (or a managed Kubernetes service like EKS, GKE, or AKS) is a reasonable choice. Just don’t let anyone tell you that Swarm doesn’t have a legitimate place in the toolbox.

The Migration Path: Compose to Swarm

One of the most underrated things about Docker’s ecosystem is how smooth the Compose-to-Swarm migration is. Your Compose file is already 90% of a Swarm stack file. Here’s what changes:

  1. Add deploy sections to your services for replicas, update config, and resource limits.
  2. Replace build with image — Swarm doesn’t build images, it pulls them. Set up a registry (Docker Hub, GitHub Container Registry, or a self-hosted one).
  3. Switch from bind mounts to named volumes for anything that needs to persist across nodes.
  4. Move secrets out of .env files and into docker secret.
  5. Use overlay networks instead of the default bridge.

That’s basically it. You’re not rewriting your infrastructure — you’re upgrading it. Compare this to the Compose-to-Kubernetes migration path, which involves learning a whole new vocabulary (Pods, Deployments, StatefulSets, Ingresses, ConfigMaps, oh my) and rewriting your configs from scratch.

A Practical Compose-to-Swarm Example

Let’s say you have this Compose setup running on a single server:

# compose.yml - running fine on one server
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      - DB_HOST=db
    depends_on:
      - db

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: changeme
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:

Here’s the Swarm-ready version:

# stack.yml - ready for multi-node deployment
version: "3.8"

services:
  api:
    image: registry.example.com/myapi:latest
    ports:
      - "3000:3000"
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
    environment:
      - DB_HOST=db
    secrets:
      - db_password
    networks:
      - appnet

  db:
    image: postgres:16
    deploy:
      placement:
        constraints:
          - node.labels.db == true
    secrets:
      - db_password
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - appnet

networks:
  appnet:
    driver: overlay

volumes:
  pgdata:

secrets:
  db_password:
    external: true

The shape is the same. The structure is the same. You just added production knobs where they matter.
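Deploying this version is three commands (the node name and password below are placeholders for your own):

```shell
# 1. label the node that should host Postgres
#    (matches the node.labels.db == true placement constraint)
docker node update --label-add db=true worker-1

# 2. create the external secret the stack references
printf 'changeme' | docker secret create db_password -

# 3. deploy (or re-deploy) the stack
docker stack deploy -c stack.yml myapp
```

Re-running `docker stack deploy` with an updated file is also how you roll out changes — Swarm diffs the desired state and applies only what changed.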

Quick Decision Framework

Still not sure? Run through this checklist:

Go with Docker Compose if:

  - Everything runs on a single server (or your laptop)
  - You can tolerate a few minutes of downtime during deployments
  - You want the simplest possible config and day-to-day workflow

Go with Docker Swarm if:

  - You need multiple nodes, redundancy, or zero-downtime rolling updates
  - You want built-in load balancing and encrypted secrets without extra tooling
  - Your team is small and the Kubernetes complexity tax isn’t worth paying

Go with Kubernetes if:

  - You’re juggling hundreds of microservices, multi-cloud, or custom operators
  - You have the team size and operational maturity to justify it
  - A managed service like EKS, GKE, or AKS fits your platform anyway

The Bottom Line

Docker Compose and Docker Swarm aren’t competitors — they’re different tools on the same spectrum. Compose is for defining and running your stack. Swarm is for distributing and scaling it across machines. The fact that they share the same file format is a gift from the Docker gods.

Start with Compose. It handles more than you think. When your single server starts sweating, graduate to Swarm. When your engineering org looks like a small country, consider Kubernetes.

The best orchestration tool is the one your team can actually operate. There’s no prize for running Kubernetes if a Compose file on a single server would’ve done the job. Complexity isn’t a badge of honor — it’s a maintenance bill.

Keep it simple. Ship your code. Scale when you need to, not when Hacker News tells you to.

