You’re running 40 containers and you need to restart only the ones running Node.js apps that belong to the payment service. Good luck grepping container names.
Or use labels. Labels are just key-value metadata attached to containers, and they’re absurdly underused.
## What Are Labels?
Labels are arbitrary key-value pairs you attach to containers, images, volumes, networks, and services. They’re free. They don’t do anything by themselves. But they unlock powerful filtering, routing, and automation.
## Setting Labels
When you run a container:
```bash
docker run -d \
  --name api-server \
  --label app=payment-service \
  --label version=2.1.0 \
  --label environment=production \
  --label team=platform \
  myimage:latest
```

Or in docker-compose:
```yaml
services:
  api:
    image: myimage:latest
    labels:
      app: payment-service
      version: "2.1.0"
      environment: production
      team: platform
      criticality: high
```

## Filtering by Labels
Now you can find containers without guessing names:
```bash
# Show only payment-service containers
docker ps --filter "label=app=payment-service"

# Show only production containers
docker ps --filter "label=environment=production"

# Multiple filters (AND logic)
docker ps \
  --filter "label=environment=production" \
  --filter "label=team=platform"

# Containers with a label, regardless of value
docker ps --filter "label=criticality"
```

Combine with other filters:

```bash
docker ps --filter "status=exited" --filter "label=environment=production"
```

## Practical Use Cases
### Traefik Routing
Traefik uses labels to route traffic to containers. No separate config file needed:
```yaml
services:
  web:
    image: myapp:latest
    labels:
      traefik.enable: "true"
      traefik.http.routers.web.rule: "Host(`example.com`)"
      traefik.http.routers.web.entrypoints: "websecure"
      traefik.http.services.web.loadbalancer.server.port: "8080"
```

When the container starts, Traefik reads the labels and automatically routes traffic.
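For the labels to be picked up, Traefik itself must run with its Docker provider enabled. A minimal sketch (the image tag and entrypoint name are illustrative, and TLS setup is omitted):

```yaml
services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      # Only route to containers that opt in with traefik.enable=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.websecure.address=:443
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
```

With `exposedbydefault=false`, the `traefik.enable: "true"` label becomes the opt-in switch, which is exactly the labels-as-configuration pattern this section describes.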
### Watchtower Image Updates
Watchtower automatically updates running containers when new images are pushed. Use labels to control which containers it touches:
```yaml
services:
  critical-db:
    image: postgres:latest
    labels:
      com.centurylinklabs.watchtower.enable: "false"

  api:
    image: myapi:latest
    labels:
      com.centurylinklabs.watchtower.enable: "true"
```

Only the `api` container gets auto-updated.
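Watchtower itself also runs as a container. With its `--label-enable` flag, it only touches containers that explicitly opt in, so anything unlabeled is left alone by default. A sketch (check the Watchtower docs for the flags your version supports):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    # Only update containers labeled com.centurylinklabs.watchtower.enable=true,
    # checking every 300 seconds
    command: --label-enable --interval 300
```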
### Automation and Scripting
Backup only labeled containers:
```bash
#!/bin/bash
for container in $(docker ps -q --filter "label=backup=true"); do
  docker exec "$container" /scripts/backup.sh
done
```

Restart only a specific service:

```bash
docker restart $(docker ps -q --filter "label=app=payment-service")
```

Check which team owns a container:

```bash
docker inspect --format='{{.Config.Labels.team}}' mycontainer
```

## Documentation
Labels are self-documenting. New team member? They can see who owns what:
```bash
docker inspect api-server | jq '.[0].Config.Labels'
```

```json
{
  "app": "payment-service",
  "team": "platform",
  "owner": "alice@example.com",
  "runbook": "https://wiki.example.com/payment-service",
  "pagerduty": "https://pagerduty.com/incidents?service=payment"
}
```

Then jump straight to the runbook. (Note the `.[0]`: `docker inspect` outputs a JSON array, so `.Config.Labels` alone would fail.)
## Label Naming Conventions
Reverse domain notation prevents collisions:
```yaml
labels:
  # Standard labels
  app: payment-service
  environment: production
  version: "2.1.0"

  # Vendor-specific (prefixed)
  com.example.team: platform
  com.example.owner: alice
  com.example.criticality: high

  # Third-party tools
  traefik.enable: "true"
  com.centurylinklabs.watchtower.enable: "true"
```

Common conventions:
- `app` — Application name
- `environment` — production, staging, development
- `version` — Git tag or semantic version
- `team` — Owning team
- `criticality` — high, medium, low (for SLA/alerting)
- `owner` — Email or username
- `backup` — true/false
- `monitoring` — true/false
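A convention is only useful if it's enforced. As a sketch, a POSIX-shell pre-deploy check that a label listing contains the required keys. `check_labels` and the sample input are hypothetical; in practice you would feed it a `key=value`-per-line dump of a container's labels from `docker inspect`:

```shell
#!/bin/sh
# Verify that a newline-separated "key=value" label listing contains every
# label our (hypothetical) conventions require. Substring matching is crude,
# but it keeps the sketch dependency-free.
check_labels() {
  labels="$1"
  missing=""
  for key in app environment team; do
    case "$labels" in
      *"$key="*) ;;                   # required label present
      *) missing="$missing $key" ;;   # required label absent
    esac
  done
  if [ -n "$missing" ]; then
    echo "missing labels:$missing"
    return 1
  fi
  echo "ok"
}

check_labels "app=payment-api
environment=production
team=backend"
# prints: ok
```

Wire it into CI or a deploy hook and unlabeled containers never reach production in the first place.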
## Querying Labels Programmatically
Get all labels as JSON:
```bash
docker inspect mycontainer | jq '.[0].Config.Labels'
```

Get a specific label:

```bash
docker inspect --format='{{.Config.Labels.app}}' mycontainer
# payment-service
```

List all containers with their app label:

```bash
docker ps --format='table {{.Names}}\t{{.Label "app"}}'
```

## A Real Workflow
Setup:
```yaml
version: '3.8'

services:
  api:
    image: myapi:latest
    labels:
      app: payment-api
      environment: production
      team: backend
      criticality: high

  worker:
    image: myworker:latest
    labels:
      app: payment-worker
      environment: production
      team: backend
      criticality: high

  debug-container:
    image: ubuntu:latest
    labels:
      app: debug-shell
      environment: development
      team: infrastructure
```

Now, restart all production containers:
```bash
docker restart $(docker ps -q --filter "label=environment=production")
```

Check the criticality of all running containers:
```bash
for cid in $(docker ps -q); do
  name=$(docker inspect --format='{{.Name}}' "$cid" | cut -c2-)
  criticality=$(docker inspect --format='{{.Config.Labels.criticality}}' "$cid")
  echo "$name: $criticality"
done
```

## Labels on Images vs Containers
Labels aren’t just for running containers. You can bake them into images in your Dockerfile:
```dockerfile
FROM node:20-alpine

LABEL maintainer="alice@example.com"
LABEL org.opencontainers.image.title="Payment API"
LABEL org.opencontainers.image.version="2.1.0"
LABEL org.opencontainers.image.source="https://github.com/example/payment-api"
LABEL org.opencontainers.image.documentation="https://docs.example.com/payment-api"
```

The `org.opencontainers.image.*` namespace is a proper standard — tools like GitHub Container Registry and Docker Hub display these automatically.
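Rather than hard-coding the version, a common pattern is to inject it at build time with build args. A sketch, where the `VERSION` and `GIT_SHA` arg names are assumptions:

```dockerfile
FROM node:20-alpine

# Hypothetical build args, passed in from CI
ARG VERSION=dev
ARG GIT_SHA=unknown

LABEL org.opencontainers.image.version=$VERSION
LABEL org.opencontainers.image.revision=$GIT_SHA
```

Built with, for example:

```bash
docker build \
  --build-arg VERSION="$(git describe --tags)" \
  --build-arg GIT_SHA="$(git rev-parse HEAD)" \
  -t myapi:latest .
```

Every image then carries its own provenance, which pays off when you're trying to work out what exactly is running in production.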
Check the labels on any image you have pulled locally:
```bash
docker inspect nginx:latest | jq '.[].Config.Labels'
```

## Prometheus and Labels
If you’re scraping containers through Prometheus’s Docker service discovery (or exporting container metrics with cAdvisor), Docker labels flow through to Prometheus automatically. You can filter dashboards and alerts by label:
```yaml
scrape_configs:
  - job_name: 'containers'
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
    # Docker SD exposes container labels as __meta_docker_container_label_*
    relabel_configs:
      - source_labels: [__meta_docker_container_label_environment]
        target_label: environment
```

Now your Grafana dashboards can filter by `environment="production"` and your alerts can page the right team based on a relabeled `team` label.
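Routing those pages is then an Alertmanager concern. As a sketch (the receiver names are assumptions), a route that matches on the relabeled `team` label:

```yaml
route:
  receiver: default
  routes:
    - matchers:
        - team="backend"
      receiver: backend-pager

receivers:
  - name: default
  - name: backend-pager
    # hypothetical PagerDuty/Slack receiver config goes here
```

The label you set in docker-compose ends up deciding who gets woken up, with no per-alert configuration in between.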
Labels are zero overhead and zero configuration friction. Add them now, profit later when you need to automate something at 3 AM.