Your Containers Are Talking — But Are They Listening?
Here’s a scenario you’ve probably lived through: you spin up two containers, try to get them talking, and spend 45 minutes figuring out why one can’t reach the other. Ping works from your laptop. Nothing works between the containers. You restart everything. Still broken. You Google “docker networking” and get a wall of documentation that reads like a networking textbook from 2003.
That ends today.
Docker networking is actually pretty elegant once you understand what it’s doing. There are a handful of network types, each with a clear purpose. Once you know which one to reach for, the “why isn’t this working” problems mostly disappear.
The Default Bridge: Fine, But Frustrating
When you run a container without specifying a network, it lands on the default bridge network (bridge). Every Docker installation has one. It works. Containers can reach the internet. You can publish ports. Life goes on.
The catch? Containers on the default bridge cannot resolve each other by name. No DNS. You’d need to hardcode IP addresses, which are ephemeral and change on restart. This is the gotcha that bites beginners hardest.
```shell
# These two containers are on the default bridge
docker run -d --name web nginx
docker run -d --name app myapp

# From inside 'app', this will NOT work:
curl http://web/api/health        # DNS resolution fails — unknown host

# You'd have to do something painful like:
curl http://172.17.0.2/api/health # Hardcoded IP. Don't do this.
```

The default bridge is fine for quick one-offs. For anything with multiple services, move on.
Custom Bridge Networks: The Right Default
Custom bridge networks fix the DNS problem. Create your own network, attach containers to it, and Docker gives you automatic name resolution between them. The container name becomes a hostname. It just works.
```shell
docker network create myapp-net

docker run -d --name web --network myapp-net nginx
docker run -d --name app --network myapp-net myapp

# From inside 'app', this works now:
curl http://web/api/health # DNS resolves 'web' to the right container
```

This is also how Docker Compose works under the hood — it creates a custom bridge network per project and attaches all services to it automatically. When you define services in a Compose file, they can reference each other by service name. No manual network creation needed.
Custom bridge networks also give you isolation. Containers on network-a can’t reach containers on network-b unless you explicitly connect them. That’s a feature, not a bug — it’s how you build proper security boundaries between services.
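A quick shell sketch of that boundary, using hypothetical network and container names: two containers start out isolated, and one is then explicitly connected to the other's network.

```shell
# Two isolated networks, one container on each (names are illustrative)
docker network create network-a
docker network create network-b
docker run -d --name svc-a --network network-a alpine sleep infinity
docker run -d --name svc-b --network network-b alpine sleep infinity

# At this point svc-a has no route to svc-b. Bridge the gap explicitly:
docker network connect network-b svc-a

# svc-a now sits on both networks and can resolve 'svc-b' by name
```

Until that `docker network connect`, the two containers can't see each other at all, which is exactly the isolation you want by default.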
Docker Compose With Explicit Networks
Here’s where it gets practical. Most real stacks need some containers exposed to each other but not to everything. A frontend that talks to an API, an API that talks to a database, and a database that talks to nobody except the API.
```yaml
services:
  frontend:
    image: nginx
    ports:
      - "80:80"
    networks:
      - public

  api:
    image: myapp-api
    networks:
      - public
      - internal

  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: secret
    networks:
      - internal

networks:
  public:
  internal:
```

Notice what's happening here. The frontend only lives on public — it can reach the api but has no path to db. The api sits on both networks, bridging them. The db only lives on internal and is completely unreachable from frontend or the outside world.
This is the pattern you want. Expose only what needs to be exposed. The database should never be reachable from your load balancer, full stop.
Port Publishing vs. Internal Networking
Here’s a thing people get wrong: they publish ports for every service “just in case.”
```yaml
# Please don't do this
services:
  api:
    ports:
      - "3000:3000" # Published to host — reachable from outside
  db:
    ports:
      - "5432:5432" # Why? Nobody outside needs this
  redis:
    ports:
      - "6379:6379" # Also no
```

If your api service talks to db over the internal Docker network, db doesn't need a published port. Published ports poke holes through to the host. Internal networking between containers on the same Docker network doesn't need them.
Rule of thumb: only publish ports that need to be reachable from outside Docker — typically just your reverse proxy or your app’s public-facing service.
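Applied to the stack above, that rule looks like this: a minimal Compose sketch, assuming a hypothetical reverse proxy in front, where the proxy is the only service with a `ports:` section at all.

```yaml
# Sketch: one public entry point, everything else internal
services:
  proxy:
    image: nginx
    ports:
      - "443:443"     # the single published port on the whole stack
  api:
    image: myapp-api  # hypothetical image; no ports: section, reached via proxy
  db:
    image: postgres   # reached only by api over the shared Compose network
```

The api and db still talk to the proxy and to each other by service name over Compose's project network; nothing about removing `ports:` breaks container-to-container traffic.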
Host Networking: Maximum Performance, Zero Isolation
Host networking skips the Docker network stack entirely. The container uses the host’s network interfaces directly — same IP, same ports, no NAT.
```shell
docker run --network host nginx
# Nginx is now listening on the host's port 80, not a container port
```

Use cases are narrow but real: performance-sensitive applications (game servers, certain monitoring agents, anything doing raw socket work), or containers that need to discover and bind to host-level interfaces.
The trade-off is obvious — you lose network isolation completely. Port conflicts become your problem. And it only works on Linux; on Mac and Windows, Docker runs inside a VM, so --network host gives you the VM’s network, not your actual machine’s.
Overlay Networks: When One Host Isn’t Enough
If you’re running Docker Swarm or coordinating containers across multiple hosts, bridge networks don’t cut it — they’re local to a single machine. Overlay networks span across hosts, letting containers on different machines talk to each other as if they were on the same local network.
```shell
# Initialize Swarm first
docker swarm init

# Create an overlay network
docker network create --driver overlay myapp-overlay

# Deploy a service that uses it
docker service create --network myapp-overlay --name api myapp-api
```

Overlay networks handle the cross-host tunneling transparently, and can encrypt application traffic too (pass `--opt encrypted` when creating the network). From the container's perspective, it's just another network. From your perspective, it's Docker handling the hard parts of distributed networking.
If you’re not running Swarm, you don’t need overlay networks. Compose on a single host + custom bridge is the right answer.
macvlan and None: The Edge Cases
Two more network types are worth knowing about, even if you rarely touch them.
macvlan assigns a real MAC address to a container, making it look like a physical device on your network. Your router sees it, your DHCP server can give it an IP. Useful for legacy applications that expect to live on the real network, or IoT-adjacent setups.
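A sketch of what that looks like in practice. The subnet, gateway, parent interface, and static IP here are all placeholders: adjust them to match your actual LAN, and note that macvlan requires a Linux host with access to the named interface.

```shell
# Create a macvlan network bound to the host's physical NIC
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  lan-net

# The container now appears on the real LAN with its own MAC and IP
docker run -d --name legacy-app --network lan-net --ip 192.168.1.50 myapp
```

One known quirk: by default the host itself can't reach macvlan containers directly, even though everything else on the LAN can.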
none is exactly what it sounds like — no network whatsoever. The container is completely isolated. Good for batch jobs that process data and don’t need to talk to anything, or security-sensitive workloads where you want to be absolutely sure there’s no network path.
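Verifying that isolation takes one command; this sketch uses a throwaway alpine container.

```shell
# Fully isolated: no interface except loopback
docker run --rm --network none alpine ip addr
# The output lists only 'lo': no eth0, no route to anywhere
```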
Network Inspection Commands You’ll Actually Use
```shell
# List all networks
docker network ls

# Inspect a network — see connected containers, subnet, gateway
docker network inspect myapp-net

# Connect a running container to a network
docker network connect myapp-net mycontainer

# Disconnect
docker network disconnect myapp-net mycontainer

# Remove a network (only works if no containers are attached)
docker network rm myapp-net

# Prune unused networks (safe to run regularly)
docker network prune
```

The inspect command is the one you'll use most for debugging. It shows you exactly which containers are on a network, their IP addresses within that network, and the subnet configuration. When something isn't talking to something else, docker network inspect tells you whether they're even on the same network.
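When you only need one field, inspect's `--format` (or `-f`) flag takes a Go template and trims the output down. Two sketches, reusing the hypothetical myapp-net network from earlier:

```shell
# Names of the containers attached to a network, nothing else
docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' myapp-net

# Just the subnet
docker network inspect -f '{{(index .IPAM.Config 0).Subnet}}' myapp-net
```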
The Debugging Checklist
When container-to-container communication breaks, run through this:
- Are both containers on the same named network? Not just “a” Docker network — the same one.
- Are you using a custom bridge network, not the default? DNS resolution only works on custom networks.
- Are you using the correct name? The container name (or service name in Compose) is the hostname. Typos happen.
- Is the app actually listening inside the container, and on the right interface? Publishing to the host is separate from the container's internal listener. If the app only listens on 127.0.0.1 inside the container, Docker networking can't help you.
- Is there a firewall on the host blocking inter-container traffic? Unlikely, but worth checking if everything else looks right.
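The 127.0.0.1 item on that checklist is worth seeing concretely. Here's a minimal sketch using a throwaway Python socket in place of a real app (no Docker required): a loopback-only bind is unreachable from other containers no matter how your networks are configured, while a 0.0.0.0 bind listens on every interface, including the container's Docker network interface.

```shell
python3 - <<'PY'
import socket

lo = socket.socket()
lo.bind(("127.0.0.1", 0))    # loopback only: other containers can't connect
print("loopback-only bind:", lo.getsockname()[0])

all_if = socket.socket()
all_if.bind(("0.0.0.0", 0))  # all interfaces: reachable over Docker networks
print("all-interfaces bind:", all_if.getsockname()[0])
PY
```

Inside a real container, check which of these your app is doing with `ss -tlnp` (or `netstat -tlnp`), and fix the bind address rather than the network.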
Docker networking rewards the 10 minutes you spend understanding it upfront. Your 2 AM self — staring at a connection refused error with a deployment half-finished — will be very glad you read this.