
Diving into Docker (Part 6): Docker Networking

Learn how Docker connects everything behind the scenes


Finally, here comes the most interesting post of the Diving into Docker series. I've learned so much about networking from the book Docker Deep Dive by Nigel Poulton. It's going to be so much fun to see how Docker connects everything internally. So without further ado, let's get started.


When people hear Networking, it often sounds complex and scary. But the core ideas are actually simple once you break them down. Let's break Docker networking down and understand it. I will try my best to simplify it, so let's get started.

Docker networking is divided into three main parts:

  1. CNM (Container Network Model) – the design or the plan, you can say

  2. libnetwork – the implementation or the code that follows the plan

  3. Drivers – the actual network types (bridge, overlay, macvlan, etc.)

A simple way to remember this is:

  • CNM is like the blueprint of a house

  • libnetwork is like the construction team that follows the blueprint

  • Drivers are like the different types of houses you can build (apartment, villa, etc.)

Let’s go step by step and understand each of these in detail.


Part 1: CNM (Container Network Model)

CNM is not software you install. It’s a design document. Docker created CNM to define how container networking should be structured.

CNM says Docker networking is built from three building blocks:

  • Sandbox

  • Endpoint

  • Network

If you understand these three, you understand the base of Docker networking.


1) Sandbox

A sandbox is an isolated networking environment for a container. That sentence sounds heavy, but here’s what it really means:

Every container gets its own private networking setup, including:

  • its own IP address

  • its own network interfaces like eth0 inside the container

  • its own routing table and rules about where traffic should go

  • its own DNS settings

  • its own ports

So the sandbox is basically everything networking-related that belongs only to that container.

Think of it like this

  • Each container lives in its own room

  • That room has its own Wi‑Fi setup, its own address, and its own rules

  • Even if two containers are on the same machine, their “rooms” are still separate

On Linux, Docker achieves this separation using something called network namespaces. You don’t have to memorize that right now; just remember the outcome: containers behave like separate computers from a networking point of view.

The important thing to remember: even when Container A and Container B run on the same Docker host, their networking is still isolated because each has its own sandbox.


2) Endpoint

Your laptop can connect to Wi‑Fi because it has a network card. Containers don’t have physical network cards. So Docker creates a virtual network interface for them. CNM calls this an endpoint.

An endpoint’s job is simple: connect a container’s sandbox to a network.

There are two rules that make endpoints easy to understand:

  1. One endpoint connects to exactly one network.

  2. If a container needs to join two networks, it needs two endpoints.

For example, if a container must talk to both Network A and Network B, Docker gives it:

  • Endpoint 1 → Network A

  • Endpoint 2 → Network B

That’s how one container can be part of multiple networks at the same time.
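To make the three building blocks concrete, here is a minimal sketch of the CNM objects in Python. The class and field names are invented for illustration (Docker's real implementation is in Go, inside libnetwork); the point is just the relationships: a sandbox belongs to one container, each endpoint attaches to exactly one network, and joining two networks means two endpoints.

```python
from dataclasses import dataclass, field

@dataclass
class Network:
    name: str                       # e.g. a user-defined bridge network

@dataclass
class Endpoint:
    network: Network                # one endpoint -> exactly one network

@dataclass
class Sandbox:                      # the container's private network environment
    container: str
    endpoints: list = field(default_factory=list)

    def connect(self, network):
        """Joining a network means creating a new endpoint for it."""
        self.endpoints.append(Endpoint(network))

    def networks(self):
        return [ep.network.name for ep in self.endpoints]

box = Sandbox("c1")
box.connect(Network("net-a"))
box.connect(Network("net-b"))
# One container, two networks -> two endpoints in its sandbox.
```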


3) Network

In Docker, a network is like a virtual switch. It groups endpoints. If two containers connect to the same Docker network, they can usually talk to each other.

If they are not on the same network, they can’t talk to each other by default (unless you set up routing manually).

A very simple analogy is that a Docker network is like a WhatsApp group. People inside the same group can message each other. If you’re not in the group, you can’t message the group members.

Same idea with containers: same network = communication is possible.


Part 2: libnetwork

So far, CNM is just a design. It describes what should exist: sandboxes, endpoints, networks. But Docker needs real code that actually creates and manages these things.

That code is called libnetwork.

So, CNM says what should exist, libnetwork actually creates it and manages it.

Whenever you run commands like:

  • docker network create

  • docker run

  • docker network connect

Docker Engine uses libnetwork behind the scenes. You can think of libnetwork as Docker’s network manager.


Why did Docker separate networking from the daemon?

Early on, a lot of Docker networking code lived inside the Docker daemon (dockerd). But over time networking became bigger and more complicated:

  • multiple network types (bridge, overlay, macvlan…)

  • DNS and service discovery features

  • load balancing

  • multi-host networking

The daemon started becoming too large and messy. That’s why Docker engineers moved networking into a separate library called libnetwork.

This was good for several reasons: networking could improve without constant changes to the main daemon, the design became more modular (a cleaner separation), and other projects could reuse the networking library.

So when someone says networking was ripped out of the daemon and refactored into libnetwork, they mean it was separated into its own component.


Control plane vs Data plane

These two terms show up a lot in networking. Here’s what they mean.

Control plane (libnetwork)

The control plane is responsible for deciding how the network should behave and setting up the rules. It does not move packets itself. Instead, it programs the system so packets know where to go. In Docker networking, the control plane is mainly handled by libnetwork.

For example, when you run docker network create mynet, the control plane performs multiple tasks.

1. Create a Network Object

Docker stores metadata like:

Network Name: mynet
Driver: bridge
Subnet: 172.18.0.0/16
Gateway: 172.18.0.1

This is stored in Docker's internal database.

2. IP Address Management (IPAM)

Docker assigns a subnet to the network.

Example:

Subnet: 172.18.0.0/16
Gateway: 172.18.0.1

Later, when containers join the network:

Container A → 172.18.0.2
Container B → 172.18.0.3

IP allocation is controlled by the IPAM system in the control plane.
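The IPAM behavior above can be sketched as a toy allocator: hand out addresses sequentially from the network's subnet, reserving the first usable address as the gateway. This is a simplification of what Docker's real IPAM driver does; the subnet matches the example above, and the class name is made up.

```python
import ipaddress

class Ipam:
    def __init__(self, subnet):
        # hosts() yields the usable addresses of the subnet in order
        self.hosts = ipaddress.ip_network(subnet).hosts()
        self.gateway = str(next(self.hosts))       # first usable IP -> gateway
        self.allocated = {}

    def allocate(self, container):
        """Assign the next free address in the subnet to a container."""
        self.allocated[container] = str(next(self.hosts))
        return self.allocated[container]

ipam = Ipam("172.18.0.0/16")
ipam.allocate("web")   # 172.18.0.2 (gateway took 172.18.0.1)
ipam.allocate("db")    # 172.18.0.3
```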

3. Select Network Driver

Docker supports different drivers:

  • bridge

  • overlay

  • macvlan

  • host

  • none

Example:

Driver = bridge

The control plane decides which driver will implement the network.

4. Maintain Network State

Control plane tracks:

  • which containers belong to the network

  • assigned IPs

  • driver configuration

  • DNS entries

Example state:

Network: mynet
Containers:
   web -> 172.18.0.2
   db  -> 172.18.0.3

5. Configure System Networking

The control plane tells Linux:

  • create bridges

  • configure routes

  • configure iptables rules

But it doesn't move packets itself. Instead, it programs the data plane.

Data plane (driver)

Data plane means where the real packets move.

Imagine two containers:

Container A
172.18.0.2

Container B
172.18.0.3

They are connected to:

docker0 bridge

Now the network structure looks like this:

Container A
   |
 vethA
   |
 docker0 bridge
   |
 vethB
   |
Container B

Now when A sends a packet to B:

ping 172.18.0.3

The data plane handles the packet movement.
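Since the bridge behaves like a Layer 2 switch, the data-plane path above can be modeled as a learning switch: the bridge remembers which port (veth) each MAC address was seen on, floods frames to unknown destinations, and forwards directly once a MAC is learned. This is a toy model, not how the kernel bridge is implemented, and the MACs and port names are illustrative.

```python
class Bridge:
    def __init__(self, ports):
        self.ports = ports          # e.g. ["vethA", "vethB"]
        self.table = {}             # learned MAC -> port mapping

    def forward(self, in_port, src_mac, dst_mac):
        self.table[src_mac] = in_port            # learn where the sender lives
        if dst_mac in self.table:
            return [self.table[dst_mac]]         # known MAC: one output port
        # unknown MAC: flood out every port except the one it arrived on
        return [p for p in self.ports if p != in_port]

docker0 = Bridge(["vethA", "vethB"])
docker0.forward("vethA", "aa:aa", "bb:bb")  # first frame floods to vethB
docker0.forward("vethB", "bb:bb", "aa:aa")  # the reply goes straight to vethA
```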


Implementation

Let’s walk through a simple example:

Step 1: Create a bridge network

You run:

docker network create -d bridge mynet

Docker Engine receives the command. It asks libnetwork to create a new network object. libnetwork stores name=mynet, driver=bridge, settings, IP ranges, etc., and then calls the bridge driver and says: build the real network for this.

The bridge driver then sets up Linux-level things like:

  • a Linux bridge device

  • NAT / iptables rules

  • isolation rules

Step 2: Run a container on that network

You run:

docker run --network mynet nginx

In this case, Docker asks libnetwork to attach this container to “mynet”. libnetwork then creates a sandbox (the container’s private network environment) and an endpoint.

Then libnetwork connects the endpoint to the network, assigns the IP and DNS settings, and finally asks the bridge driver to connect the endpoint to the bridge.

So a very clean way to remember:

  • libnetwork organizes and manages

  • the driver does the real network wiring


Single-host Bridge Networks

The simplest Docker network type is the single-host bridge network. The name tells you everything: single-host means it works only on one Docker machine, and bridge means it behaves like a Layer 2 switch.

Every Docker host gets a default bridge network. On Linux it’s called bridge. On Windows it’s called nat.

If you don’t specify a network, containers join this default one.

On Linux, this default bridge network is backed by a real Linux bridge called docker0.

Creating your own bridge network

Run the command below to create your own bridge network.

docker network create -d bridge localnet

Then run containers attached to it. For example, say we have two containers, c1 and c2.

Run c1 on localnet:

docker container run -d --name c1 \
  --network localnet \
  alpine sleep 1d

Run c2 on localnet (it also needs a long-running command, otherwise the detached container exits immediately):

docker container run -d --name c2 \
  --network localnet \
  ubuntu:latest sleep 1d

Now c2 can reach c1, and very importantly, it can resolve it by name, not just IP.

That leads to an important concept: Service discovery.


Service discovery

Container IP addresses are often not stable. For example, containers restart and may get a new IP, or scaling up/down creates new containers with new IPs.

So you don’t want your app to hardcode IP addresses.

Instead, you want to say: “talk to db”, “talk to api”, “talk to redis”

That is service discovery, automatic mapping from name → IP.

Docker supports this on the same network using an internal DNS system:

Docker runs an internal DNS server that knows container names and IPs, and containers have a local DNS resolver that forwards requests to Docker’s DNS.

So if you run ping c1 from inside c2, Docker DNS helps c2 translate “c1” into the correct IP.

This name-based discovery works only within the same Docker network. If two containers aren’t on the same network, Docker DNS won’t resolve them by name for each other.
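The key detail, that resolution is scoped to a network, can be sketched as a lookup keyed by (network, name). This is a conceptual model of Docker's internal DNS, not its actual data structure; the names and addresses are illustrative.

```python
# DNS records scoped per network: a name only resolves for containers
# that share the network it was registered on.
records = {
    ("localnet", "c1"): "172.19.0.2",   # illustrative address
    ("localnet", "c2"): "172.19.0.3",
}

def resolve(network, name):
    """Return the IP for `name` as seen from a container on `network`."""
    return records.get((network, name))  # None if not on the same network

resolve("localnet", "c1")   # works: same network
resolve("othernet", "c1")   # None: different network, no resolution
```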


Port mapping

Containers on a bridge network are mainly meant to talk to each other inside the same host/network. But what if you want a browser, something outside Docker, to reach a container web server? That’s where port mapping comes in.

For example, you have probably seen --publish 5000:80 in a docker run command. What does that mean?

It simply means the container listens on port 80 and the Docker host opens port 5000:

your machine (port 5000) → docker container (port 80)

So the outside world talks to the Docker host, and Docker forwards the traffic to the container. This is extremely common for local development.
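Conceptually, the host side of port mapping is a relay: accept connections on the host port and shuttle bytes to and from the container port. Docker actually wires most of this with iptables NAT rules (plus a userland proxy process in some configurations), so the sketch below is only a conceptual model; the function names and ports are made up.

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until src closes its side."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def forward(host_port, container_addr):
    """Listen on host_port and relay each connection to container_addr."""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", host_port))
    listener.listen()
    while True:
        client, _ = listener.accept()
        upstream = socket.create_connection(container_addr)
        # relay in both directions, one thread per direction
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

# Usage sketch for -p 5000:80 (run in a background thread):
# forward(5000, ("172.18.0.2", 80))
```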


Multi-host networking

Bridge networks are great when everything is on one machine. But real systems often run across multiple machines. For example, Host 1 runs some containers, Host 2 runs other containers, and you want containers across hosts to talk as if they were on one shared network.

By default, containers on different hosts are like people in different houses: they’re not on the same local network.

An overlay network solves this by creating a virtual network that spans multiple Docker hosts.

Overlay network

An overlay network is a virtual network built on top of the real network.

For example, you can consider like this:

  • real network = normal roads

  • overlay network = secret tunnels built on top of those roads

Containers use the tunnels so that, even across different machines, they behave like they’re on the same LAN.

How overlay “tunneling” works

When Container A on Host 1 sends data to Container B on Host 2:

  • A sends a normal packet, it thinks B is nearby.

  • Docker on Host 1 wraps that packet inside another packet like putting a letter in an envelope

  • the outer packet travels over the real network to Host 2

  • Docker on Host 2 unwraps it and delivers the original packet to B
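The envelope analogy above can be written out directly: wrap the inner container-to-container packet in an outer packet addressed host-to-host, then unwrap it on arrival. Docker's overlay driver does this with VXLAN, which nests real packet headers; this sketch just nests dictionaries, and all the addresses are illustrative.

```python
def encapsulate(inner, src_host, dst_host):
    """Wrap the container packet in an outer packet addressed host-to-host."""
    return {"src": src_host, "dst": dst_host, "payload": inner}

def decapsulate(outer):
    """Unwrap on the receiving host; the container sees the original packet."""
    return outer["payload"]

# inner packet: overlay-network IPs, as the containers see each other
inner = {"src": "10.0.0.2", "dst": "10.0.0.3", "data": "ping"}

# outer packet: real host IPs, as the physical network sees the traffic
outer = encapsulate(inner, "192.168.1.10", "192.168.1.11")

decapsulate(outer)  # Container B receives exactly what Container A sent
```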

Docker has a built-in overlay driver, so you can create an overlay network like:

docker network create -d overlay mynet

Usually this is used with Swarm or other orchestrators.


MACVLAN: making containers look like real devices on your LAN

Overlay is about connecting containers across hosts inside a Docker-managed virtual network. Macvlan is a different goal.

Macvlan is used when you want containers to join the real physical network directly.

In other words, instead of hiding containers behind the host like bridge / NAT often does, macvlan makes each container appear like a real machine on your network.

With macvlan, each container gets its own MAC address, its own IP address from your real LAN subnet

So your switch/router sees the container the same way it sees your laptop, a server, or a VM: as just another device on the network.

Note that macvlan often requires the host network card to be in promiscuous mode, which is sometimes blocked in corporate networks and by cloud providers.

So, macvlan is commonly a good fit in controlled environments like data centers.


Overlay vs Macvlan

Overlay:

  • Best for container-to-container communication across multiple Docker hosts.

  • Containers feel like they are on one shared Docker network.

  • Common in clusters, Swarm, and microservice setups

Macvlan:

  • Best when containers must be first-class citizens on the physical network

  • Containers get real LAN IPs and MACs

  • Common when containers must talk to physical servers directly and be reachable from outside without port mapping


Swarm publishing: Ingress mode vs Host mode

In Swarm, you usually deploy services and not individual containers. A service can have replicas running on different nodes. And to allow outside users to access a service, you publish a port.

For example,

  • published port (outside) = 5000

  • target port (inside container) = 80

Swarm supports two publishing modes:

1) Ingress mode (default)

  • Any node can accept traffic, even if it’s not running the container, and the cluster routes it internally to the right node.

  • Ingress mode is the default: any time you publish a service with -p or --publish, it uses ingress mode.

2) Host mode

  • Only nodes that are actually running the service accept traffic on that port.

  • To publish a service in host mode you need the long form of the --publish flag with mode=host. For example:

docker service create -d --name c1 \
  --publish published=5000,target=80,mode=host \
  nginx

In practice, people often talk about routing mesh and load balancing here. If multiple replicas exist, Swarm can spread incoming requests across them.


Conclusion

So in summary, Docker networking boils down to three things.

  • CNM defines the model: sandbox, endpoint, network.

  • libnetwork is the manager that creates and connects those pieces, and drivers (bridge/overlay/macvlan) do the real packet-moving work.

  • Bridge is for containers on one host, overlay connects containers across hosts like one network, and macvlan puts containers directly onto the physical LAN with their own MAC/IP.