Overlay network driver
The overlay network driver creates a distributed network among multiple
Docker daemon hosts. This network sits on top of (overlays) the host-specific
networks, allowing containers connected to it to communicate securely when
encryption is enabled. Docker transparently handles routing of each packet to
and from the correct Docker daemon host and the correct destination container.
You can create user-defined overlay networks using docker network create,
in the same way that you can create user-defined bridge networks. Services
or containers can be connected to more than one network at a time. Services or
containers can only communicate across networks they're each connected to.
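For example, the following sketch (network and container names are illustrative, and the commands assume the host is already part of a swarm, as described under Requirements below) creates two attachable overlay networks and connects one container to both:

$ docker network create -d overlay --attachable overlay-a
$ docker network create -d overlay --attachable overlay-b
$ docker run -d --name app --network overlay-a nginx
$ docker network connect overlay-b app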
Overlay networks are often used to create a connection between Swarm services, but you can also use them to connect standalone containers running on different hosts. When using standalone containers, you still need to use Swarm mode to establish a connection between the hosts.
This page describes overlay networks in general and their use with standalone containers. For information about overlay networks for Swarm services, see Manage Swarm service networks.
Requirements
Docker hosts must be part of a swarm to use overlay networks, even when connecting standalone containers. The following ports must be open between participating hosts:
- 2377/tcp: Swarm control plane (configurable)
- 4789/udp: Overlay traffic (configurable)
- 7946/tcp and 7946/udp: Node communication (not configurable)
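How you open these ports depends on your environment. As one possible example, on hosts that use ufw as the firewall (an assumption; use the equivalent rules for firewalld, nftables, or your cloud provider's security groups), the rules could look like this, ideally restricted to the addresses of the other participating hosts:

$ sudo ufw allow 2377/tcp
$ sudo ufw allow 7946/tcp
$ sudo ufw allow 7946/udp
$ sudo ufw allow 4789/udp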
Create an overlay network
The following table lists the ports that need to be open to each host participating in an overlay network:
| Ports | Description |
|---|---|
| 2377/tcp | The default Swarm control plane port; configurable with docker swarm join --listen-addr |
| 4789/udp | The default overlay traffic port; configurable with docker swarm init --data-path-port |
| 7946/tcp, 7946/udp | Used for communication among nodes; not configurable |
To create an overlay network that containers on other Docker hosts can connect to, run the following command:
$ docker network create -d overlay --attachable my-attachable-overlay
The --attachable option enables both standalone containers
and Swarm services to connect to the overlay network.
Without --attachable, only Swarm services can connect to the network.
You can specify the IP address range, subnet, gateway, and other options. See
docker network create --help for details.
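For example, an overlay network with an explicit subnet and gateway might be created as follows. The 10.11.0.0/16 range and the network name are only illustrative; choose values that don't overlap with your existing networks:

$ docker network create -d overlay \
  --subnet 10.11.0.0/16 \
  --gateway 10.11.0.2 \
  --attachable \
  my-custom-overlay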
Encrypt traffic on an overlay network
Use the --opt encrypted flag to encrypt the application data
transmitted over the overlay network:
$ docker network create \
--opt encrypted \
--driver overlay \
--attachable \
my-attachable-multi-host-network
This enables IPsec encryption at the level of the Virtual Extensible LAN (VXLAN). This encryption imposes a non-negligible performance penalty, so you should test this option before using it in production.
Warning: Don't attach Windows containers to encrypted overlay networks.
Overlay network encryption isn't supported on Windows. Swarm doesn't report an error when a Windows host attempts to connect to an encrypted overlay network, but networking for the Windows containers is affected as follows:
- Windows containers can't communicate with Linux containers on the network
- Data traffic between Windows containers on the network isn't encrypted
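If you want to confirm that a network was created with encryption enabled, docker network inspect shows the driver options; for the network created above, the Options field should include an encrypted entry:

$ docker network inspect my-attachable-multi-host-network --format '{{json .Options}}'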
Attach a container to an overlay network
Adding containers to an overlay network gives them the ability to communicate with other containers without having to set up routing on the individual Docker daemon hosts. A prerequisite for doing this is that the hosts have joined the same Swarm.
To attach a busybox container to an overlay network named multi-host-network:
$ docker run -it --network multi-host-network busybox sh
Note: This only works if the overlay network is attachable (created with the --attachable flag).
Container discovery
Publishing ports of a container on an overlay network opens the ports to other containers on the same network. Containers are discoverable by doing a DNS lookup using the container name.
| Flag value | Description |
|---|---|
| -p 8080:80 | Map TCP port 80 in the container to port 8080 on the overlay network. |
| -p 8080:80/udp | Map UDP port 80 in the container to port 8080 on the overlay network. |
| -p 8080:80/sctp | Map SCTP port 80 in the container to port 8080 on the overlay network. |
| -p 8080:80/tcp -p 8080:80/udp | Map TCP port 80 in the container to TCP port 8080 on the overlay network, and map UDP port 80 in the container to UDP port 8080 on the overlay network. |
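As a quick illustration of name-based discovery (the container name web is hypothetical, and the commands assume the attachable overlay network created earlier on a swarm-joined host), one container can look up and reach another by its name:

$ docker run -d --name web --network my-attachable-overlay -p 8080:80 nginx
$ docker run --rm --network my-attachable-overlay busybox nslookup web
$ docker run --rm --network my-attachable-overlay busybox wget -qO- http://web:80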
Connection limit for overlay networks
Due to limitations set by the Linux kernel, overlay networks become unstable and inter-container communications may break when 1000 containers are co-located on the same host.
For more information about this limitation, see moby/moby#44973.
Usage examples
This section provides hands-on examples for working with overlay networks. These examples cover swarm services and standalone containers on multiple Docker hosts.
Prerequisites
All examples require at least a single-node swarm. Initialize one by running
docker swarm init on the host. You can run these examples on multi-node
swarms as well.
Use the default overlay network
This example shows how the default overlay network works with swarm services.
You'll create an nginx service and examine the network from the service
containers' perspective.
Prerequisites for multi-node setup
This walkthrough requires three Docker hosts that can communicate with each other on the same network with no firewall blocking traffic between them:
- manager: Functions as both a manager and a worker
- worker-1: Functions as a worker only
- worker-2: Functions as a worker only
If you don't have three hosts available, you can set up three virtual machines on a cloud provider with Docker installed.
Create the swarm
On manager, initialize the swarm. If the host has one network interface, the --advertise-addr flag is optional:

$ docker swarm init --advertise-addr=<IP-ADDRESS-OF-MANAGER>

Save the join token displayed for use with the workers.

On worker-1, join the swarm:

$ docker swarm join --token <TOKEN> \
  --advertise-addr <IP-ADDRESS-OF-WORKER-1> \
  <IP-ADDRESS-OF-MANAGER>:2377

On worker-2, join the swarm:

$ docker swarm join --token <TOKEN> \
  --advertise-addr <IP-ADDRESS-OF-WORKER-2> \
  <IP-ADDRESS-OF-MANAGER>:2377

On manager, list all nodes:

$ docker node ls

ID                            HOSTNAME           STATUS    AVAILABILITY   MANAGER STATUS
d68ace5iraw6whp7llvgjpu48 *   ip-172-31-34-146   Ready     Active         Leader
nvp5rwavvb8lhdggo8fcf7plg     ip-172-31-35-151   Ready     Active
ouvx2l7qfcxisoyms8mtkgahw     ip-172-31-36-89    Ready     Active

Filter by role if needed:

$ docker node ls --filter role=manager
$ docker node ls --filter role=worker

List Docker networks on all hosts. Each now has an overlay network called ingress and a bridge network called docker_gwbridge:

$ docker network ls

NETWORK ID     NAME              DRIVER    SCOPE
495c570066be   bridge            bridge    local
961c6cae9945   docker_gwbridge   bridge    local
ff35ceda3643   host              host      local
trtnl4tqnc3n   ingress           overlay   swarm
c8357deec9cb   none              null      local
The docker_gwbridge connects the ingress network to the Docker host's
network interface. If you create services without specifying a network, they
connect to ingress. It's recommended to use separate overlay networks for each
application or group of related applications.
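As a sketch of that recommendation (the network and service names frontend-net, backend-net, web, and queue are purely illustrative), each application gets its own overlay network:

$ docker network create -d overlay frontend-net
$ docker network create -d overlay backend-net
$ docker service create --name web --network frontend-net nginx
$ docker service create --name queue --network backend-net redis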
Create the services
On manager, create a new overlay network:

$ docker network create -d overlay nginx-net

The overlay network is automatically created on worker nodes when they run service tasks that need it.

On manager, create a 5-replica Nginx service connected to nginx-net:

Note: Services can only be created on a manager.

$ docker service create \
  --name my-nginx \
  --publish target=80,published=80 \
  --replicas=5 \
  --network nginx-net \
  nginx

The default ingress publish mode means you can browse to port 80 on any node and connect to one of the 5 service tasks, even if no tasks run on that node.

Monitor service creation progress:

$ docker service ls

Inspect the nginx-net network on all hosts. The Containers section lists all service tasks connected to the overlay network from that host.

From manager, inspect the service:

$ docker service inspect my-nginx

Notice the information about ports and endpoints.
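As an optional check of the routing mesh, you can request port 80 on the IP address of any node, even one that isn't running a task (this assumes curl is available on the machine you run it from):

$ curl http://<IP-ADDRESS-OF-WORKER-2>:80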
Create a second network and update the service to use it:

$ docker network create -d overlay nginx-net-2

$ docker service update \
  --network-add nginx-net-2 \
  --network-rm nginx-net \
  my-nginx

Verify the update completed:

$ docker service ls

Inspect both networks to verify containers moved from nginx-net to nginx-net-2.

Note: Overlay networks are automatically created on swarm worker nodes as needed, but aren't automatically removed.

Clean up:

$ docker service rm my-nginx
$ docker network rm nginx-net nginx-net-2
Use a user-defined overlay network
This example shows the recommended approach for production services using custom overlay networks.
Prerequisites
This assumes the swarm is already set up and you're on a manager node.
Steps
Create a user-defined overlay network:

$ docker network create -d overlay my-overlay

Start a service using the overlay network, publishing port 80 to port 8080:

$ docker service create \
  --name my-nginx \
  --network my-overlay \
  --replicas 1 \
  --publish published=8080,target=80 \
  nginx:latest

Verify the service task is connected to the network:

$ docker network inspect my-overlay

Check the Containers section for the my-nginx service task.

Clean up:

$ docker service rm my-nginx
$ docker network rm my-overlay
Use an overlay network for standalone containers
This example demonstrates DNS container discovery between standalone containers on different Docker hosts using an overlay network.
Prerequisites
You need two Docker hosts that can communicate with each other with the following ports open between them:
- TCP port 2377
- TCP and UDP port 7946
- UDP port 4789
This example refers to the hosts as host1 and host2.
Steps
Set up the swarm:
On host1, initialize a swarm:

$ docker swarm init
Swarm initialized: current node (vz1mm9am11qcmo979tlrlox42) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5g90q48weqrtqryq4kj6ow0e8xm9wmv9o6vgqc5j320ymybd5c-8ex8j0bc40s6hgvy5ui5gl4gy 172.31.47.252:2377

On host2, join the swarm using the token from the previous output:

$ docker swarm join --token <your_token> <your_ip_address>:2377
This node joined a swarm as a worker.

If the join fails, run docker swarm leave --force on host2, verify network and firewall settings, and try again.

On host1, create an attachable overlay network:

$ docker network create --driver=overlay --attachable test-net
uqsof8phj3ak0rq9k86zta6ht

Note the returned network ID.
On host1, start an interactive container that connects to test-net:

$ docker run -it --name alpine1 --network test-net alpine
/ #

On host2, list available networks. Notice that test-net doesn't exist yet:

$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
ec299350b504   bridge            bridge    local
66e77d0d0e9a   docker_gwbridge   bridge    local
9f6ae26ccb82   host              host      local
omvdxqrda80z   ingress           overlay   swarm
b65c952a4b2b   none              null      local

On host2, start a detached, interactive container that connects to test-net:

$ docker run -dit --name alpine2 --network test-net alpine
fb635f5ece59563e7b8b99556f816d24e6949a5f6a5b1fbd92ca244db17a4342

Note: Automatic DNS container discovery only works with unique container names.

On host2, verify that test-net was created with the same network ID as on host1:

$ docker network ls
NETWORK ID     NAME       DRIVER    SCOPE
...
uqsof8phj3ak   test-net   overlay   swarm

On host1, ping alpine2 from within alpine1:

/ # ping -c 2 alpine2
PING alpine2 (10.0.0.5): 56 data bytes
64 bytes from 10.0.0.5: seq=0 ttl=64 time=0.600 ms
64 bytes from 10.0.0.5: seq=1 ttl=64 time=0.555 ms

--- alpine2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.555/0.577/0.600 ms

The two containers communicate over the overlay network connecting the two hosts. You can also run another container on host2 and ping alpine1:

$ docker run -it --rm --name alpine3 --network test-net alpine
/ # ping -c 2 alpine1
/ # exit
host1, close thealpine1session (which stops the container):/ # exitClean up. You must stop and remove containers on each host independently:
On
host2:$ docker container stop alpine2 $ docker network ls $ docker container rm alpine2When you stop
alpine2,test-netdisappears fromhost2.On
host1:$ docker container rm alpine1 $ docker network rm test-net
Next steps
- Learn about networking from the container's point of view
- Learn about standalone bridge networks
- Learn about Macvlan networks