Bridge network driver
A Docker bridge network has an IPv4 subnet and, optionally, an IPv6 subnet. Each container connected to the bridge network has a network interface with addresses in the network's subnets. By default, it:
- Allows unrestricted network access to containers in the network from the host, and from other containers connected to the same bridge network.
- Blocks access from containers in other networks and from outside the Docker host.
- Uses masquerading to give containers external network access. Devices on the host's external networks only see the IP address of the Docker host.
- Supports port publishing, where network traffic is forwarded between container ports and ports on host IP addresses. The published ports can be accessed from outside the Docker host, on its IP addresses.
In terms of Docker, a bridge network uses a software bridge which lets containers connected to the same bridge network communicate, while providing isolation from containers that aren't connected to that bridge network. By default, the Docker bridge driver automatically installs rules in the host machine so that containers connected to different bridge networks can only communicate with each other using published ports.
Bridge networks apply to containers running on the same Docker daemon host. For communication among containers running on different Docker daemon hosts, you can either manage routing at the OS level, or you can use an overlay network.
When you start Docker, a default bridge network (also
called bridge) is created automatically, and newly-started containers connect
to it unless otherwise specified. You can also create user-defined custom bridge
networks. User-defined bridge networks are superior to the default bridge
network.
Differences between user-defined bridges and the default bridge
User-defined bridges provide automatic DNS resolution between containers.

Containers on the default bridge network can only access each other by IP address, unless you use the `--link` option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.

Imagine an application with a web front-end and a database back-end. If you call your containers `web` and `db`, the web container can connect to the db container at `db`, no matter which Docker host the application stack is running on.

If you run the same application stack on the default bridge network, you need to manually create links between the containers (using the legacy `--link` flag). These links need to be created in both directions, so you can see this gets complex with more than two containers which need to communicate. Alternatively, you can manipulate the `/etc/hosts` files within the containers, but this creates problems that are difficult to debug.

User-defined bridges provide better isolation.

All containers without a `--network` specified are attached to the default bridge network. This can be a risk, as unrelated stacks/services/containers are then able to communicate. Using a user-defined network provides a scoped network in which only containers attached to that network are able to communicate.

Containers can be attached and detached from user-defined networks on the fly.

During a container's lifetime, you can connect or disconnect it from user-defined networks on the fly. To remove a container from the default bridge network, you need to stop the container and recreate it with different network options.

Each user-defined network creates a configurable bridge.

If your containers use the default bridge network, you can configure it, but all the containers use the same settings, such as MTU and `iptables` rules. In addition, configuring the default bridge network happens outside of Docker itself, and requires a restart of Docker.

User-defined bridge networks are created and configured using `docker network create`. If different groups of applications have different network requirements, you can configure each user-defined bridge separately, as you create it.

Linked containers on the default bridge network share environment variables.

Originally, the only way to share environment variables between two containers was to link them using the `--link` flag. This type of variable sharing isn't possible with user-defined networks. However, there are superior ways to share environment variables. A few ideas:

- Multiple containers can mount a file or directory containing the shared information, using a Docker volume (see the sketch after this list).
- Multiple containers can be started together using `docker-compose`, and the Compose file can define the shared variables.
- You can use swarm services instead of standalone containers, and take advantage of shared secrets and configs.
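As an illustration of the volume-based approach, here is a minimal sketch; the volume name, file name, and variable are hypothetical examples, not from the original:

```console
$ docker volume create shared-config
$ docker run --rm -v shared-config:/config alpine \
    sh -c 'echo "API_URL=https://example.com" > /config/app.env'
$ docker run --rm -v shared-config:/config alpine cat /config/app.env
```

Both containers mount the same `shared-config` volume, so the second container reads the settings the first one wrote.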
Containers connected to the same user-defined bridge network effectively expose all ports
to each other. For a port to be accessible to containers or non-Docker hosts on
different networks, that port must be published using the -p or --publish
flag.
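To see the difference, here is a hedged sketch (the network and container names are illustrative): a container on the same user-defined network can reach nginx's unpublished port 80 by container name, while clients outside the network would need a published port.

```console
$ docker network create demo-net
$ docker run -d --name web --network demo-net nginx:latest
$ docker run --rm --network demo-net alpine wget -qO- http://web:80
```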
Options
The following table describes the driver-specific options that you can pass to
--opt when creating a custom network using the bridge driver.
| Option | Default | Description |
|---|---|---|
| `com.docker.network.bridge.name` | | Interface name to use when creating the Linux bridge. |
| `com.docker.network.bridge.enable_ip_masquerade` | `true` | Enable IP masquerading. |
| `com.docker.network.host_ipv4`, `com.docker.network.host_ipv6` | | Address to use for source NAT. See Packet filtering and firewalls. |
| `com.docker.network.bridge.gateway_mode_ipv4`, `com.docker.network.bridge.gateway_mode_ipv6` | `nat` | Control external connectivity. See Packet filtering and firewalls. |
| `com.docker.network.bridge.enable_icc` | `true` | Enable or disable inter-container connectivity. |
| `com.docker.network.bridge.host_binding_ipv4` | all IPv4 and IPv6 addresses | Default IP address when binding container ports. |
| `com.docker.network.driver.mtu` | `0` (no limit) | Set the container network MTU (Maximum Transmission Unit). |
| `com.docker.network.container_iface_prefix` | `eth` | Set a custom prefix for container interfaces. |
| `com.docker.network.bridge.inhibit_ipv4` | `false` | Prevent Docker from assigning an IP address to the bridge. |
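For example, a sketch of passing several of these options when creating a network; the bridge name, MTU value, and network name are arbitrary examples:

```console
$ docker network create \
  --driver bridge \
  --opt com.docker.network.bridge.name=br-example \
  --opt com.docker.network.driver.mtu=1450 \
  --opt com.docker.network.bridge.host_binding_ipv4=127.0.0.1 \
  example-net
```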
Some of these options are also available as flags to the dockerd CLI, and you
can use them to configure the default docker0 bridge when starting the Docker
daemon. The following table shows which options have equivalent flags in the
dockerd CLI.
| Option | Flag |
|---|---|
| `com.docker.network.bridge.name` | - |
| `com.docker.network.bridge.enable_ip_masquerade` | `--ip-masq` |
| `com.docker.network.bridge.enable_icc` | `--icc` |
| `com.docker.network.bridge.host_binding_ipv4` | `--ip` |
| `com.docker.network.driver.mtu` | `--mtu` |
| `com.docker.network.container_iface_prefix` | - |
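As a sketch, the same defaults could be applied to the `docker0` bridge by starting the daemon with the equivalent flags (the values here are examples only; in practice these settings usually live in `daemon.json`):

```console
$ sudo dockerd --icc=false --ip 127.0.0.1 --mtu 1500
```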
The Docker daemon supports a --bridge flag, which you can use to define
your own docker0 bridge. Use this option if you want to run multiple daemon
instances on the same host. For details, see
Run multiple daemons.
Default host binding address
When no host address is given in port publishing options like -p 80
or -p 8080:80, the default is to make the container's port 80 available on all
host addresses, IPv4 and IPv6.
The bridge network driver option com.docker.network.bridge.host_binding_ipv4
can be used to modify the default address for published ports.
Despite the option's name, it is possible to specify an IPv6 address.
When the default binding address is an address assigned to a specific interface, the container's port will only be accessible via that address.
Setting the default binding address to :: means published ports will only be
available on the host's IPv6 addresses. However, setting it to 0.0.0.0 means it
will be available on the host's IPv4 and IPv6 addresses.
To restrict a published port to IPv4 only, the address must be included in the
container's publishing options. For example, -p 0.0.0.0:8080:80.
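For example, a sketch that makes published ports bind to the loopback address by default; the network name is illustrative:

```console
$ docker network create \
  --opt com.docker.network.bridge.host_binding_ipv4=127.0.0.1 \
  loopback-net
$ docker run -d --network loopback-net -p 8080:80 nginx:latest
```

Here port 8080 is reachable only via `127.0.0.1` on the host, because no explicit host address in the `-p` option overrides the network's default binding address.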
Manage a user-defined bridge
Use the docker network create command to create a user-defined bridge
network.
```console
$ docker network create my-net
```
You can specify the subnet, the IP address range, the gateway, and other
options. See the
docker network create
reference or the output of docker network create --help for details.
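For example, a sketch with an explicit subnet, allocation range, and gateway; all names and address values are illustrative:

```console
$ docker network create \
  --driver bridge \
  --subnet 192.168.100.0/24 \
  --ip-range 192.168.100.128/25 \
  --gateway 192.168.100.1 \
  my-custom-net
```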
Use the docker network rm command to remove a user-defined bridge
network. If containers are currently connected to the network,
disconnect them
first.
```console
$ docker network rm my-net
```
What's really happening?
When you create or remove a user-defined bridge, or connect or disconnect a container from one, Docker uses operating-system-specific tools to manage the underlying network infrastructure, such as adding or removing bridge devices or configuring `iptables` rules on Linux. Treat these as implementation details, and let Docker manage your user-defined networks for you.
Connect a container to a user-defined bridge
When you create a new container, you can specify one or more --network flags.
This example connects an Nginx container to the my-net network. It also
publishes port 80 in the container to port 8080 on the Docker host, so external
clients can access that port. Any other container connected to the my-net
network has access to all ports on the my-nginx container, and vice versa.
```console
$ docker create --name my-nginx \
  --network my-net \
  --publish 8080:80 \
  nginx:latest
```
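Note that `docker create` only creates the container without starting it. To start it afterwards:

```console
$ docker start my-nginx
```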
To connect a running container to an existing user-defined bridge, use the
docker network connect command. The following command connects an already-running
my-nginx container to an already-existing my-net network:
```console
$ docker network connect my-net my-nginx
```
Disconnect a container from a user-defined bridge
To disconnect a running container from a user-defined bridge, use the
docker network disconnect command. The following command disconnects
the my-nginx container from the my-net network.
```console
$ docker network disconnect my-net my-nginx
```
Use IPv6 in a user-defined bridge network
When you create your network, you can specify the --ipv6 flag to enable IPv6.
```console
$ docker network create --ipv6 --subnet 2001:db8:1234::/64 my-net
```
If you do not provide a --subnet option, a Unique Local Address (ULA) prefix
will be chosen automatically.
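To check which subnets were assigned, you can inspect the network's IPAM configuration, for example:

```console
$ docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}}{{"\n"}}{{end}}' my-net
```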
IPv6-only bridge networks
To skip IPv4 address configuration on the bridge and in its containers, create
the network with option --ipv4=false, and enable IPv6 using --ipv6.
```console
$ docker network create --ipv6 --ipv4=false v6net
```
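As a quick check that containers on this network get only IPv6 addresses, you could run:

```console
$ docker run --rm --network v6net alpine ip addr show dev eth0
```

The output should list `inet6` addresses but no IPv4 `inet` address.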
IPv4 address configuration cannot be disabled in the default bridge network.
Use the default bridge network
The default bridge network is considered a legacy detail of Docker and is not
recommended for production use. Configuring it is a manual operation, and it has
technical shortcomings.
Connect a container to the default bridge network
If you do not specify a network using the `--network` flag, your container is connected to the default bridge network. Containers connected to the default bridge network can communicate, but only by IP address, unless they're linked using the legacy `--link` flag.
Configure the default bridge network
To configure the default bridge network, you specify options in daemon.json.
Here is an example daemon.json with several options specified. Only specify
the settings you need to customize.
```json
{
  "bip": "192.168.1.1/24",
  "fixed-cidr": "192.168.1.0/25",
  "mtu": 1500,
  "default-gateway": "192.168.1.254",
  "dns": ["10.20.1.2", "10.20.1.3"]
}
```

In this example:

- The bridge's address is `192.168.1.1/24` (from `bip`).
- The bridge network's subnet is `192.168.1.0/24` (from `bip`).
- Container addresses will be allocated from `192.168.1.0/25` (from `fixed-cidr`).
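After restarting Docker, you can verify that the configuration took effect by inspecting the default bridge network's subnet and gateway, for example:

```console
$ docker network inspect --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}' bridge
```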
Use IPv6 with the default bridge network
IPv6 can be enabled for the default bridge using the following options in
daemon.json, or their command line equivalents.
These options only affect the default bridge; they are not used by user-defined networks. The addresses below are examples from the IPv6 documentation range.

- Option `ipv6` is required.
- Option `bip6` is optional. It specifies the address of the default bridge, which will be used as the default gateway by containers. It also specifies the subnet for the bridge network.
- Option `fixed-cidr-v6` is optional. It specifies the address range Docker may automatically allocate to containers.
  - The prefix should normally be `/64` or shorter.
  - For experimentation on a local network, it is better to use a Unique Local Address (ULA) prefix (matching `fd00::/8`) than a Link Local prefix (matching `fe80::/10`).
- Option `default-gateway-v6` is optional. If unspecified, the default is the first address in the `fixed-cidr-v6` subnet.
```json
{
  "ipv6": true,
  "bip6": "2001:db8::1111/64",
  "fixed-cidr-v6": "2001:db8::/64",
  "default-gateway-v6": "2001:db8:abcd::89"
}
```

If no `bip6` is specified, `fixed-cidr-v6` defines the subnet for the bridge network. If no `bip6` or `fixed-cidr-v6` is specified, a ULA prefix will be chosen.
Restart Docker for changes to take effect.
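For example, once the daemon has restarted with the configuration above, a container on the default bridge should show an address from `2001:db8::/64`:

```console
$ docker run --rm alpine ip -6 addr show dev eth0
```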
Connection limit for bridge networks
Due to limitations set by the Linux kernel, bridge networks become unstable and inter-container communications may break when 1000 containers or more connect to a single network.
For more information about this limitation, see moby/moby#44973.
Skip bridge IP address configuration
The bridge is normally assigned the network's --gateway address, which is
used as the default route from the bridge network to other networks.
The `com.docker.network.bridge.inhibit_ipv4` option lets you create a network without the IPv4 gateway address being assigned to the bridge. This is useful if you want to configure the gateway IP address for the bridge manually, for instance when you add a physical interface to your bridge and need it to carry the gateway address.
With this configuration, north-south traffic (to and from the bridge network) won't work unless you've manually configured the gateway address on the bridge, or a device attached to it.
This option can only be used with user-defined bridge networks.
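A sketch of how this might be used; the bridge name and addresses are chosen for illustration, and the gateway address is then added manually (here to the bridge itself, which requires root):

```console
$ docker network create \
  --opt com.docker.network.bridge.name=br-demo \
  --opt com.docker.network.bridge.inhibit_ipv4=true \
  --subnet 192.168.130.0/24 \
  --gateway 192.168.130.1 \
  demo-net
$ sudo ip addr add 192.168.130.1/24 dev br-demo
```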
Usage examples
This section provides hands-on examples for working with bridge networks.
Use the default bridge network
This example shows how the default bridge network works. You start two
alpine containers on the default bridge and test how they communicate.
> Note
>
> The default `bridge` network is not recommended for production. Use user-defined bridge networks instead.
1. List current networks:

   ```console
   $ docker network ls
   NETWORK ID          NAME                DRIVER              SCOPE
   17e324f45964        bridge              bridge              local
   6ed54d316334        host                host                local
   7092879f2cc8        none                null                local
   ```

   The default `bridge` network is listed, along with `host` and `none`.

2. Start two `alpine` containers running `ash`. The `-dit` flags mean detached, interactive, and with a TTY. Since you haven't specified a `--network` flag, the containers connect to the default `bridge` network.

   ```console
   $ docker run -dit --name alpine1 alpine ash
   $ docker run -dit --name alpine2 alpine ash
   ```

3. Verify both containers are running:

   ```console
   $ docker container ls
   CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
   602dbf1edc81        alpine              "ash"               4 seconds ago       Up 3 seconds                            alpine2
   da33b7aa74b0        alpine              "ash"               17 seconds ago      Up 16 seconds                           alpine1
   ```

4. Inspect the `bridge` network to see connected containers:

   ```console
   $ docker network inspect bridge
   ```

   The output shows both containers connected, with their assigned IP addresses (`172.17.0.2` for `alpine1` and `172.17.0.3` for `alpine2`).

5. Connect to `alpine1`:

   ```console
   $ docker attach alpine1
   / #
   ```

6. Show the network interfaces for `alpine1` from within the container:

   ```console
   # ip addr show
   1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
       link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
       inet 127.0.0.1/8 scope host lo
          valid_lft forever preferred_lft forever
       inet6 ::1/128 scope host
          valid_lft forever preferred_lft forever
   27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
       link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
       inet 172.17.0.2/16 scope global eth0
          valid_lft forever preferred_lft forever
   ```

   In this example, the `eth0` interface has the IP address `172.17.0.2`.

7. From within `alpine1`, verify you can connect to the internet:

   ```console
   # ping -c 2 google.com
   PING google.com (172.217.3.174): 56 data bytes
   64 bytes from 172.217.3.174: seq=0 ttl=41 time=9.841 ms
   64 bytes from 172.217.3.174: seq=1 ttl=41 time=9.897 ms

   --- google.com ping statistics ---
   2 packets transmitted, 2 packets received, 0% packet loss
   round-trip min/avg/max = 9.841/9.869/9.897 ms
   ```

8. Ping the second container by its IP address:

   ```console
   # ping -c 2 172.17.0.3
   PING 172.17.0.3 (172.17.0.3): 56 data bytes
   64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.086 ms
   64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.094 ms

   --- 172.17.0.3 ping statistics ---
   2 packets transmitted, 2 packets received, 0% packet loss
   round-trip min/avg/max = 0.086/0.090/0.094 ms
   ```

   This succeeds. Now try pinging by container name:

   ```console
   # ping -c 2 alpine2
   ping: bad address 'alpine2'
   ```

   On the default bridge network, containers can't resolve each other by name.

9. Detach from `alpine1` without stopping it using `CTRL+p CTRL+q`.

10. Clean up: stop the containers and remove them. Stopped containers lose their IP addresses.

    ```console
    $ docker container stop alpine1 alpine2
    $ docker container rm alpine1 alpine2
    ```
Use user-defined bridge networks
This example shows how user-defined bridge networks provide better isolation and automatic DNS resolution between containers.
1. Create the `alpine-net` network:

   ```console
   $ docker network create --driver bridge alpine-net
   ```

2. List Docker's networks:

   ```console
   $ docker network ls
   NETWORK ID          NAME                DRIVER              SCOPE
   e9261a8c9a19        alpine-net          bridge              local
   17e324f45964        bridge              bridge              local
   6ed54d316334        host                host                local
   7092879f2cc8        none                null                local
   ```

3. Inspect the `alpine-net` network:

   ```console
   $ docker network inspect alpine-net
   ```

   This shows the network's gateway (for example, `172.18.0.1`) and that no containers are connected yet.

4. Create four containers. Three connect to `alpine-net`, and one connects to the default `bridge`. Then connect one container to both networks:

   ```console
   $ docker run -dit --name alpine1 --network alpine-net alpine ash
   $ docker run -dit --name alpine2 --network alpine-net alpine ash
   $ docker run -dit --name alpine3 alpine ash
   $ docker run -dit --name alpine4 --network alpine-net alpine ash
   $ docker network connect bridge alpine4
   ```

5. Verify all containers are running:

   ```console
   $ docker container ls
   CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
   156849ccd902        alpine              "ash"               41 seconds ago       Up 41 seconds                            alpine4
   fa1340b8d83e        alpine              "ash"               51 seconds ago       Up 51 seconds                            alpine3
   a535d969081e        alpine              "ash"               About a minute ago   Up About a minute                        alpine2
   0a02c449a6e9        alpine              "ash"               About a minute ago   Up About a minute                        alpine1
   ```

6. Inspect both networks again to see which containers are connected:

   ```console
   $ docker network inspect bridge
   ```

   Containers `alpine3` and `alpine4` are connected to the `bridge` network.

   ```console
   $ docker network inspect alpine-net
   ```

   Containers `alpine1`, `alpine2`, and `alpine4` are connected to `alpine-net`.

7. On user-defined networks, containers can resolve each other by name. Connect to `alpine1` and test:

   > Note
   >
   > Automatic service discovery only resolves custom container names, not default automatically generated names.

   ```console
   $ docker container attach alpine1

   # ping -c 2 alpine2
   PING alpine2 (172.18.0.3): 56 data bytes
   64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.085 ms
   64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.090 ms

   --- alpine2 ping statistics ---
   2 packets transmitted, 2 packets received, 0% packet loss
   round-trip min/avg/max = 0.085/0.087/0.090 ms

   # ping -c 2 alpine4
   PING alpine4 (172.18.0.4): 56 data bytes
   64 bytes from 172.18.0.4: seq=0 ttl=64 time=0.076 ms
   64 bytes from 172.18.0.4: seq=1 ttl=64 time=0.091 ms

   --- alpine4 ping statistics ---
   2 packets transmitted, 2 packets received, 0% packet loss
   round-trip min/avg/max = 0.076/0.083/0.091 ms
   ```

8. From `alpine1`, you can't connect to `alpine3` because it's on a different network:

   ```console
   # ping -c 2 alpine3
   ping: bad address 'alpine3'
   ```

   You also can't connect by IP address. If `alpine3`'s IP is `172.17.0.2`:

   ```console
   # ping -c 2 172.17.0.2
   PING 172.17.0.2 (172.17.0.2): 56 data bytes

   --- 172.17.0.2 ping statistics ---
   2 packets transmitted, 0 packets received, 100% packet loss
   ```

   Detach from `alpine1` using `CTRL+p CTRL+q`.

9. Since `alpine4` is connected to both networks, it can reach all containers. However, you need to use `alpine3`'s IP address:

   ```console
   $ docker container attach alpine4

   # ping -c 2 alpine1
   PING alpine1 (172.18.0.2): 56 data bytes
   64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.074 ms
   64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.082 ms

   --- alpine1 ping statistics ---
   2 packets transmitted, 2 packets received, 0% packet loss
   round-trip min/avg/max = 0.074/0.078/0.082 ms

   # ping -c 2 alpine2
   PING alpine2 (172.18.0.3): 56 data bytes
   64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.075 ms
   64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.080 ms

   --- alpine2 ping statistics ---
   2 packets transmitted, 2 packets received, 0% packet loss
   round-trip min/avg/max = 0.075/0.077/0.080 ms

   # ping -c 2 alpine3
   ping: bad address 'alpine3'

   # ping -c 2 172.17.0.2
   PING 172.17.0.2 (172.17.0.2): 56 data bytes
   64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.089 ms
   64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.075 ms

   --- 172.17.0.2 ping statistics ---
   2 packets transmitted, 2 packets received, 0% packet loss
   round-trip min/avg/max = 0.075/0.082/0.089 ms
   ```

10. Verify all containers can connect to the internet:

    ```console
    # ping -c 2 google.com
    PING google.com (172.217.3.174): 56 data bytes
    64 bytes from 172.217.3.174: seq=0 ttl=41 time=9.778 ms
    64 bytes from 172.217.3.174: seq=1 ttl=41 time=9.634 ms

    --- google.com ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max = 9.634/9.706/9.778 ms
    ```

    Detach with `CTRL+p CTRL+q` and repeat for `alpine3` and `alpine1` if desired.

11. Clean up:

    ```console
    $ docker container stop alpine1 alpine2 alpine3 alpine4
    $ docker container rm alpine1 alpine2 alpine3 alpine4
    $ docker network rm alpine-net
    ```
Next steps
- Learn about networking from the container's point of view
- Learn about overlay networks
- Learn about Macvlan networks