# Bridge network driver
In terms of networking, a bridge network is a Link Layer device which forwards traffic between network segments. A bridge can be a hardware device or a software device running within a host machine's kernel.
In terms of Docker, a bridge network uses a software bridge which lets containers connected to the same bridge network communicate, while providing isolation from containers that aren't connected to that bridge network. The Docker bridge driver automatically installs rules in the host machine so that containers on different bridge networks can't communicate directly with each other.
Bridge networks apply to containers running on the same Docker daemon host. For communication among containers running on different Docker daemon hosts, you can either manage routing at the OS level, or you can use an overlay network.
When you start Docker, a default bridge network (also called `bridge`) is created automatically, and newly-started containers connect to it unless otherwise specified. You can also create user-defined custom bridge networks. User-defined bridge networks are superior to the default `bridge` network.
## Differences between user-defined bridges and the default bridge

**User-defined bridges provide automatic DNS resolution between containers.**
Containers on the default bridge network can only access each other by IP address, unless you use the `--link` option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
Imagine an application with a web front-end and a database back-end. If you call your containers `web` and `db`, the web container can connect to the db container at `db`, no matter which Docker host the application stack is running on.
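As a quick sketch of this name resolution (this assumes a running Docker daemon and the `alpine` image; the network and container names are illustrative):

```console
$ docker network create my-app-net
$ docker run -d --name db --network my-app-net alpine sleep infinity
$ docker run --rm --network my-app-net alpine ping -c 1 db
```

The last command succeeds because Docker's embedded DNS server resolves the name `db` to the first container's IP address on that network.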
If you run the same application stack on the default bridge network, you need to manually create links between the containers (using the legacy `--link` flag). These links need to be created in both directions, so you can see this gets complex with more than two containers which need to communicate. Alternatively, you can manipulate the `/etc/hosts` files within the containers, but this creates problems that are difficult to debug.
**User-defined bridges provide better isolation.**
All containers without a `--network` specified are attached to the default bridge network. This can be a risk, as unrelated stacks/services/containers are then able to communicate.
Using a user-defined network provides a scoped network in which only containers attached to that network are able to communicate.
**Containers can be attached and detached from user-defined networks on the fly.**
During a container's lifetime, you can connect or disconnect it from user-defined networks on the fly. To remove a container from the default bridge network, you need to stop the container and recreate it with different network options.
**Each user-defined network creates a configurable bridge.**
If your containers use the default bridge network, you can configure it, but all the containers use the same settings, such as MTU and `iptables` rules. In addition, configuring the default bridge network happens outside of Docker itself, and requires a restart of Docker.
User-defined bridge networks are created and configured using `docker network create`. If different groups of applications have different network requirements, you can configure each user-defined bridge separately, as you create it.
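For instance, a user-defined bridge with its own subnet, address range, gateway, and MTU might be created like this (a sketch; the addresses and network name are illustrative):

```console
$ docker network create \
  --driver bridge \
  --subnet 172.28.0.0/16 \
  --ip-range 172.28.5.0/24 \
  --gateway 172.28.5.254 \
  --opt com.docker.network.driver.mtu=1400 \
  my-custom-net
```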
**Linked containers on the default bridge network share environment variables.**
Originally, the only way to share environment variables between two containers was to link them using the `--link` flag. This type of variable sharing isn't possible with user-defined networks. However, there are superior ways to share environment variables. A few ideas:
- Multiple containers can mount a file or directory containing the shared information, using a Docker volume.
- Multiple containers can be started together using `docker-compose`, and the Compose file can define the shared variables.
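As a sketch of the Compose approach, both services below read the same variables from a shared file (the file and service names are hypothetical):

```yaml
# docker-compose.yml (illustrative)
services:
  web:
    image: nginx:alpine
    env_file: shared.env   # both services read the same variable definitions
  worker:
    image: alpine
    env_file: shared.env
```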
Containers connected to the same user-defined bridge network effectively expose all ports to each other. For a port to be accessible to containers or non-Docker hosts on different networks, that port must be published using the `-p` or `--publish` flag.
## Options

The following table describes the driver-specific options that you can pass to `--opt` when creating a custom network using the `bridge` driver.

| Option | Default | Description |
|--------|---------|-------------|
| `com.docker.network.bridge.name` | | Interface name to use when creating the Linux bridge. |
| `com.docker.network.bridge.enable_ip_masquerade` | `true` | Enable IP masquerading. |
| `com.docker.network.bridge.enable_icc` | `true` | Enable or Disable inter-container connectivity. |
| `com.docker.network.bridge.host_binding_ipv4` | all IPv4 addresses | Default IP when binding container ports. |
| `com.docker.network.driver.mtu` | `0` (no limit) | Set the containers network Maximum Transmission Unit (MTU). |
| `com.docker.network.container_iface_prefix` | `eth` | Set a custom prefix for container interfaces. |
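Driver options are passed with one `--opt` flag per option, for example (a sketch; the bridge interface and network names are illustrative):

```console
$ docker network create \
  --opt com.docker.network.bridge.name=br-mynet \
  --opt com.docker.network.bridge.enable_icc=false \
  my-isolated-net
```

Disabling inter-container connectivity (`enable_icc=false`) still allows published ports to be reached from the host, but containers on the bridge can no longer reach each other directly.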
Some of these options are also available as flags to the `dockerd` CLI, and you can use them to configure the default `docker0` bridge when starting the Docker daemon.
The Docker daemon supports a `--bridge` flag, which you can use to define your own `docker0` bridge. Use this option if you want to run multiple daemon instances on the same host. For details, see Run multiple daemons.
## Manage a user-defined bridge

Use the `docker network create` command to create a user-defined bridge network:
```console
$ docker network create my-net
```
You can specify the subnet, the IP address range, the gateway, and other options. See the `docker network create` reference or the output of `docker network create --help` for details.
Use the `docker network rm` command to remove a user-defined bridge network. If containers are currently connected to the network, disconnect them first:
```console
$ docker network rm my-net
```
**What's really happening?**
When you create or remove a user-defined bridge, or connect or disconnect a container from a user-defined bridge, Docker uses tools specific to the operating system to manage the underlying network infrastructure (such as adding or removing bridge devices or configuring `iptables` rules on Linux). These details should be considered implementation details. Let Docker manage your user-defined networks for you.
## Connect a container to a user-defined bridge

When you create a new container, you can specify one or more `--network` flags.
This example connects an Nginx container to the `my-net` network. It also publishes port 80 in the container to port 8080 on the Docker host, so external clients can access that port. Any other container connected to the `my-net` network has access to all ports on the `my-nginx` container, and vice versa.
```console
$ docker create --name my-nginx \
  --network my-net \
  --publish 8080:80 \
  nginx:latest
```
To connect a running container to an existing user-defined bridge, use the `docker network connect` command. The following command connects an already-running `my-nginx` container to an already-existing `my-net` network:
```console
$ docker network connect my-net my-nginx
```
To disconnect a running container from a user-defined bridge, use the `docker network disconnect` command. The following command disconnects the `my-nginx` container from the `my-net` network:
```console
$ docker network disconnect my-net my-nginx
```
## Use IPv6

If you need IPv6 support for Docker containers, you need to enable the option on the Docker daemon and reload its configuration, before creating any IPv6 networks or assigning containers IPv6 addresses.

When you create your network, you can specify the `--ipv6` flag to enable IPv6. You can't selectively disable IPv6 support on the default `bridge` network.
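For example (a sketch; `2001:db8::/32` is the reserved documentation prefix, so substitute a real prefix of your own):

```console
$ docker network create --ipv6 --subnet 2001:db8:1::/64 my-ipv6-net
```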
## Use the default bridge network

The default `bridge` network is considered a legacy detail of Docker and is not recommended for production use. Configuring it is a manual operation, and it has technical shortcomings.
If you do not specify a network using the `--network` flag, and you do specify a network driver, your container is connected to the default `bridge` network by default. Containers connected to the default `bridge` network can communicate, but only by IP address, unless they're linked using the legacy `--link` flag.
To configure the default `bridge` network, you specify options in `daemon.json`. Here is an example `daemon.json` with several options specified. Only specify the settings you need to customize.
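A sketch of such a `daemon.json` (the addresses below are illustrative placeholders; all keys shown are standard `dockerd` configuration options):

```json
{
  "bip": "192.168.1.1/24",
  "fixed-cidr": "192.168.1.0/25",
  "mtu": 1500,
  "default-gateway": "192.168.1.254",
  "dns": ["10.20.1.2", "10.20.1.3"]
}
```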
Restart Docker for the changes to take effect.
If you configure Docker for IPv6 support (see Use IPv6), the default bridge network is also configured for IPv6 automatically. Unlike user-defined bridges, you can't selectively disable IPv6 on the default bridge.
## Connection limit for bridge networks

Due to limitations set by the Linux kernel, bridge networks become unstable and inter-container communications may break when 1000 containers or more connect to a single network.
For more information about this limitation, see moby/moby#44973.