Deploy a layer 7 routing solution
This topic applies to Docker Enterprise.
The Docker Enterprise platform business, including products, customers, and employees, has been acquired by Mirantis, Inc., effective 13-November-2019. For more information on the acquisition and how it may affect you and your business, refer to the Docker Enterprise Customer FAQ.
This topic covers deploying a layer 7 routing solution into a Docker Swarm to route traffic to Swarm services. Layer 7 routing is also referred to as an HTTP routing mesh (HRM).
Prerequisites

- Docker version 17.06 or later
- Docker must be running in Swarm mode
- Internet access (see Offline installation for installing without internet access)
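The version prerequisite can be verified from a shell. A minimal sketch, assuming the version string is substituted from the real `docker version` output (the value shown here is only an example):

```shell
# Minimal prerequisite check (illustrative). The "current" value is a
# placeholder; in practice substitute the output of:
#   docker version --format '{{.Server.Version}}'
required="17.06"
current="19.03.5"   # example value only
# sort -V orders version strings numerically; if the required version sorts
# first (or ties), the installed version meets the minimum.
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "version ok"
fi
```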
Enable layer 7 routing via UCP
By default, layer 7 routing is disabled, so you must first enable this service from the UCP web UI.
- Log in to the UCP web UI as an administrator.
- Navigate to Admin Settings.
- Select Layer 7 Routing.
- Select the Enable Layer 7 Routing check box.
By default, the routing mesh service listens on port 8080 for HTTP and port 8443 for HTTPS. Change the ports if you already have services that are using them.
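If those ports are already in use, the published ports can be changed under the extension section of the Interlock TOML configuration. An illustrative fragment; the values 8081 and 8444 are arbitrary examples, not defaults:

```toml
# Example only: alternative published ports for the routing mesh,
# set under [Extensions.default].
PublishedPort = 8081      # HTTP entry point (default 8080)
PublishedSSLPort = 8444   # HTTPS entry point (default 8443)
```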
When layer 7 routing is enabled:
- UCP creates the ucp-interlock overlay network.
- UCP deploys the ucp-interlock service and attaches it both to the Docker socket and to the ucp-interlock overlay network. This gives the Interlock service access to the Docker API, which is also why this service must run on a manager node.
- The ucp-interlock service starts the ucp-interlock-extension service and attaches it to the ucp-interlock network, allowing the two services to communicate.
- The ucp-interlock-extension service generates a configuration to be used by the proxy service. By default the proxy service is NGINX, so this service generates a standard NGINX configuration. UCP creates the com.docker.ucp.interlock.conf-1 configuration file and uses it to configure all the internal components of this service.
- The ucp-interlock service takes the proxy configuration and uses it to start the ucp-interlock-proxy service.
Now you are ready to use the layer 7 routing service with your Swarm workloads. There are three primary Interlock services: core, extension, and proxy. To learn more about these services, see TOML configuration options.
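The three layers map directly onto the TOML structure: top-level keys configure the core service, the [Extensions.default] table configures the extension, and [Extensions.default.Config] holds the proxy (NGINX) settings. A minimal skeleton for orientation, with values taken from the defaults:

```toml
ListenAddr = ":8080"                  # core service
[Extensions]
  [Extensions.default]                # extension service
    Image = "docker/ucp-interlock-extension:3.2.4"
    [Extensions.default.Config]       # proxy (NGINX) settings
      User = "nginx"
```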
The following code sample provides a default UCP configuration. This will be created automatically when enabling Interlock as described in this section.
```toml
ListenAddr = ":8080"
DockerURL = "unix:///var/run/docker.sock"
AllowInsecure = false
PollInterval = "3s"

[Extensions]
  [Extensions.default]
    Image = "docker/ucp-interlock-extension:3.2.4"
    ServiceName = "ucp-interlock-extension"
    Args = []
    Constraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux"]
    ProxyImage = "docker/ucp-interlock-proxy:3.2.4"
    ProxyServiceName = "ucp-interlock-proxy"
    ProxyConfigPath = "/etc/nginx/nginx.conf"
    ProxyReplicas = 2
    ProxyStopSignal = "SIGQUIT"
    ProxyStopGracePeriod = "5s"
    ProxyConstraints = ["node.labels.com.docker.ucp.orchestrator.swarm==true", "node.platform.os==linux"]
    PublishMode = "ingress"
    PublishedPort = 8080
    TargetPort = 80
    PublishedSSLPort = 8443
    TargetSSLPort = 443
    [Extensions.default.Labels]
      "com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm"
    [Extensions.default.ContainerLabels]
      "com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm"
    [Extensions.default.ProxyLabels]
      "com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm"
    [Extensions.default.ProxyContainerLabels]
      "com.docker.ucp.InstanceID" = "fewho8k85kyc6iqypvvdh3ntm"
    [Extensions.default.Config]
      Version = ""
      User = "nginx"
      PidPath = "/var/run/proxy.pid"
      MaxConnections = 1024
      ConnectTimeout = 600
      SendTimeout = 600
      ReadTimeout = 600
      IPHash = false
      AdminUser = ""
      AdminPass = ""
      SSLOpts = ""
      SSLDefaultDHParam = 1024
      SSLDefaultDHParamPath = ""
      SSLVerify = "required"
      WorkerProcesses = 1
      RLimitNoFile = 65535
      SSLCiphers = "HIGH:!aNULL:!MD5"
      SSLProtocols = "TLSv1.2"
      AccessLogPath = "/dev/stdout"
      ErrorLogPath = "/dev/stdout"
      MainLogFormat = "'$remote_addr - $remote_user [$time_local] \"$request\" '\n\t\t '$status $body_bytes_sent \"$http_referer\" '\n\t\t '\"$http_user_agent\" \"$http_x_forwarded_for\"';"
      TraceLogFormat = "'$remote_addr - $remote_user [$time_local] \"$request\" $status '\n\t\t '$body_bytes_sent \"$http_referer\" \"$http_user_agent\" '\n\t\t '\"$http_x_forwarded_for\" $request_id $msec $request_time '\n\t\t '$upstream_connect_time $upstream_header_time $upstream_response_time';"
      KeepaliveTimeout = "75s"
      ClientMaxBodySize = "32m"
      ClientBodyBufferSize = "8k"
      ClientHeaderBufferSize = "1k"
      LargeClientHeaderBuffers = "4 8k"
      ClientBodyTimeout = "60s"
      UnderscoresInHeaders = false
      HideInfoHeaders = false
```
Enable layer 7 routing manually
Interlock can also be enabled from the command line, as described in the following sections.
Work with the core service configuration file
Interlock uses a TOML file for the core service configuration. The following example uses Swarm deployment and recovery features by creating a Docker Config object:
```shell
$> cat << EOF | docker config create service.interlock.conf -
ListenAddr = ":8080"
DockerURL = "unix:///var/run/docker.sock"
PollInterval = "3s"

[Extensions]
  [Extensions.default]
    Image = "docker/ucp-interlock-extension:3.2.4"
    Args = ["-D"]
    ProxyImage = "docker/ucp-interlock-proxy:3.2.4"
    ProxyArgs = []
    ProxyConfigPath = "/etc/nginx/nginx.conf"
    ProxyReplicas = 1
    ProxyStopGracePeriod = "3s"
    ServiceCluster = ""
    PublishMode = "ingress"
    PublishedPort = 8080
    TargetPort = 80
    PublishedSSLPort = 8443
    TargetSSLPort = 443
    [Extensions.default.Config]
      User = "nginx"
      PidPath = "/var/run/proxy.pid"
      WorkerProcesses = 1
      RlimitNoFile = 65535
      MaxConnections = 2048
EOF
oqkvv1asncf6p2axhx41vylgt
```
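As an alternative to piping a heredoc straight into docker config create, the configuration can be written to a local file first so it can be reviewed or version-controlled before loading. A sketch, assuming the file name config.toml (an arbitrary choice) and a trimmed-down core configuration:

```shell
# Write a minimal core configuration to a local file before loading it.
cat > config.toml << 'EOF'
ListenAddr = ":8080"
DockerURL = "unix:///var/run/docker.sock"
PollInterval = "3s"
EOF
# Sanity-check the listener address before creating the Swarm config object:
grep -q 'ListenAddr = ":8080"' config.toml && echo "config ok"
# Then, on a Swarm manager:
# docker config create service.interlock.conf config.toml
```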
Create a dedicated network for Interlock and extensions
Next, create a dedicated network for Interlock and the extensions:
```shell
$> docker network create --driver overlay ucp-interlock
```
Create the Interlock service
Now you can create the Interlock service. Note that the service is constrained to a manager node: the Interlock core service must have access to a Swarm manager, but the extension and proxy services are recommended to run on workers. See the Production section for more information on setting up a production environment.
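To keep the extension and proxy off the managers, placement constraints can be set in the extension section of the TOML configuration. An illustrative fragment using standard Swarm constraint syntax (node.role==worker is one possible policy, not a default):

```toml
[Extensions.default]
  # Illustrative: schedule the extension and proxy services on workers only.
  Constraints = ["node.role==worker"]
  ProxyConstraints = ["node.role==worker"]
```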
```shell
$> docker service create \
    --name ucp-interlock \
    --mount src=/var/run/docker.sock,dst=/var/run/docker.sock,type=bind \
    --network ucp-interlock \
    --constraint node.role==manager \
    --config src=service.interlock.conf,target=/config.toml \
    docker/ucp-interlock:3.2.4 -D run -c /config.toml
```
At this point, three services should have been created: one for the Interlock core service, one for the extension service, and one for the proxy service:
```shell
$> docker service ls
ID            NAME                     MODE        REPLICAS  IMAGE                                  PORTS
sjpgq7h621ex  ucp-interlock            replicated  1/1       docker/ucp-interlock:3.2.4
oxjvqc6gxf91  ucp-interlock-extension  replicated  1/1       docker/ucp-interlock-extension:3.2.4
lheajcskcbby  ucp-interlock-proxy      replicated  1/1       docker/ucp-interlock-proxy:3.2.4       *:80->80/tcp *:443->443/tcp
```
The Interlock traffic layer is now deployed.