Deploy a Sample Application with a Canary release (Experimental)

Estimated reading time: 3 minutes

This topic applies to Docker Enterprise.

The Docker Enterprise platform business, including products, customers, and employees, has been acquired by Mirantis, Inc., effective November 13, 2019. For more information on the acquisition and how it may affect you and your business, refer to the Docker Enterprise Customer FAQ.

Experimental features provide early access to future product functionality. These features are intended for testing and feedback only as they may change between releases without warning or can be removed entirely from a future release. Experimental features must not be used in production environments. Docker does not offer support for experimental features.


This example stages a canary release using weight-based load balancing between multiple back-end applications.


This guide assumes you have completed the Deploy a Sample Application tutorial and that its artifacts are still running on the cluster. If they are not, go back and complete that tutorial first.

The following traffic-splitting scheme is used in this tutorial:

  • 80% of client traffic is sent to the production v1 service.
  • 20% of client traffic is sent to the staging v2 service.
  • All test traffic using the header stage=dev is sent to the v3 service.
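Expressed as Istio routing rules, this scheme corresponds to a VirtualService of roughly the following shape. This is a sketch only: the service name `demo-service`, port 8080, and subset names come from the manifest deployed below, while the resource name and the `hosts` entry are placeholders.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo-vs            # placeholder name
spec:
  hosts:
    - demo-service         # placeholder; the real manifest sets the ingress host
  http:
    # Exact-match rule: test traffic tagged stage=dev goes to v3.
    - match:
        - headers:
            stage:
              exact: dev
      route:
        - destination:
            host: demo-service
            port:
              number: 8080
            subset: v3
    # Default rule: weight-based 80:20 split between v1 and v2.
    - route:
        - destination:
            host: demo-service
            port:
              number: 8080
            subset: v1
          weight: 80
        - destination:
            host: demo-service
            port:
              number: 8080
            subset: v2
          weight: 20
```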

This tutorial uses a new Kubernetes manifest file with updated ingress rules.

  1. Source a UCP Client Bundle attached to a cluster with Cluster Ingress installed.
  2. Download the sample Kubernetes manifest file.
    $ wget
  3. Deploy the Kubernetes manifest file.
    $ kubectl apply -f ingress-weighted.yaml
    $ kubectl describe vs
    ...
    Http:
      Match:
        Headers:
          Stage:
            Exact:  dev
      Route:
        Destination:
          Host:  demo-service
          Port:
            Number:  8080
          Subset:    v3
    Http:
      Route:
        Destination:
          Host:  demo-service
          Port:
            Number:  8080
          Subset:    v1
        Weight:      80
        Destination:
          Host:  demo-service
          Port:
            Number:  8080
          Subset:    v2
        Weight:      20

This virtual service performs the following actions:

  • Receives all traffic addressed to the application's configured hostname.
  • If an exact match for HTTP header stage=dev is found, traffic is routed to v3.
  • All other traffic is routed to v1 and v2 in an 80:20 ratio.
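The subsets v1, v2, and v3 referenced by these routes must also be defined in a matching DestinationRule. A minimal sketch, assuming the back-end Deployments carry a `version` pod label (the rule name and label values here are illustrative, not taken from the manifest):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: demo-destinationrule   # illustrative name
spec:
  host: demo-service
  subsets:
    - name: v1
      labels:
        version: v1            # assumed label on the v1 pods
    - name: v2
      labels:
        version: v2
    - name: v3
      labels:
        version: v3
```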

Now send traffic to the application to observe the weight-based load balancing in action.

# Public IP address of a worker or manager VM in the cluster
$ IPADDR=<public-ip-of-a-cluster-node>

# Node port of the demo service
$ PORT=$(kubectl get service demo-service --output jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')

$ for i in {1..5}; do curl -H "Host:" http://$IPADDR:$PORT/ping; done

The split between v1 and v2 corresponds to the specified criteria. Within the v1 service, requests are load-balanced across the three back-end replicas. v3 does not appear in the requests.
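To see the split more clearly, the responses can be tallied. The response format below is invented for illustration (the real `/ping` output depends on the sample application), so a here-doc of simulated responses stands in for the curl loop, which cannot run outside the cluster:

```shell
# Count how many responses came from each back-end version. The JSON lines
# below are simulated stand-ins for /ping responses; against the live cluster,
# pipe the output of the curl loop into the same grep/sort/uniq pipeline.
grep -o '"version": *"v[0-9]*"' <<'EOF' | sort | uniq -c | sort -rn
{"instance": "demo-v1-abc", "version": "v1"}
{"instance": "demo-v1-def", "version": "v1"}
{"instance": "demo-v2-xyz", "version": "v2"}
{"instance": "demo-v1-abc", "version": "v1"}
{"instance": "demo-v1-ghi", "version": "v1"}
EOF
```

With enough live requests (for example, `{1..100}` instead of `{1..5}`), the counts should approach the configured 80:20 ratio.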

To send traffic to the third service, add the HTTP header stage=dev.

$ for i in {1..5}; do curl -H "Host:" -H "Stage: dev" http://$IPADDR:$PORT/ping; done

In this case, 100% of the traffic with the stage=dev header is sent to the v3 service.
