Docker Engine 23.0 release notes

Note

From Docker Engine version 23.0.0, Buildx is distributed in a separate package: docker-buildx-plugin. In earlier versions, Buildx was included in the docker-ce-cli package. When you upgrade to this version of Docker Engine, make sure you update all packages. For example, on Ubuntu:

$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
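
After upgrading, you can verify that the CLI finds the separately packaged plugins (a quick sanity check; the exact versions in the output depend on the packaged releases):

$ docker buildx version
$ docker compose version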

Refer to the Docker Engine installation instructions for your operating system for more details on upgrading Docker Engine.

This page describes the latest changes, additions, known issues, and fixes for Docker Engine version 23.0.

Starting with the 23.0.0 release, Docker Engine moves away from using CalVer versioning, and starts using the SemVer versioning format. Changing the version format is a stepping-stone towards Go module compatibility, but the repository doesn't yet use Go modules, and still requires using a "+incompatible" version. Work continues towards Go module compatibility in a future release.
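
As an illustration, a Go project consuming this repository as a module must currently request it with the "+incompatible" suffix (an example invocation only; substitute whichever tagged version your project needs):

$ go get github.com/docker/docker@v23.0.0+incompatible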

23.0.6

2023-05-08

For a full list of pull requests and changes in this release, refer to the relevant GitHub milestones.

Bug fixes and enhancements

Packaging Updates

23.0.5

2023-04-26

For a full list of pull requests and changes in this release, refer to the relevant GitHub milestones.

Bug fixes and enhancements

Packaging Updates

23.0.4

2023-04-17

For a full list of pull requests and changes in this release, refer to the relevant GitHub milestones.

Bug fixes and enhancements

Packaging Updates

23.0.3

2023-04-04

Note

Due to an issue with CentOS 9 Stream's package repositories, packages for CentOS 9 are currently unavailable. Packages for CentOS 9 may be added later, or as part of the next (23.0.4) patch release.

Bug fixes and enhancements

  • Fixed a number of issues that can cause Swarm encrypted overlay networks to fail to uphold their guarantees, addressing CVE-2023-28841, CVE-2023-28840, and CVE-2023-28842.
    • A lack of kernel support for encrypted overlay networks now reports as an error.
    • Encrypted overlay networks are eagerly set up, rather than waiting for multiple nodes to attach.
    • Encrypted overlay networks are now usable on Red Hat Enterprise Linux 9 through the use of the xt_bpf kernel module.
    • Users of Swarm overlay networks should review GHSA-vwm3-crmr-xfxw to ensure that unintentional exposure has not occurred; a quick way to check a network's settings is shown below.
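
To check whether an existing overlay network was created with encryption enabled, inspect its options (a minimal check; my-overlay is a placeholder name, and an encrypted network will list encrypted among its options):

$ docker network inspect --format '{{ .Options }}' my-overlay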

Packaging Updates

23.0.2

2023-03-28

For a full list of pull requests and changes in this release, refer to the relevant GitHub milestones.

Bug fixes and enhancements

Packaging

23.0.1

2023-02-09

For a full list of pull requests and changes in this release, refer to the relevant GitHub milestones.

Bug fixes and enhancements

Packaging

23.0.0

2023-02-01

For a full list of pull requests and changes in this release, refer to the relevant GitHub milestones.

New

Removed

Deprecated

Upgrades

Security

Bug fixes and enhancements

Known issues

apparmor_parser (tracking issue)

Some Debian users have reported issues with containers failing to start after upgrading to the 23.0 branch. The error message indicates that the issue is due to a missing apparmor_parser binary:

Error response from daemon: AppArmor enabled on system but the docker-default profile could not be loaded: running `apparmor_parser apparmor_parser --version` failed with output:
error: exec: "apparmor_parser": executable file not found in $PATH
Error: failed to start containers: somecontainer

The workaround for this issue is to install the apparmor package manually:

$ sudo apt-get install apparmor
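
After installing, you can confirm that the binary is available on the daemon's PATH before starting the affected containers again (the reported version depends on your distribution):

$ command -v apparmor_parser
$ apparmor_parser --version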

BuildKit inline cache (tracking issue)

Attempting to build an image with BuildKit's inline cache feature (for example, docker build --build-arg BUILDKIT_INLINE_CACHE=1 . or docker buildx build --cache-to type=inline .) will result in the daemon unexpectedly exiting:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x147ff00]

goroutine 693 [running]:
github.com/docker/docker/vendor/github.com/moby/buildkit/cache.computeBlobChain.func4.1({0x245cca8, 0x4001394960})
        /go/src/github.com/docker/docker/vendor/github.com/moby/buildkit/cache/blobs.go:206 +0xc90
github.com/docker/docker/vendor/github.com/moby/buildkit/util/flightcontrol.(*call).run(0x40013c2240)
        /go/src/github.com/docker/docker/vendor/github.com/moby/buildkit/util/flightcontrol/flightcontrol.go:121 +0x64
sync.(*Once).doSlow(0x0?, 0x4001328240?)
        /usr/local/go/src/sync/once.go:74 +0x100
sync.(*Once).Do(0x4001328240?, 0x0?)
        /usr/local/go/src/sync/once.go:65 +0x24
created by github.com/docker/docker/vendor/github.com/moby/buildkit/util/flightcontrol.(*call).wait

After such a crash, the daemon will restart if it's configured to do so (for example, via systemd). The only available mitigation in this release is to avoid performing builds with the inline cache feature enabled.
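
If the daemon does crash, you can confirm that it was restarted and locate the panic in its logs (this assumes a systemd-managed installation; the unit name may differ on your system):

$ sudo systemctl status docker
$ sudo journalctl -u docker.service | grep -A 2 panic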

BuildKit with warm cache (tracking issue)

If an image was built with BuildKit on a previous version of the daemon and is then built with a 23.0 daemon, previously cached layers will not be restored correctly. The image may appear to build correctly if no lines in the Dockerfile are changed; however, if changing some lines causes partial cache invalidation, the still-valid, previously cached layers will not be loaded correctly.

This most often presents as files that should be present in the image being missing from a RUN stage, or any other stage that references files, after changing some lines in the Dockerfile:

[+] Building 0.4s (6/6) FINISHED
 => [internal] load build definition from Dockerfile
 => => transferring dockerfile: 102B
 => [internal] load .dockerignore
 => => transferring context: 2B
 => [internal] load metadata for docker.io/library/node:18-alpine
 => [base 1/2] FROM docker.io/library/node:18-alpine@sha256:bc329c7332cffc30c2d4801e38df03cbfa8dcbae2a7a52a449db104794f168a3
 => CACHED [base 2/2] WORKDIR /app
 => ERROR [stage-1 1/1] RUN uname -a
------
 > [stage-1 1/1] RUN uname -a:
#0 0.138 runc run failed: unable to start container process: exec: "/bin/sh": stat /bin/sh: no such file or directory
------
Dockerfile:5
--------------------
   3 |
   4 |     FROM base
   5 | >>> RUN uname -a
   6 |
--------------------
ERROR: failed to solve: process "/bin/sh -c uname -a" did not complete successfully: exit code: 1

To mitigate this, the previous build cache must be discarded. Running docker builder prune -a completely empties the build cache and allows the affected builds to proceed again by removing the mishandled cache layers.
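
For example (the --force flag skips the confirmation prompt; omit it to review the warning before the cache is deleted):

$ docker builder prune --all --force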

ipvlan networks (tracking issue)

When upgrading to the 23.0 branch, the existence of any ipvlan networks will prevent the daemon from starting:

panic: interface conversion: interface {} is nil, not string

goroutine 1 [running]:
github.com/docker/docker/libnetwork/drivers/ipvlan.(*configuration).UnmarshalJSON(0x40011533b0, {0x400069c2d0, 0xef, 0xef})
        /go/src/github.com/docker/docker/libnetwork/drivers/ipvlan/ipvlan_store.go:196 +0x414
encoding/json.(*decodeState).object(0x4001153440, {0x5597157640?, 0x40011533b0?, 0x559524115c?})
        /usr/local/go/src/encoding/json/decode.go:613 +0x650
encoding/json.(*decodeState).value(0x4001153440, {0x5597157640?, 0x40011533b0?, 0x559524005c?})
        /usr/local/go/src/encoding/json/decode.go:374 +0x40
encoding/json.(*decodeState).unmarshal(0x4001153440, {0x5597157640?, 0x40011533b0?})
        /usr/local/go/src/encoding/json/decode.go:181 +0x204
encoding/json.Unmarshal({0x400069c2d0, 0xef, 0xef}, {0x5597157640, 0x40011533b0})
        /usr/local/go/src/encoding/json/decode.go:108 +0xf4
github.com/docker/docker/libnetwork/drivers/ipvlan.(*configuration).SetValue(0x4000d18050?, {0x400069c2d0?, 0x23?, 0x23?})
        /go/src/github.com/docker/docker/libnetwork/drivers/ipvlan/ipvlan_store.go:230 +0x38

To mitigate this, affected users can downgrade, remove the network, and then upgrade again. Alternatively, the entire network store can be removed, and networks can be recreated after the upgrade. The network store is located at /var/lib/docker/network/files/local-kv.db. If the daemon is using an alternate --data-root, substitute the alternate path for /var/lib/docker.
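
A sketch of the second approach, assuming a systemd-managed installation and the default data root (adjust the path if the daemon uses --data-root):

# Stop the daemon before modifying its on-disk state.
$ sudo systemctl stop docker
# Remove the network store; local networks must be recreated after the upgrade.
$ sudo rm /var/lib/docker/network/files/local-kv.db
$ sudo systemctl start docker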

Kata Containers (tracking issue)

The 23.0 branch brings support for alternate containerd shims, such as io.containerd.runsc.v1 (gVisor) and io.containerd.kata.v2 (Kata Containers).
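
For example, an installed shim can now be referenced by name directly from the CLI (an illustrative invocation; it assumes the gVisor shim binary is installed on the host):

$ docker run --rm --runtime io.containerd.runsc.v1 alpine uname -a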

When using the Kata Containers runtime, exiting an exec session stops the running container and, if a TTY was allocated, hangs the connected CLI. There is no mitigation at this time other than avoiding docker exec with containers running on the Kata runtime.

The root cause of this issue is a long-standing bug in Moby. This will be resolved in a future release. Be advised that support for alternate OCI runtimes is a new feature and that similar issues may be discovered as more users start exercising this functionality.