
distribution: Set default compression to zstd/fastest #48328

Open · wants to merge 1 commit into master
Conversation


@roolebo commented Aug 14, 2024

gzip compression is notoriously slow: on a Skylake CPU it compresses at about 7 MiB/s, a push rate that is not sustainable for large multi-gigabyte images.

zstd at its fastest level runs at 81 MiB/s for the same image. That gives a tenfold push speedup with default flags.

Resolves: #1266
See also: #48106
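A minimal sketch of the throughput comparison, assuming the github.com/klauspost/compress/zstd package (the Go zstd implementation in the moby vendor tree); absolute numbers depend heavily on CPU and input data:

```go
package main

import (
	"compress/gzip"
	"fmt"
	"io"
	"time"

	"github.com/klauspost/compress/zstd"
)

// timeCompress streams data through a compressor into io.Discard
// and reports the observed throughput in MiB/s.
func timeCompress(name string, data []byte, wrap func(io.Writer) (io.WriteCloser, error)) {
	w, err := wrap(io.Discard)
	if err != nil {
		panic(err)
	}
	start := time.Now()
	if _, err := w.Write(data); err != nil {
		panic(err)
	}
	w.Close()
	mibs := float64(len(data)) / (1 << 20) / time.Since(start).Seconds()
	fmt.Printf("%-16s %7.1f MiB/s\n", name, mibs)
}

func main() {
	// Synthetic payload; real image layers compress more slowly than zeroes.
	data := make([]byte, 256<<20)

	timeCompress("gzip (default)", data, func(w io.Writer) (io.WriteCloser, error) {
		return gzip.NewWriter(w), nil
	})
	timeCompress("zstd (fastest)", data, func(w io.Writer) (io.WriteCloser, error) {
		return zstd.NewWriter(w, zstd.WithEncoderLevel(zstd.SpeedFastest))
	})
}
```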

- How to verify it: use docker push with the built-in Docker BuildKit driver.
- Description for the changelog:

distribution: change default compression of docker push to zstd/fastest


Signed-off-by: Roman Bolshakov <[email protected]>
@vvoland (Contributor) left a comment

zstd support is not as universal as gzip. That definitely can't be the default.

If you're interested in building and pushing zstd images, you can do so natively in Docker with the containerd image store integration enabled, see: https://docs.docker.com/engine/storage/containerd/.
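For reference, the linked docs enable the integration with the following snippet in /etc/docker/daemon.json, followed by a daemon restart:

```json
{
  "features": {
    "containerd-snapshotter": true
  }
}
```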

@roolebo (Author) commented Aug 14, 2024

@vvoland what do you think about specifying the default compression method and compression level via /etc/docker/daemon.json?
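Something along these lines (the key names are hypothetical; no such options exist in the daemon config today):

```json
{
  "push-compression": "zstd",
  "push-compression-level": "fastest"
}
```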

I wanted to try the suggested approach but I don't see any migration path for the running containers:

Switching to containerd snapshotters causes you to temporarily lose images and containers created using the classic storage drivers. Those resources still exist on your filesystem, and you can retrieve them by turning off the containerd snapshotters feature.

To test this properly there should be a migration path, even a disruptive one for a start. Do you know how I can migrate my containers and images to containerd to test the feature?

With regards to universality: zstd was open-sourced in 2016 and has been adopted pretty much everywhere, even if adoption is still in progress. The fix works well in our production environment without any changes to tooling, and I haven't seen an issue with registry:2 and docker-ce stable.

@vvoland (Contributor) commented Aug 14, 2024

Personally speaking (I can't speak for the other maintainers), it would be fine to have an optional setting (possibly a feature flag?) for enabling this.

However, please do note that:

  • compression on push is an implementation detail of the graphdrivers image store, because that store keeps layers uncompressed
  • with the containerd integration, content is pushed as-is, so the image's compression can be specified at build time (see the buildx sketch below)
  • the graphdrivers image store doesn't receive much development nowadays, as the long-term plan is to move to the containerd image store
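A sketch of such a build-time choice using buildx (image name illustrative; the compression options are BuildKit image-exporter parameters):

```sh
# Build and push an image whose layers are zstd-compressed at a fast level.
# force-compression also recompresses base layers that were originally gzip.
docker buildx build \
  --output type=image,name=registry.example.com/app:latest,push=true,compression=zstd,compression-level=3,force-compression=true \
  .
```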

To test this properly there should be a migration path, even a disruptive one for a start. Do you know how I can migrate my containers and images to containerd to test the feature?

There's no official migration process; the advice is to just re-pull/rebuild all the images in the new store. Technically you can transfer your images by docker save-ing them with graphdrivers and then docker load-ing them with containerd (sketched below), but this has some gotchas because the data models differ:

  • the image IDs won't be preserved
  • docker pull ubuntu; docker save ubuntu >a.tar + docker load -i a.tar with the containerd store will not produce the exact same image as docker pull ubuntu with the containerd store.
  • ☝🏻 the same applies to docker build
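A sketch of that transfer (image name illustrative); note the caveats above about image IDs and reproducibility:

```sh
# With the graphdrivers store active:
docker save my-image:latest -o my-image.tar

# Enable the containerd image store in /etc/docker/daemon.json,
# restart the daemon, then:
docker load -i my-image.tar
```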

While we might consider having some kind of a "best-effort" image migration, I don't think it will be feasible to migrate running containers.

In general, it's advised to treat your containers with the "cattle not pets" rule in mind.

@vvoland (Contributor) commented Aug 14, 2024

With regards to universality: zstd was open-sourced in 2016 and has been adopted pretty much everywhere.

While zstd itself is quite widespread now, its support is not a MUST in the OCI spec:

  • mediaType string

    This descriptor property has additional restrictions for layers[]. Implementations MUST support at least the following media types:

      • application/vnd.oci.image.layer.v1.tar
      • application/vnd.oci.image.layer.v1.tar+gzip
      • application/vnd.oci.image.layer.nondistributable.v1.tar
      • application/vnd.oci.image.layer.nondistributable.v1.tar+gzip

    Manifests concerned with portability SHOULD use one of the above media types. Implementations storing or copying image manifests MUST NOT error on encountering a mediaType that is unknown to the implementation.

https://github.com/opencontainers/image-spec/blob/main/manifest.md#image-manifest-property-descriptions
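For context, a zstd layer appears in a manifest as a descriptor with the +zstd media type, which is exactly what older runtimes don't recognize (digest and size illustrative):

```json
{
  "mediaType": "application/vnd.oci.image.layer.v1.tar+zstd",
  "digest": "sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
  "size": 32654
}
```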

@tianon (Member) commented Oct 30, 2024

FWIW I did a bit of research into widely deployed versions of popular runtimes and their zstd support in docker-library/official-images#17720. It's a little grim, especially around Debian Stable/Oldstable (Bookworm/Bullseye), which are both on Docker 20.10 and thus have no zstd support. So IMO we can't reasonably change the default just yet, but making it possible to use zstd compression via configuration seems really sane.

@roolebo (Author) commented Nov 6, 2024

@vvoland agreed, configurable works for us.

@tianon Thank you for continuing the topic and doing the research.
