
prebuilds: Support for force pull of the latest image manifests on manually-triggered prebuilds and webhook prebuilds #7149

Open
ajhalili2006 opened this issue Dec 9, 2021 · 25 comments
Labels
feature: prebuilds meta: never-stale This issue can never become stale team: workspace Issue belongs to the Workspace team

Comments

@ajhalili2006

About this Issue

On projects using custom workspace images, whether through the image key in the configuration file or through a configured custom workspace Dockerfile, when Gitpod first encounters a workspace image that is not in its local registry proxy (assuming images are referenced by tag), it pulls the image from whichever registry hosts it and then caches it aggressively.

In my case, I maintain a fork of gitpod-io/workspace-images and use Red Hat Quay Container Registry's built-in image builder (instead of Dazzle in GitLab CI, where I have been running ShellCheck + Hadolint checks for a while) to build all the images in the quay.io/gitpodified-workspace-images/* namespace, which I then use in my own projects. The problem is that whenever the image is rebuilt, Gitpod still uses the cached version of the workspace image (possibly to save bandwidth and to avoid rate limits on unauthenticated pulls, as on Docker Hub but maybe not on other registries), and things descend into chaotic errors.

(screenshot)

Suggestion

  • Add a dropdown menu beside Run Prebuild with a boolean option called Pull latest manifest; when ticked, it pulls the latest image manifest before building.
  • Add support for a pullLatestManifest=true URL parameter on both manual prebuild URLs and webhook endpoints (e.g. https://gitpod.io/#prebuild/https://gitlab.com/gitpodify/gitpodified-workspace-images?pullLatestManifest=true and https://gitpod.io/apps/gitlab/?pullLatestManifest=true).

Workarounds

As the Gitpod team does, I can change the image key every time and wait for prebuilds to finish.

# We can pin the exact manifest digest if we want, somewhat like npm/yarn lockfiles do, minus any Dockerfile-specific
# lockfile ;(
image: quay.io/gitpodified-workspace-images/full@sha256:14abc95e25cfbef35eda9fa1272ed39a1c2177404fd41f9b50c01e45ff5bf854

# For images in the gitpodified-workspace-images RHQCR namespace, including the future recaptime-dev-environment image,
# we can also use per-commit tags in the form build-<short SHA>
image: quay.io/gitpodified-workspace-images/vnc:build-c0a089c
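
In CI, a build-<short SHA> tag like that can be derived from the commit SHA. A minimal sketch (the $CI_COMMIT_SHA variable assumes GitLab CI, and the full SHA here is made up for illustration):

```shell
#!/bin/sh
# Derive a per-commit image tag of the form build-<short SHA>.
# In GitLab CI the full SHA would come from "$CI_COMMIT_SHA"; hardcoded here for illustration.
full_sha="c0a089c1d2e3f4a5b6c7d8e9f0a1b2c3d4e5f6a7"
short_sha="$(printf '%.7s' "$full_sha")"  # first 7 characters of the SHA
echo "quay.io/gitpodified-workspace-images/vnc:build-${short_sha}"
# prints: quay.io/gitpodified-workspace-images/vnc:build-c0a089c
```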

Currently there are no branch-prefixed tags, because I only ticked the Tag manifest with the branch or tag name box and did not add an additional tag template of the form branch-${parsed_ref.branch}, as reproduced below.

(screenshot)

@jldec jldec added team: webapp Issue belongs to the WebApp team feature: prebuilds labels Dec 10, 2021
@jldec
Contributor

jldec commented Dec 10, 2021

Thanks @ajhalili2006 -- cache invalidation is hard :)

I wonder if using something similar to the Kubernetes image pull policy would help
From https://kubernetes.io/docs/concepts/containers/images/#imagepullpolicy-defaulting

When you (or a controller) submit a new Pod to the API server, your cluster sets the imagePullPolicy field when specific conditions are met:

  • if you omit the imagePullPolicy field, and the tag for the container image is :latest, imagePullPolicy is automatically set to Always;
  • if you omit the imagePullPolicy field, and you don't specify the tag for the container image, imagePullPolicy is automatically set to Always;
  • if you omit the imagePullPolicy field, and you specify the tag for the container image that isn't :latest, the imagePullPolicy is automatically set to IfNotPresent.
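
The defaulting rules above can be sketched as a small shell function (a rough approximation; it ignores edge cases such as registry hostnames with ports or digest-pinned references):

```shell
#!/bin/sh
# Approximate the Kubernetes imagePullPolicy defaulting rules for an image reference.
default_pull_policy() {
  case "$1" in
    *:latest) echo "Always" ;;        # explicit :latest tag
    *:*)      echo "IfNotPresent" ;;  # explicit non-latest tag
    *)        echo "Always" ;;        # no tag at all, which defaults to :latest
  esac
}

default_pull_policy "ghcr.io/openms/contrib:latest"  # prints: Always
default_pull_policy "ubuntu:22.04"                   # prints: IfNotPresent
default_pull_policy "ubuntu"                         # prints: Always
```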

@jldec
Contributor

jldec commented Dec 30, 2021

@ajhalili2006 could you validate you're still seeing over-aggressive caching of tagged images?
Please provide an example with a .gitpod.yml and your image: specification.
thanks

@jldec jldec removed this from 🍎 WebApp Team Dec 30, 2021
@ajhalili2006
Author

@ajhalili2006 could you validate you're still seeing over-aggressive caching of tagged images? Please provide an example with a .gitpod.yml and your image: specification. thanks

Sorry for the late reply, but yes, it's still cached when I'm using the latest tag.

Speaking of the image spec, I made some changes to the Dockerfile to help debug the situation, using build arguments1 that expose some useful information through either docker inspect or env | grep <prefix> inside the container.

(screenshot)
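
For illustration, such debug build arguments might look roughly like this in a Dockerfile (the argument names here are invented for this sketch, not taken from the actual Dockerfile):

```dockerfile
# Illustrative sketch: bake build metadata into the image so it can later be
# checked with `docker inspect` or, inside the container, `env | grep DEBUG_`.
ARG DEBUG_BUILD_COMMIT=unknown
ARG DEBUG_BUILD_DATE=unknown
ENV DEBUG_BUILD_COMMIT=${DEBUG_BUILD_COMMIT} \
    DEBUG_BUILD_DATE=${DEBUG_BUILD_DATE}
```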

And on the Gitpod config: https://gitlab.com/gitpodify/gitpodified-workspace-images/-/blob/7adc206c76e82ea228a6e6e9d651d665e054dc56/.gitpod.yml

Footnotes

  1. Currently I use a plain docker build here, but if I use Dazzle, then I need to write a script to handle these, as per https://github.com/gitpod-io/dazzle/issues/37.

@jldec jldec added team: workspace Issue belongs to the Workspace team and removed team: webapp Issue belongs to the WebApp team feature: prebuilds labels Jan 20, 2022
@jldec
Contributor

jldec commented Jan 20, 2022

Passing to workspace team for tracking with work on new workspace image builds and replacement of the latest tag.

@stale

stale bot commented Apr 21, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the meta: stale This issue/PR is stale and will be closed soon label Apr 21, 2022
@stale stale bot closed this as completed May 1, 2022
@jkaye2012

Hey all,

I don't think this issue should be closed. latest images are still being cached incorrectly, and this is not behavior that anyone would expect using the default imagePullPolicy.

@gtsiolis
Contributor

gtsiolis commented Jun 8, 2022

Hey @jkaye2012! Let me reopen this and loop in some fellow team members from the corresponding team in case this is something worth investigating, triaging, or updating. Cc @kylos101 @atduarte

@gtsiolis gtsiolis reopened this Jun 8, 2022
@stale stale bot removed the meta: stale This issue/PR is stale and will be closed soon label Jun 8, 2022
@jkaye2012

Thank you. This has bitten us a few times over the past few months. Changing our base image isn't a very frequent operation, but whenever we do change it, we end up in a situation where our pods fail in sporadic ways for multiple days, as we cannot rely on the new version being reliably used for new pods.

@kylos101
Contributor

kylos101 commented Jun 8, 2022

Hey @gtsiolis thank you for reopening! To follow our groundwork process, I also added this issue to our inbox.

@jkaye2012

Hey all, any progress on this? Currently we have an image that has been cached for 9 days now. We have not been able to find any way to get around this without changing the tag (which is not something that we want to do as it would require multiple commits any time that our base image is updated).

@xangelix

Same issue here; even a button to manually clear the cache at an account level would be greatly appreciated.

@stale

stale bot commented Oct 12, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@jkaye2012

jkaye2012 commented Oct 12, 2022 via email

@davidwindell
Contributor

Just spent a good few hours trying to figure out why (even with incremental prebuilds turned off) the latest version of our image wasn't being used.

This could really do with improvement, and not just for latest: we use branch-style tags like php-8.1, the contents of which change over time without a new image tag.

@stale

stale bot commented Jan 21, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the meta: stale This issue/PR is stale and will be closed soon label Jan 21, 2023
@davidwindell
Contributor

This issue shouldn't be considered stale.

@stale stale bot removed the meta: stale This issue/PR is stale and will be closed soon label Jan 24, 2023
@gtsiolis gtsiolis added the meta: never-stale This issue can never become stale label Jan 24, 2023
@jkaye2012

Is there any update on this? It's been over a year now that Gitpod is not adhering to very basic image caching conventions: latest should never be cached. Every time we make an image change, our developers hit random failures until the faulty cache is flushed.

@axonasif
Member

axonasif commented Feb 23, 2023

👋 @jkaye2012 sorry about that. For now I'd suggest:

  • Pin the image version (i.e. tag) and update the version in your Dockerfile or .gitpod.yml as necessary.

or
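
Pinning a version in .gitpod.yml would look roughly like this (a sketch; the registry and tag below are made up for illustration):

```yaml
# .gitpod.yml -- pin an immutable tag instead of a mutable one like :latest,
# and bump it whenever the base image changes.
image: registry.example.com/my-team/dev-image:2023-02-23
```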

@jpfeuffer

Forced rebuilds do not help! They still use the cached version of the base image.

And the image we are using does not have tags other than latest.

So what now?
This is a huge issue!

@axonasif
Member

Hi @jpfeuffer, which base image are you using?

@jpfeuffer

My own: ghcr.io/openms/contrib:latest

@axonasif
Member

axonasif commented Aug 28, 2023

@jpfeuffer cool. Can you please also share the contents of your .gitpod.yml and .gitpod.Dockerfile (if it exists)? Or even a link to your public repo would work.

@axonasif
Member

axonasif commented Aug 28, 2023

@jpfeuffer thanks for sharing your image address.

You could use the sha256 digest of your image instead; I copied it from https://github.com/openms/contrib/pkgs/container/contrib


When using it directly from .gitpod.yml:

image: ghcr.io/openms/contrib@sha256:ab301bf0858923b5c14349b38e5796bf341a838141eea077048a1df3fcc935be

When using a custom dockerfile

.gitpod.yml:

image:
  file: .gitpod.Dockerfile

.gitpod.Dockerfile:

FROM ghcr.io/openms/contrib@sha256:ab301bf0858923b5c14349b38e5796bf341a838141eea077048a1df3fcc935be

# Do more stuff ....

Tip: Run the gp validate command to quickly test.

@jkaye2012

jkaye2012 commented Aug 28, 2023 via email

@jpfeuffer

@axonasif I see. Yes, I might use that hack, but ideally I don't want to change the hash manually every time my base image is updated. I'm okay with pressing a button on your web interface if you need to save bandwidth.
However, Gitpod hadn't updated this image for a very long time, so I'm wondering whether Gitpod ever pulls new versions without manual intervention.
