
Refactor upgrade-crd job to not copy binaries #571

Open

mjnagel opened this issue Apr 29, 2024 · 4 comments

Comments

@mjnagel

mjnagel commented Apr 29, 2024

Describe the problem/challenge you have

When using the velero helm chart I have strict requirements to deploy with specific images in my environment. Some of these images are slimmed down/hardened and don't match the exact way this chart is set up. Specifically, I am encountering an issue with the upgrade-crd job when it copies sh and kubectl into a shared volume: the copied binaries fail due to missing glibc (one of the images is Alpine-based).

While this is very much an issue unique to my environment (and I am working through changes to the image to make this work), it did make me question the way the shell and kubectl binaries are being copied around, given architecture and other concerns with copying binaries between images.

Describe the solution you'd like

If the upgrade-crd job were re-architected/re-ordered:

  • velero could run in an init container to output the CRD yaml to a shared volume
  • kubectl could run in the main container, mounting the shared volume and applying the manifests

This would effectively change what is copied through the volume: rather than binaries, it would just be the rendered CRD manifests. This seems like it would reduce some of the complexity and strangeness in how this job runs; see the sketch below.
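For illustration, a minimal sketch of what the re-ordered Job could look like. The Job/volume names and image tags are illustrative, not the chart's actual values; the generate step reuses the `velero install --crds-only --dry-run -o yaml` invocation the current job already runs, and assumes the velero image ships a shell for the redirect:

```yaml
# Hypothetical sketch only -- names and image tags are illustrative.
apiVersion: batch/v1
kind: Job
metadata:
  name: velero-upgrade-crds
spec:
  template:
    spec:
      restartPolicy: OnFailure
      initContainers:
        # The velero binary runs inside its own image, against its own libc.
        # Assumes the velero image provides /bin/sh for the redirect.
        - name: generate-crds
          image: velero/velero:v1.13.2
          command: ["/bin/sh", "-c"]
          args:
            - /velero install --crds-only --dry-run -o yaml > /crds/crds.yaml
          volumeMounts:
            - name: crds
              mountPath: /crds
      containers:
        # Only the rendered manifests cross the shared volume, not binaries.
        - name: apply-crds
          image: bitnami/kubectl:1.29
          command: ["kubectl", "apply", "-f", "/crds/crds.yaml"]
          volumeMounts:
            - name: crds
              mountPath: /crds
      volumes:
        - name: crds
          emptyDir: {}
```

(The serviceAccountName and RBAC needed to apply CRDs are omitted here, but the job would still need them, as it does today.)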

Anything else you would like to add:

I could also see this being done "one layer up" if the velero CLI had an option to force apply/upgrade when running velero install --crds-only. This may actually be the best option, as it would drop the need for the shared volume/extra container entirely and allow the velero image to stay slim with no shell/kubectl. It appears that the current behavior is tailored to installs only and skips CRDs if they already exist?
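To make that CLI-only variant concrete: if velero install gained an upgrade-capable mode, the job could shrink to a single container with no shared volume at all. The `--force` flag below is invented purely for illustration; no such flag exists today:

```yaml
# Hypothetical: "--force" is an invented placeholder for a yet-to-exist
# "apply/upgrade CRDs even if they already exist" option.
containers:
  - name: upgrade-crds
    image: velero/velero:v1.13.2
    args: ["install", "--crds-only", "--force"]
```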

@jenting
Collaborator

jenting commented May 11, 2024

@qiuming-best Perhaps we could consider separating the velero helm chart into two charts: one with CRDs only, the other with the rest of the Velero Kubernetes resources. WDYT?

@Feliksas

Feliksas commented Jun 10, 2024

To be honest, I am simply astonished that someone thought it was a great idea to copy binaries between containers in such a way. Are you serious? This would never have worked, for the very obvious reason that the target image is missing the dependencies needed to run a foreign, dynamically linked binary. Did anyone even test this chart before publishing? Looks like not.

@mjnagel 's suggestion seems very reasonable - to keep the velero image slim, the workflow needs to be reversed: first, generate the CRD manifests and save them to a shared location from an init container, and then apply them with kubectl from the main job container. Alternatively, CRDs may be just supplied with the Helm chart itself, or the velero binary can be made to interact with the Kubernetes API directly to apply the necessary resources.

@jenting creating a separate chart just for CRDs is counterproductive, since it complicates the deployment process by introducing a strict deploy order (which would break automatic deployment flows with tools like ArgoCD), and there are better alternatives available, which I've described above.

@jenting
Collaborator

jenting commented Jun 10, 2024

Please check this #450 (comment)

Perhaps we could revisit it to see whether it is still valid.

@mjnagel
Author

mjnagel commented Jun 10, 2024

@jenting it does look like you could use the --log_file flag to output the yaml to a file, although I'm unsure whether that would be risky (I'm not familiar with what else may be written to that log).

> Alternatively, CRDs may be just supplied with the Helm chart itself, or the velero binary can be made to interact with the Kubernetes API directly to apply the necessary resources.

@Feliksas moving CRDs into the chart itself would change behavior around upgrades (since Helm ignores any CRD changes on upgrade), so I don't think that is the ideal route. The mention of velero CLI changes is a good one - I think that's a viable alternative path, as mentioned in my issue. velero install --crds-only currently works, but has the same problem as Helm - it would not touch anything on upgrades. Adding/extending that to work on upgrades would be a great option and would clean up a lot of this.
