Upgrading to Flagger built on main (commit id #133fdec) causes canary rollouts when no change in deployments #1673
Comments
hey, this is probably because of #1638. The upgrade to the k8s 1.30 libs changes how sidecar containers are handled in the pod spec, which in turn changes the hash Flagger computes for the target workload. cc: @stefanprodan
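To illustrate the mechanism, here is a minimal sketch (not Flagger's actual code): a controller can detect a "new revision" by hashing the serialized target spec, so if upgraded client libraries start emitting or defaulting new fields, the serialized form changes and so does the hash, even though nothing changed in the cluster.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

// specHash derives a short, deterministic hash from a deployment spec.
// The hash covers the spec as serialized by the client libraries, so a
// library upgrade that adds or defaults new fields (for example the
// sidecar-related changes in the k8s 1.30 libs) yields a different hash
// for an otherwise unchanged workload.
func specHash(spec appsv1.DeploymentSpec) (string, error) {
	b, err := json.Marshal(spec)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:8]), nil
}

func main() {
	h, err := specHash(appsv1.DeploymentSpec{})
	if err != nil {
		panic(err)
	}
	fmt.Println(h)
}
```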
Hey @aryan9600, thank you for the follow-up. We have hundreds of workloads using Flagger, so when we upgrade, this would trigger all of the canaries, which is not ideal. Is there any way Flagger's hash calculation could avoid this, so that a dependency upgrade doesn't trigger a false rollout?
i can't think of any clean way to avoid this.
You could pause all canaries before upgrading.
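As a rough sketch of what that could look like, assuming the pause mechanism referred to here is the Canary `spec.suspend` field, every Canary in the cluster could be suspended before the upgrade:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	canaries := schema.GroupVersionResource{Group: "flagger.app", Version: "v1beta1", Resource: "canaries"}
	list, err := client.Resource(canaries).Namespace(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// Patch every Canary with spec.suspend=true before upgrading Flagger,
	// so the new controller does not start an analysis on the hash change.
	patch := []byte(`{"spec":{"suspend":true}}`)
	for _, c := range list.Items {
		_, err := client.Resource(canaries).Namespace(c.GetNamespace()).
			Patch(context.Background(), c.GetName(), types.MergePatchType, patch, metav1.PatchOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("suspended %s/%s\n", c.GetNamespace(), c.GetName())
	}
}
```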
Is there nothing that can be done about this issue? I've got the exact same problem with hundreds of canaries that can't all be started at once, as they would burst the rate limit of an external metrics provider used for the analysis. I've tried various settings without success; the only option we have at the moment is to release the canaries in batches following an upgrade, which would be a very time-consuming process to repeat regularly, as we tend to take the very latest release when it becomes available.
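Building on the same assumption about `spec.suspend`, a hypothetical batch-resume helper could un-suspend canaries in small groups with a pause between groups, so the resulting analyses don't all hit the external metrics provider at once (the batch size and gap below are made-up values to tune):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

const (
	batchSize = 10               // hypothetical: size it to the metrics provider's rate limit
	batchGap  = 10 * time.Minute // hypothetical: roughly one analysis window apart
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	canaries := schema.GroupVersionResource{Group: "flagger.app", Version: "v1beta1", Resource: "canaries"}
	list, err := client.Resource(canaries).Namespace(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// Resume canaries a batch at a time, sleeping between batches so the
	// triggered analyses are spread out over time.
	patch := []byte(`{"spec":{"suspend":false}}`)
	for i, c := range list.Items {
		if i > 0 && i%batchSize == 0 {
			fmt.Printf("batch done, sleeping %s\n", batchGap)
			time.Sleep(batchGap)
		}
		_, err := client.Resource(canaries).Namespace(c.GetNamespace()).
			Patch(context.Background(), c.GetName(), types.MergePatchType, patch, metav1.PatchOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("resumed %s/%s\n", c.GetNamespace(), c.GetName())
	}
}
```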
I'm not sure if there are other obvious reasons, but we've experienced this through plenty of other upgrades in the past. Maybe they also contained library updates similar to the 1.30 one, but I wanted to provide that data point in case it is helpful. Is there anything that could be done on the controller side to achieve something similar to #1673 (comment)?
Describe the bug
I've been working on an MR for issue #1646 and ran into the following bug when testing Flagger in my personal Kubernetes cluster. I've also reached out in the Slack channel here.
When building my own Docker image from main (commit id #133fdec), I see canary rollouts triggered even though there were no changes to my canary deployment spec. As soon as Flagger was upgraded to this image, the canaries detected a new revision and began the analysis.
To confirm, I also compared the deployment spec 1:1 and nothing changed. This should mean the calculated hash is the same, but for some reason the lastAppliedSpec hash in the canary was different. As a sanity check, I also built a custom image from the last tag, v1.37, and confirmed that the canary analysis is not triggered after upgrading to it and that the hash remains the same.
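That comparison can be scripted. The minimal sketch below lists every Canary's status.lastAppliedSpec, so running it before and after a Flagger upgrade and diffing the output shows which workloads would be treated as new revisions:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	canaries := schema.GroupVersionResource{Group: "flagger.app", Version: "v1beta1", Resource: "canaries"}
	list, err := client.Resource(canaries).Namespace(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// Print the stored hash for every Canary; capture the output before and
	// after the upgrade and diff the two runs.
	for _, c := range list.Items {
		hash, _, _ := unstructured.NestedString(c.Object, "status", "lastAppliedSpec")
		fmt.Printf("%s/%s\t%s\n", c.GetNamespace(), c.GetName(), hash)
	}
}
```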
To Reproduce
Expected behavior
It is expected that upgrading Flagger does not cause canary rollouts to be triggered if nothing changes in the canary deployments.
Additional context