re-fixing DevicePlugin upgrade from v1.1 to v2 #968
Conversation
Unlike upstream, in the downstream version the ModuleLoader daemonset differs from the DevicePlugin daemonset by the presence of the kernel version label, not by the role label. This PR changes how DevicePlugins are identified when gathering all device plugin daemonsets: it checks for the absence of the kernel version label.
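To make the distinction concrete, here is a minimal sketch of the downstream labeling this PR relies on. The label key and kernel version value below are placeholders, not the real `constants.KernelLabel` value:

```go
package main

import "fmt"

// Stand-in for constants.KernelLabel; the real key lives in the KMM constants package.
const kernelLabel = "kmm.example.com/kernel-version"

// Downstream, the ModuleLoader daemonset is the only one carrying the
// kernel-version label, so a DevicePlugin daemonset is identified by its absence.
func isDevicePlugin(labels map[string]string) bool {
	_, hasKernelLabel := labels[kernelLabel]
	return !hasKernelLabel
}

func main() {
	moduleLoaderLabels := map[string]string{kernelLabel: "5.14.0-example"} // hypothetical value
	devicePluginLabels := map[string]string{}

	fmt.Println(isDevicePlugin(moduleLoaderLabels)) // false
	fmt.Println(isDevicePlugin(devicePluginLabels)) // true
}
```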
✅ Deploy Preview for openshift-kmm ready!
/assign @qbarrand
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: yevgeny-shnaidman
```diff
@@ -176,7 +176,7 @@ func (dprh *devicePluginReconcilerHelper) getModuleDevicePluginDaemonSets(ctx co
 	devicePluginsList := make([]appsv1.DaemonSet, 0, len(dsList.Items))
 	// remove the older version module loader daemonsets
 	for _, ds := range dsList.Items {
-		if ds.GetLabels()[constants.DaemonSetRole] != constants.ModuleLoaderRoleLabelValue {
+		if _, ok := ds.GetLabels()[constants.KernelLabel]; !ok {
```
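For context, a self-contained sketch of the fixed filtering loop, using the Kubernetes apps/v1 types. The label key is again a stand-in for `constants.KernelLabel`, and the object names and kernel version are hypothetical:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const kernelLabel = "kmm.example.com/kernel-version" // stand-in for constants.KernelLabel

// filterDevicePlugins mirrors the fixed loop above: old-version ModuleLoader
// daemonsets still carry the kernel-version label, so only daemonsets where
// that label is absent are kept as DevicePlugin daemonsets.
func filterDevicePlugins(items []appsv1.DaemonSet) []appsv1.DaemonSet {
	devicePlugins := make([]appsv1.DaemonSet, 0, len(items))
	for _, ds := range items {
		if _, ok := ds.GetLabels()[kernelLabel]; !ok {
			devicePlugins = append(devicePlugins, ds)
		}
	}
	return devicePlugins
}

func main() {
	moduleLoader := appsv1.DaemonSet{ObjectMeta: metav1.ObjectMeta{
		Name:   "mod-loader",
		Labels: map[string]string{kernelLabel: "5.14.0-example"},
	}}
	devicePlugin := appsv1.DaemonSet{ObjectMeta: metav1.ObjectMeta{
		Name: "mod-device-plugin",
	}}

	for _, ds := range filterDevicePlugins([]appsv1.DaemonSet{moduleLoader, devicePlugin}) {
		fmt.Println(ds.Name) // prints only "mod-device-plugin"
	}
}
```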
Why do we have that diff in the labels of the DS between u/s and d/s?
Because we released upstream before we tested downstream. That is why we now wait for downstream testing to complete before releasing upstream.
BTW this means that the 1.0 --> 1.1 upgrade was probably broken upstream.
We never tested the upgrade upstream, and I think most upstream projects never really test it.
/lgtm
Merged commit 29d995b into rh-ecosystem-edge:main
/cherry-pick release-2.0
@qbarrand: new pull request created: #969