Problem Description
Omni must know the order in which it upgrades the nodes. When the node that is about to be upgraded is cordoned (marked SchedulingDisabled), its pods shift over to the other available nodes in the cluster. As you'd expect, several of those pods land on the next node to be upgraded, which means they will have to move yet again in short order. It's like being told "You can't sit here, but you can sit over here or here", only to be asked to get up again a few minutes later. Since I'm incredibly lazy, I would think: "Why would you tell me to sit here when you know you're going to ask me to move again?"
Solution
Create an option so that either all remaining nodes to be upgraded are tainted with PreferNoSchedule, or possibly just the next node to be upgraded (i.e. as soon as Worker1 is cordoned to drain its workloads, taint Worker2 with PreferNoSchedule). As each node is upgraded, its taint can be removed.
This may reduce overall pod "churn" during upgrades.
This should be available as an option in the UI, and via cluster templates.
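As a rough illustration of what the proposed behavior would do under the hood, here is a minimal client-go sketch that adds a PreferNoSchedule taint to a node queued for upgrade. The taint key omni.sidero.dev/pending-upgrade and the node name worker2 are made-up placeholders for this example; Omni would pick its own naming and would drive this from its upgrade ordering rather than a standalone program.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// addPreferNoScheduleTaint adds a PreferNoSchedule taint to the named node
// if it is not already present, so new pods prefer other nodes but can still
// land here if nothing else fits.
func addPreferNoScheduleTaint(ctx context.Context, client kubernetes.Interface, nodeName string) error {
	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}

	taint := corev1.Taint{
		Key:    "omni.sidero.dev/pending-upgrade", // hypothetical taint key for this sketch
		Effect: corev1.TaintEffectPreferNoSchedule,
	}

	for _, t := range node.Spec.Taints {
		if t.Key == taint.Key && t.Effect == taint.Effect {
			return nil // already tainted, nothing to do
		}
	}

	node.Spec.Taints = append(node.Spec.Taints, taint)
	_, err = client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}

func main() {
	// Load kubeconfig from the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Taint the next node in the upgrade order (placeholder name).
	if err := addPreferNoScheduleTaint(context.Background(), client, "worker2"); err != nil {
		panic(err)
	}

	fmt.Println("PreferNoSchedule taint applied")
}
```

Removing the taint after the node has been upgraded would be the mirror operation: fetch the node, filter the taint out of node.Spec.Taints, and update.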
Alternative Solutions
No response
Notes
No response