KEP-3619: updated Production Readiness Review Questionnaire for beta release #4895
Conversation
everpeace commented on Oct 2, 2024
- One-line PR description: updated Production Readiness Review Questionnaire for beta release
- Issue link: Fine-grained SupplementalGroups control #3619
- Other comments: None.
@haircommander @mrunalp @thockin @SergeyKanzhelev Could you kindly review this PR??
Look for an event indicating SupplementalGroupsPolicy is not supported by the runtime.

```console
$ kubectl get events -o json -w
...
{
  ...
  "kind": "Event",
  "message": "Error: SupplementalGroupsPolicyNotSupported",
  ...
}
...
```
For this, I plan to add a kubelet admission check that raises an error event if:
- the pod sets `pod.spec.securityContext.supplementalGroupsPolicy=Strict` (a non-default value), and
- the node reports `node.status.features.supplementalGroupsPolicy=false`.
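If such an admission check lands, affected pods might be spotted with a filtered event query like the sketch below; the `reason` value is an assumption derived from the message quoted above, not a confirmed constant in the implementation.

```console
# List events for pods rejected by this (planned) admission check.
# The reason string is assumed from the message shown in the quoted diff above.
$ kubectl get events --field-selector reason=SupplementalGroupsPolicyNotSupported
```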
ok, and so the pod doesn't get admitted; can there be a metric when this happens for this reason? Events are not useful if you are managing 50,000 clusters...
can there be a metric when this happens for this reason?
Hmm. If I understood correctly, there are currently no metrics for specific kubelet admission errors. This kind of admission error is a common issue for node (or runtime handler) feature-based admission (e.g. user namespaces or recursive read-only mounts).
Like KEP-127: Support User Namespaces stated, we can compare the `kubelet_running_pods` and `kubelet_desired_pods` metrics.
WDYT??
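As a rough sketch of that comparison (the node name is a placeholder and the exact label sets may differ), each node's kubelet metrics can be read through the API server proxy:

```console
# A persistent gap between desired and running pods on a node may indicate
# pods stuck in kubelet admission (for this or any other admission reason).
$ kubectl get --raw "/api/v1/nodes/<node-name>/proxy/metrics" \
    | grep -E '^kubelet_(desired|running)_pods'
```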
yeah, that looks reasonable, please pull that in here. thanks
- Make sure that `crictl info` (with the latest crictl)
  reports that `supplemental_groups_policy` is supported.
crictl needs to be updated to display the `features` field in the `crictl info` command.
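For reference, once a crictl release with that change is available, a quick node-local check might look like the following; the exact JSON layout of the output is not pinned down here, so this simply greps for the field name.

```console
# Check whether the runtime advertises supplemental_groups_policy support.
$ crictl info | grep -i supplemental_groups_policy
```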
@@ -907,6 +969,8 @@ and creating new ones, as well as about cluster-level services (e.g. DNS):
- Impact of its degraded performance or high-error rates on the feature:
-->

Specific version of CRI.
@samuelkarp any comments on this? Is this a containerd 2.0-only feature?
containerd/containerd#9737 and containerd/containerd#10410 will be in containerd 2.0.
So once it is beta and enabled by default, the API server will allow this new field. While containerd 2.0 is still in the works, what behavior will customers who set this field get on containerd 1.x?
The beta graduation criteria listed new tests to be added. Is this something you plan to work on in 1.32?
Co-authored-by: Mrunal Patel <[email protected]>
This PR proposes that if the user sets …
Please remember that the user won't get any error when …
But, as you pointed out, we may be better off waiting for containerd 2.0 to be released. Or, should we take more prudent steps: "Beta, disabled by default" (v1.32) --> "Beta, enabled by default" (after containerd v2 is out) --> "GA"? WDYT?
Actually, I already added basic e2e tests in alpha for this KEP: kubernetes/kubernetes#125470. So, I think the criteria are almost satisfied.
@@ -1056,6 +1149,7 @@ Major milestones might include:

- 2023-02-10: Initial KEP published.
- v1.31.0 (2024-08-13): Alpha
- v1.32.0: Beta (enabled by default)
I apologize for not making this link before.
If this totally depends on containerd 2.0 and 2.0 is NOT YET RELEASED, then I am wary of moving this to beta, especially being enabled-by-default. That's a tripping hazard we don't need to introduce.
- Are there any parts of this that containerd 1.x implements that could be advanced while leaving the policy in alpha? e.g. the status stuff?
- Is there an urgent need to move this to beta (for CRI-O, I assume)? If so, could it be off-by-default (which is ~identical to alpha)?
Convince me I am over-reacting?
No urgency from CRI-O side to move this to beta.
I think the planning around version skew also needs more work, so it may be a good idea to postpone it.
Don't get me wrong, I want this feature, but it seems like a bad idea to move features to beta where the majority (anecdotally) of kube users CAN'T POSSIBLY use it (because it's linked to containerd 2 and that is not yet released).
Are there any parts of this that containerd 1.x implements that could be advanced while leaving the policy in alpha? e.g. the status stuff?
@thockin @johnbelamaric @SergeyKanzhelev @mrunalp @haircommander
Thanks for the feedback.
it seems like a bad idea to move features to beta where the majority (anecdotally) of kube users CAN'T POSSIBLY use it (because it's linked to containerd 2 and that is not yet released).
I now agree with this. Let's postpone promoting this KEP to beta until containerd v2 is released and it becomes popular.
I will update beta timing in README and kep.yaml.
updated in 0c5f7ed
@@ -790,13 +790,44 @@ rollout. Similarly, consider large clusters and how enablement/disablement
will rollout across nodes.
-->

A rollout may fail when at least one of the following components is too old, because this KEP introduces a new Kubernetes API field:
First, a comment on the effect of enabling/disabling this on running Pods (I can't comment on the unchanged lines above). In the answers above, you say that the permission may change. Under what conditions? If the container restarts? Or just if a Pod is recreated?
Under what conditions? If the container restarts? Or just if a Pod is recreated?
The permissions (process identities) may change only when:
- the pod sets `SupplementalGroupsPolicy: "Strict"`, and
- it is recreated.
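An illustrative way to observe the resulting process identity (not part of the KEP text) is to inspect the groups of the container's main process before and after recreation:

```console
# Shows the effective UID/GID and supplementary groups inside the container.
$ kubectl exec <pod-name> -- id
```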
Ok, can you fix this above (lines 752, 756 are not clear on this).
| CRI runtime | `Strict` |

For example, an error will be returned like this if kube-apiserver is too old:
I think you need to revise your version skew strategy (above) and think about how each component reacts during an upgrade. You can't say "kubelet must be at least the version of control-plane components". That's not realistic. It's not possible during an upgrade, and in fact people often run for extended periods of time with older kubelets. In that case, the kubelet won't see the new field. What sort of failure does that cause? Similarly for the CRI runtime.
Similarly, with enablement, you could enable the feature gate in the control plane, then only enable it in some nodes at the kubelet level. What's the behavior in this case?
If you enable it everywhere, then you create some pods with the `Strict` policy, then you disable it, will the kubelet see the new field or not? If it sees it, have you feature gated the kubelet behavior to ignore the field?
You can't say "kubelet must be at least the version of control-plane components"
I did not say this. I think I respected the version skew policy.
It's not possible during an upgrade, and in fact people often run for extended periods of time with older kubelets. In that case, the kubelet won't see the new field. What sort of failure does that cause?
This is a common issue when adding new API fields to Pod. For this KEP, the matrix below describes what will happen:
| kubelet version | Feature Gate | CRI supports the KEP? | Pod's policy | Outcome |
|---|---|---|---|---|
| <1.31 (does not know this field) | N/A | Yes/No | `Strict` | The pod can run, but its policy is just ignored. `.containerStatuses.user` will not be reported. |
| <1.31 (does not know this field) | N/A | Yes/No | `Merge`/(not set) | The pod can run normally as expected. `.containerStatuses.user` will not be reported. |
| >=1.31 | True | YES | `Strict` | The pod and its policy can run as expected. `.containerStatuses.user` will be reported. |
| >=1.31 | True | YES | `Merge`/(not set) | The pod and its policy can run as expected. `.containerStatuses.user` will be reported. |
| >=1.31 | True | NO | `Strict` | The pod will be rejected in kubelet's admission. |
| >=1.31 | True | NO | `Merge`/(not set) | The pod and its policy can run as expected. `.containerStatuses.user` will be reported. |
| >=1.31 | False | YES | `Strict` | The pod, which was created when the feature gate was enabled previously, and its policy can run as expected. But `.containerStatuses.user` will not be reported. |
| >=1.31 | False | YES | `Merge`/(not set) | The pod and its policy can run as expected. But `.containerStatuses.user` will not be reported. |
| >=1.31 | False | NO | `Strict` | The pod, which was created when the feature gate was enabled previously, will be rejected in kubelet's admission. |
| >=1.31 | False | NO | `Merge`/(not set) | The pod and the policy can run as expected. But `.containerStatuses.user` will not be reported. |
Similarly, with enablement, you could enable the feature gate in the control plane, then only enable it in some nodes at the kubelet level. What's the behavior in this case?
Please see the above matrix.
If you enable it everywhere, then you create some pods with the `Strict` policy, then you disable it, will the kubelet see the new field or not? If it sees it, have you feature gated the kubelet behavior to ignore the field?
Please see the above matrix.
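One way to tell which row of the matrix a given pod ended up in is to check whether the new status field is populated; a minimal sketch, using the field path from the matrix above:

```console
# Empty output means .status.containerStatuses[].user is not being reported
# (old kubelet, feature gate disabled, or a runtime without support).
$ kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[*].user}'
```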
That was a quote from line 676, please fix it up there and add in the matrix. Thanks!
@@ -828,6 +869,12 @@ checking if there are objects with field X set) may be a last resort. Avoid
logs or events for this purpose.
-->

Inspect the `supplementalGroupsPolicy` fields in Pods. You can check whether the following `jq` command prints a non-zero number:
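The command itself is truncated in the quoted diff; a command along these lines would perform that check (the exact filter in the KEP may differ):

```console
# Count pods across the cluster that set the new field (illustrative only).
$ kubectl get pods -A -o json \
    | jq '[.items[] | select(.spec.securityContext.supplementalGroupsPolicy != null)] | length'
```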
If you have 50,000 clusters this is not helpful. Is there a metric we can use, can kube state metrics help here?
Yeah, kube state metrics can help. Let me update this section.
@@ -864,16 +911,22 @@ These goals will help you determine what you need to measure (SLIs) in the next
question.
-->

- `supplementalGroupsPolicy=Strict`: 100% of pods were scheduled into a node with the feature supported.
I don't see any scheduler integration incorporated into the KEP. How will this happen in clusters where some nodes support this and some do not?
Yeah, I described this topic in a later section. I'm thinking of updating this line like this. WDYT?
- `supplementalGroupsPolicy=Strict`: 100% of pods were scheduled into a node with the feature supported.
- `supplementalGroupsPolicy=Strict`: 100% of pods were scheduled into a node with the feature supported. This KEP does NOT support scheduler integration. Please see the section "Are there any missing metrics that would be useful to have to improve observability of this feature?".
I think that's helpful, but perhaps you can also note, up in the discussion of the feature and how it works, that users should target nodes using a node label.
as an error metric.

However, this is not planned to be implemented in kube-scheduler, as it seems to be over-engineering.
Users may use `nodeSelector`, `nodeAffinity`, etc. to work around this.
Ok, this needs to be clearly documented; I didn't see this anywhere above (I may have missed it). Is the assumption then that during the time period while this is working its way through the ecosystem (maybe a couple of years?), NFD/a label + node selector will be needed to ensure that workloads that need the `Strict` policy are properly scheduled?
NFD/a label + node selector will be needed to ensure that workloads that need the `Strict` policy are properly scheduled?
Yes. However, this topic is not specific to this KEP. This is a common issue for node/runtime handler features (e.g. user namespaces, recursive read-only mounts).
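As a sketch of that workaround, assuming the operator publishes a hypothetical label such as `example.com/supplemental-groups-policy=supported` (e.g. via NFD or a small controller watching the node status field), a `Strict` pod could be pinned like this:

```console
# Pin a Strict-policy pod to nodes that advertise support via a (hypothetical) label.
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: strict-policy-example
spec:
  nodeSelector:
    example.com/supplemental-groups-policy: "supported"
  securityContext:
    supplementalGroupsPolicy: Strict
    supplementalGroups: [4000]
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
EOF
```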
yes...and I think this is a big usability oversight. but you are right, that's not specific to this KEP.
I agree it's miserable and the containerd 2.0 situation is exacerbating it.
The only paths I see are basically:
1. Back-rev nodes ignore unknown fields; the API may say one thing and the node does something else.
2. Scheduler does not know about features, and so might assign pods which use feature X to nodes which do not support feature X; back-rev nodes will detect the "unknown fields" and reject those pods.
3. Scheduler knows how to map pods which need feature X to nodes which support feature X; scheduling may fail entirely if no nodes support it.
4. We stall any feature which has a node-based implementation until old kubelets are out of support; CRI support is not version locked or controlled by us, so fall back on 1, 2, or 3.
@@ -919,6 +987,16 @@ For GA, this section is required: approvers should be able to confirm the
previous answers based on experience in the field.
-->

A pod with `supplementalGroupsPolicy: Strict` may be rejected by kubelet with a probability of $$B/A$$,
Only if the user fails to target the pod via a nodeSelector, right?
Yes, if the cluster administrator maintains node labels propagated from `node.status.features.supplementalGroupsPolicy`.
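A quick way to see which nodes currently report support (and hence what such a label would be propagated from) might be:

```console
# Print each node with its reported supplementalGroupsPolicy feature support.
$ kubectl get nodes -o custom-columns=NAME:.metadata.name,SUPPORTED:.status.features.supplementalGroupsPolicy
```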
@@ -1039,8 +1119,21 @@ For each of them, fill in the following information by copying the below templat
- Testing: Are there any tests for failure mode? If not, describe why.
-->

None.
I think the "pod failing to schedule because the node doesn't support it" is a failure mode you can document here.
Thanks. But this section defines "What are OTHER known failure modes?", right?
I think the failure mode "the node does not support it" was clearly mentioned above. The KEP proposes to raise a kubelet admission error for this case.
Would you like me to describe the failure mode here even though it was already stated?
"other" here means "other than the API server and/or etcd being available". But I think it's ok to have it where you do, since it's really a user error not a failure of the feature.
@johnbelamaric Thanks for your PRR review. I responded to your comments. PTAL 🙇
…ive) The beta promotion milestone is tentative because it will wait until containerd v2 is released and becomes popular.
If this is pushing beta to 1.33 (or later, depending on containerd 2.0), John should probably spend his precious, precious PRR time somewhere else this week?
# The most recent milestone for which work toward delivery of this KEP has been
# done. This can be the current (upcoming) milestone, if it is being actively
# worked on.
latest-milestone: "v1.31"
latest-milestone: "v1.33"
We should not update "latest" to something in the future. The "latest" should stay 1.31.
Thanks, I changed back to 1.31 in 69d37fe.
@@ -4,3 +4,5 @@
kep-number: 3619
alpha:
  approver: "@johnbelamaric"
beta:
Not needed for this PR, now?
@johnbelamaric Sorry for taking up your time on this PRR. As we postpone its beta promotion to v1.33 or later (depending on the containerd v2 situation), I think you don't need to review this KEP for now (at least until the v1.33 release cycle).
I'm not sure how to withdraw the PRR request. I'd be very glad if you could guide me.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: everpeace
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.