KEP-3619: updated Production Readiness Review Questionnaire for beta release #4895

Open
wants to merge 6 commits into
base: master

Conversation

everpeace
Contributor

  • One-line PR description: updated Production Readiness Review Questionnaire for beta release
  • Other comments: None.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Oct 2, 2024
@k8s-ci-robot k8s-ci-robot added kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory sig/node Categorizes an issue or PR as relevant to SIG Node. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Oct 2, 2024
Contributor Author

@everpeace everpeace left a comment

@haircommander @mrunalp @thockin @SergeyKanzhelev Could you kindly review this PR??

Comment on lines +818 to +829
Look for an event indicating that SupplementalGroupsPolicy is not supported by the runtime.
```console
$ kubectl get events -o json -w
...
{
...
"kind": "Event",
"message": "Error: SupplementalGroupsPolicyNotSupported",
...
}
...
```
Contributor Author

@everpeace everpeace Oct 2, 2024

For this, I plan to add a kubelet admission check which raises an error event if

  • the pod sets pod.spec.securityContext.supplementalGroupsPolicy=Strict (a non-default value), and
  • the node reports node.status.features.supplementalGroupsPolicy=false

Member

OK, so the pod doesn't get admitted; can there be a metric when this happens for this reason? Events are not useful if you are managing 50,000 clusters...

Contributor Author

Can there be a metric when this happens for this reason?

Hmm. If I understand correctly, there are currently no metrics for specific kubelet admission errors. This kind of admission error is a common issue for node (or runtime handler) feature-based admission (e.g. user namespaces or recursive read-only mounts).

As KEP-127 (Support User Namespaces) states, we can compare the kubelet_running_pods and kubelet_desired_pods metrics.
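For illustration only (not part of the KEP text; the node name is a placeholder and the values are made up), such a comparison could be eyeballed per node via the API server proxy:

```console
# Compare pods the kubelet intends to run with pods actually running;
# a persistent gap can indicate admission rejections (among other causes).
$ kubectl get --raw "/api/v1/nodes/<node-name>/proxy/metrics" \
    | grep -E '^kubelet_(desired|running)_pods'
kubelet_desired_pods 42
kubelet_running_pods 41
```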

WDYT??

Member

yeah, that looks reasonable, please pull that in here. thanks

Comment on lines +1123 to +1124
- Make sure that `crictl info` (with the latest crictl)
reports that `supplemental_groups_policy` is supported.
Contributor Author

crictl needs to be updated to display the `features` field in the `crictl info` output.

@@ -907,6 +969,8 @@ and creating new ones, as well as about cluster-level services (e.g. DNS):
- Impact of its degraded performance or high-error rates on the feature:
-->

Specific version of CRI.
Member

@samuelkarp any comments on this? Is this a containerd 2.0-only feature?

@SergeyKanzhelev
Member

So once it is beta and enabled by default, the API server will allow this new field. While containerd 2.0 is still in the works, what behavior will customers who set this field get on containerd 1.x?

@SergeyKanzhelev
Member

Beta graduation criteria listed new tests to be added. Is it something you plan to work on in 1.32?

@everpeace
Contributor Author

everpeace commented Oct 7, 2024

So once it is beta and enabled by default, the API server will allow this new field. While containerd 2.0 is still in the works, what behavior will customers who set this field get on containerd 1.x?

This PR proposes that if a user sets SupplementalGroupsPolicy: Strict (a non-default value) and the pod is scheduled to a node with containerd 1.x, then the user will get an error. I plan to add a kubelet admission check for this. See also: #4895 (comment)

Please note that the user won't get any error when SupplementalGroupsPolicy is unset, and those pods run as expected even when the scheduled node runs containerd 1.x.

But, as you pointed out, it may be better to wait for containerd 2.0 to be released. Or should we take more prudent steps: "Beta, disabled by default" (v1.32) --> "Beta, enabled by default" (after containerd v2 is out) --> "GA"?

WDYT??


Beta graduation criteria listed new tests to be added. Is it something you plan to work on in 1.32?

Actually, I already added basic e2e tests in alpha for this KEP: kubernetes/kubernetes#125470

So, I think the criteria are almost satisfied.

@@ -1056,6 +1149,7 @@ Major milestones might include:

- 2023-02-10: Initial KEP published.
- v1.31.0(2024-08-13): Alpha
- v1.32.0: Beta (enabled by default)
Member

I apologize for not making this link before.

If this totally depends on containerd 2.0 and 2.0 is NOT YET RELEASED, then I am wary of moving this to beta, especially enabled by default. That's a tripping hazard we don't need to introduce.

  1. Are there any parts of this that containerd 1.x implements that could be advanced while leaving the policy in alpha? e.g. the status stuff?

  2. Is there an urgent need to move this to beta (for CRI-O, I assume)? If so, could it be off-by-default (which is ~identical to alpha)?

Convince me I am over-reacting?

Contributor

No urgency from CRI-O side to move this to beta.

Member

I think the planning around version skew also needs more work, so it may be a good idea to postpone it.

Member

Don't get me wrong, I want this feature, but it seems like a bad idea to move features to beta where the majority (anecdotally) of kube users CAN'T POSSIBLY use it (because it's linked to containerd 2 and that is not yet released).

Are there any parts of this that containerd 1.x implements that could be advanced while leaving the policy in alpha? e.g. the status stuff?

Contributor Author

@everpeace everpeace Oct 8, 2024

@thockin @johnbelamaric @SergeyKanzhelev @mrunalp @haircommander

Thanks for the feedback.

it seems like a bad idea to move features to beta where the majority (anecdotally) of kube users CAN'T POSSIBLY use it (because it's linked to containerd 2 and that is not yet released).

I now agree with this. Let's postpone promoting this KEP to beta until containerd v2 is released and has become widely adopted.

I will update the beta timing in the README and kep.yaml.

updated in 0c5f7ed

@@ -790,13 +790,44 @@ rollout. Similarly, consider large clusters and how enablement/disablement
will rollout across nodes.
-->

A rollout may fail when at least one of the following components is too old, because this KEP introduces a new Kubernetes API field:
Member

First, a comment on the effect of enabling/disabling this on running Pods (I can't comment on the unchanged lines above). In the answers above, you say that the permission may change. Under what conditions? If the container restarts? Or just if a Pod is recreated?

Contributor Author

Under what conditions? If the container restarts? Or just if a Pod is recreated?

The permissions (process identities) may change only when

  • the pod sets SupplementalGroupsPolicy: "Strict", and
  • it is recreated.

Member

Ok, can you fix this above (lines 752, 756 are not clear on this).

| CRI runtime | `Strict` |


For example, an error will be returned like this if kube-apiserver is too old:
Member

I think you need to revise your version skew strategy (above) and think about how each component reacts during an upgrade. You can't say "kubelet must be at least the version of control-plane components". That's not realistic. It's not possible during an upgrade, and in fact people often run for extended periods of time with older kubelets. In that case, the kubelet won't see the new field. What sort of failure does that cause? Similarly for the CRI runtime.

Similarly, with enablement, you could enable the feature gate in the control plane, then only enable it in some nodes at the kubelet level. What's the behavior in this case?

If you enable it everywhere, then you create some pods with the Strict policy, then you disable it, will the kubelet see the new field or not? If it sees it, have you feature-gated the kubelet behavior to ignore the field?

Contributor Author

You can't say "kubelet must be at least the version of control-plane components"

I did not say this. I think I respected the version skew policy.

It's not possible during an upgrade, and in fact people often run for extended periods of time with older kubelets. In that case, the kubelet won't see the new field. What sort of failure does that cause?

This is a common issue when adding new API fields to Pod. For this KEP, the matrix below describes what will happen:

| kubelet version | Feature Gate | CRI runtime supports the KEP? | Pod's policy | Outcome |
| --- | --- | --- | --- | --- |
| <1.31 (does not know this field) | N/A | Yes/No | Strict | The pod can run, but its policy is just ignored. `.containerStatuses.user` will not be reported. |
| <1.31 (does not know this field) | N/A | Yes/No | Merge/(not set) | The pod can run normally as expected. `.containerStatuses.user` will not be reported. |
| >=1.31 | True | Yes | Strict | The pod and its policy can run as expected. `.containerStatuses.user` will be reported. |
| >=1.31 | True | Yes | Merge/(not set) | The pod and its policy can run as expected. `.containerStatuses.user` will be reported. |
| >=1.31 | True | No | Strict | The pod will be rejected in kubelet's admission. |
| >=1.31 | True | No | Merge/(not set) | The pod and its policy can run as expected. `.containerStatuses.user` will be reported. |
| >=1.31 | False | Yes | Strict | The pod (created while the feature gate was previously enabled) and its policy can run as expected, but `.containerStatuses.user` will not be reported. |
| >=1.31 | False | Yes | Merge/(not set) | The pod and its policy can run as expected, but `.containerStatuses.user` will not be reported. |
| >=1.31 | False | No | Strict | The pod (created while the feature gate was previously enabled) will be rejected in kubelet's admission. |
| >=1.31 | False | No | Merge/(not set) | The pod and the policy can run as expected, but `.containerStatuses.user` will not be reported. |
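As a side note, a hedged sketch of how the `.containerStatuses.user` field referenced in the matrix can be inspected (the pod name is a placeholder and the values shown are illustrative):

```console
# Inspect the user identity reported in the container status (field added by this KEP).
$ kubectl get pod <pod-name> -o json | jq '.status.containerStatuses[0].user'
{
  "linux": {
    "uid": 1000,
    "gid": 1000,
    "supplementalGroups": [1000, 60000]
  }
}
```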

Similarly, with enablement, you could enable the feature gate in the control plane, then only enable it in some nodes at the kubelet level. What's the behavior in this case?

Please see the above matrix.

If you enable it everywhere, then you create some pods with the Strict policy, then you disable it, will the kubelet see the new field or not? If it see it, have you feature gated the kubelet behavior to ignore the field?

Please see the above matrix.

Member

That was a quote from line 676, please fix it up there and add in the matrix. Thanks!


@@ -828,6 +869,12 @@ checking if there are objects with field X set) may be a last resort. Avoid
logs or events for this purpose.
-->

Inspect the `supplementalGroupsPolicy` fields in Pods. You can check whether the following `jq` command prints a non-zero number:
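The command itself is outside the quoted hunk; the following is only a sketch of what such a check could look like, with the expression being an assumption rather than the KEP's exact command:

```console
# Count pods that set spec.securityContext.supplementalGroupsPolicy (illustrative).
$ kubectl get pods --all-namespaces -o json \
    | jq '[.items[] | select(.spec.securityContext.supplementalGroupsPolicy != null)] | length'
3
```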
Member

If you have 50,000 clusters this is not helpful. Is there a metric we can use? Can kube-state-metrics help here?

Contributor Author

Yeah, kube-state-metrics can help. Let me update this section.

@@ -864,16 +911,22 @@ These goals will help you determine what you need to measure (SLIs) in the next
question.
-->

- `supplementalGroupsPolicy=Strict`: 100% of pods were scheduled into a node with the feature supported.
Member

I don't see any scheduler integration incorporated into the KEP. How will this happen in clusters where some nodes support this and some do not?

Contributor Author

@everpeace everpeace Oct 8, 2024

Yeah, I described this topic in a later section. I'm thinking of updating this line like this. WDYT?

Suggested change
- `supplementalGroupsPolicy=Strict`: 100% of pods were scheduled into a node with the feature supported.
- `supplementalGroupsPolicy=Strict`: 100% of pods were scheduled into a node with the feature supported. This KEP does NOT support scheduler integration. Please see the section "Are there any missing metrics that would be useful to have to improve observability of this feature?".

Member

I think that's helpful, but perhaps you can also note, up in the discussion of the feature and how it works, that users should target nodes using a node label.

as an error metric.

However, this is not planned to be implemented in kube-scheduler, as it seems like overengineering.
Users may use `nodeSelector`, `nodeAffinity`, etc. to work around this.
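For illustration, a hedged sketch of that workaround; the node label name is hypothetical and would in practice be maintained by NFD or another component watching `node.status.features`:

```console
# Pin a Strict-policy pod to nodes labeled as supporting the feature (label name is hypothetical).
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: strict-policy-pod
spec:
  nodeSelector:
    example.com/supplemental-groups-policy: "supported"
  securityContext:
    supplementalGroupsPolicy: Strict
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "id && sleep 3600"]
EOF
```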
Member

Ok, this needs to be clearly documented, I didn't see this anywhere above (I may have missed it). Is the assumption then that during the time period while this is working its way through the ecosystem (maybe a couple years?), NFD/a label + node selector will be needed to ensure that workloads that need the Strict policy are properly scheduled?

Contributor Author

NFD/a label + node selector will be needed to ensure that workloads that need the Strict policy are properly scheduled?

Yes. However, this topic is not specific to this KEP. It is a common issue for node/runtime-handler features (e.g. user namespaces, recursive read-only mounts).

Member

Yes... and I think this is a big usability oversight. But you are right, that's not specific to this KEP.

Member

I agree it's miserable and the containerd 2.0 situation is exacerbating it.

The only paths I see are basically:

  1. Back-rev nodes ignore unknown fields; the API may say one thing and the node does something else.

  2. Scheduler does not know about features, and so might assign pods which use feature X to nodes which do not support feature X; back-rev nodes will detect the "unknown fields" and reject those pods.

  3. Scheduler knows how to map pods which need feature X to nodes which support feature X; scheduling may fail entirely if no nodes support it.

  4. We stall any feature which has a node-based implementation until old kubelets are out of support; CRI support is not version locked or controlled by us, so fall back on 1, 2, or 3.

@@ -919,6 +987,16 @@ For GA, this section is required: approvers should be able to confirm the
previous answers based on experience in the field.
-->


A pod with `supplementalGroupsPolicy: Strict` may be rejected by the kubelet with a probability of $$B/A$$,
Member

Only if the user fails to target the pod via a nodeSelector, right?

Contributor Author

Yes, if the cluster administrator maintains node labels propagated from node.status.features.supplementalGroupsPolicy.
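As a sketch, the node-level field itself can be checked like this (`<node-name>` is a placeholder; the output shown is illustrative):

```console
# Check whether the node reports support for SupplementalGroupsPolicy (status field added by this KEP).
$ kubectl get node <node-name> -o jsonpath='{.status.features.supplementalGroupsPolicy}'
true
```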

@@ -1039,8 +1119,21 @@ For each of them, fill in the following information by copying the below templat
- Testing: Are there any tests for failure mode? If not, describe why.
-->

None.
Member

I think the "pod failing to schedule because the node doesn't support it" is a failure mode you can document here.

Contributor Author

Thanks. But this section defines "What are OTHER known failure modes?", right?

I think the failure mode "node does not support it" was clearly mentioned above. This PR proposes to raise a kubelet admission error for that case.

Would you like me to describe the failure mode here even though it was already stated?

Member

"other" here means "other than the API server and/or etcd being available". But I think it's ok to have it where you do, since it's really a user error not a failure of the feature.

Contributor Author

@everpeace everpeace left a comment

@johnbelamaric Thanks for your PRR review. I responded to your comments. PTAL 🙇


…ive)

The beta promotion milestone is tentative because it waits for containerd v2 to be released and to become widely adopted.
Member

@thockin thockin left a comment

If this is pushing beta to 1.33 (or later, depending on containerd 2.0), John should probably spend his precious, precious PRR time somewhere else this week?


# The most recent milestone for which work toward delivery of this KEP has been
# done. This can be the current (upcoming) milestone, if it is being actively
# worked on.
latest-milestone: "v1.31"
latest-milestone: "v1.33"
Member

We should not update "latest" to something in the future. The "latest" should stay at 1.31.

Contributor Author

Thanks, I changed it back to 1.31 in 69d37fe.

@@ -4,3 +4,5 @@
kep-number: 3619
alpha:
approver: "@johnbelamaric"
beta:
Member

Not needed for this PR now?

Contributor Author

@everpeace everpeace Oct 8, 2024

@johnbelamaric Sorry for taking up your time on this PRR. We are postponing the beta promotion to v1.33 or later (depending on the containerd v2 situation), so I think you don't need to review this KEP for now (at least until the v1.33 release cycle).

I'm not sure how to withdraw the PRR request. I'd be very glad if you could guide me.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: everpeace
Once this PR has been reviewed and has the lgtm label, please assign dchen1107, johnbelamaric for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
