
feat: use multi arch supported image for kube-rbac-proxy. #440

Conversation

ashokpariya0
Contributor

The origin-kube-rbac-proxy image is not compatible with the s390x architecture. This change deploys an image that is multi-arch supported.

What this PR does / why we need it:
The origin-kube-rbac-proxy image is not compatible with the s390x architecture.
This change deploys an image that is multi-arch supported (https://quay.io/repository/brancz/kube-rbac-proxy/manifest/sha256:e6a323504999b2a4d2a6bf94f8580a050378eba0900fd31335cf9df5787d9a9b).

Special notes for your reviewer:
The proposed image (https://quay.io/repository/brancz/kube-rbac-proxy/manifest/sha256:e6a323504999b2a4d2a6bf94f8580a050378eba0900fd31335cf9df5787d9a9b) is built from the parent repo of https://github.com/openshift/kube-rbac-proxy.
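For illustration, a digest-pinned multi-arch image reference like the one proposed could appear in a container spec roughly as follows. Only the registry path and digest come from the links above; the container name and surrounding fields are assumptions, not taken from this PR:

```yaml
# Hypothetical container spec fragment; only the image reference
# (registry path + manifest-list digest) comes from this PR.
containers:
  - name: kube-rbac-proxy  # assumed name, for illustration only
    image: quay.io/brancz/kube-rbac-proxy@sha256:e6a323504999b2a4d2a6bf94f8580a050378eba0900fd31335cf9df5787d9a9b
```

Pinning by the manifest-list digest lets the container runtime on each node resolve the architecture-specific image (amd64, s390x, etc.) from the multi-arch manifest.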

Release note:

None

The origin-kube-rbac-proxy image is not compatible with the s390x
architecture. This change deploys an image that is multi arch supported.

Signed-off-by: Ashok Pariya <[email protected]>
@kubevirt-bot
Collaborator

Hi @ashokpariya0. Thanks for your PR.

PRs from untrusted users cannot be marked as trusted with /ok-to-test in this repo, meaning untrusted PR authors can never trigger tests themselves. Collaborators can still trigger tests on the PR using /test all.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@oshoval
Member

oshoval commented Oct 9, 2024

/lgtm

Thanks

Please reference the issue so Ram can see all the discussion there, since he is the maintainer of KMP.

@ashokpariya0
Contributor Author

@RamLavi For your reference: Issue.

@RamLavi
Member

RamLavi commented Oct 14, 2024

/lgtm
/approve

@kubevirt-bot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: RamLavi

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@RamLavi
Member

RamLavi commented Oct 14, 2024

@ashokpariya0 do you need this bumped to CNAO?

@kubevirt-bot kubevirt-bot merged commit f61b413 into k8snetworkplumbingwg:main Oct 14, 2024
4 checks passed
@oshoval
Member

oshoval commented Oct 14, 2024

@ashokpariya0 do you need this bumped to CNAO?

Done already, thanks.
It doesn't need a release; it is done by other means: kubevirt/cluster-network-addons-operator#1917.
Since this image is also used by CNAO itself, it has a different flow.

@RamLavi
Member

RamLavi commented Oct 14, 2024

@ashokpariya0 do you need this bumped to CNAO?

Done already, thanks. It doesn't need a release; it is done by other means: kubevirt/cluster-network-addons-operator#1917. Since this image is also used by CNAO itself, it has a different flow.

ACK.
Then perhaps we should bump CNAO in KMP?

@oshoval
Member

oshoval commented Oct 14, 2024

@ashokpariya0 do you need this bumped to CNAO?

Done already, thanks. It doesn't need a release; it is done by other means: kubevirt/cluster-network-addons-operator#1917. Since this image is also used by CNAO itself, it has a different flow.

ACK. Then perhaps we should bump CNAO in KMP?

If KMP uses CNAO, then yes for the CNAO image part (but not for KMP itself); if it doesn't, then there is no need.
Well, the fact that it is there probably means it does use it.
