cli login with kubeconfig lacks permissions compared to sso #3323
When you "log in" with your kubeconfig, you are passing the token belonging to a cluster user with any request to the Kargo API server. It will not recognize the token and take a guess that the Kubernetes API server might. It will use that token for any communication with the Kubernetes API server in the course of a given request to the Kargo API server. So... what that user can or can't do is entirely a matter of how you've configured RBAC with the cluster. No Kargo configuration comes into play at all. |
Thank you for the quick reply! I guess my issue is then to understand what kind of kube RBAC configuration is needed.
If you believe you have granted permission to get on namespaces and you are getting that error, then you have misconfigured RBAC. Look at the predefined kargo-viewer and kargo-admin ClusterRoles to understand what is needed... or just bind your user to one of those roles.
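For illustration, a minimal sketch of binding an identity to the predefined kargo-viewer ClusterRole; the binding name and subject are placeholders, not values from this thread:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kargo-viewer-binding            # placeholder name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kargo-viewer                    # predefined by Kargo
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: <user email or Entra object ID> # placeholder subject
```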
Yes, it is possible that I have misconfigured my RBAC, but I am still getting the same error. I have granted other permissions as well, and I have been testing what the identity can actually do, for example by checking what permissions are granted on the whole cluster. Furthermore, I can get the stage's freight just fine with kubectl. I am considering trying to do the promotion with kubectl instead of the kargo CLI as well.
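The command output in this comment was lost in formatting; a plausible reconstruction of that kind of check, assuming standard kubectl (the namespace is the project's):

```console
# Show all permissions granted cluster-wide to the current identity.
kubectl auth can-i --list

# Freight is a namespaced Kargo resource, so this mirrors the working check.
kubectl get freight -n kargo-podinfo
```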
Ok... you've certainly done your due diligence in troubleshooting here, which I very much appreciate. I will look into this to see what I can find.
Oh... your kubeconfig: does it use a token for authentication or a client certificate? This feature does not work at all with certificates, which means it also doesn't work with common local clusters like kind or k3d.
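A quick way to check this, assuming a standard kubeconfig layout (this is ordinary kubectl, not a command from the thread):

```console
# Print the user entry of the active context: a client certificate shows up as
# client-certificate-data, while token-based auth shows a token or an exec plugin.
kubectl config view --minify --raw -o jsonpath='{.users[0].user}'
```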
I am pretty sure it is using a token. I am using AKS and I do the kubeconfig conversion (as an MI). Also, this --kubeconfig kargo login works for me personally, and I am a cluster admin doing the same thing with my kubeconfig. Edit: I mean kargo cmds (kargo get stage, kargo promote) work for me when I kargo login with --kubeconfig.
I see this as evidence that it's working.
I'm going to go out on a limb... is it possible that the name you're including here looks like a human-friendly username or email address? Unless I am much mistaken, with Entra you need to put an object ID here. (At least that's the way it worked back when it was still called AAD.) |
> Also this --kubeconfig kargo login works for me and I am a cluster admin
It is the Object ID of the MI. We are using other MIs in the same way as I am trying to do here with kargo, and we grant the MI its access the same way. But I was finally able to test with a human user: I added a human user (email) to the subjects of the ClusterRoleBinding with the kargo-viewer ClusterRole, and it succeeded. So the problem seems to be with the MI after all. However, I am still puzzled about it, because the same solution works for our other use cases with MIs.
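One hedged way to confirm what the API server actually thinks the MI is (requires kubectl v1.28+; not a command from the thread):

```console
# Print the username and groups the cluster attributes to the current
# credentials; these must match the ClusterRoleBinding subjects exactly.
kubectl auth whoami
```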
Can you show me what the MI's kubeconfig looks like, with sensitive data redacted, of course?
Here's the kubeconfig from just before trying the kubectl & kargo cmds. Sorry for the format, I'm on mobile.
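The pasted kubeconfig did not survive formatting; as a reference point, a kubeconfig converted with kubelogin convert-kubeconfig -l azurecli typically has a user entry shaped like this (all values are placeholders):

```yaml
users:
- name: clusterUser_<resource-group>_<cluster>   # placeholder
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubelogin
      args:
      - get-token
      - --login
      - azurecli
      - --server-id
      - <AKS Entra server application ID>        # placeholder
```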
It looks the same for me as well. I might still test out using workload identity for the kubelogin, but I can't use it with our current federation of choice, so it would require some effort. And I haven't tested it yet because the current setup works for all of our other use cases.
kubelogin is installed, though. I'm not sure why it says that. I did a promotion by creating a Promotion resource with kubectl in the workflow, with the same RBAC (see the sketch below). It seemingly worked fine until, at some point, it didn't verify the stage anymore after a promotion. The promotion itself still worked, but the UI never refreshed and it didn't verify the stage after promotion. I resolved the broken state by deleting the Stage resource. It seemed to end up in the broken state after another user tried to promote from the UI or by creating another Promotion resource with kubectl. Back-to-back promotions by the workflow seemed to work, although I didn't do extensive testing. I might just drop this promotion-from-workflow idea. My goal is to promote automatically downstream after test automation has successfully passed.
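For reference, a minimal sketch of the kubectl-based promotion described above; the stage and freight values are placeholders, and the field names follow Kargo's v1alpha1 Promotion resource:

```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Promotion
metadata:
  generateName: dev-promotion-   # let the API server pick a unique suffix
  namespace: kargo-podinfo
spec:
  stage: dev                     # placeholder stage name
  freight: <freight name>        # placeholder freight to promote
```

Applied with kubectl create -f promotion.yaml (create, not apply, because of generateName).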
Just to try and keep the issue focused, none of this seems related to the permissions issue.
For reference, here is how we get the token, very indirectly, from your kubeconfig. This convoluted process is used specifically to be compatible with cases where the token is obtained via a plugin: https://github.com/akuity/kargo/blob/main/internal/kubeclient/auth.go

I believe you that kubelogin is installed, but the kube client for sure does not think it is. PATH issue?
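For context on the plugin path, this is roughly what an exec credential plugin like kubelogin emits when the kube client invokes it (the server-id value is a placeholder, and the exact JSON fields may vary by version):

```console
$ kubelogin get-token --login azurecli --server-id <Entra server application ID>
{"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1beta1","status":{"expirationTimestamp":"2025-01-01T00:00:00Z","token":"<redacted>"}}
```

If that binary isn't on the PATH of the process running kargo login, the kube client can't produce a token at all.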
Hey, finally made some progress thanks to your link. Before it actually gets the creds, it gets the config, and from GetConfig I noticed the config precedence. So what I did is: I paused the workflow run with sleep, connected to the runner pod (we are using GitHub self-hosted ARC runners), and retrieved the bearer token from there. I don't know if there would be any way to improve this. Maybe we could give a path to the kubeconfig file in the kargo login cmd, e.g. something like a path argument to --kubeconfig. Anyway, thanks for helping, much appreciated. Should I close this issue?

P.S. I still had to do a ClusterRoleBinding for get on namespaces for kargo promote to work. Not so problematic, but I have to do two role bindings.
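A hedged reconstruction of the fix implied here, based on client-go's documented precedence (an explicit flag, then $KUBECONFIG, then ~/.kube/config); the path is a placeholder:

```console
# Make sure the converted kubeconfig is the one the login actually loads.
export KUBECONFIG=/path/to/converted/kubeconfig
kargo login <url> --kubeconfig
```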
First and foremost, I'm happy to hear this helped and that you're back in business.
Last, and only as an FYI (not finger-wagging), using the kargo CLI from a CI pipeline isn't something we encourage. Explanation here: https://docs.kargo.io/new-docs/faqs#how-do-i-integrate-kargo-into-my-ci-pipelines
Yeah, I've acknowledged that this isn't the best way to go about promotions. To my understanding, the way to go would be to use stage verifications and implement AnalysisTemplates. I will test this (a sketch follows).
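A minimal sketch of what that could look like, assuming Kargo's documented Stage verification, which references Argo Rollouts AnalysisTemplates by name (the template name is a placeholder):

```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: dev
  namespace: kargo-podinfo
spec:
  # ...requestedFreight, promotionTemplate, etc. omitted...
  verification:
    analysisTemplates:
    - name: smoke-tests   # placeholder AnalysisTemplate in the project namespace
```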
Checklist

- [x] I've included the output of kargo version.

Description
I have granted our user all applicable roles for the project "kargo-podinfo". When this user logs in with the CLI:

```
kargo login <url> --kubeconfig
```

and then tries kargo CLI cmds, for example:

```
kargo get stage ..
```

it outputs an error:

```
Error: permission_denied: namespaces "kargo-podinfo" is forbidden: get is not permitted
```

or

```
kargo promote ..
```

outputs an error:

```
Error: promote stage subscribers: permission_denied: namespaces "kargo-podinfo" is forbidden: get is not permitted
```
On the other hand, when this same user logs in with --sso, these same commands work. However, this same user is in the OIDC viewers group.

What am I missing?
Screenshots
Steps to Reproduce

1. An AKS cluster using Microsoft Entra ID authentication with Kubernetes RBAC, with the user granted the Azure Kubernetes Service Cluster User Role via Azure RBAC.
2. `kubelogin convert-kubeconfig -l azurecli`
3. `kargo login <url> --kubeconfig`
4. `kargo get stage dev --project=kargo-podinfo`
Version
Logs