Context
I'm working on setting up the spicedb-operator and don't want to give it more permissions than it truly needs. I would like to withhold some of the Kubernetes permissions the operator expects without affecting its functionality; ideally, I'd grant it only the minimum permissions required to provision and manage a SpiceDB cluster.
Currently, I'd like to avoid granting it full create/patch access to Role and RoleBinding resources within a cluster/namespace. However, if I withhold those permissions, the operator logs many warnings and fails to provision SpiceDB clusters. I believe this behavior comes from the ensureRoleBinding function. Logs I'm seeing:
msg="requeueing after api error" err="context canceled" syncID="H2LCy" controller="spicedbclusters" obj={"name":"spicedbcluster","namespace":"namespace"}
msg="requeueing after error" err="rolebindings.rbac.authorization.k8s.io \"spicedbcluster\" is forbidden: User \"system:serviceaccount:spicedb-operator:spicedb-operator\" cannot patch resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"namespace\"" syncID="j8aUD" controller="spicedbclusters" obj={"name":"spicedbcluster","namespace":"namespace"}
Problematic permissions that the operator currently requires (from https://github.com/authzed/spicedb-operator/blob/main/config/rbac/role.yaml):
- Permission to read and write any Secret in the cluster/namespace.
- Permission to create and/or update Roles and RoleBindings, which would allow the operator to change the roles granted to itself.
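For comparison, one way to narrow grants like these is Kubernetes RBAC's resourceNames field, which pins a rule to specific named objects. The sketch below is hypothetical and untested, not the operator's actual manifest; the resource names are made up for illustration, and it assumes the operator only ever touches resources named after the SpiceDBCluster:

```yaml
# Hypothetical narrowed Role (a sketch, not the operator's role.yaml).
# Caveat: resourceNames cannot restrict "create", because the object's
# name is not known to the authorizer at creation time, and it cannot
# restrict "list"/"watch" either.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spicedb-operator-restricted   # hypothetical name
  namespace: namespace
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["spicedbcluster-config"]   # hypothetical Secret name
    verbs: ["get", "patch", "update"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["roles", "rolebindings"]
    resourceNames: ["spicedbcluster"]
    verbs: ["get", "patch", "update"]
```

Because resourceNames cannot scope create, this kind of narrowing only works if something else (e.g. the user, per the workaround below) creates the objects up front and the operator merely updates them.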
Suggested workaround/solution
I briefly discussed this issue with ecordell, and the following was suggested:
- Update the spicedb-operator to let the user specify the serviceaccount up front; the operator would then just confirm that the account has the permissions it needs. The specific RBAC configuration granted to the operator and the SpiceDB clusters would be up to the user. (short-term workaround)
- Ideally, anything the operator has permission to do, it would simply do, and anything it doesn't have permission to do, it would warn the user about / surface in the status.
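The short-term workaround implies pre-verifying the operator's ServiceAccount yourself. kubectl can perform that check via impersonation; a sketch, using the ServiceAccount and namespace names from the logs above:

```shell
# Check whether the operator's ServiceAccount can patch RoleBindings in the
# target namespace; prints "yes" or "no".
kubectl auth can-i patch rolebindings.rbac.authorization.k8s.io \
  --as=system:serviceaccount:spicedb-operator:spicedb-operator \
  -n namespace

# List all effective permissions for that ServiceAccount in the namespace,
# to confirm nothing broader than intended was granted.
kubectl auth can-i --list \
  --as=system:serviceaccount:spicedb-operator:spicedb-operator \
  -n namespace
```

This is roughly the same check the operator itself could run internally (via a SelfSubjectAccessReview) to warn rather than fail when a permission is missing.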