Secret Auto Rotation not working for succeeded and failed pods #1288
Comments
We are running into the same problem on our setup: AWS EKS and CronJobs that try to access secrets that are rotated every N days. Currently, our only solution is to delete the Kubernetes Secrets once the AWS secrets have rotated, to force regeneration.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

/remove-lifecycle rotten
What steps did you take and what happened:

- Created a `SecretProviderClass` for secrets from AWS SecretsManager, with Kubernetes Secret sync enabled
- Used the `SecretProviderClass` in a `CronJob`
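For reference, a setup like the one described might look roughly like the following `SecretProviderClass`. This is a sketch, not the reporter's actual manifest; the names, namespace, and secret keys are placeholders:

```yaml
# Hypothetical SecretProviderClass; all names are placeholders.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secret-sync
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "my-app-credentials"   # assumed SecretsManager secret name
        objectType: "secretsmanager"
  # Kubernetes Secret sync: mirror the mounted secret into a Kubernetes Secret
  secretObjects:
    - secretName: my-app-credentials
      type: Opaque
      data:
        - objectName: my-app-credentials
          key: password
```

The `secretObjects` section is what enables the sync: the driver creates the Kubernetes `Secret` when a pod mounts the CSI volume, and deletes it again when no pod using the `SecretProviderClass` remains.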
The first time the `CronJob` is triggered, everything works as expected: a Kubernetes `Secret` is created with the secret value from AWS SecretsManager, and the value can be used in the container environment variables. If the secret value is then changed in AWS SecretsManager, the next time the `CronJob` is triggered the environment variable in the container is still set to the old value, because the value in the Kubernetes `Secret` was not updated.

What did you expect to happen:
Ideally, auto rotation would have updated the Kubernetes `Secret` to the current value from AWS SecretsManager, so that the container always has the latest secret value available.

Anything else you would like to add:
A workaround for this issue is to set `successfulJobsHistoryLimit` and `failedJobsHistoryLimit` to `0` in the `CronJob` spec. That way, after a `Job` finishes, no succeeded or failed `Pods` belonging to the `Job` remain in the cluster, which allows the secrets store CSI driver to delete the Kubernetes `Secret` and recreate it the next time the `CronJob` is triggered.

Looking at the code, this behaviour appears to be intentional. It is not clear why auto rotation is skipped for succeeded and failed `Pods`, but for the use case described above it causes problems.

Which provider are you using:
AWS
Environment:

- Kubernetes version (use `kubectl version`): v1.26.5
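The workaround described above can be sketched as the following `CronJob` manifest. This is an illustrative example under the assumptions used earlier; the name, image, schedule, and secret/class names are placeholders:

```yaml
# Hypothetical CronJob applying the history-limit workaround.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: "0 * * * *"
  successfulJobsHistoryLimit: 0   # delete succeeded Pods so the synced Secret can be removed
  failedJobsHistoryLimit: 0       # delete failed Pods as well
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: app
              image: my-app:latest          # placeholder image
              envFrom:
                - secretRef:
                    name: my-app-credentials  # the synced Kubernetes Secret
              volumeMounts:
                - name: secrets-store
                  mountPath: /mnt/secrets-store
                  readOnly: true
          volumes:
            - name: secrets-store
              csi:
                driver: secrets-store.csi.k8s.io
                readOnly: true
                volumeAttributes:
                  secretProviderClass: aws-secret-sync  # placeholder class name
```

With both history limits at `0`, finished `Pods` are removed immediately, so no pod references the `SecretProviderClass` between runs and the driver deletes and recreates the synced `Secret` with the current value on the next trigger.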