Helm provider does not trigger an update if local templates change #821
Comments
We noticed the same issue. What we do is bump the chart version every time there's a change in the templates; that triggers a helm upgrade.
I am aware of that, but the risk of changed parts not being applied because of a forgotten version bump is not acceptable to me. It would be nice if a solution similar to my hack made it into the provider.
I am facing the same issue. I am planning to add a trigger with a timestamp to work around it, but that will cause the deployment to run every time.
Thanks @junzebao. It works well and is also the logical thing to do: just bump the chart version on chart changes.
For me, bumping the chart version by hand sounds like: we've automated everything, we just need to trigger the automation by hand each time. I'm looking for a solution that triggers the upgrade automatically. What I do for now is hash the relevant files into the `description` field, so any change to them forces an update:

```hcl
resource "helm_release" "istio-ingressgateway-internal" {
  name        = "istio-ingressgateway-internal"
  repository  = "https://istio-release.storage.googleapis.com/charts"
  chart       = "gateway"
  version     = "1.17.2"
  description = sha1(join("", [for f in sort(fileset("${path.module}/helm-values/istio-ingressgateway-internal-kustomize", "*")) : filesha1("${path.module}/helm-values/istio-ingressgateway-internal-kustomize/${f}")]))
  namespace   = kubernetes_namespace.istio-ingress.metadata[0].name
  timeout     = "900"

  values = [
    templatefile("${path.module}/helm-values/istio-ingressgateway-internal.yaml", { infra_name = var.infra_name, eks_suffix = var.eks_suffix })
  ]

  postrender {
    binary_path = "${path.module}/helm-values/istio-ingressgateway-internal-kustomize/kustomize.sh"
  }

  depends_on = [
    helm_release.istio-istiod,
    kubernetes_namespace.istio-ingress
  ]
}
```
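In the same spirit, the hash can be extended to cover the values template as well, so edits to it also force an upgrade. A minimal sketch reusing the paths from the snippet above (the `locals` name is mine, not from the original comment):

```hcl
locals {
  # Hash the kustomize directory and the values template together; any edit
  # to either changes the hash, and therefore the release description.
  release_inputs_hash = sha1(join("", concat(
    [for f in sort(fileset("${path.module}/helm-values/istio-ingressgateway-internal-kustomize", "*")) :
      filesha1("${path.module}/helm-values/istio-ingressgateway-internal-kustomize/${f}")],
    [filesha1("${path.module}/helm-values/istio-ingressgateway-internal.yaml")],
  )))
}
```

The resource's `description` then becomes `local.release_inputs_hash` instead of the inline expression.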
I'm surprised this hasn't had more attention! Is there an explanation somewhere of why this can't happen? It's really counterintuitive, and even with workarounds, newly created Helm charts aren't always spotted as requiring extra work to force a deployment.
description field changes cause provider errors

We set … My understanding is that the default behaviour of Helm is to consider this flag … The Terraform …
The error seems to have been caused by the chart having dependencies.
I have a local-only chart without any dependencies. When I modify a template in the local folder, Terraform does not detect any changes.
@PowerSurj what's the issue with using `reset_values`?
reset_values will reset the values to the default ones built into the chart, so if you use custom values they will be overwritten.
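For reference, `reset_values` is a plain boolean on the resource. A minimal sketch (the release name, chart path, and values file are placeholders, not from this thread):

```hcl
resource "helm_release" "example" {
  name  = "example"
  chart = "${path.module}/charts/example"

  # On upgrade, discard values carried over from the previous release and
  # start from the chart defaults plus whatever this resource passes below.
  reset_values = true

  values = [file("${path.module}/helm-values/example.yaml")]
}
```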
@lipika-pal-partior you're referring to set, set_list and set_sensitive, right? I wasn't aware of those arguments, thanks for pointing me in that direction. But wouldn't the custom values be applied back the moment Terraform is applied? So the …
Any updates? I use

```hcl
set {
  name  = "ignoreMe"
  value = timestamp()
}
```

to trigger the deployment each time and let Helm work out which templates/values have changed.
FYI, this is related to the chart version. When you modify the chart templates, edit Chart.yaml and bump the version so that the changes get picked up and applied by Terraform.
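If the bump itself is the only manual step left, one way to make it flow through without hard-coding anything is to read the version straight out of Chart.yaml. A sketch under that assumption (chart name and path are placeholders):

```hcl
resource "helm_release" "mychart" {
  name  = "myapp"
  chart = "${path.module}/charts/mychart"

  # Pin the release to the version declared in Chart.yaml, so bumping the
  # chart version shows up directly in the Terraform plan.
  version = yamldecode(file("${path.module}/charts/mychart/Chart.yaml")).version
}
```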
Using helm provider 2.15.0, I'm using the following workaround. It computes the sha1sum of every template file, concatenates them, computes the sha1sum of the resulting string, and sets it as a dummy value. Unlike the timestamp() approach above, it does not trigger a change on every apply:

```hcl
resource "helm_release" "mychart" {
  name  = "myapp"
  chart = "${path.module}/charts/mychart"

  set {
    name  = "templates_hash"
    value = sha1(join("", [for f in fileset("${path.module}/charts/mychart/templates", "*") : filesha1("${path.module}/charts/mychart/templates/${f}")]))
  }
}
```
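One caveat, assuming the same chart layout: `fileset` with `"*"` only matches files at the top level of `templates/`, so templates kept in subdirectories would not invalidate the hash. A recursive variant of the same workaround:

```hcl
set {
  name = "templates_hash"
  # "**" matches directories recursively; sort() keeps the hash input deterministic.
  value = sha1(join("", [for f in sort(fileset("${path.module}/charts/mychart/templates", "**")) :
    filesha1("${path.module}/charts/mychart/templates/${f}")]))
}
```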
I do not believe the workaround is working with …