Helm provider does not trigger an update, if local templates change #821

markusheiden opened this issue Jan 18, 2022 · 18 comments
markusheiden commented Jan 18, 2022

Terraform, Helm provider, and Kubernetes versions

Terraform version: 1.1.3
Helm Provider version: 2.4.1
Kubernetes version: 1.20

Terraform configuration

resource "helm_release" "myapp" {
  namespace = "namespace"
  name      = "myapp"
  chart     = "${path.module}"
}

Question

Why does the Helm provider NOT trigger an update when the files in the "templates" directory change?
How can I achieve that?

My temporary solution is to compute a hash of the template files and use that as a dummy value:
resource "helm_release" "myapp" {
  ...

  // Trigger updates on template changes.
  set {
    name = "templatesHash"
    value = module.templates_hash.hash
  }
}

module "templates_hash" {
  source = "../helm-hash"
  chart_directory = path.module
}

helm-hash module:

variable "chart_directory" {
  description = "Chart directory"
  type        = string
}

locals {
  templates_directory = "${var.chart_directory}/templates"
  template_hashes = {
    for path in sort(fileset(local.templates_directory, "**")) :
    path => filebase64sha512("${local.templates_directory}/${path}")
  }
  hash = base64sha512(jsonencode(local.template_hashes))
}

output "hash" {
  value = local.hash
}
@junzebao
We noticed the same issue. What we do is bump the chart version every time there's a change in the templates. That'll trigger a helm upgrade.
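For a local chart, that just means editing Chart.yaml alongside the template change; a minimal sketch (the chart name and version numbers are illustrative):

```yaml
# Chart.yaml
apiVersion: v2
name: myapp
# Bump this whenever anything under templates/ changes, so the
# provider sees a version diff and runs a helm upgrade.
version: 1.2.4  # was 1.2.3
```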

@markusheiden (Author)

I am aware of that, but the risk of not updating changed parts due to a forgotten version increase is not acceptable to me. It would be nice if a solution similar to my hack made it into the provider.

@vikaskoppineedi
I am facing the same issue. I am planning to add triggers with a timestamp to overcome it, but that will cause the deployment to trigger every time.

beepdot commented Mar 25, 2023

We noticed the same issue. What we do is to bump the chart version every time there's a change in the templates. That'll trigger a helm upgrade.

Thanks @junzebao. It works well and is also a very logical thing to do: just bump the chart version on chart changes.

Oliniusz commented Sep 19, 2023

For me, bumping the chart version by hand sounds like: we've automated everything, we just need to trigger the automation by hand each time.

I'm looking for a solution that will trigger helm_release on any changes made under the postrender block, and I think a checksum is the only way I know of at the moment. I compute a checksum over a directory and use it to change the description:

resource "helm_release" "istio-ingressgateway-internal" {
  name        = "istio-ingressgateway-internal"
  repository  = "https://istio-release.storage.googleapis.com/charts"
  chart       = "gateway"
  version     = "1.17.2"
  description = sha1(join("", [for f in sort(fileset("${path.module}/helm-values/istio-ingressgateway-internal-kustomize", "*")) : filesha1("${path.module}/helm-values/istio-ingressgateway-internal-kustomize/${f}")]))
  namespace   = kubernetes_namespace.istio-ingress.metadata[0].name
  timeout     = "900"
  values = [
    templatefile("${path.module}/helm-values/istio-ingressgateway-internal.yaml", { infra_name = var.infra_name, eks_suffix = var.eks_suffix })
  ]
  postrender {
    binary_path = "${path.module}/helm-values/istio-ingressgateway-internal-kustomize/kustomize.sh"
  }
  depends_on = [
    helm_release.istio-istiod,
    kubernetes_namespace.istio-ingress
  ]
}

@djmcgreal-cc
I'm surprised this hasn't had more attention! Is there an explanation somewhere why this can't happen? It's really counterintuitive, and so even with work-arounds, new helm charts created aren't always spotted as requiring extra work to force a deployment.

helm template | kubectl diff -f - seems fairly straightforward, though I guess it'd need to filter hooks.

@PowerSurj
Changes to the description field cause provider errors. I'm on Terraform v1.6.4:

Error: Provider produced inconsistent final plan
When expanding the plan for helm_release.myapp[0] to include new values learned so far during apply, provider "registry.terraform.io/hashicorp/helm" produced an invalid new value for .description: was cty.StringVal("4906e97bd629cbf998cd33647ab879d8ef90536e"), but now cty.StringVal("5380c9793e0f8669c17f5e45ee25ab927fe9bb28").

This is a bug in the provider, which should be reported in the provider's own issue tracker.

alfechner commented Mar 8, 2024

We set reset_values = true and that works fine for us.

My understanding is that the default behaviour of Helm is to consider this flag true if it's not set. That's the behaviour we're all used to.

The Terraform helm_release resource however defaults to false. That explains the unusual behaviour I guess.
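For reference, the workaround described above is just one extra argument on the resource; a minimal sketch (the resource name and chart path are placeholders):

```hcl
resource "helm_release" "myapp" {
  name  = "myapp"
  chart = "${path.module}"

  # When true, values are reset to the chart's built-in defaults on
  # upgrade, matching the Helm CLI behaviour described above.
  reset_values = true
}
```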

PowerSurj commented Mar 19, 2024

The error seems to be caused by charts with dependencies.
Description-driven updates work fine without reset_values and with charts that have no dependencies.
My solution is to break the dependencies out into independent charts.

@air3ijai
The cause of error seems to have been coming due to the Chart with dependencies.

I have a local-only chart without any dependencies. When I modify a template in the local folder, Terraform does not detect any changes.

@alfechner
@PowerSurj what's the issue with using reset_values?

@lipika-pal-partior
@PowerSurj what's the issue with using reset_values?

reset_values will reset the values to the default ones built into the chart... so if you use custom values, they will be overwritten

alfechner commented Mar 28, 2024

@lipika-pal-partior you refer to set, set_list and set_sensitive, right?

I wasn't aware of those arguments, thanks for pointing me into that direction.

But wouldn't the custom values be applied back the moment terraform is applied? So reset_values would reset them, and then they would be added back?

@djmcgreal-cc
reset_values doesn't work if the post_renderer executable changes. This really needs a more robust solution.

oubaydos commented Jun 11, 2024

Any updates?
I personally use this workaround:

set {
  name  = "ignoreMe"
  value = timestamp()
}

to trigger the deployment each time and let Helm control which templates/values have changed.

@hubertlepicki
FYI, this is related to the chart version. When you modify the chart templates, edit Chart.yaml and bump the version field so that the changes get picked up and applied by Terraform.

@NeodymiumFerBore
Using Helm provider 2.15.0, I'm using the following workaround. It computes the sha1sum of each template file, concatenates them, computes the sha1sum of the resulting string, and sets it as a dummy value. Unlike timestamp(), it does not trigger a change on every run.

resource "helm_release" "mychart" {
  name = "myapp"
  chart = "${path.module}/charts/mychart"
  set {
    name  = "templates_hash"
    value = sha1(join("", [for f in fileset("${path.module}/charts/mychart/templates", "*") : filesha1("${path.module}/charts/mychart/templates/${f}")]))
  }
}

@jeremymcgee73
I do not believe the workaround is working with 3.0.0-pre1. I reverted to 2.17.0 and this started working again as expected.
