GH-1097 causes metadata to always be recomputed with some helm charts. #1150

Open
BenB196 opened this issue May 31, 2023 · 19 comments · Fixed by #1153


BenB196 commented May 31, 2023

Terraform, Provider, Kubernetes and Helm Versions

Terraform version: 1.4.6
Provider version: v2.10.0
Kubernetes version: 1.23.17 (EKS)

Affected Resource(s)

  • helm_release

Terraform Configuration Files

resource "helm_release" "filebeat" {
  chart      = "raw"
  name       = var.filebeat.name
  namespace  = var.filebeat.namespace
  repository = "https://dysnix.github.io/charts"
  version    = "0.3.1"

  values = [
    <<-EOF
    ${yamlencode({ resources = [local.filebeat] })}
    EOF
  ]
}

Debug Output


Panic Output

N/A

Steps to Reproduce

  1. Use the Helm chart configuration defined above to deploy something; any type of Kubernetes resource will do.
  2. Rerun terraform plan.
  3. Observe that the metadata attribute is marked for recomputation even though nothing has changed.

Downgrading the provider from 2.10.0 to 2.9.0 makes the issue go away (a sketch of the corresponding provider pin follows).
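
For anyone applying that downgrade, a minimal provider pin might look like this (a sketch using the standard required_providers block; 2.9.0 is simply the version reported in this thread to behave correctly):

terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      # 2.9.0 is the last version reported in this thread without the
      # perpetual metadata diff; later comments suggest 2.16.0 fixes it.
      version = "2.9.0"
    }
  }
}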

Expected Behavior

I would expect that, when rerunning Terraform with no changes to the Helm values, the metadata would not be recomputed.

Actual Behavior

Observe that the metadata gets regenerated:

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place
Terraform will perform the following actions:
  # module.kubernetes_filebeat_autodiscovery_cluster_1.helm_release.filebeat will be updated in-place
  ~ resource "helm_release" "filebeat" {
        id                         = "autodiscover"
      ~ metadata                   = [
          - {
              - app_version = ""
              - chart       = "raw"
              - name        = "autodiscover"
              - namespace   = "elastic-monitors"
              - revision    = 3
              - values      = jsonencode(
                    {
                      - resources = [
                          - {
                              - apiVersion = "beat.k8s.elastic.co/v1beta1"
                              - kind       = "Beat"
                              - metadata   = {
                                  - labels      = null
                                  - name        = "autodiscover"
                                  - namespace   = "default"
                                }
                                ... <spec_removed>
                            },
                        ]
                    }
                )
              - version     = "v0.3.1"
            },
        ] -> (known after apply)
        name                       = "autodiscover"
        # (27 unchanged attributes hidden)
    }
Plan: 0 to add, 1 to change, 0 to destroy.


@BenB196 BenB196 added the bug label May 31, 2023
@BenB196 BenB196 changed the title #1097 causes metadata to always be recomputed with some helm charts. GH-1097 causes metadata to always be recomputed with some helm charts. May 31, 2023

fewkso commented May 31, 2023

We observed the same behaviour with 2.10.0, but in addition some chart installs started to fail with this message:
Error: error validating "": error validating data: the server responded with the status code 426 but did not return more information
terraform plan would pass, but terraform apply failed (we're using GitLab's terraform image for TF jobs).
There might be something in the charts themselves that passed before and now makes the release fail.
Reverting to 2.9.0 fixes the problem.


BenB196 commented May 31, 2023

I'm not really familiar with this code base, but looking at the pull request, I see that there are tests for set - https://github.com/hashicorp/terraform-provider-helm/pull/1097/files#diff-0c630859c0a0648a5a7cd178d9b8d011108f16648cd8fc6b651390cec95ebc56R1781-R1827 & https://github.com/hashicorp/terraform-provider-helm/pull/1097/files#diff-0c630859c0a0648a5a7cd178d9b8d011108f16648cd8fc6b651390cec95ebc56R2077-R2111 - but none for values, which I think is where the issue is.

I unfortunately don't have much availability to dig deeper into this issue, but I suspect it is reproducible when using only the values attribute (a sketch of such a minimal configuration follows).
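
For illustration, a minimal values-only configuration along those lines might look like this (a sketch, not a confirmed reproduction; the chart and repository are the ones from the original report, while the release name and the single ConfigMap resource are placeholders):

resource "helm_release" "values_only" {
  name       = "values-only-repro"
  namespace  = "default"
  repository = "https://dysnix.github.io/charts"
  chart      = "raw"
  version    = "0.3.1"

  # No set blocks at all; everything goes through values.
  values = [
    yamlencode({
      resources = [
        {
          apiVersion = "v1"
          kind       = "ConfigMap"
          metadata   = { name = "values-only-repro" }
          data       = { hello = "world" }
        }
      ]
    })
  ]
}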

shushpanchik commented

Not sure if it's related, but for me the plan now shows sensitive data from values in the metadata changes.

jrhouston (Contributor) commented

I was able to reproduce this with the chart mentioned in the original issue. It looks like what's happening is that the version attribute is being flagged as changed because, after install, the version gets prefixed with v and so changes from 1.2.3 to v1.2.3 in the state (sketched below). I've opened a PR with a fix for this.
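
To make that mismatch concrete, the perpetual diff looks roughly like this from the configuration side (an editorial sketch; the chart and repository are the ones from the original report, the release name is a placeholder):

resource "helm_release" "example" {
  chart      = "raw"
  name       = "example"
  repository = "https://dysnix.github.io/charts"

  # Configured without the prefix...
  version = "0.3.1"
  # ...but after install the chart reports its version as "v0.3.1",
  # which is what lands in the state, so every subsequent plan sees
  # version as changed and marks metadata as (known after apply).
}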

@shushpanchik I'm not sure if your issue is the same as the one above. Can you share your config so I can try and reproduce?


SteveJ-IT commented Jun 8, 2023

This doesn't work; we're still seeing plan/apply changes to metadata on every run when no other values change.

As you've confirmed, reverting to 2.9.0 solves this problem, so we'll stick with that for now. Thanks for attempting to resolve it. Alternatively, you can remove the version input from helm_release and let the provider use the Chart.yaml version instead, as this also resolves the issue. This is discussed in the following active issue: #1157


itay-grudev commented Jan 12, 2024

@jrhouston Could this please be reopened? See @SteveJ-IT's comment. This is still a problem in 2.12.1. As of Jan 2024 we still need to downgrade to 2.9.0 in order to avoid constant helm upgrades.


lodotek commented Mar 25, 2024

(Quoting @itay-grudev's request above.)

Bump!


BenB196 commented Mar 25, 2024

Just an FYI: while there may still be a "bug", there is a workaround for this issue on 2.12.1.

If you're setting a chart version like:

  version    = "0.3.1"

(without the v in front), and the metadata diff contains:

- version     = "v0.3.1"

(with a v in front), update your chart version to match:

  version    = "v0.3.1"

This should fix the perpetual diff in most cases. A full example applied to the original configuration follows.
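
Applied to the configuration from the original report, the workaround looks like this (a sketch; only the version line changes, and v0.3.1 is the v-prefixed version string the chart itself publishes):

resource "helm_release" "filebeat" {
  chart      = "raw"
  name       = var.filebeat.name
  namespace  = var.filebeat.namespace
  repository = "https://dysnix.github.io/charts"
  # Match the v-prefixed version recorded in the release metadata so
  # plan no longer sees a change on every run.
  version    = "v0.3.1"

  values = [
    <<-EOF
    ${yamlencode({ resources = [local.filebeat] })}
    EOF
  ]
}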

itay-grudev commented

(Quoting the workaround above.)

Isn't that a separate issue?


BenB196 commented Mar 25, 2024

(Quoting the workaround above and @itay-grudev's question, "Isn't that a separate issue?")

I don't think so; I'm 99% sure the diff issue is resolved and only the v/no-v issue remains; see #1150 (comment).

If you're still having issues on 2.12.1, I'd recommend opening a new issue with a reproducible example. I can no longer reproduce this on 2.12.1.


dan-dbnl commented May 30, 2024

So close. I thought @BenB196's comment about including the v solved the issue for me, but I might have something slightly distinct.

If I use:

version = "v6.*"  # or version = "6.*"

I see proposed changes in metadata (but nothing actually happens), whereas if I pin the version (to the latest one):

version = "v6.0.8" # or version = "6.0.8"

I get a clean/empty terraform plan.

I can pin for now, but any ideas whether this is a similar issue? I would love for this configuration to plan updates automatically when there are new releases, but not otherwise (see the sketch below).
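
Side by side, the two forms described above behave like this (an editorial sketch of the reported behaviour; the repository and chart names are placeholders):

resource "helm_release" "example" {
  name       = "example"
  repository = "https://example.github.io/charts" # placeholder
  chart      = "example-chart"                    # placeholder

  # Constraint form: resolves to the newest matching release, but
  # reportedly produces a spurious metadata diff on every plan.
  # version = "6.*"

  # Pinned form: a clean plan, at the cost of manual version bumps.
  version = "6.0.8"
}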

itay-grudev commented

I can confirm this behaviour. I would rather not pin the version completely, so that we keep getting minor updates automatically, but that causes the false-positive metadata change.

We have a lot of Helm releases, and these diffs are now seriously polluting our plans and making them difficult to review.


Fresa commented Jul 8, 2024

I can also confirm this behavior on 2.14.0 and 2.10.1; downgrading to 2.9.0 solves it.


BBBmau commented Aug 30, 2024

Reopening, since this is still an issue that users are coming across.

@BBBmau BBBmau reopened this Aug 30, 2024
YawataNoKami commented

(Quoting @Fresa's comment above.)

We are still on 2.9.0 too. Any news on this issue? For info, I tried 2.15.0 and the issue is still there.


itay-grudev commented Oct 9, 2024

@YawataNoKami I don't think @BBBmau's changes have been released yet. You'll have to wait until 2.16.0, I suppose.


BBBmau commented Oct 10, 2024

@YawataNoKami @itay-grudev v2.16.0 has just been released. Let us know if the issue is still present. Once we get confirmation, we can close this issue.

itay-grudev commented

It looks like it's working.

ikarlashov commented

It's fixed indeed.
