
Unable to create Custom Resource (CR) Failed to determine GroupVersionResource for manifest #1917

Open
samiamoura opened this issue Dec 1, 2022 · 11 comments
Labels
acknowledged Issue has undergone initial review and is in our work queue. manifest progressive apply upstream-terraform

Comments

@samiamoura

samiamoura commented Dec 1, 2022

Terraform Version, Provider Version and Kubernetes Version

Terraform version: v1.3.6
Kubernetes provider version: v2.16.0
Helm provider version: v2.7.1
Kubernetes version: v1.22.15-eks-fb459a

Affected Resource(s)

  • kubernetes_manifest

Terraform Configuration Files

resource "helm_release" "traefik" {
  name                     = "traefik"
  repository              = "https://helm.traefik.io/traefik"
  chart                      = "traefik"
  namespace            = "traefik"
  create_namespace = true
  version                   = "v10.24.2"
  values                     = [
    "${templatefile("${path.module}/templates/traefik_values.tftpl",
      {
        extra_ports = var.extra_ports
      }
    )}"
  ]
}

resource "kubernetes_manifest" "middleware_argocd_argocd" {
  manifest = {
    "apiVersion"    = "traefik.containo.us/v1alpha1"
    "kind"              = "Middleware"
    "metadata"      = {
      "name"          = "argocd"
      "namespace" = "argocd"
    }
    "spec" = {
      "redirectScheme" = {
        "permanent" = true
        "scheme"    = "https"
      }
    }
  }

  depends_on = [
    helm_release.traefik
  ]
}

Steps to Reproduce

  1. Create a Terraform configuration file traefik.tf containing the helm_release.traefik and kubernetes_manifest.middleware_argocd_argocd resources shown above.
  2. Terraform plan/apply:
terraform plan 

or

terraform apply

Expected Behavior

What should have happened?

Terraform should first create the Traefik Helm release (and the appropriate CRDs), and then the kubernetes_manifest.middleware_argocd_argocd manifest, because we have defined the following depends_on in the Traefik Terraform configuration:

depends_on = [
    helm_release.traefik
  ]

Actual Behavior

What actually happened?

When planning or applying the Terraform configuration, the following error appears:

[screenshot: Error: Failed to determine GroupVersionResource for manifest]

We can see that the Terraform Kubernetes provider first tries to read the relevant middlewares.traefik.containo.us CRD before creating the CR, despite the depends_on field.

The error seems expected, because at this point the middlewares.traefik.containo.us CRD doesn't exist yet; this is why I added the

depends_on = [
    helm_release.traefik
  ]

block, to orchestrate the order in which the resources are created.

The correct behavior should be to first create the Traefik Helm release (and the appropriate CRDs), and only then read and create the CR.

Important Factoids

References

There are several related issues already open:

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@samiamoura samiamoura added the bug label Dec 1, 2022
@github-actions github-actions bot removed the bug label Dec 1, 2022
@samiamoura samiamoura changed the title Failed to determine GroupVersionResource for manifest Unable to create Custom Resource (CR) Failed to determine GroupVersionResource for manifest Dec 1, 2022
@alexsomesan
Member

If the Middleware.traefik.containo.us/v1alpha1 CRD is installed by the Helm release, then you cannot create resources of that type with kubernetes_manifest during the same apply. They need to be split into separate apply operations.

This is due to the provider needing to read the CRD schema from the API server during the planning phase, while the CRD itself would only be created by the helm chart during the apply phase (later than required).
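A minimal sketch of that split, assuming the resource addresses from the configuration above (-target is the standard Terraform CLI flag for limiting an operation to selected resources):

# First apply: create only the Helm release, so the CRDs end up in the cluster
terraform apply -target=helm_release.traefik

# Second apply: the provider can now read the CRD schema at plan time
terraform apply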

@ibadullaev-inc4

ibadullaev-inc4 commented Dec 26, 2022

Same issue.

  1. Use helm_release to install CRDs
  2. Install a kubernetes_manifest that uses those CRDs
Error: Failed to determine GroupVersionResource for manifest
with kubernetes_manifest.


no matches for kind "xxxx" in group "XXXXXXX"

@gmile

gmile commented Feb 10, 2023

This is due to the provider needing to read the CRD schema from the API server during the planning phase, while the CRD itself would only be created by the helm chart during the apply phase (later than required).

@alexsomesan this makes sense when attempting to create CRDs (via a Helm chart) and resources based on those CRDs in one go.

However, in our situation we're seeing this same issue after the CRDs have already been applied, e.g. on a subsequent apply. Following the logic you've outlined, the provider should be able to fetch the CRD definitions during the planning phase, right?

Either I am misinterpreting how the provider works, or something else is standing in the way of applying the resources 🤔

@gmile

gmile commented Feb 10, 2023

In fact it's a bit more nuanced in our case:

  1. the CRDs were applied,
  2. the resource itself was applied too (via a targeted apply),
  3. now, running a non-targeted terraform plan stumbles upon the issue from the OP

Any idea what kind of further diagnostics could help out here? 🤔 I'd be happy to provide it

@gmile

gmile commented Feb 14, 2023

Ignore what I said in #1917 (comment), in the end it was our fault.

The mistake was that the CRDs were applied to one cluster, but the custom resources were being created in another 🤦

@Hronom

Hronom commented May 6, 2023

This issue is critical! With this issue, the Kubernetes Terraform module is useless in many cases.

Why is there no workaround yet?

Is there any workaround available?

@marziply

marziply commented Jun 5, 2023

This issue is critical! With this issue, the Kubernetes Terraform module is useless in many cases.

Why is there no workaround yet?

Is there any workaround available?

I share these sentiments. I wonder if an argument can be added for specifying a URL to fetch the spec from instead? Maybe even a parameter accepting a string in case the spec can be sourced from a file or a variable. As others have noted, there's limited value in kubernetes_manifest if there isn't any mechanism available to plan and apply CRDs that do not exist yet.

@SinisterMinister

Your best bet to work around this issue is to store the CRD as a ConfigMap and load it with a Job instead of trying to manage it with TF directly. If you rely on some sort of statefile scanning for drift detection, etc., I'm not sure what the best option would be.
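For illustration, a minimal sketch of that workaround, assuming the CRD YAML lives at ${path.module}/traefik-crds.yaml and that a crd-applier service account with permission to create CRDs already exists (both names are hypothetical, not from this thread):

resource "kubernetes_config_map_v1" "traefik_crds" {
  metadata {
    name      = "traefik-crds"
    namespace = "kube-system"
  }

  data = {
    "crds.yaml" = file("${path.module}/traefik-crds.yaml") # hypothetical path
  }
}

resource "kubernetes_job_v1" "apply_traefik_crds" {
  metadata {
    name      = "apply-traefik-crds"
    namespace = "kube-system"
  }

  spec {
    backoff_limit = 2

    template {
      metadata {}

      spec {
        # assumed to exist, with RBAC allowing create/update on customresourcedefinitions
        service_account_name = "crd-applier"
        restart_policy       = "Never"

        container {
          name    = "kubectl"
          image   = "bitnami/kubectl:1.28"
          command = ["kubectl", "apply", "-f", "/crds/crds.yaml"]

          volume_mount {
            name       = "crds"
            mount_path = "/crds"
          }
        }

        volume {
          name = "crds"

          config_map {
            name = kubernetes_config_map_v1.traefik_crds.metadata[0].name
          }
        }
      }
    }
  }

  # block the apply until the Job has finished, so the CRDs exist before dependents run
  wait_for_completion = true
}

kubernetes_config_map_v1 and kubernetes_job_v1 are standard resources of the hashicorp/kubernetes provider; resources that consume the CRDs can then depend on the Job rather than on a Helm release.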

@hegerdes

hegerdes commented Jan 8, 2024

As others have said, it would be awesome to deploy CRDs and resources using those CRDs in the same run. Since the CRDs are not in the cluster when terraform plan is run, it should be possible to force-skip their dry-run execution during plan, at the user's own risk of course.

Just like ArgoCD, we could have a skip-dry-run-for-new-custom-resources-types option which is false by default. Users can then decide whether they want to take the risk of applying an unknown plan for those resources or not.
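For reference, the ArgoCD behavior referred to here is opt-in per resource via a sync-option annotation (shown only as the analogue being proposed; the Terraform provider has no equivalent today):

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: argocd
  annotations:
    # ArgoCD skips the dry run for this resource if its CRD is not yet known to the cluster
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true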

@iBrandyJackson iBrandyJackson added acknowledged Issue has undergone initial review and is in our work queue. progressive apply manifest labels Mar 22, 2024
@bacongobbler

FWIW a workaround would be to install the CRDs using the kubernetes provider, then fall back to the alekc/kubectl provider, which does not rely on schema validation during terraform plan / terraform apply.

Here's an excerpt from our own terraform configuration, which installs cert-manager, its CRDs, and creates an Issuer:

terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
    helm = {
      source = "hashicorp/helm"
    }
    kubectl = {
      source = "alekc/kubectl"
    }
  }
}

resource "kubernetes_manifest" "cert_manager_crds" {
    for_each = { for manifest in provider::kubernetes::manifest_decode_multi(file("${path.module}/cert-manager.crds.yaml")) : "${lower(manifest.kind)}-${manifest.metadata.name}" => manifest }
    manifest = each.value
    wait {
        condition {
          type = "Established"
          status = "True"
        }
    }
}

resource "helm_release" "cert_manager" {
  name = "cert-manager"

  repository       = "https://charts.jetstack.io"
  chart            = "cert-manager"
  namespace        = "cert-manager"
  create_namespace = true

  depends_on = [kubernetes_manifest.cert_manager_crds]
}

resource "kubectl_manifest" "prod_issuer" {
  depends_on = [helm_release.cert_manager]
  yaml_body  = file("${path.module}/prod-issuer.yaml")
}
