🐛 Fix Pod spec toleration when the spec is a template #2380

Open · wants to merge 6 commits into base: main
59 changes: 59 additions & 0 deletions .changelog/2380.txt
@@ -0,0 +1,59 @@
```release-note:note
We have updated the logic of the resources that use the Pod specification template, such as `kubernetes_deployment_v1`, `kubernetes_stateful_set_v1`, etc.: the provider now keeps all tolerations (`spec.toleration`) returned by Kubernetes. The same applies to the data sources `kubernetes_pod_v1` and `kubernetes_pod`. The behavior of the resources `kubernetes_pod_v1` and `kubernetes_pod` remains unchanged, i.e. the provider keeps removing tolerations with well-known [taints](https://kubernetes.io/docs/reference/labels-annotations-taints/), since they might be attached to the object by a Kubernetes controller and could lead to a perpetual diff.
```

```release-note:bug
`resource/kubernetes_replication_controller`: fix an issue where the provider drops tolerations under the pod spec template (`*.template.spec.toleration`) when they use a well-known [taint](https://kubernetes.io/docs/reference/labels-annotations-taints/), which could lead to a perpetual diff.
```

```release-note:bug
`resource/kubernetes_replication_controller_v1`: fix an issue where the provider drops tolerations under the pod spec template (`*.template.spec.toleration`) when they use a well-known [taint](https://kubernetes.io/docs/reference/labels-annotations-taints/), which could lead to a perpetual diff.
```

```release-note:bug
`resource/kubernetes_stateful_set`: fix an issue where the provider drops tolerations under the pod spec template (`*.template.spec.toleration`) when they use a well-known [taint](https://kubernetes.io/docs/reference/labels-annotations-taints/), which could lead to a perpetual diff.
```

```release-note:bug
`resource/kubernetes_stateful_set_v1`: fix an issue where the provider drops tolerations under the pod spec template (`*.template.spec.toleration`) when they use a well-known [taint](https://kubernetes.io/docs/reference/labels-annotations-taints/), which could lead to a perpetual diff.
```

```release-note:bug
`resource/kubernetes_deployment`: fix an issue where the provider drops tolerations under the pod spec template (`*.template.spec.toleration`) when they use a well-known [taint](https://kubernetes.io/docs/reference/labels-annotations-taints/), which could lead to a perpetual diff.
```

```release-note:bug
`resource/kubernetes_deployment_v1`: fix an issue where the provider drops tolerations under the pod spec template (`*.template.spec.toleration`) when they use a well-known [taint](https://kubernetes.io/docs/reference/labels-annotations-taints/), which could lead to a perpetual diff.
```

```release-note:bug
`resource/kubernetes_daemonset`: fix an issue where the provider drops tolerations under the pod spec template (`*.template.spec.toleration`) when they use a well-known [taint](https://kubernetes.io/docs/reference/labels-annotations-taints/), which could lead to a perpetual diff.
```

```release-note:bug
`resource/kubernetes_daemon_set_v1`: fix an issue where the provider drops tolerations under the pod spec template (`*.template.spec.toleration`) when they use a well-known [taint](https://kubernetes.io/docs/reference/labels-annotations-taints/), which could lead to a perpetual diff.
```

```release-note:bug
`resource/kubernetes_cron_job`: fix an issue where the provider drops tolerations under the pod spec template (`*.template.spec.toleration`) when they use a well-known [taint](https://kubernetes.io/docs/reference/labels-annotations-taints/), which could lead to a perpetual diff.
```

```release-note:bug
`resource/kubernetes_cron_job_v1`: fix an issue where the provider drops tolerations under the pod spec template (`*.template.spec.toleration`) when they use a well-known [taint](https://kubernetes.io/docs/reference/labels-annotations-taints/), which could lead to a perpetual diff.
```

```release-note:bug
`resource/kubernetes_job`: fix an issue where the provider drops tolerations under the pod spec template (`*.template.spec.toleration`) when they use a well-known [taint](https://kubernetes.io/docs/reference/labels-annotations-taints/), which could lead to a perpetual diff.
```

```release-note:bug
`resource/kubernetes_job_v1`: fix an issue where the provider drops tolerations under the pod spec template (`*.template.spec.toleration`) when they use a well-known [taint](https://kubernetes.io/docs/reference/labels-annotations-taints/), which could lead to a perpetual diff.
```

```release-note:bug
`data_source/kubernetes_pod`: fix an issue where the provider drops tolerations under the pod spec (`spec.toleration`) when they use a well-known [taint](https://kubernetes.io/docs/reference/labels-annotations-taints/).
```

```release-note:bug
`data_source/kubernetes_pod_v1`: fix an issue where the provider drops tolerations under the pod spec (`spec.toleration`) when they use a well-known [taint](https://kubernetes.io/docs/reference/labels-annotations-taints/).
```
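To illustrate the rule described in these notes, here is a minimal, self-contained Go sketch of the filtering behavior. It is an illustration only: the provider's actual `builtInTolerations` map is not part of this diff, so the two well-known taint keys below are assumptions, and the helper name is hypothetical.

```go
package main

import "fmt"

// toleration is a stripped-down stand-in for v1.Toleration, for illustration only.
type toleration struct {
	Key      string
	Operator string
	Effect   string
}

// Assumed well-known taint keys; the provider's builtInTolerations map may differ.
var builtInTolerations = map[string]struct{}{
	"node.kubernetes.io/not-ready":   {},
	"node.kubernetes.io/unreachable": {},
}

// keepTolerations mirrors the intended behavior: pod-template resources and the
// pod data sources (isTemplate == true) keep every toleration, while the plain
// kubernetes_pod resources drop the well-known ones to avoid a perpetual diff.
func keepTolerations(in []toleration, isTemplate bool) []toleration {
	out := []toleration{}
	for _, t := range in {
		if _, wellKnown := builtInTolerations[t.Key]; wellKnown && !isTemplate {
			continue // stripped: likely added by a Kubernetes controller
		}
		out = append(out, t)
	}
	return out
}

func main() {
	ts := []toleration{{Key: "node.kubernetes.io/unreachable", Operator: "Exists", Effect: "NoExecute"}}
	fmt.Println(len(keepTolerations(ts, false))) // 0: filtered for the kubernetes_pod / kubernetes_pod_v1 resources
	fmt.Println(len(keepTolerations(ts, true)))  // 1: kept for Deployment/StatefulSet/etc. templates and data sources
}
```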
3 changes: 2 additions & 1 deletion kubernetes/data_source_kubernetes_pod_v1.go
@@ -65,7 +65,8 @@ func dataSourceKubernetesPodV1Read(ctx context.Context, d *schema.ResourceData,
return diag.FromErr(err)
}

podSpec, err := flattenPodSpec(pod.Spec)
// The isTeamplate argument is 'true' here because we want to keep all attributes returned by Kubernetes unchanged.
podSpec, err := flattenPodSpec(pod.Spec, true)
if err != nil {
return diag.FromErr(err)
}
3 changes: 3 additions & 0 deletions kubernetes/data_source_kubernetes_pod_v1_test.go
@@ -5,6 +5,7 @@ package kubernetes

import (
"fmt"
"regexp"
"testing"

"github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest"
@@ -16,6 +17,7 @@ func TestAccKubernetesDataSourcePodV1_basic(t *testing.T) {
dataSourceName := "data.kubernetes_pod_v1.test"
name := fmt.Sprintf("tf-acc-test-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
imageName := busyboxImage
oneOrMore := regexp.MustCompile(`^[1-9][0-9]*$`)

resource.ParallelTest(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
@@ -34,6 +36,7 @@ func TestAccKubernetesDataSourcePodV1_basic(t *testing.T) {
Check: resource.ComposeAggregateTestCheckFunc(
resource.TestCheckResourceAttr(dataSourceName, "metadata.0.name", name),
resource.TestCheckResourceAttr(dataSourceName, "spec.0.container.0.image", imageName),
resource.TestMatchResourceAttr(dataSourceName, "spec.0.toleration.#", oneOrMore),
),
},
},
52 changes: 42 additions & 10 deletions kubernetes/resource_kubernetes_deployment_v1_test.go
@@ -323,6 +323,7 @@ func TestAccKubernetesDeploymentV1_with_tolerations(t *testing.T) {
deploymentName := fmt.Sprintf("tf-acc-test-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
resourceName := "kubernetes_deployment_v1.test"
imageName := busyboxImage
key := "myKey"
tolerationSeconds := 6000
operator := "Equal"

@@ -332,11 +333,11 @@
CheckDestroy: testAccCheckKubernetesDeploymentV1Destroy,
Steps: []resource.TestStep{
{
Config: testAccKubernetesDeploymentV1ConfigWithTolerations(deploymentName, imageName, &tolerationSeconds, operator, nil),
Config: testAccKubernetesDeploymentV1ConfigWithTolerations(deploymentName, imageName, key, operator, "", &tolerationSeconds),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckKubernetesDeploymentV1Exists(resourceName, &conf),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.effect", "NoExecute"),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.key", "myKey"),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.key", key),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.operator", operator),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.toleration_seconds", "6000"),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.value", ""),
@@ -352,6 +353,7 @@ func TestAccKubernetesDeploymentV1_with_tolerations_unset_toleration_seconds(t *
deploymentName := fmt.Sprintf("tf-acc-test-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
resourceName := "kubernetes_deployment_v1.test"
imageName := busyboxImage
key := "myKey"
operator := "Equal"
value := "value"

@@ -361,11 +363,11 @@
CheckDestroy: testAccCheckKubernetesDeploymentV1Destroy,
Steps: []resource.TestStep{
{
Config: testAccKubernetesDeploymentV1ConfigWithTolerations(deploymentName, imageName, nil, operator, &value),
Config: testAccKubernetesDeploymentV1ConfigWithTolerations(deploymentName, imageName, key, operator, value, nil),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckKubernetesDeploymentV1Exists(resourceName, &conf),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.effect", "NoExecute"),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.key", "myKey"),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.key", key),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.operator", operator),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.value", "value"),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.toleration_seconds", ""),
@@ -375,6 +377,36 @@ func TestAccKubernetesDeploymentV1_with_tolerations_unset_toleration_seconds(t *
})
}

func TestAccKubernetesDeploymentV1_with_well_known_tolerations(t *testing.T) {
var conf appsv1.Deployment

deploymentName := fmt.Sprintf("tf-acc-test-%s", acctest.RandStringFromCharSet(10, acctest.CharSetAlphaNum))
resourceName := "kubernetes_deployment_v1.test"
imageName := busyboxImage
key := "node.kubernetes.io/unreachable"
tolerationSeconds := 6000
operator := "Exists"

resource.ParallelTest(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
ProviderFactories: testAccProviderFactories,
CheckDestroy: testAccCheckKubernetesDeploymentV1Destroy,
Steps: []resource.TestStep{
{
Config: testAccKubernetesDeploymentV1ConfigWithTolerations(deploymentName, imageName, key, operator, "", &tolerationSeconds),
Check: resource.ComposeAggregateTestCheckFunc(
testAccCheckKubernetesDeploymentV1Exists(resourceName, &conf),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.effect", "NoExecute"),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.key", key),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.operator", operator),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.toleration_seconds", "6000"),
resource.TestCheckResourceAttr(resourceName, "spec.0.template.0.spec.0.toleration.0.value", ""),
),
},
},
})
}

func TestAccKubernetesDeploymentV1_with_container_liveness_probe_using_exec(t *testing.T) {
var conf appsv1.Deployment

@@ -1796,14 +1828,14 @@ func testAccKubernetesDeploymentV1ConfigWithSecurityContextSysctl(deploymentName
`, deploymentName, imageName)
}

func testAccKubernetesDeploymentV1ConfigWithTolerations(deploymentName, imageName string, tolerationSeconds *int, operator string, value *string) string {
func testAccKubernetesDeploymentV1ConfigWithTolerations(deploymentName, imageName, key, operator, value string, tolerationSeconds *int) string {
tolerationDuration := ""
if tolerationSeconds != nil {
tolerationDuration = fmt.Sprintf("toleration_seconds = %d", *tolerationSeconds)
}
valueString := ""
if value != nil {
valueString = fmt.Sprintf("value = \"%s\"", *value)
if value != "" {
valueString = fmt.Sprintf("value = \"%s\"", value)
}

return fmt.Sprintf(`resource "kubernetes_deployment_v1" "test" {
@@ -1832,7 +1864,7 @@ func testAccKubernetesDeploymentV1ConfigWithTolerations(deploymentName, imageNam
spec {
toleration {
effect = "NoExecute"
key = "myKey"
key = "%s"
operator = "%s"
%s
%s
@@ -1841,14 +1873,14 @@ func testAccKubernetesDeploymentV1ConfigWithTolerations(deploymentName, imageNam
container {
image = "%s"
name = "containername"
command = ["sleep", "300"]
command = ["sleep", "infinity"]
}
termination_grace_period_seconds = 1
}
}
}
}
`, deploymentName, operator, valueString, tolerationDuration, imageName)
`, deploymentName, key, operator, valueString, tolerationDuration, imageName)
}

func testAccKubernetesDeploymentV1ConfigWithLivenessProbeUsingExec(deploymentName, imageName string) string {
2 changes: 1 addition & 1 deletion kubernetes/resource_kubernetes_pod_v1.go
@@ -200,7 +200,7 @@ func resourceKubernetesPodV1Read(ctx context.Context, d *schema.ResourceData, me
return diag.FromErr(err)
}

podSpec, err := flattenPodSpec(pod.Spec)
podSpec, err := flattenPodSpec(pod.Spec, false)
if err != nil {
return diag.FromErr(err)
}
2 changes: 1 addition & 1 deletion kubernetes/structures_daemonset.go
@@ -24,7 +24,7 @@ func flattenDaemonSetSpec(in appsv1.DaemonSetSpec, d *schema.ResourceData, meta
att["selector"] = flattenLabelSelector(in.Selector)
}

podSpec, err := flattenPodSpec(in.Template.Spec)
podSpec, err := flattenPodSpec(in.Template.Spec, true)
if err != nil {
return nil, err
}
2 changes: 1 addition & 1 deletion kubernetes/structures_deployment.go
@@ -35,7 +35,7 @@ func flattenDeploymentSpec(in appsv1.DeploymentSpec, d *schema.ResourceData, met

att["strategy"] = flattenDeploymentStrategy(in.Strategy)

podSpec, err := flattenPodSpec(in.Template.Spec)
podSpec, err := flattenPodSpec(in.Template.Spec, true)
if err != nil {
return nil, err
}
8 changes: 4 additions & 4 deletions kubernetes/structures_pod.go
@@ -38,7 +38,7 @@ func flattenOS(in v1.PodOS) []interface{} {
return []interface{}{att}
}

func flattenPodSpec(in v1.PodSpec) ([]interface{}, error) {
func flattenPodSpec(in v1.PodSpec, isTeamplate bool) ([]interface{}, error) {
Contributor review comment on this line: Thanks for the changes @arybolovlev, should this be `isTeamplate bool` or `isTemplate bool`?

att := make(map[string]interface{})
if in.ActiveDeadlineSeconds != nil {
att["active_deadline_seconds"] = *in.ActiveDeadlineSeconds
@@ -141,7 +141,7 @@ func flattenPodSpec(in v1.PodSpec) ([]interface{}, error) {
}

if len(in.Tolerations) > 0 {
att["toleration"] = flattenTolerations(in.Tolerations)
att["toleration"] = flattenTolerations(in.Tolerations, isTeamplate)
}

if len(in.TopologySpreadConstraints) > 0 {
@@ -293,11 +293,11 @@ func flattenSysctls(sysctls []v1.Sysctl) []interface{} {
return att
}

func flattenTolerations(tolerations []v1.Toleration) []interface{} {
func flattenTolerations(tolerations []v1.Toleration, isTeamplate bool) []interface{} {
att := []interface{}{}
for _, v := range tolerations {
// The API Server may automatically add several Tolerations to pods, strip these to avoid TF diff.
if _, ok := builtInTolerations[v.Key]; ok {
if _, ok := builtInTolerations[v.Key]; ok && !isTeamplate {
log.Printf("[INFO] ignoring toleration with key: %s", v.Key)
continue
}
43 changes: 42 additions & 1 deletion kubernetes/structures_pod_test.go
@@ -16,6 +16,7 @@ import (
func TestFlattenTolerations(t *testing.T) {
cases := []struct {
Input []corev1.Toleration
IsTeamplate bool
ExpectedOutput []interface{}
}{
{
@@ -25,6 +26,7 @@ func TestFlattenTolerations(t *testing.T) {
Value: "true",
},
},
false,
[]interface{}{
map[string]interface{}{
"key": "node-role.kubernetes.io/spot-worker",
@@ -43,6 +45,7 @@ func TestFlattenTolerations(t *testing.T) {
Value: "true",
},
},
false,
[]interface{}{
map[string]interface{}{
"key": "node-role.kubernetes.io/other-worker",
@@ -61,21 +64,59 @@ func TestFlattenTolerations(t *testing.T) {
TolerationSeconds: ptr.To(int64(120)),
},
},
false,
[]interface{}{
map[string]interface{}{
"effect": "NoExecute",
"toleration_seconds": "120",
},
},
},
{
[]corev1.Toleration{
{
Effect: "NoExecute",
Key: "node.kubernetes.io/unreachable",
Operator: "Exists",
TolerationSeconds: ptr.To(int64(120)),
},
},
false,
[]interface{}{},
},
{
[]corev1.Toleration{},
false,
[]interface{}{},
},
{
[]corev1.Toleration{},
true,
[]interface{}{},
},
{
[]corev1.Toleration{
{
Effect: "NoExecute",
Key: "node.kubernetes.io/unreachable",
Operator: "Exists",
TolerationSeconds: ptr.To(int64(120)),
},
},
true,
[]interface{}{
map[string]interface{}{
"effect": "NoExecute",
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"toleration_seconds": "120",
},
},
},
}

for _, tc := range cases {
output := flattenTolerations(tc.Input)
output := flattenTolerations(tc.Input, tc.IsTeamplate)
if !reflect.DeepEqual(output, tc.ExpectedOutput) {
t.Fatalf("Unexpected output from flattener.\nExpected: %#v\nGiven: %#v",
tc.ExpectedOutput, output)
2 changes: 1 addition & 1 deletion kubernetes/structures_replication_controller.go
@@ -22,7 +22,7 @@ func flattenReplicationControllerSpec(in corev1.ReplicationControllerSpec, d *sc
}

if in.Template != nil {
podSpec, err := flattenPodSpec(in.Template.Spec)
podSpec, err := flattenPodSpec(in.Template.Spec, true)
if err != nil {
return nil, err
}
2 changes: 1 addition & 1 deletion kubernetes/structures_stateful_set.go
@@ -188,7 +188,7 @@ func flattenPodTemplateSpec(t corev1.PodTemplateSpec) ([]interface{}, error) {
template := make(map[string]interface{})

template["metadata"] = flattenMetadataFields(t.ObjectMeta)
spec, err := flattenPodSpec(t.Spec)
spec, err := flattenPodSpec(t.Spec, true)
if err != nil {
return []interface{}{template}, err
}