-
Hello, I have a problem with etcd continuously restarting.

❯ kubectl get pods
NAME READY STATUS RESTARTS AGE
cloud-controller-manager-rke2-server-1 0/1 Running 5179 (3m59s ago) 20d
etcd-rke2-server-1 0/1 Running 4453 (8s ago) 18h
kube-proxy-rke2-server-1 0/1 Running 4464 (7s ago) 20d

My cluster consists of 3 nodes:

❯ kubectl get nodes
NAME STATUS ROLES AGE VERSION
rke2-agent-1 Ready <none> 20d v1.30.4+rke2r1
rke2-loadbalancer-1 Ready control-plane,master 20d v1.30.4+rke2r1
rke2-server-1 Ready etcd 20d v1.30.4+rke2r1

When I look at the description of the etcd pod, I can see that the kubelet can't check the health of the pod:

❯ kubectl describe pods etcd-rke2-server-1
...
Warning BackOff 72m (x1876 over 17h) kubelet Back-off restarting failed container etcd in pod etcd-rke2-server-1_kube-system(5716c4a9275006798064896b8e82fcd0)
Warning Unhealthy 53m (x3790 over 18h) kubelet Startup probe failed: Get "http://localhost:2381/health?serializable=true": dial tcp [::1]:2381: connect: connection refused
Normal Killing 19m kubelet Container etcd failed startup probe, will be restarted
Warning Unhealthy 2m22s (x126 over 23m) kubelet Startup probe failed: Get "http://localhost:2381/health?serializable=true": dial tcp [::1]:2381: connect: connection refused

Accessing the endpoint from the node works fine.
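For reference, hitting the same health endpoint the probe uses, e.g. with curl (assuming it is installed on the node), should return etcd's JSON health status:

❯ curl "http://localhost:2381/health?serializable=true"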
There are no errors in the logs. Could it be a firewall issue?
-
localhost appears to resolve to an IPv6 address in the pod when the kubelet is health-checking it. Do you perhaps have IPv6 incorrectly configured on this cluster, or on that node? Please show the output of
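For example, something like the following on the node shows how localhost is actually being resolved (getent goes through NSS, so it reflects /etc/hosts):

❯ getent ahosts localhost
❯ cat /etc/hosts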
Never mind... the problem was very simple:

❯ cat /etc/hosts
*127.0.0.1 localhost

When I edited my hosts file some time ago, I accidentally added a star in front of the localhost entry.
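In case it helps anyone else, the fix was just removing the stray character; a typical loopback block in /etc/hosts (the exact contents vary by distro) looks like:

127.0.0.1 localhost
::1       localhost ip6-localhost ip6-loopback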
Thank you for your time!