VPA updater errors with messages ~"fail to get pod controller: pod=kube-scheduler-XYZ err=Unhandled targetRef v1 / Node / XYZ, last error node is not a valid owner" #7378
Comments
/area vertical-pod-autoscaler
Would it be possible to see the spec of the Pod that this is failing on?
/triage needs-information
We use standard kubeadm, K8s rev v1.25.16. I've updated the description with an example Pod spec.
Hi. It seems like you added the VPA spec. I'm looking for the spec of the Pod.
Thank you and sorry, fixed in the description.
Sorry, I need the metadata too.
No problem, here is the metadata:
The problem here is that this Pod doesn't have an ownerReference the VPA can use.
The VPA requires a Pod to have an owner.
/close
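For illustration, the metadata of a kubeadm static (mirror) Pod typically looks like the sketch below (names are placeholders). The kubelet sets an ownerReference of kind `Node` on mirror Pods, and the VPA cannot treat a Node as a scalable controller, which is what produces the "node is not a valid owner" message:

```yaml
# Illustrative metadata for a static control-plane Pod (names are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler-master-1
  namespace: kube-system
  ownerReferences:
  - apiVersion: v1
    kind: Node
    name: master-1
    controller: true
```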
@adrianmoisey: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/assign |
We are getting this error with static pods:
It's handled in the code here: `autoscaler/vertical-pod-autoscaler/pkg/target/controller_fetcher/controller_fetcher.go`, lines 289 to 293 at b01bff1.
Based on the comment, the node controller is skipped on purpose. In that case, it could emit an info message at a higher log level, or be ignored completely. Reporting this as an error is confusing.
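As a sketch of that suggestion (function names here are illustrative, not the actual VPA identifiers from `controller_fetcher.go`), an owner kind that is skipped on purpose, like `Node`, could be classified and reported as an informational message rather than bubbled up as an error:

```go
package main

import "fmt"

// isSkippedOwner reports whether an owner kind is skipped on purpose.
// The kubelet sets an ownerReference of kind "Node" on static/mirror
// pods, and the VPA cannot scale those, so Node is not a valid owner.
func isSkippedOwner(kind string) bool {
	return kind == "Node"
}

// classifyOwner returns a message for an owner kind. For skipped kinds
// the message is informational (in the real code this could be logged
// at a higher verbosity, e.g. klog.V(4), instead of as an error).
func classifyOwner(kind string) string {
	if isSkippedOwner(kind) {
		return fmt.Sprintf("skipping unsupported owner kind %q", kind)
	}
	return fmt.Sprintf("owner kind %q may be handled", kind)
}

func main() {
	fmt.Println(classifyOwner("Node"))
	fmt.Println(classifyOwner("Deployment"))
}
```

This keeps the on-purpose skip visible to operators who raise the log verbosity, without flooding the updater log with errors for every static Pod.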
Correct me if I'm wrong, but the error message is only produced when a VPA object exists that targets Pods owned by a Node?
Also, would it be possible for someone to create steps to reproduce this using kind?
This error is produced when any VPA object exists, not just one pointing to static pods. I'm unable to reproduce it with kind, but it's easy to reproduce with kubeadm. Example of how to install: https://blog.radwell.codes/2022/07/single-node-kubernetes-cluster-via-kubeadm-on-ubuntu-22-04/ (that guide's kubeadm installation uses old, no-longer-existing repos; instead use https://v1-30.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl).
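For reference, a minimal VPA object along the lines of the sketch below (names are placeholders) is enough to make the updater scan all Pods in the cluster, at which point the error shows up for the static control-plane Pods even though the VPA does not target them:

```yaml
# Hypothetical minimal VPA; any VPA in the cluster triggers the Pod scan.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: example-vpa
  namespace: default
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  updatePolicy:
    updateMode: "Auto"
```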
/reopen |
@adrianmoisey: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
With kubeadm I can see that ownerReference to the Node, but the error is not there. I'm trying to find a reproducer.
I can reproduce it in kind.
I get the following error in the admission-controller logs:
I agree that this shouldn't be bubbled up as an error.
Which component are you using?: vertical-pod-autoscaler
What version of the component are you using?: 1.1.2
What k8s version are you using (`kubectl version`)?: kubectl 1.25
What did you expect to happen?: VPA updater does not error with
`fail to get pod controller: pod=kube-scheduler-XYZ err=Unhandled targetRef v1 / Node / XYZ, last error node is not a valid owner`
What happened instead?: vpa-updater log contains
```
E1010 12:38:44.476232       1 api.go:153] fail to get pod controller: pod=kube-apiserver-x-master-1 err=Unhandled targetRef v1 / Node / x-master-1, last error node is not a valid owner
E1010 12:38:44.477788       1 api.go:153] fail to get pod controller: pod=kube-controller-manager-master-1 err=Unhandled targetRef v1 / Node / x-master-1, last error node is not a valid owner
E1010 12:38:44.547767       1 api.go:153] fail to get pod controller: pod=etcd-x-master-1 err=Unhandled targetRef v1 / Node / x-master-1, last error node is not a valid owner
E1010 12:38:44.554646       1 api.go:153] fail to get pod controller: pod=kube-scheduler-x-master-1 err=Unhandled targetRef v1 / Node / x-master-1, last error node is not a valid owner
```
How to reproduce it (as minimally and precisely as possible):
Update VPA from 0.4 to 1.1.2 and observe the vpa-updater log.
Anything else we need to know?: I've tried updating to 1.2.1 and the error is in the log again. It did not happen with VPA 0.4. I can see this error message in an already-fixed issue about a panic/SIGSEGV problem, but nowhere else.
kube-controller-manager Pod spec (generated by kubeadm, with only minor patches to the IPs)