Allow automatic recreation on LB ERROR state to be disabled #2596
base: master
Conversation
This is helpful when, for example, you are trying to debug an LB in the ERROR state.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Welcome @baurmatt!
Hi @baurmatt. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
// Allow users to disable automatic recreation on Octavia ERROR state
recreateOnError := getBoolFromServiceAnnotation(service, ServiceAnnotationLoadBalancerRecreateOnError, true)
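For orientation, here is a minimal, self-contained sketch of the behaviour under discussion: the annotation-derived recreateOnError flag would gate the existing delete-and-recreate path when Octavia reports the LB in ERROR state. The function and constant names below are illustrative stand-ins, not the actual CPO code.

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative stand-ins for the Octavia provisioning statuses the controller checks.
const (
	statusActive = "ACTIVE"
	statusError  = "ERROR"
)

// reconcileErroredLB sketches the decision this PR is about: with
// recreateOnError == true (the default) an errored LB is deleted and rebuilt,
// while with recreateOnError == false it is left in place so it can be inspected.
func reconcileErroredLB(provisioningStatus string, recreateOnError bool) (string, error) {
	if provisioningStatus != statusError {
		return "keep", nil
	}
	if recreateOnError {
		return "delete-and-recreate", nil
	}
	return "keep", errors.New("load balancer is in ERROR state and automatic recreation is disabled")
}

func main() {
	action, err := reconcileErroredLB(statusError, false)
	fmt.Println(action, err)
}
```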
This is exposing a debugging function to the end users. I think I'd rather make it an option in the OCCM configuration, so that an administrator can turn it on and investigate what's happening. What do you think?
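For comparison, a purely hypothetical sketch of what such an operator-controlled switch could look like if it lived in the cloud-config [LoadBalancer] section instead of a per-Service annotation. The field and tag names below are invented for illustration; they are not existing CPO options.

```go
package main

import "fmt"

// Hypothetical sketch only: an operator-controlled switch living in the
// [LoadBalancer] section of cloud-config instead of a per-Service annotation.
// The field and gcfg tag names are made up for illustration.
type LoadBalancerOpts struct {
	// When false, an LB that enters ERROR state is left in place for
	// debugging instead of being deleted and recreated.
	RecreateOnError bool `gcfg:"recreate-on-error"`
}

func main() {
	opts := LoadBalancerOpts{RecreateOnError: true}
	fmt.Printf("recreate-on-error=%v\n", opts.RecreateOnError)
}
```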
@dulek Thanks for your review! :) In general I agree, but in our Managed Kubernetes setup users wouldn't be able to change the OCCM configuration, because it is not exposed to them in an editable way. Implementing it as an OCCM configuration option would also affect all LBs, while my implementation limits the functionality to a single LB.
Um... does this mean every configuration option we have for OCCM will also not be available for managed k8s?
I'm not sure what other situations we have in such a scenario and how we handle them.
Um... does this mean every configuration option we have for OCCM will also not be available for managed k8s?
Just to clarify, when I'm talking about the OCCM config I mean the cloud-config file/secret. In our managed k8s setup, we don't have (persistent) writable access to it, because the cloud provider runs on the master nodes, which we don't have access to. cloud-config is accessible read-only on the worker nodes for csi-cinder-nodeplugin. So yes, other options aren't (freely) configurable for us either.
@baurmatt: I don't believe the users of managed K8s should be trying to debug stuff on the Octavia side really. Can you provide an example where keeping the Octavia LB in ERROR state aids with debugging? Regular users should not have access to amphora resources (admin-only API) or the Nova VMs backing amphoras (these should live in a service tenant). The LB itself does not expose any debugging information. The Nova VM does expose the error, but most of the time it's a NoValidHost anyway, so scheduler logs are required to do the debugging.
@dulek For background: I created a LoadBalancer Service with loadbalancer.openstack.org/network-id: $uuid and loadbalancer.openstack.org/member-subnet-id: $uuid, which failed because one of the UUIDs was wrong. Thus it was recreated over and over. This was hard for the OpenStack team to debug because they only had seconds to take a look at the Octavia LB before it was deleted by the cloud provider. Keeping it in ERROR state allowed for easier debugging on my side and the OpenStack team's side.
Was the error missing from kubectl describe <svc-name>? We emit events that should be enough to debug such problems. What was Octavia returning? Just a normal 201?
@dulek It only shows that the load balancer went into ERROR state:
$ kubectl describe service cloudnative-pg-cluster-primary2
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning SyncLoadBalancerFailed 29m service-controller Error syncing load balancer: failed to ensure load balancer: error creating loadbalancer kube_service_2mqlgjjphg_cloudnative-pg-cluster_cloudnative-pg-cluster-primary2: loadbalancer has gone into ERROR state
Normal EnsuringLoadBalancer 24m (x7 over 31m) service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 24m (x6 over 29m) service-controller Error syncing load balancer: failed to ensure load balancer: load balancer 71f3ac5c-6740-4fca-8d57-a62d30697629 is not ACTIVE, current provisioning status: ERROR
Normal EnsuringLoadBalancer 2m34s (x10 over 22m) service-controller Ensuring load balancer
Warning SyncLoadBalancerFailed 2m34s (x10 over 22m) service-controller Error syncing load balancer: failed to ensure load balancer: load balancer 71f3ac5c-6740-4fca-8d57-a62d30697629 is not ACTIVE, current provisioning status: ERROR
Hm, I see, though in this case you end up with an LB in ERROR state. I still don't see how keeping it there is helpful for debugging. Maybe seeing the full LB resource helps, since then you can see the wrong ID, but we could solve that use case by making sure that, at some more granular log level, we log the full request made to Octavia by Gophercloud, instead of adding a new option.
I can also see value in CPO validating the network and subnet IDs before creating the LB.
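A rough sketch of that pre-validation idea, with the network/subnet lookups hidden behind an interface so the example does not assume any particular Gophercloud call signatures. All names here are illustrative, not existing CPO code.

```go
package main

import "fmt"

// netLookup abstracts whatever client (e.g. Gophercloud) would be used to
// check that the IDs supplied via the Service annotations actually exist.
type netLookup interface {
	NetworkExists(id string) (bool, error)
	SubnetExists(id string) (bool, error)
}

// validateLBNetworking sketches validating the annotation-provided IDs before
// asking Octavia to create the LB, so that a typo fails fast with a clear
// error instead of producing an LB stuck in ERROR state.
func validateLBNetworking(c netLookup, networkID, subnetID string) error {
	if networkID != "" {
		ok, err := c.NetworkExists(networkID)
		if err != nil {
			return fmt.Errorf("checking network %s: %w", networkID, err)
		}
		if !ok {
			return fmt.Errorf("network %s from loadbalancer.openstack.org/network-id does not exist", networkID)
		}
	}
	if subnetID != "" {
		ok, err := c.SubnetExists(subnetID)
		if err != nil {
			return fmt.Errorf("checking subnet %s: %w", subnetID, err)
		}
		if !ok {
			return fmt.Errorf("subnet %s from loadbalancer.openstack.org/member-subnet-id does not exist", subnetID)
		}
	}
	return nil
}

// fakeLookup is a stand-in implementation for demonstration purposes.
type fakeLookup struct{}

func (fakeLookup) NetworkExists(id string) (bool, error) { return id == "good-net", nil }
func (fakeLookup) SubnetExists(id string) (bool, error)  { return false, nil }

func main() {
	if err := validateLBNetworking(fakeLookup{}, "good-net", "bad-subnet"); err != nil {
		fmt.Println("validation failed:", err)
	}
}
```

The upside of validating up front is that the failure surfaces as a Service event before any Octavia resources are created, which sidesteps the ERROR-state debugging problem entirely.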
Sorry for the late reply, I've been on vacation. It didn't help me directly, because as a user I still wasn't able to get more information. But when I was able to give our cloud operations team the ID, they were able to debug the problem and tell me the reason.
/ok-to-test
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
Mark this PR as fresh with /remove-lifecycle stale
Close this PR with /close
Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
What this PR does / why we need it:
Allow automatic recreation on LB ERROR state to be disabled. This is helpful when, for example, you are trying to debug an LB in the ERROR state.
Which issue this PR fixes (if applicable):
fixes #
Special notes for reviewers:
Release note: