Hi Team,

We've recently encountered an issue related to leader election in the Volcano scheduler and controller. When there is a problem renewing the lease, the current leader logs "leaderelection lost" and exits the process (see volcano/cmd/scheduler/app/server.go, line 139 at commit da761e2).

This behavior raises a couple of questions:

Is this design intentional? Specifically, was it a deliberate decision to exit the process when the leader loses the election?

Can this behavior be adjusted? For example, could the leader election logic run in a loop, allowing the process to attempt to rejoin the election without exiting the container?

We'd like to understand the rationale behind this approach and whether there are recommended workarounds (or potential fixes) to avoid container restarts when a lease renewal fails.
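To make the second question concrete, here is a rough, hypothetical sketch of what we mean by running the election in a loop. It is based on client-go's leaderelection package rather than Volcano's actual code; the lease name, namespace, and the run/cleanup hooks are placeholders we made up for illustration. The idea is that OnStoppedLeading tears down leader-only state and returns, so the outer loop rejoins the election instead of exiting the process.

package leaderloop

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

// runWithReelection keeps re-entering the leader election instead of exiting
// when the lease is lost. run starts the leader-only work; cleanup must tear
// down all leader-only state before the next attempt.
func runWithReelection(ctx context.Context, client kubernetes.Interface, identity string, run func(context.Context), cleanup func()) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "vc-scheduler",   // placeholder lease name
			Namespace: "volcano-system", // placeholder namespace
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: identity},
	}

	for ctx.Err() == nil {
		// RunOrDie blocks until leadership is lost or ctx is cancelled,
		// so the surrounding loop simply campaigns again afterwards.
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: run,
				OnStoppedLeading: func() {
					klog.Warning("leaderelection lost, cleaning up and rejoining the election")
					cleanup()
				},
			},
		})
	}
}

We realize this is only safe if cleanup really does reset everything the leader owned, which is part of what we would like your input on.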
If you'd like to reproduce this issue locally, here's how you can simulate a failing leader election by blocking traffic to the API server:

Steps to Reproduce

1. Set Up a Kubernetes Cluster:
Use a local Kubernetes environment, such as minikube or kind (Kubernetes in Docker).
Ensure the Volcano scheduler is deployed and running in the cluster.

2. Verify Leader Election is Active:
Check the logs of the Volcano scheduler to confirm the current leader:
kubectl logs -n <namespace> <volcano-scheduler-pod-name>

3. Identify the API Server Endpoint:
Find the API server's IP address or hostname. For example:
kubectl cluster-info
Note the URL of the Kubernetes control plane (API server).

4. Simulate Blocking Traffic to the API Server:
On the node running the Volcano scheduler pod (or locally if using minikube/kind), use a network tool like iptables to block traffic to the API server. Example using iptables:
sudo iptables -A OUTPUT -d <api-server-ip> -j DROP
Replace <api-server-ip> with the IP address of the Kubernetes API server.

5. Observe the Behavior:
Check the logs of the Volcano scheduler pod:
kubectl logs -n <namespace> <volcano-scheduler-pod-name>
You should see logs indicating that the scheduler is unable to renew its lease and eventually logs "leaderelection lost".

6. Restore Traffic to the API Server:
Remove the rule blocking traffic to the API server:
sudo iptables -D OUTPUT -d <api-server-ip> -j DROP

7. Confirm the Outcome:
The Volcano scheduler pod will likely exit after logging "leaderelection lost". Kubernetes should restart the pod; you can check its status manually:
kubectl get pods -n <namespace>

Looking forward to your insights—thanks for your hard work on Volcano!
Restarting the container is a more common approach, e.g., in kube-controller-manager:
leaderelection.LeaderCallbacks{
	OnStartedLeading: func(ctx context.Context) {
		controllerDescriptors := NewControllerDescriptors()
		if leaderMigrator != nil {
			// If leader migration is enabled, we should start only non-migrated controllers
			// for the main lock.
			controllerDescriptors = filteredControllerDescriptors(controllerDescriptors, leaderMigrator.FilterFunc, leadermigration.ControllerNonMigrated)
			logger.Info("leader migration: starting main controllers.")
		}
		controllerDescriptors[names.ServiceAccountTokenController] = saTokenControllerDescriptor
		run(ctx, controllerDescriptors)
	},
	OnStoppedLeading: func() {
		logger.Error(nil, "leaderelection lost")
		klog.FlushAndExit(klog.ExitFlushTimeout, 1)
	},
})
If the process keeps trying to acquire the lease instead of exiting, and the standby component has been elected leader in the meantime, conflicts may occur, especially if some of the state held by the current container has not been cleaned up.
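To illustrate the concern: a component that stayed in the process and retried would have to guarantee that every goroutine and cache started while it was leader is fully torn down before campaigning again. A minimal, hypothetical sketch of that kind of guard (the worker functions are placeholders, not Volcano code):

package leaderloop

import (
	"context"
	"sync"
)

// leaderTerm owns everything started during a single period of leadership.
type leaderTerm struct {
	cancel context.CancelFunc
	wg     sync.WaitGroup
}

// startTerm launches leader-only workers under a context tied to this term.
func startTerm(parent context.Context, workers []func(context.Context)) *leaderTerm {
	ctx, cancel := context.WithCancel(parent)
	t := &leaderTerm{cancel: cancel}
	for _, w := range workers {
		w := w
		t.wg.Add(1)
		go func() {
			defer t.wg.Done()
			w(ctx)
		}()
	}
	return t
}

// stop cancels the term and blocks until every worker has exited, so that no
// stale goroutine is still acting as leader when the election is rejoined.
func (t *leaderTerm) stop() {
	t.cancel()
	t.wg.Wait()
}

Exiting the process achieves the same reset trivially, which is why restarting the container is the simpler and safer default.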