Is it possible to run Spark applications (on Kubernetes) using Volcano so that only high-priority pods are scheduled first and low-priority pods wait until all high-priority pods are scheduled?
Yes, Volcano can do this: high-priority pods are popped from the queue before low-priority pods, but you may have to place both workloads in the same queue.
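A minimal sketch of how the two workloads might share one Volcano queue with different priorities. All names here (`spark`, `spark-high`, `spark-job-high`, `spark-job-low`) are illustrative, and the Spark pods themselves would additionally need `schedulerName: volcano` plus an annotation pointing at their PodGroup for this to take effect:

```yaml
# Assumption: a single shared queue so Volcano orders both jobs together.
apiVersion: scheduling.volcano.sh/v1beta1
kind: Queue
metadata:
  name: spark
spec:
  weight: 1
---
# Standard Kubernetes PriorityClass referenced by the high-priority job.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: spark-high
value: 1000
globalDefault: false
---
# PodGroup for the high-priority Spark application, bound to the shared queue.
apiVersion: scheduling.volcano.sh/v1beta1
kind: PodGroup
metadata:
  name: spark-job-high
spec:
  queue: spark
  priorityClassName: spark-high
  minMember: 1
---
# PodGroup for the low-priority Spark application in the same queue;
# with no PriorityClass it sorts behind spark-job-high.
apiVersion: scheduling.volcano.sh/v1beta1
kind: PodGroup
metadata:
  name: spark-job-low
spec:
  queue: spark
  minMember: 1
```

Because both PodGroups sit in the same queue, Volcano's priority plugin can compare them directly when deciding which group to dequeue first.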
Even if the high-priority pods cannot be scheduled at the moment (because they lack resources) while the low-priority pods can (there are enough resources for them)? Will the low-priority pods still wait in that case?
Any other relevant information
When I run Spark applications with the default Kubernetes scheduler, I run into situations where "small" low-priority pods are scheduled before high-priority pods (https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#effect-of-pod-priority-on-scheduling-order). So I started looking at Volcano.