We leverage the priority features of Kubernetes to allow you to run workloads with relative priorities.
This is useful if some applications need to acquire resources faster than others.
For example, you may want Spark notebooks to acquire nodes before batch applications so that data scientists have a more reactive experience with Data Mechanics.
To set the desired priority of your Spark application, set the priority field when submitting your app.
Possible values are high, normal, and low.
By default, all applications run with normal priority.
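As a sketch, a submission payload carrying the priority field might look like the following. Everything except the priority field itself (the app name and the rest of the structure) is a hypothetical placeholder, not the platform's exact API schema:

```python
import json

# Hypothetical app-submission payload; only the "priority" field is
# documented above, the other keys are illustrative placeholders.
payload = {
    "appName": "spark-notebook",  # hypothetical application name
    "priority": "high",           # one of "high", "normal", "low"
}

# Serialize to JSON as you would for an HTTP submission request.
print(json.dumps(payload, indent=2))
```

Setting "priority": "low" instead would mark the application as one that can wait behind others when the cluster is resource-constrained.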
The given priority will be applied to the driver and executor pods of your application.
If your Spark applications cannot be scheduled because the cluster lacks resources, applications with high priority are scheduled first as new nodes are acquired by the cluster.
Applications with normal priority come next, while applications with low priority are always scheduled last.
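Under the hood, Kubernetes expresses pod priority through a priorityClassName in the pod spec, backed by a PriorityClass object. A minimal sketch of what a driver pod spec might carry is shown below; the class name and pod/container names are assumptions for illustration, not the platform's actual values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spark-driver-example        # hypothetical pod name
spec:
  priorityClassName: high-priority  # assumed class corresponding to "high"
  containers:
    - name: spark-kubernetes-driver
      image: spark:latest           # placeholder image
```

The Kubernetes scheduler uses the value resolved from priorityClassName to order pending pods, which is what produces the high-before-normal-before-low behavior described above.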