How does Spark choose the nodes to run executors on? (Spark on YARN) We run Spark in YARN mode on a cluster of 120 nodes. Yesterday one Spark job created 200 executors: 11 executors landed on node1, 10 on node2, and the rest were distributed evenly across the remaining nodes.
Because so many executors were packed onto node1 and node2, the job ran slowly.
How does Spark select the nodes on which executors run? Is the placement decided by the YARN ResourceManager?
The cluster manager (in your case, YARN's ResourceManager) allocates containers across all running applications, so Spark itself does not directly pick the nodes. I suspect the uneven placement comes from a poorly tuned configuration. Try enabling Spark dynamic allocation: with it, Spark requests and releases executors based on the workload, which tends to spread containers more evenly as YARN grants them.
You can find more information about Spark resource allocation and how to configure it here: http://site.clairvoyantsoft.com/understanding-resource-allocation-configurations-spark-application/
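As a sketch, enabling dynamic allocation on YARN looks like the following spark-submit invocation. The property names are standard Spark settings; the numeric values are illustrative assumptions you would tune for your cluster, and dynamic allocation also requires the external shuffle service to be enabled on the YARN NodeManagers:

```shell
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  # Let Spark scale the executor count with the workload
  --conf spark.dynamicAllocation.enabled=true \
  # Required so shuffle data survives executor removal
  --conf spark.shuffle.service.enabled=true \
  # Illustrative bounds; tune to your 120-node cluster
  --conf spark.dynamicAllocation.minExecutors=10 \
  --conf spark.dynamicAllocation.maxExecutors=200 \
  # Smaller per-executor footprints let YARN pack containers more evenly
  --conf spark.executor.cores=2 \
  --conf spark.executor.memory=4g \
  your-application.jar
```

Note that even with this, container placement is ultimately up to the YARN scheduler; how evenly executors spread also depends on the per-node resources advertised to YARN and on the scheduler (Capacity or Fair) configuration.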