I'm trying to set up a standalone Spark 2.0 server to process an analytics function in parallel. To do this I want to have a single worker with multiple executors.
I'm using:
This is just for pure proof-of-concept purposes, but I want to have 8 executors, one per core.
I've tried to follow the other threads on this topic, but for some reason it's not working for me, e.g. Spark Standalone Number Executors/Cores Control.
My configuration is as follows:
conf\spark-defaults.conf
spark.cores.max = 8
spark.executor.cores = 1
I have also tried changing my spark-env.sh file, to no avail. Instead, what is happening is that my 1 worker has only 1 executor on it. As you can see below, it still shows the standalone cluster with 1 executor that has all 8 cores assigned to it.
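(For reference, I also tried passing the same settings at submit time; the master URL below is a placeholder for my master's address:)

bin\spark-submit --master spark://localhost:7077 --conf spark.cores.max=8 --conf spark.executor.cores=1 app.jar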
I believe you mixed up local and standalone modes:

Local mode (local, local[*], or local[n]) runs everything inside a single JVM. spark.executor.cores and spark.cores.max are not applicable in local mode because there is only one embedded executor.

Standalone mode requires a standalone Spark cluster: a master node (which can be started using the SPARK_HOME/sbin/start-master.sh script) and at least one worker node (which can be started using the SPARK_HOME/sbin/start-slave.sh script).

SparkConf should be created with the master node's address (spark://host:port).
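A minimal sketch of the whole setup (the host name, port, and app name below are placeholders; 7077 is the standalone master's default port):

# on the master machine
SPARK_HOME/sbin/start-master.sh
# on each worker machine (or the same machine, for a single-node cluster)
SPARK_HOME/sbin/start-slave.sh spark://localhost:7077

import org.apache.spark.{SparkConf, SparkContext}

// Point at the standalone master, not local[*], so the executor
// settings actually take effect.
val conf = new SparkConf()
  .setAppName("executor-count-poc")     // placeholder app name
  .setMaster("spark://localhost:7077")  // standalone master URL
  .set("spark.cores.max", "8")          // total cores for this application
  .set("spark.executor.cores", "1")     // cores per executor => 8 executors

val sc = new SparkContext(conf)

With this, the master web UI (http://localhost:8080 by default) should show eight 1-core executors for the application instead of a single 8-core one.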