Submitting a Spark Application via AWS EMR

Matt

Hi, I'm new to cloud computing, so apologies if this is a silly question. I need help understanding whether what I'm doing actually computes on the cluster, or only on the master (which would be useless).

What I can do: I can set up a cluster with a certain number of nodes through the AWS console, with Spark installed on all of them, and I can connect to the master node via SSH. So what does it take to run my jar with Spark code on the cluster?

What I would do: I would call spark-submit to run my code:

spark-submit --class cc.Main /home/ubuntu/MySparkCode.jar 3 [arguments] 

My doubts:

  1. Do I need to specify the master with --master and the master's "spark://" reference? Where do I find that reference? Should I run the sbin/start-master.sh script to start the standalone cluster manager, or is that already set up? If I run the command above as-is, I suppose the code would only run locally on the master, right?

  2. Can I keep my input file on the master node only? Say I want to count the words of a huge text file (the kind of job sketched below): can I store it only on the master's disk, or do I need distributed storage like HDFS to preserve parallelism? I don't understand this point; if it is fine, I would keep it on the master node's disk.
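For concreteness, the word count I have in mind is roughly this sketch (assuming an existing SparkContext sc; the input path is a placeholder):

    // Minimal word count; the input path is a placeholder.
    val counts = sc.textFile("/home/ubuntu/input.txt")
      .flatMap(line => line.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
    counts.take(10).foreach(println)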

Thanks in advance for your replies.

UPDATE 1: I tried running the Pi example on the cluster, but I can't get the result.

$ sudo spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master yarn \
    --deploy-mode cluster \
    /usr/lib/spark/examples/jars/spark-examples.jar \
    10

I expected a line printing Pi is roughly 3.14..., but instead I get:

17/04/15 13:16:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/04/15 13:16:03 INFO RMProxy: Connecting to ResourceManager at ip-172-31-37-222.us-west-2.compute.internal/172.31.37.222:8032
17/04/15 13:16:03 INFO Client: Requesting a new application from cluster with 2 NodeManagers 
17/04/15 13:16:03 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (5120 MB per container)
17/04/15 13:16:03 INFO Client: Will allocate AM container, with 5120 MB memory including 465 MB overhead
17/04/15 13:16:03 INFO Client: Setting up container launch context for our AM
17/04/15 13:16:03 INFO Client: Setting up the launch environment for our AM container
17/04/15 13:16:03 INFO Client: Preparing resources for our AM container
17/04/15 13:16:06 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
17/04/15 13:16:10 INFO Client: Uploading resource file:/mnt/tmp/spark-aa757ca0-4ff7-460c-8bee-27bc8c8dada9/__spark_libs__5838015067814081789.zip -> hdfs://ip-172-31-37-222.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1492261407069_0007/__spark_libs__5838015067814081789.zip
17/04/15 13:16:12 INFO Client: Uploading resource file:/usr/lib/spark/examples/jars/spark-examples.jar -> hdfs://ip-172-31-37-222.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1492261407069_0007/spark-examples.jar
17/04/15 13:16:12 INFO Client: Uploading resource file:/mnt/tmp/spark-aa757ca0-4ff7-460c-8bee-27bc8c8dada9/__spark_conf__1370316719712336297.zip -> hdfs://ip-172-31-37-222.us-west-2.compute.internal:8020/user/root/.sparkStaging/application_1492261407069_0007/__spark_conf__.zip
17/04/15 13:16:13 INFO SecurityManager: Changing view acls to: root
17/04/15 13:16:13 INFO SecurityManager: Changing modify acls to: root
17/04/15 13:16:13 INFO SecurityManager: Changing view acls groups to: 
17/04/15 13:16:13 INFO SecurityManager: Changing modify acls groups to: 
17/04/15 13:16:13 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()

17/04/15 13:16:13 INFO Client: Submitting application application_1492261407069_0007 to ResourceManager
17/04/15 13:16:13 INFO YarnClientImpl: Submitted application application_1492261407069_0007
17/04/15 13:16:14 INFO Client: Application report for application_1492261407069_0007 (state: ACCEPTED)
17/04/15 13:16:14 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1492262173096
     final status: UNDEFINED
     tracking URL: http://ip-172-31-37-222.us-west-2.compute.internal:20888/proxy/application_1492261407069_0007/
     user: root
17/04/15 13:16:15 INFO Client: Application report for application_1492261407069_0007 (state: ACCEPTED)
17/04/15 13:16:24 INFO Client: Application report for application_1492261407069_0007 (state: ACCEPTED)
17/04/15 13:16:25 INFO Client: Application report for application_1492261407069_0007 (state: RUNNING)
17/04/15 13:16:25 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 172.31.33.215
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1492262173096
     final status: UNDEFINED
     tracking URL: http://ip-172-31-37-222.us-west-2.compute.internal:20888/proxy/application_1492261407069_0007/
     user: root
17/04/15 13:16:26 INFO Client: Application report for application_1492261407069_0007 (state: RUNNING)
17/04/15 13:16:55 INFO Client: Application report for application_1492261407069_0007 (state: RUNNING)
17/04/15 13:16:56 INFO Client: Application report for application_1492261407069_0007 (state: FINISHED)
17/04/15 13:16:56 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 172.31.33.215
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1492262173096
     final status: SUCCEEDED
     tracking URL: http://ip-172-31-37-222.us-west-2.compute.internal:20888/proxy/application_1492261407069_0007/
     user: root
17/04/15 13:16:56 INFO ShutdownHookManager: Shutdown hook called
17/04/15 13:16:56 INFO ShutdownHookManager: Deleting directory /mnt/tmp/spark-aa757ca0-4ff7-460c-8bee-27bc8c8dada9
Arvind Kumar Angula

Answering your first question:

I assume you want to run Spark on YARN. Pass --master yarn --deploy-mode cluster, and the Spark driver will run inside the ApplicationMaster process managed by YARN on the cluster:

spark-submit --master yarn  --deploy-mode cluster \
    --class cc.Main /home/ubuntu/MySparkCode.jar 3 [arguments] 

Refer to the Spark documentation for the other deploy modes.

When you run a job with --deploy-mode cluster, you will not see its output on the machine you submitted from (if you are printing something). With --deploy-mode client you would, because the driver then runs on the submitting machine.

Reason: in cluster mode the driver runs inside the ApplicationMaster on one of the cluster's nodes, and anything it prints is emitted on that machine.

To check the output, you can find it in the application logs with the following command:

yarn logs -applicationId application_id
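One common pattern, if you would rather fetch results without digging through YARN logs: save them to HDFS instead of printing. A minimal Scala sketch, assuming an existing SparkContext sc; the namenode host and paths are illustrative:

    // Save results to HDFS so they are retrievable no matter
    // which node ran the driver. Paths are placeholders.
    val result = sc.textFile("hdfs://namenode:8020/data/input.txt")
      .map(_.toUpperCase) // stand-in transformation
    result.saveAsTextFile("hdfs://namenode:8020/data/output")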

Answering your second question:

You can keep the input file on the master node or in HDFS.

Parallelism depends entirely on the number of partitions of the RDD/DataFrame created when you load the data. The number of partitions depends on the data size, but you can control it by passing a parameter while loading.

If you load the data from a local path on the master (note that on a multi-node cluster, a plain local path must exist on every worker node as well, which is why HDFS or S3 is usually preferred):

val rdd = sc.textFile("/home/ubuntu/input.txt", [number of partitions])

The RDD will be created with the number of partitions you pass. If you do not pass one, Spark falls back to the spark.default.parallelism setting in the Spark configuration.
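For example, spark.default.parallelism can be set when building the context. A minimal sketch; the value "8" is purely illustrative:

    import org.apache.spark.{SparkConf, SparkContext}

    // Fallback parallelism for RDDs created without an explicit
    // partition count. The value "8" is illustrative only.
    val conf = new SparkConf()
      .setAppName("MySparkCode")
      .set("spark.default.parallelism", "8")
    val sc = new SparkContext(conf)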

If you load the data from HDFS:

val rdd = sc.textFile("hdfs://namenode:8020/data/input.txt")

The RDD will be created with a number of partitions equal to the number of HDFS blocks the file occupies.
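Either way, you can verify how many partitions were actually created:

    // Prints the number of partitions of the rdd defined above.
    println(rdd.getNumPartitions)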

I hope my answer helps.
