How do I fix this Scala jar error: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/types/DataType

Prakash Raj

The Scala Spark object runs fine when executed from IntelliJ. But after building the artifact and running it as a jar, I get the error below.

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/types/DataType

How do I fix this? Any input is appreciated.

IntelliJ IDEA

File > Project Structure > Project Settings > Artifacts > + > JAR > From modules with dependencies. The generated jar has the "Include in project build" checkbox checked, then Apply > OK. Then, from the Build menu: Build > Build Artifacts > poc:jar > Build.

[Screenshot: jar error]
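
A quick way to confirm whether the Spark SQL classes are actually visible at run time is a small probe class along the lines of the sketch below (the ClasspathCheck name is hypothetical, not part of the original post); packaged next to the job and run from the same classpath, it prints whether the class named in the error can be loaded:

object ClasspathCheck {
  def main(args: Array[String]): Unit = {
    // Try to load the class named in the NoClassDefFoundError.
    try {
      Class.forName("org.apache.spark.sql.types.DataType")
      println("Spark SQL classes are on the classpath")
    } catch {
      case _: ClassNotFoundException =>
        println("Spark SQL classes are NOT on the classpath")
    }
  }
}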

build.sbt

name := "poc"
version := "0.1"
scalaVersion := "2.11.12"
libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.11" % "2.4.3",
  "org.apache.spark" % "spark-sql_2.11" % "2.4.3",
  "com.datastax.spark" % "spark-cassandra-connector_2.11" % "2.4.1",
  "org.apache.hadoop" % "hadoop-aws" % "2.7.1"
) 
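
As an aside, the same dependencies are usually declared with %% so that sbt appends the Scala version suffix itself, and, assuming the jar is launched through spark-submit rather than plain java, the Spark modules are typically marked "provided" so they are not packaged at all. A minimal sketch under those assumptions:

name := "poc"
version := "0.1"
scalaVersion := "2.11.12"
libraryDependencies ++= Seq(
  // "provided": supplied by the Spark runtime when submitted via spark-submit
  "org.apache.spark" %% "spark-core" % "2.4.3" % "provided",
  "org.apache.spark" %% "spark-sql" % "2.4.3" % "provided",
  "com.datastax.spark" %% "spark-cassandra-connector" % "2.4.1",
  "org.apache.hadoop" % "hadoop-aws" % "2.7.1"
)

When the jar is executed directly, as in this question, "provided" dependencies are exactly what is missing at run time, so they either have to be on the classpath or be packed into a fat jar (see the accepted answer below).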

Pos.scala

import org.apache.spark.sql.types.{ IntegerType, StringType, StructField, StructType}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SparkSession

object dataload {
  def main(args: Array[String]): Unit =
  {
    val awsAccessKeyId: String     = args(0)
    val awsSecretAccessKey: String = args(1)
    val csvFilePath: String        = args(2)
    val host: String               = args(3)
    val username: String           = args(4)
    val password: String           = args(5)
    val keyspace: String           = args(6)

    println("length args: " + args.length)

    val Conf = new SparkConf().setAppName("Imp_DataMigration").setMaster("local[2]")
      .set("fs.s3n.awsAccessKeyId", awsAccessKeyId)
      .set("fs.s3n.awsSecretAccessKey", awsSecretAccessKey)
      .set("fs.s3n.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
      .set("spark.cassandra.connection.host", host)
      .set("spark.cassandra.connection.port","9042")
      .set("spark.cassandra.auth.username", username)
      .set("spark.cassandra.auth.password", password)

    val sc = new SparkContext(Conf)
    val spark = SparkSession.builder.config(sc.getConf).getOrCreate()

    val schemaHdr = StructType(
      StructField("a2z_name", StringType) ::
        StructField("a2z_key", StringType) ::
        StructField("a2z_id", IntegerType) :: Nil
    )

    val df = spark.read.format("csv")
      .option("header", "true")
      .option("delimiter", "\t")
      .option("quote", "\"")
      .schema(schemaHdr)
      .load("s3n://at-spring/a2z.csv")

    println(df.count())

    df.write
      .format("org.apache.spark.sql.cassandra")
      .option("keyspace","poc_sparkjob")
      .option("table","a2z")
      .mode(org.apache.spark.sql.SaveMode.Append)
      .save()

    sc.stop()

  }


}
Prakash Raj

Solved this by building a fat jar with sbt-assembly.

This post helped me:

How to build an Uber JAR (Fat JAR) using SBT within IntelliJ IDEA?
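
For reference, a minimal sbt-assembly setup looks roughly like the sketch below; the plugin version and the merge strategy are assumptions rather than part of the original answer, and the main class is taken from the dataload object in Pos.scala. The plugin is added in project/plugins.sbt:

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")

and build.sbt gets a main class plus a merge strategy for the duplicate META-INF entries pulled in by Spark's transitive jars:

mainClass in assembly := Some("dataload")

assemblyMergeStrategy in assembly := {
  // Drop conflicting signature/manifest files; keep the first copy of everything else.
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case _                             => MergeStrategy.first
}

Running sbt assembly then produces a fat jar under target/scala-2.11/ that bundles the Spark classes, so executing it directly no longer throws the NoClassDefFoundError.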
