I am new to Scala and have just started working with RDDs. I have two CSV files with the following headers and data:
csv1.txt
id,"location", "zipcode"
1, "a", "12345"
2, "b", "67890"
3, "c" "54321"
csv2.txt
"location_x", "location_y", trip_hrs
"a", "b", 1
"a", "c", 3
"b", "c", 2
"a", "b", 1
"c", "b", 2
Essentially, the csv1 data is a set of distinct locations and zip codes, while the csv2 data holds trip durations between location_x and location_y.
The piece of information the two datasets share is location in csv1 and location_x in csv2, even though the header names differ.
I want to create two RDDs, one holding the data from csv1 and the other the data from csv2.
Then I want to join the two RDDs on location and return, for each location, its zip code and the sum of all its trip durations, like this:
("a", "zipcode", 5)
("b", "zipcode", 2)
("c", "zipcode", 2)
I was wondering if one of you could help me with this. Thank you.
I'll give you the code (a complete application, as written in IntelliJ) along with some explanations; I hope it helps.
Please read the Spark documentation for details.
This problem can also be solved with Spark DataFrames; you can try that yourself (a minimal sketch is included after the RDD code below).
import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.SparkSession
object Joining {

  val spark = SparkSession
    .builder()
    .appName("Joining")
    .master("local[*]")
    .config("spark.sql.shuffle.partitions", "4") // change to a more reasonable default number of partitions for our data
    .config("spark.app.id", "Joining")           // to silence the Metrics warning
    .getOrCreate()

  val sc = spark.sparkContext

  val path = "/home/cloudera/files/tests/"

  def main(args: Array[String]): Unit = {
    Logger.getRootLogger.setLevel(Level.ERROR)

    try {
      // read the files
      val file1 = sc.textFile(s"${path}join1.csv")
      val header1 = file1.first // extract the header of the file
      val file2 = sc.textFile(s"${path}join2.csv")
      val header2 = file2.first // extract the header of the file

      val rdd1 = file1
        .filter(line => line != header1)        // leave out the header
        .map(line => line.split(","))           // split each line => Array[String]
        .map(arr => (arr(1).trim, arr(2).trim)) // pair RDD with location (arr(1)) as key and zipcode as value

      val rdd2 = file2
        .filter(line => line != header2)              // leave out the header
        .map(line => line.split(","))                 // split each line => Array[String]
        .map(arr => (arr(0).trim, arr(2).trim.toInt)) // pair RDD with location_x (arr(0)) as key and trip_hrs as value

      val joined = rdd1 // join the pair RDDs by their keys
        .join(rdd2)
        .cache() // cache joined in memory

      joined.foreach(println) // checking data
      println("**************")
      // ("c",("54321",2))
      // ("b",("67890",2))
      // ("a",("12345",1))
      // ("a",("12345",3))
      // ("a",("12345",1))

      val result = joined.reduceByKey { case ((zip, time), (_, time1)) => (zip, time + time1) } // sum the trip hours per location
      result.map { case (id, (zip, time)) => (id, zip, time) }.foreach(println) // checking output
      // ("b","67890",2)
      // ("c","54321",2)
      // ("a","12345",5)

      // To give you a chance to view the Spark web console: http://localhost:4041/
      println("Type anything in the console to exit......")
      scala.io.StdIn.readLine()
    } finally {
      sc.stop()
      println("SparkContext stopped")
      spark.stop()
      println("SparkSession stopped")
    }
  }
}
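As mentioned above, the same result can be obtained with Spark DataFrames. The following is only a minimal sketch, not part of the tested application above: it reuses the spark session and path defined there, and it assumes the CSV headers yield the column names location, zipcode, location_x and trip_hrs; depending on how the files are quoted and spaced you may also need options such as ignoreLeadingWhiteSpace.

import org.apache.spark.sql.functions.sum

// minimal DataFrame sketch (assumed column names: location, zipcode, location_x, trip_hrs)
val df1 = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv(s"${path}join1.csv")

val df2 = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv(s"${path}join2.csv")

val dfResult = df1
  .join(df2, df1("location") === df2("location_x")) // join on the shared location field
  .groupBy(df1("location"), df1("zipcode"))         // one output row per location
  .agg(sum("trip_hrs").as("total_trip_hrs"))        // sum of all trip durations

dfResult.show()

Grouping by both location and zipcode is safe here because csv1 holds exactly one zipcode per location.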