
Does `except` work properly on DataFrames in Apache Spark 2.1.0?

In the Spark shell I created a simple DataFrame containing three strings: "a", "b", "c". `limit(1)` was assigned to `row1`, which correctly produces `Array([a])`. Then `row1` was passed as the argument to the `except` method on the `grfDF` DataFrame to produce `tail1`. Shouldn't `tail1` be a new DataFrame containing `Array([b], [c])`?

Why does `tail1` still contain "a" while "b" has been removed?

scala> grfDF.collect 
res1: Array[org.apache.spark.sql.Row] = Array([a], [b], [c])     

scala> val row1 = grfDF.limit(1) 
row1: org.apache.spark.sql.Dataset[org.apache.spark.sql.Row] = [sub: string] 

scala> row1.collect 
res3: Array[org.apache.spark.sql.Row] = Array([a]) 

scala> val tail1 = grfDF.except(row1).collect 
tail1: Array[org.apache.spark.sql.Row] = Array([c], [a]) 

The DataFrame was created as follows:

    case class Grf(sub: String)
    def toGrf = (grf: Seq[String]) => Grf(grf(0))
    val sourceList = Array("a", "b", "c")
    val grfRDD = sc.parallelize(sourceList).map(_.split(",")).map(toGrf(_))
    val grfDF = spark.createDataFrame(grfRDD)
    grfDF.createOrReplaceTempView("grf")
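
As an aside, none of the source strings contains a comma, so the `split`/`toGrf` step is effectively a no-op here; a shorter equivalent construction (a sketch assuming the usual spark-shell implicits) would be:

    // Build the same single-column DataFrame directly from the case class.
    import spark.implicits._

    val grfDF = Seq("a", "b", "c").map(Grf(_)).toDF
    grfDF.createOrReplaceTempView("grf")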

Then I tried to pop off one row:

    val row1 = grfDF.limit(1)
    row1.collect
    val tail1 = grfDF.except(row1)
    tail1.collect
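
Worth noting: neither `limit` nor `except` guarantees which rows are picked or in what order they come back, so a deterministic head/tail split needs an explicit ordering. A minimal sketch of that idea, assuming the same `grfDF` and the standard Spark SQL functions:

    // Order both sides explicitly so limit() picks a well-defined first row.
    import org.apache.spark.sql.functions.col

    val ordered = grfDF.orderBy(col("sub"))
    val head1 = ordered.limit(1)        // deterministically Array([a])
    val tail1 = ordered.except(head1)   // set difference; output order unspecified
    tail1.orderBy(col("sub")).collect   // Array([b], [c])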
A [minimal, complete, verifiable example](https://stackoverflow.com/help/mcve) is needed. –

I feel like this story begins at Chapter 2. Could you share how you construct `grfDF`? – Vidya

If you are able to see `[a]` in `row1.collect`, then `tail1` will always give you `Array([c], [b])` with your code. – himanshuIIITian

Answers


I tried to do a similar thing in the Spark shell. Please try the same code again, because the result I got was `Array([b], [c])`. See the following code:

scala> val sourceList=Array("a","b","c") 
sourceList: Array[String] = Array(a, b, c) 

scala> val grfRDD = sc.parallelize(sourceList) 
grfRDD: org.apache.spark.rdd.RDD[String] = ParallelCollectionRDD[0] at parallelize at <console>:29 

scala> val grfDF = grfRDD.toDF() 
grfDF: org.apache.spark.sql.DataFrame = [_1: string] 

scala> grfDF 
res0: org.apache.spark.sql.DataFrame = [_1: string] 

scala> val row1 = grfDF.limit(1) 
row1: org.apache.spark.sql.DataFrame = [_1: string] 

scala> row1 
res1: org.apache.spark.sql.DataFrame = [_1: string] 

scala> row1.collect() 
res2: Array[org.apache.spark.sql.Row] = Array([a]) 

scala> val tail = grfDF.except(row1) 
tail: org.apache.spark.sql.DataFrame = [_1: string] 

scala> tail.collect() 
res6: Array[org.apache.spark.sql.Row] = Array([b], [c]) 
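
One detail that matters when comparing collected outputs: `except` behaves like SQL's EXCEPT DISTINCT, so it also collapses duplicate rows, and the order of the returned rows is not guaranteed. A small sketch of both effects, assuming the same shell session (and the spark-shell implicits for `toDF`):

    // except() removes duplicates and returns rows in no particular order,
    // so sort before collecting if a stable comparison is needed.
    val dups = Seq("a", "b", "b", "c").toDF("_1")  // note the duplicate "b"
    dups.except(row1).orderBy("_1").collect        // Array([b], [c]): "a" removed, "b" deduplicated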