I'm using Spark 2.2 and I'm running into trouble when trying to call spark.createDataset on a Seq of Map, i.e. encoding a sequence of Maps as a Spark Dataset.

The code and output from my Spark shell session follow:

// createDataSet on Seq[T] where T = Int works 
scala> spark.createDataset(Seq(1, 2, 3)).collect 
res0: Array[Int] = Array(1, 2, 3) 

scala> spark.createDataset(Seq(Map(1 -> 2))).collect 
<console>:24: error: Unable to find encoder for type stored in a Dataset. 
Primitive types (Int, String, etc) and Product types (case classes) are 
supported by importing spark.implicits._ 
Support for serializing other types will be added in future releases. 
     spark.createDataset(Seq(Map(1 -> 2))).collect 
         ^

// createDataSet on a custom case class containing Map works 
scala> case class MapHolder(m: Map[Int, Int]) 
defined class MapHolder 

scala> spark.createDataset(Seq(MapHolder(Map(1 -> 2)))).collect 
res2: Array[MapHolder] = Array(MapHolder(Map(1 -> 2))) 

I have tried import spark.implicits._, although I'm fairly certain the implicits are already imported by the Spark shell session.

Is this simply a case that isn't covered by the current encoders?

Answer


It isn't covered in 2.2, but it can easily be addressed. You can add the required Encoder yourself using ExpressionEncoder, either explicitly:

import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder 
import org.apache.spark.sql.Encoder 

spark 
    .createDataset(Seq(Map(1 -> 2)))(ExpressionEncoder(): Encoder[Map[Int, Int]]) 
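
This works because createDataset is declared with an Encoder context bound (def createDataset[T : Encoder](data: Seq[T]): Dataset[T]), so the encoder that would normally be resolved implicitly can simply be passed in the second argument list.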

or implicitly:

implicit def mapIntIntEncoder: Encoder[Map[Int, Int]] = ExpressionEncoder() 
spark.createDataset(Seq(Map(1 -> 2)))
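
If you need this for more than one key/value type, the same trick generalizes. The following is only a sketch (assuming Spark 2.2 in spark-shell; mapEncoder is a name chosen here): a single implicit covers any Map whose key and value types carry TypeTags:

import scala.reflect.runtime.universe.TypeTag 
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder 
import org.apache.spark.sql.Encoder 

// one implicit for any Map[K, V], as long as TypeTags for K and V are available 
implicit def mapEncoder[K: TypeTag, V: TypeTag]: Encoder[Map[K, V]] = ExpressionEncoder() 

spark.createDataset(Seq(Map("a" -> 1), Map("b" -> 2))).collect 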