
I am writing a Scala/Spark program that finds the employee with the highest salary. The employee data is provided as a CSV file, and the salary column uses commas as thousands separators and has a $ prefix, e.g. $74,628.00. Spark error: Exception in thread "main" java.lang.UnsupportedOperationException

To deal with the commas and the dollar sign, I have written a parser function in Scala that splits each line on ",", then maps each column to a variable that is assigned to a case class.

My parser looks like the following. To eliminate the commas and the dollar sign, I use the replace function to replace them with an empty string, and finally cast the string to Int.

def ParseEmployee(line: String): Classes.Employee = { 
    val fields = line.split(",") 
    val Name = fields(0) 
    val JOBTITLE = fields(2) 
    val DEPARTMENT = fields(3) 
    val temp = fields(4) 

    temp.replace(",","")//To eliminate the , 
    temp.replace("$","")//To remove the $ 
    val EMPLOYEEANNUALSALARY = temp.toInt //Type cast the string to Int 

    Classes.Employee(Name, JOBTITLE, DEPARTMENT, EMPLOYEEANNUALSALARY) 
    } 

My case class looks like the following

case class Employee (
    Name: String, 
    JOBTITLE: String, 
    DEPARTMENT: String, 
    EMPLOYEEANNUALSALARY: Number 
) 

My Spark DataFrame SQL query looks like the following

val empMaxSalaryValue = sc.sqlContext.sql("Select Max(EMPLOYEEANNUALSALARY) From EMP") 
empMaxSalaryValue.show 

and when I run this program I get the following exception

Exception in thread "main" java.lang.UnsupportedOperationException: No Encoder found for Number 
- field (class: "java.lang.Number", name: "EMPLOYEEANNUALSALARY") 
- root class: "Classes.Employee" 
    at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$serializerFor(ScalaReflection.scala:625) 
    at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$10.apply(ScalaReflection.scala:619) 
    at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$10.apply(ScalaReflection.scala:607) 
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241) 
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241) 
    at scala.collection.immutable.List.foreach(List.scala:381) 
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241) 
    at scala.collection.immutable.List.flatMap(List.scala:344) 
    at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$serializerFor(ScalaReflection.scala:607) 
    at org.apache.spark.sql.catalyst.ScalaReflection$.serializerFor(ScalaReflection.scala:438) 
    at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$.apply(ExpressionEncoder.scala:71) 
    at org.apache.spark.sql.Encoders$.product(Encoders.scala:275) 
    at org.apache.spark.sql.SparkSession.createDataFrame(SparkSession.scala:282) 
    at org.apache.spark.sql.SQLContext.createDataFrame(SQLContext.scala:272) 
    at CalculateMaximumSalary$.main(CalculateMaximumSalary.scala:27) 
    at CalculateMaximumSalary.main(CalculateMaximumSalary.scala) 
  1. Any idea why I am getting this error? What am I doing wrong here, and why is it not able to type cast to Number?

  2. Is there a better way to approach this problem of getting the maximum salary of an employee?


Please show the call to the ParseEmployee function –

Answer


Spark SQL provides only a limited number of Encoders, which target concrete classes. Abstract classes like Number are not supported (they can be used with limited binary Encoders).

Since you convert to Int anyway, just redefine the class:

case class Employee (
    Name: String, 
    JOBTITLE: String, 
    DEPARTMENT: String, 
    EMPLOYEEANNUALSALARY: Int 
) 
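
As a side note on the parser in the question: String.replace returns a new string rather than modifying temp in place, so both replace results are discarded and temp.toInt would still fail on input such as "$74,628.00". A minimal sketch of a corrected parser, assuming the salary sits in fields(4) and the naive split does not break quoted fields with embedded commas (a proper CSV reader is safer), might look like this:

def ParseEmployee(line: String): Classes.Employee = { 
    // Naive split; assumes no quoted fields with embedded commas 
    val fields = line.split(",") 

    // Chain the replacements and keep the result; go through Double 
    // because "74628.00".toInt would throw a NumberFormatException 
    val salary = fields(4) 
        .replace("$", "") 
        .replace(",", "") 
        .trim 
        .toDouble 
        .toInt 

    Classes.Employee(fields(0), fields(2), fields(3), salary) 
} 

With EMPLOYEEANNUALSALARY typed as Int, Spark can derive a product encoder for Employee, and the Max query from the question works unchanged.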

For the provided code, 'BigDecimal' is better suited for money, since it can handle both integer and decimal values with accuracy –
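
A hedged sketch of that suggestion, assuming Spark's built-in encoder support for BigDecimal (which maps to DecimalType), could look like this:

case class Employee (
    Name: String, 
    JOBTITLE: String, 
    DEPARTMENT: String, 
    EMPLOYEEANNUALSALARY: BigDecimal 
) 

// In the parser, build the value with BigDecimal instead of toInt, 
// e.g. BigDecimal("74628.00"), which preserves the cents that toInt would drop. 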
