
While trying the Naive Bayes example here, I ran into the following problem on Spark 1.4 on Ubuntu. I have seen posts about similar issues where the fix was a mismatched jar (resolved via Maven), but in this case the class in question ships with Spark itself, so I do not know how to proceed. The oddity is that it is a Spark type mismatch between the exact same types.
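
For context, the code leading up to the error is essentially the multinomial Naive Bayes example from the MLlib docs, reconstructed here as a sketch (the file path and the 60/40 split come from that example, not from my session):

    import org.apache.spark.mllib.classification.NaiveBayes
    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.regression.LabeledPoint

    // Each line of the sample file looks like "label,f1 f2 f3".
    val data = sc.textFile("data/mllib/sample_naive_bayes_data.txt")
    val parsedData = data.map { line =>
      val parts = line.split(',')
      LabeledPoint(parts(0).toDouble, Vectors.dense(parts(1).split(' ').map(_.toDouble)))
    }
    // Split into training (60%) and test (40%) sets.
    val splits = parsedData.randomSplit(Array(0.6, 0.4), seed = 11L)
    val training = splits(0)
    val test = splits(1)

The NaiveBayes.train call then fails: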

scala> val model = NaiveBayes.train(training, lambda = 1.0, modelType = "multinomial") 
<console>:46: error: type mismatch; 
found : org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint] 
required: org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint] 
     val model = NaiveBayes.train(training, lambda = 1.0, modelType = "multinomial") 
            ^

Note also the very long chain of repeated package prefixes in the class type:

org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.org.apache.spark.rdd.RDD 

Could this be some kind of classloader bug? Perhaps it is looking for org.apache.spark.rdd.RDD but resolving the wrong string for it (even though the class itself is actually correct).

Related: Apache Spark type mismatch of the same type (String)

Answer


It turns out that importing, in the spark-shell, a library that Spark already provides was somehow corrupting the names (hence the repeated package prefixes). After dropping that import, per Roy's suggestion, the training call compiles again.
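
A minimal illustration of the fix, assuming the offending line was a redundant import of a type the shell already provides (the exact import is not named above, so treat it as hypothetical):

    // spark-shell already puts SparkContext and common Spark types in scope.
    // A redundant import along these (hypothetical) lines triggered the
    // repeated org.apache.spark.rdd prefixes in the error message:
    // import org.apache.spark.rdd.RDD

    // With the redundant import removed, the call compiles:
    val model = NaiveBayes.train(training, lambda = 1.0, modelType = "multinomial")

With the type mismatch gone, however, the job failed at runtime with another problem: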

java.lang.NumberFormatException: empty String 
     at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1020) 
     at java.lang.Double.parseDouble(Double.java:540) 
     at scala.collection.immutable.StringLike$class.toDouble(StringLike.scala:232) 
     at scala.collection.immutable.StringOps.toDouble(StringOps.scala:31) 
     at $line22.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1$$anonfun$apply$1.apply(<console>:28) 
     at $line22.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1$$anonfun$apply$1.apply(<console>:28) 
     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) 
     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) 
     at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) 
     at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108) 
     at scala.collection.TraversableLike$class.map(TraversableLike.scala:244) 
     at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108) 
     at $line22.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:28) 
     at $line22.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:26) 
     at scala.collection.Iterator$$anon$11.next(Iterator.scala:328) 
     at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:389) 
     at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327) 
     at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:199) 
     at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:56) 
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:70) 
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41) 
     at org.apache.spark.scheduler.Task.run(Task.scala:70) 
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213) 
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) 
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
     at java.lang.Thread.run(Thread.java:745)
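
The trace shows StringOps.toDouble being called on an empty String inside the parsing closure, so the likely culprit is a blank line or a blank field in the input file. A minimal guard, assuming the parsing code sketched above (not a confirmed fix for this exact file):

    // Skip blank lines and blank tokens before calling toDouble;
    // "".toDouble is exactly what raises NumberFormatException: empty String.
    val parsedData = data
      .filter(_.trim.nonEmpty)
      .map { line =>
        val parts = line.split(',')
        val features = parts(1).trim.split(' ').filter(_.nonEmpty).map(_.toDouble)
        LabeledPoint(parts(0).toDouble, Vectors.dense(features))
      }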