2016-11-14 689 views
0

I get an error when running my Spark program with spark-submit: java.lang.NoSuchMethodError: org.apache.spark.sql.hive.HiveContext.sql(Ljava/lang/String;)Lorg/apache/spark/sql/DataFrame

My Spark cluster version is 2.0.0 and I compile my code with sbt; my sbt dependencies are below.

libraryDependencies ++= Seq(
    "commons-io" % "commons-io" % "2.4", 
    "com.google.guava" % "guava" % "19.0", 
    "jfree" % "jfreechart" % "1.0.13", 
    ("org.deeplearning4j" % "deeplearning4j-core" % "0.5.0").exclude("org.slf4j", "slf4j-log4j12"), 
    "org.jblas" % "jblas" % "1.2.4", 
    "org.nd4j" % "canova-nd4j-codec" % "0.0.0.15", 
    "org.nd4j" % "nd4j-native" % "0.5.0" classifier "" classifier "linux-x86_64", 
    "org.deeplearning4j" % "dl4j-spark" % "0.4-rc3.6" , 
    "org.apache.spark" % "spark-sql_2.10" % "1.3.1", 
    "org.apache.spark" % "spark-hive_2.10" % "1.3.1", 
    "org.apache.hive" % "hive-serde" % "0.14.0", 
    ("org.deeplearning4j" % "arbiter-deeplearning4j" % "0.5.0")) 



16/11/14 22:57:03 INFO hive.HiveSharedState: Warehouse path is 'file:/home/hduser/spark-warehouse'. 
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.spark.sql.hive.HiveContext.sql(Ljava/lang/String;)Lorg/apache/spark/sql/DataFrame; 
    at poc.common.utilities.StockData$.fetchStockData(StockData.scala:15) 
    at poc.analaticsEngine.AnalaticsStockWorkBench.fetchTrainingDataSet(AnalaticsStockWorkBench.scala:69) 
    at poc.analaticsEngine.AnalaticsStockWorkBench.trainModel(AnalaticsStockWorkBench.scala:79) 
    at test.poc.analatics.StockPrediction$.testTrainSaveModel(StockPrediction.scala:21) 
    at test.poc.analatics.StockPrediction$.main(StockPrediction.scala:10) 
    at test.poc.analatics.StockPrediction.main(StockPrediction.scala) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:497) 
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729) 
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185) 
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210) 
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124) 
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) 
16/11/14 22:57:03 INFO spark.SparkContext: Invoking stop() from shutdown hook 
+0

It fails at the line below, and the job works when Spark runs in local mode: val sqlData = Spark.hiveContext.sql(hiveQueryForStockCloseValue); –
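
For reference, a minimal sketch of the Spark 2.0-style equivalent, where a SparkSession with Hive support replaces HiveContext and sql() returns a Dataset[Row]; the object name and query string here are hypothetical stand-ins for the code in the comment above:

import org.apache.spark.sql.SparkSession

object StockDataSketch {
  def main(args: Array[String]): Unit = {
    // In Spark 2.0, SparkSession.builder with enableHiveSupport() replaces
    // HiveContext; sql() now returns a Dataset[Row], so code compiled against
    // the 1.x DataFrame class fails at runtime with NoSuchMethodError.
    val spark = SparkSession.builder()
      .appName("StockDataSketch")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical query standing in for hiveQueryForStockCloseValue.
    val sqlData = spark.sql("SELECT close FROM stock_data")
    sqlData.show()
  }
}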

Answers

0

First, you say you are using Spark 2.0.0, but in your dependencies you have

"org.apache.spark" % "spark-sql_2.10" % "1.3.1", 
"org.apache.spark" % "spark-hive_2.10" % "1.3.1", 

You need to change these dependencies to version 2.0.0 to keep them consistent with your Spark cluster. Also, you don't need to declare the spark-sql dependency separately, because it is already included in spark-hive. hive-serde is now at version 2.1.0, so 0.14 is probably outdated.
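
A minimal sketch of the corrected lines, assuming the project also moves to Scala 2.11 (the default build target for Spark 2.0.0); the %% operator makes sbt append the matching _2.11 suffix:

// spark-hive pulls in spark-sql transitively, so one line is enough.
// %% appends the project's Scala binary version (e.g. _2.11) automatically.
"org.apache.spark" %% "spark-hive" % "2.0.0"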

+0

Yes, I'll keep the 2.0.0 update in mind. But 1.3.1 was already bundled into my jar, so I thought it shouldn't complain about a missing method. –

+0

Thanks. It worked once I upgraded to 2.0.0. –

0

Please follow the dl4j examples for your versions. I'm not sure where or how you picked up Canova there (we haven't used it for nearly 6 months now):

https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/pom.xml
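
For illustration only, a sketch of what aligned sbt entries might look like, assuming 0.5.0 artifacts are published for each module; the linked pom.xml is the authoritative source for the exact versions:

// Keep every deeplearning4j/nd4j module on the same release line, as the
// dl4j-examples pom does; 0.5.0 is illustrative, not a confirmed version set.
"org.deeplearning4j" % "deeplearning4j-core" % "0.5.0",
"org.deeplearning4j" % "dl4j-spark" % "0.5.0",  // assumption: published at 0.5.0
"org.nd4j" % "nd4j-native" % "0.5.0" classifier "linux-x86_64"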

+0

Canova was added by mistake and isn't actually used. –

+0

I'm getting another error now: Exception in thread "main" java.lang.NoSuchMethodError: org.apache.spark.Accumulator.&lt;init&gt;(Ljava/lang/Object;Lorg/apache/spark/AccumulatorParam;)V
    at org.deeplearning4j.spark.impl.common.SumAccum.&lt;init&gt;(SumAccum.java:14)
    at org.deeplearning4j.spark.impl.common.Adder.&lt;init&gt;(Adder.java:17)
    at org.deeplearning4j.spark.impl.multilayer.SparkDl4jMultiLayer.runIteration(SparkDl4jMultiLayer.java:237)
    at org.deeplearning4j.spark.impl.multilayer.SparkDl4jMultiLayer.fitDataSet(SparkDl4jMultiLayer.java:189)
    at poc.analaticsEngine.AnalaticsStockWorkBench$$anonfun$tra –

+0

I'm using the Spark 2.0.0 core jars. –