2017-02-28 489 views

I want to submit my spark-mongo code jar through spark-submit on Windows. I am using Spark in standalone mode, with the Spark master and two workers configured on the same machine; I want to run with one master and two workers. I tried to execute my jar with the following command:

`spark-submit --master spark://localhost:7077 --deploy-mode cluster --executor-memory 5G --class spark.mongohadoop.testing3 G:\sparkmon1.jar`

spark-submit fails with a java.lang.NullPointerException.

I am facing the following error:

Running Spark using the REST application submission protocol. 
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 
17/02/28 17:09:13 INFO RestSubmissionClient: Submitting a request to launch an application in spark://192.168.242.1:7077. 
17/02/28 17:09:24 WARN RestSubmissionClient: Unable to connect to server spark://192.168.242.1:7077. 
Warning: Master endpoint spark://192.168.242.1:7077 was not a REST server. Falling back to legacy submission gateway instead. 
17/02/28 17:09:25 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
17/02/28 17:09:32 ERROR ClientEndpoint: Exception from cluster was: java.lang.NullPointerException 
java.lang.NullPointerException 
     at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012) 
     at org.apache.hadoop.util.Shell.runCommand(Shell.java:482) 
     at org.apache.hadoop.util.Shell.run(Shell.java:455) 
     at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715) 
     at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:873) 
     at org.apache.hadoop.fs.FileUtil.chmod(FileUtil.java:853) 
     at org.apache.spark.util.Utils$.fetchFile(Utils.scala:474) 
     at org.apache.spark.deploy.worker.DriverRunner.org$apache$spark$deploy$worker$DriverRunner$$downloadUserJar(DriverRunner.scala:154) 
     at org.apache.spark.deploy.worker.DriverRunner$$anon$1.run(DriverRunner.scala:83)

I have already set winutils in the environment variables. Why am I getting this error, and what is the solution?
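For reference, a minimal sketch of how the winutils setup can be sanity-checked (the `C:/hadoop` path is an assumption, not from the question). The stack trace shows the worker failing inside `FileUtil.chmod` while downloading the user jar; on Windows that call shells out to `winutils.exe`, and `ProcessBuilder.start` throws a NullPointerException when the executable cannot be located:

```shell
# Hypothetical path: adjust HADOOP_HOME to your installation.
# FileUtil.chmod shells out (winutils.exe on Windows); if that binary
# cannot be started, ProcessBuilder.start throws the NPE in the trace.
HADOOP_HOME="${HADOOP_HOME:-C:/hadoop}"
if [ -f "$HADOOP_HOME/bin/winutils.exe" ]; then
  echo "winutils present at $HADOOP_HOME/bin/winutils.exe"
else
  echo "winutils missing under $HADOOP_HOME/bin"
fi
```

Note that `HADOOP_HOME` must be visible to the worker process that runs the driver, not just to the shell you launch spark-submit from.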


Is the master [standalone master server running?](http://spark.apache.org/docs/latest/spark-standalone.html) You are getting this error: "Unable to connect to server spark://192.168.242.1:7077"

Answer


I ran into the same error on Linux, but only when my driver was launched from one particular machine in the cluster; if the request to launch the driver went to any other machine, it worked fine. So in my case it appeared to be an environment problem.

I then looked at the code of the org.apache.hadoop.util.Shell$ShellCommandExecutor class and saw that it tries to run a command, but before doing so it runs "bash" on that machine. I noticed that my bash was slow to respond. I made some changes in .bashrc and restarted the cluster.

Now it works fine.
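The slow-bash diagnosis above can be checked directly. A minimal sketch (the timing approach is my assumption, not part of the original answer) that measures how long a no-op bash invocation takes, since Hadoop's Shell.runCommand pays this startup cost on every command it forks:

```shell
# Time a no-op bash startup; a heavy ~/.bashrc makes this slow, which
# delays every command Hadoop's Shell.runCommand forks on the worker.
# date +%s%N (nanoseconds) assumes GNU date, as found on typical Linux.
start=$(date +%s%N)   # nanoseconds before
bash -c 'exit 0'      # roughly the call Shell makes before the real command
end=$(date +%s%N)     # nanoseconds after
echo "bash startup: $(( (end - start) / 1000000 )) ms"
```

If this reports more than a few tens of milliseconds, trimming slow commands out of .bashrc (as the answer did) is a reasonable first step.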