Why do my Spark jobs on Mesos fail with "hadoop: not found" on Debian 8?
I am using Spark 1.6.1, Hadoop 2.6.4, and Mesos 0.28. Jobs submitted to the Mesos cluster fail on the slave with the following in the stderr log:
I0427 22:35:39.626055 48258 fetcher.cpp:424] Fetcher Info: {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/ad642fcf-9951-42ad-8f86-cc4f5a5cb408-S0\/hduser","items":[{"action":"BYP$
I0427 22:35:39.628031 48258 fetcher.cpp:379] Fetching URI 'hdfs://xxxxxxxxx:54310/sources/spark/SimpleEventCounter.jar'
I0427 22:35:39.628057 48258 fetcher.cpp:250] Fetching directly into the sandbox directory
I0427 22:35:39.628078 48258 fetcher.cpp:187] Fetching URI 'hdfs://xxxxxxx:54310/sources/spark/SimpleEventCounter.jar'
E0427 22:35:39.629243 48258 shell.hpp:93] Command 'hadoop version 2>&1' failed; this is the output:
sh: 1: hadoop: not found
Failed to fetch 'hdfs://xxxxxxx:54310/sources/spark/SimpleEventCounter.jar': Failed to create HDFS client: Failed to execute 'hadoop version 2>&1'; the command was e$
Failed to synchronize with slave (it's probably exited)
- My JAR file contains the Hadoop 2.6 binaries
- The path to the Spark executor/binary is passed as an hdfs:// link
My jobs do not appear in the Frameworks tab, but they do appear under drivers with status "Queued"; they just sit there until I shut down the spark-mesos-dispatcher.sh service.
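The fetcher log above shows Mesos shelling out to `hadoop version` to build an HDFS client, so the `hadoop` binary must be on the PATH of the account the Mesos slave runs as. A minimal sketch of the fix on each slave, assuming a hypothetical install under `/usr/local/hadoop` (adjust to where Hadoop actually lives on your machines):

```shell
# Assumed install location -- substitute your actual Hadoop directory.
export HADOOP_HOME=/usr/local/hadoop
export PATH="$HADOOP_HOME/bin:$PATH"

# The fetcher runs exactly this probe; it must succeed on every slave.
# (Guarded so the snippet is safe to run on a machine without Hadoop.)
command -v hadoop >/dev/null 2>&1 && hadoop version 2>&1
```

These exports need to be visible to the Mesos slave process itself (e.g. set in the environment of whatever init script or service unit starts it), not just in an interactive shell.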
Did you configure HADOOP_HOME? It seems it cannot find hadoop on the Mesos slave! – avr
There is a similar issue on the Mesos JIRA (https://issues.apache.org/jira/browse/MESOS-4852). Check whether curl is installed on your machine. – Tobi
How do you run `spark-submit`, i.e. can you show the whole command line? –
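For reference, a cluster-mode submission through the Mesos dispatcher generally looks like the sketch below. The dispatcher host, port, main class, and jar path are all placeholders, not values taken from the question (the `echo` is there so the sketch can be inspected without a Spark install; drop it to actually submit):

```shell
# Placeholders -- substitute your own dispatcher host, main class, and jar path.
DISPATCHER="mesos://dispatcher-host:7077"   # endpoint of spark-mesos-dispatcher.sh

echo spark-submit \
  --master "$DISPATCHER" \
  --deploy-mode cluster \
  --class com.example.SimpleEventCounter \
  hdfs://namenode:54310/sources/spark/SimpleEventCounter.jar
```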