
I'm trying to run my JAR on the cluster, but after a while I get an exception. The last INFO before the failure is "Uploading resource". I've checked all the security groups and successfully ran hadoop fs -ls, but I still get the error: Spark HDFS exception in createBlockOutputStream while uploading the resource file.

./bin/spark-submit --class MyMainClass --master yarn-cluster /tmp/myjar-1.0.jar myjarparameter

16/01/21 16:13:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
16/01/21 16:13:52 INFO client.RMProxy: Connecting to ResourceManager at yarn.myserver.com/publicip:publicport 
16/01/21 16:13:53 INFO yarn.Client: Requesting a new application from cluster with 10 NodeManagers 
16/01/21 16:13:53 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (13312 MB per container) 
16/01/21 16:13:53 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead 
16/01/21 16:13:53 INFO yarn.Client: Setting up container launch context for our AM 
16/01/21 16:13:53 INFO yarn.Client: Preparing resources for our AM container 
16/01/21 16:13:54 INFO yarn.Client: Uploading resource file:/opt/spark-1.2.0-bin-hadoop2.3/lib/spark-assembly-1.2.0-hadoop2.3.0.jar -> hdfs://hdfs.myserver.com/user/henrique/.sparkStaging/application_1452514285349_6427/spark-assembly-1.2.0-hadoop2.3.0.jar 
16/01/21 16:14:55 INFO hdfs.DFSClient: Exception in createBlockOutputStream 
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/PRIVATE_IP:50010] 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:532) 
    at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1341) 
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1167) 
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1122) 
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:522) 
16/01/21 16:14:55 INFO hdfs.DFSClient: Abandoning BP-26920217-10.140.213.58-1440247331237:blk_1132201932_58466886 
16/01/21 16:14:55 INFO hdfs.DFSClient: Excluding datanode 10.164.16.207:50010 
16/01/21 16:15:55 INFO hdfs.DFSClient: Exception in createBlockOutputStream 

./bin/hadoop fs -ls /user/henrique/.sparkStaging/

drwx------ henrique supergroup 0 2016-01-20 18:36 /user/henrique/.sparkStaging/application_1452514285349_5868 
drwx------ henrique supergroup 0 2016-01-21 16:13 /user/henrique/.sparkStaging/application_1452514285349_6427 
drwx------ henrique supergroup 0 2016-01-21 17:06 /user/henrique/.sparkStaging/application_1452514285349_6443 

This problem is unlikely to be caused by the code; it looks more like a Hadoop configuration issue. Check that the user Yarn/Spark runs as has the privileges required to access HDFS and the file system, and either disable the firewall or allow the required IPs and hosts. You may also want to look at [this](http://stackoverflow.com/questions/21718900/exception-in-createblockoutputstream-when-copying-data-into-hdfs). – Sumit
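
A quick way to run both of those checks from the submitting machine (a sketch; PRIVATE_IP stands for the redacted address in the exception above, and nc is assumed to be available):

./bin/hadoop fs -ls /user/henrique    # confirm the Spark user can reach its HDFS home 
nc -vz -w 5 PRIVATE_IP 50010          # probe the DataNode data-transfer port 

If the listing succeeds but nc times out, the problem is network reachability to the DataNode rather than HDFS permissions.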

Answer


Solved! Hadoop was trying to connect to the DataNodes' private IPs. The problem was fixed by adding this configuration to hdfs-site.xml on the client:

<property> 
    <name>dfs.client.use.datanode.hostname</name> 
    <value>true</value> 
</property>
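
With dfs.client.use.datanode.hostname=true, the HDFS client contacts DataNodes by hostname instead of the private IP the NameNode hands back, so it works as long as those hostnames resolve to reachable addresses on the client (via DNS or /etc/hosts). If editing hdfs-site.xml on every client is inconvenient, the same client-side setting can usually be passed per job through Spark's spark.hadoop.* passthrough, which spark-submit copies into the Hadoop Configuration (a sketch; verify that your Spark version forwards these properties):

./bin/spark-submit --class MyMainClass --master yarn-cluster \ 
    --conf spark.hadoop.dfs.client.use.datanode.hostname=true \ 
    /tmp/myjar-1.0.jar myjarparameter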