2015-07-21

My datanode appears to be running in the Hadoop cluster, but I still can't put a file into HDFS. Why?

[email protected]:~$ jps 
3669 SecondaryNameNode 
3830 ResourceManager 
3447 NameNode 
4362 Jps 

And jps on my DataNode:

[email protected]:~$ jps 
3574 Jps 
3417 NodeManager 
3292 DataNode 

But when I put a file:

[email protected]:~$ hadoop fs -put txt hdfs://hadoop:9000/txt 
15/07/21 22:08:32 WARN hdfs.DFSClient: DataStreamer Exception 
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation. 
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550) 
at 
....... 
put: File /txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation. 

Also, I noticed that there is no "VERSION" file on the datanode machine, while a VERSION file gets created (on the namenode) no matter how many times I run "hadoop namenode -format".

BTW, this is on Ubuntu.


Please check the NameNode web UI and see whether the DataNode is connected to the NameNode. Also check the DataNode logs. – shanmuga
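Two quick ways to do what the commenter suggests from the command line (a sketch assuming a standard Hadoop 2.x install; the exact log path depends on your setup):

```shell
# Ask the namenode which datanodes it actually sees as live
hdfs dfsadmin -report

# Look for registration/connection errors in the datanode log
# (log file name pattern is typically hadoop-<user>-datanode-<host>.log)
tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
```

If `dfsadmin -report` shows "Live datanodes (0)", the datanode process is up but never registered with the namenode, which matches the "0 datanode(s) running" error above.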

Answer


Now I know the cause: the VM's IP address had changed. I had updated /etc/hosts on the namenode, but not on the datanode.
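For anyone hitting the same problem: the namenode's hostname must resolve to the same, current IP on every node in the cluster. A sketch of the /etc/hosts entry that has to stay in sync (the IPs and the datanode hostname here are hypothetical; `hadoop` is the hostname from the HDFS URI in the question):

```
# /etc/hosts -- keep this identical on the namenode AND every datanode
192.168.56.101  hadoop      # namenode; update on ALL nodes when the VM's IP changes
192.168.56.102  datanode1   # hypothetical datanode entry
```

After fixing the file on each machine, restart the datanode daemons so they reconnect to the namenode at the new address.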