2015-09-25

HDFS block question: when I run the fsck command, it shows the total blocks as 68 (avg. block size 286572 B). How can I have only 68 blocks?

I recently installed CDH5 (Hadoop 2.6.0).

-

[hdfs@cluster1 ~]$ hdfs fsck /

Connecting to namenode via http://cluster1.abc:50070 
FSCK started by hdfs (auth:SIMPLE) from /192.168.101.241 for path / at Fri Sep 25 09:51:56 EDT 2015 
....................................................................Status:  HEALTHY 
Total size: 19486905 B 
Total dirs: 569 
Total files: 68 
Total symlinks: 0 
Total blocks (validated): 68 (avg. block size 286572 B) 
Minimally replicated blocks: 68 (100.0 %) 
Over-replicated blocks: 0 (0.0 %) 
Under-replicated blocks: 0 (0.0 %) 
Mis-replicated blocks: 0 (0.0 %) 
Default replication factor: 3 
Average block replication: 1.9411764 
Corrupt blocks: 0 
Missing replicas: 0 (0.0 %) 
Number of data-nodes: 3 
Number of racks: 1 
FSCK ended at Fri Sep 25 09:51:56 EDT 2015 in 41 milliseconds 


The filesystem under path '/' is HEALTHY 

-

This is what I get when I run the dfsadmin -report command:

[hdfs@cluster1 ~]$ hdfs dfsadmin -report

Configured Capacity: 5715220577895 (5.20 TB) 
Present Capacity: 5439327449088 (4.95 TB) 
DFS Remaining: 5439303270400 (4.95 TB) 
DFS Used: 24178688 (23.06 MB) 
DFS Used%: 0.00% 
Under replicated blocks: 0 
Blocks with corrupt replicas: 0 
Missing blocks: 0 
Missing blocks (with replication factor 1): 504 

-

Also, my Hive queries don't launch MapReduce jobs. Could that be caused by the problem above?

Any suggestions?

Thanks!

Answer


A block is a chunk of data distributed across the nodes of the filesystem. For example, if you have a 200 MB file, it will actually be stored as two blocks of 128 MB and 72 MB (with the default 128 MB block size in Hadoop 2.x).

So don't worry about the block count; it is taken care of by the framework. As the fsck report shows, you have 68 files in HDFS, and since each of those files is smaller than the block size, each one occupies a single block: hence 68 blocks.
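The arithmetic above can be sketched as follows (a minimal illustration, assuming the Hadoop 2.x default block size of 128 MB; the file and size totals are taken from the fsck output in the question):

```python
import math

BLOCK_SIZE = 128 * 1024 * 1024  # Hadoop 2.x default dfs.blocksize: 128 MB

def num_blocks(file_size_bytes):
    """Number of HDFS blocks a file occupies (the last block may be partial)."""
    return max(1, math.ceil(file_size_bytes / BLOCK_SIZE))

# A 200 MB file spans two blocks: one full 128 MB block and one 72 MB block.
print(num_blocks(200 * 1024 * 1024))  # 2

# Every file in the fsck report is far smaller than 128 MB
# (19486905 B total across 68 files), so each file fits in a
# single block: 68 files -> 68 blocks.
print(19486905 // 68)  # 286572, the avg. block size fsck prints
```

Note that fsck's "avg. block size" is simply total size divided by total blocks, which is why it comes out well below the configured block size when the files are small.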