I am getting the following error when my multi-threaded application attempts to write to HDFS:
could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
I have already tried the top-rated answers here about reformatting, but that did not work for me: HDFS error: could only be replicated to 0 nodes, instead of 1
What is happening is this:
- My application consists of 2 threads, each configured with its own Spring Data PartitionTextFileWriter
- Thread 1 is the first to process data, and it can successfully write to HDFS
- However, once Thread 2 starts processing data, I get this error when it attempts to flush to a file
Threads 1 and 2 will not be writing to the same file, although they do share a parent directory at the root of my directory tree.
There are no issues with disk space on my server.
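For context, the write pattern is roughly equivalent to this minimal sketch (class and file names here are illustrative, not from my actual code, and plain `java.nio` file writes stand in for the `FSDataOutputStream` that Hadoop's `FileSystem.create()` returns in the real application):

```java
import java.io.IOException;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;

public class TwoWriterThreads {
    // Each thread writes its own file; in the real app this is an HDFS
    // stream, and the flush in thread 2 is where the replication error appears.
    static void writePartition(Path file, String payload) {
        try (Writer out = Files.newBufferedWriter(file)) {
            out.write(payload);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        // Shared parent directory, distinct files per thread (as described above).
        Path parent = Files.createTempDirectory("metrics");
        Thread t1 = new Thread(() -> writePartition(parent.resolve("file-1"), "data from thread 1"));
        Thread t2 = new Thread(() -> writePartition(parent.resolve("file-2"), "data from thread 2"));
        t1.start(); t2.start();
        t1.join(); t2.join();
        try (var files = Files.list(parent)) {
            System.out.println("wrote " + files.count() + " files");
        }
    }
}
```

Against the local filesystem this pattern works fine, which is why I suspect something specific to concurrent HDFS writers.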
I also see this in my namenode logs, but I'm not sure what it means:
2016-03-15 11:23:12,149 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2016-03-15 11:23:12,150 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2016-03-15 11:23:12,150 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable: unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2016-03-15 11:23:12,151 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9000, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.104.247.78:52004 Call#61 Retry#0
java.io.IOException: File /metrics/abc/myfile could only be replicated to 0 nodes instead of
[2016-03-15 13:34:16,663] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.GroupMetadataManager)
What could be the cause of this error?
Thanks