Hadoop - LeaseExpiredException

I have multiple zip files, each containing eight XML files of 5-10 KB. I am using this data for testing purposes; the real data has 1000 XML files. The map program I wrote simply unzips the zip files:

    for (FileStatus status : status_list) {
        this.unzip(status.getPath().toString(), DestPath, fs);
    }

Inside this method, the mapper creates a file for every distinct name it tries to write and copies the uncompressed data into it:

    // Create the target file on HDFS and copy the uncompressed bytes into it
    FSDataOutputStream out = fs.create(new Path(filePath));
    byte[] bytesIn = new byte[BUFFER_SIZE];
    int read = 0;
    while ((read = zipIn.read(bytesIn)) != -1) {
        out.write(bytesIn, 0, read);
    }
    out.flush();
    out.close();

While doing this, Hadoop returns a LeaseExpiredException:

15/09/26 19:53:46 INFO mapreduce.Job: Task Id : attempt_1443265405944_0005_m_000000_0, Status : FAILED 
Error: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /test_poc/x/injection_00001 (163rd copy).xml (inode 400006): File does not exist. [Lease. Holder: DFSClient_attempt_1443265405944_0005_m_000000_0_-657768289_1, pendingcreates: 1] 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3431) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3236) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3074) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3034) 
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:723) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492) 
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) 
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049) 
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045) 
    at java.security.AccessController.doPrivileged(Native Method) 
    at javax.security.auth.Subject.doAs(Subject.java:415) 
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) 
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043) 

    at org.apache.hadoop.ipc.Client.call(Client.java:1476) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1407) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) 
    at com.sun.proxy.$Proxy12.addBlock(Unknown Source) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:606) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
    at com.sun.proxy.$Proxy13.addBlock(Unknown Source) 
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1430) 
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1226) 
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449) 

I don't know how to process multiple zip files without using a loop. I wrote the MapReduce program using the MR2 API and I am running Hadoop 2.7.1 in pseudo-distributed mode. Any pointers?


It's not entirely clear to me what the zipIn stream contains. You've said that each zip file contains multiple XML files, but your logic seems to consume zipIn completely. Can you show more of the code, for both the single-file and multiple-file cases? – ThrawnCA


Are you unzipping it in the mapper's setup method? –

Answers


I solved this problem after making some changes to the code. In the first part of the code I was trying to unzip all of the zip files, when I should have been working only with the splits. Basic Hadoop that I forgot about while implementing.
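That matches the error: when every map task loops over every archive, two attempts can create the same destination file, and HDFS revokes the first writer's lease. A minimal sketch of the per-split approach, assuming an input format that hands each map task exactly one whole zip file (the FileSplit cast applies to file-based input formats; unzip and DestPath are the names from the question):

    // Inside the mapper: handle only the zip file that belongs to this
    // task's input split, instead of re-processing the whole directory.
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        FileSplit split = (FileSplit) context.getInputSplit();
        Path zipPath = split.getPath(); // the single archive assigned to this task
        FileSystem fs = FileSystem.get(context.getConfiguration());
        this.unzip(zipPath.toString(), DestPath, fs); // same helper as in the question
    }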


Assuming zipIn is a java.util.zip.ZipInputStream, shouldn't you be calling getNextEntry iteratively rather than just reading the bytes?
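For what it's worth, a minimal sketch of that suggestion, assuming zipIn is a java.util.zip.ZipInputStream opened over a single archive (fs, DestPath, and BUFFER_SIZE as in the question):

    // Each getNextEntry() positions the stream at the next file inside the
    // archive; read() then returns only that entry's bytes, so every XML
    // file ends up in its own output file.
    ZipEntry entry;
    while ((entry = zipIn.getNextEntry()) != null) {
        FSDataOutputStream out = fs.create(new Path(DestPath, entry.getName()));
        byte[] bytesIn = new byte[BUFFER_SIZE];
        int read;
        while ((read = zipIn.read(bytesIn)) != -1) {
            out.write(bytesIn, 0, read);
        }
        out.flush();
        out.close();
        zipIn.closeEntry();
    }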