2008-12-03

Using Python 2.4 and the built-in ZipFile library, I cannot read very large zip files (greater than 1 or 2 GB) because it wants to store the entire uncompressed contents of the file in memory. Is there another way to do this (either with a third-party library or some other hack), or must I "shell out" and unzip it that way (which obviously isn't cross-platform)? How do I unzip very large files in Python?

Answers


Here is an outline of decompressing a large file.

import zipfile 
import zlib 

src = open(doc, "rb")   # doc is the path to the zip archive 
zf = zipfile.ZipFile(src) 
for m in zf.infolist(): 

    # Examine the header 
    print m.filename, m.header_offset, m.compress_size, repr(m.extra), repr(m.comment) 
    src.seek(m.header_offset) 
    src.read(30)   # The fixed part of the local file header; good to use struct to unpack this. 
    nm = src.read(len(m.filename)) 
    if len(m.extra) > 0: ex = src.read(len(m.extra)) 
    # (the file comment lives in the central directory, not the local header) 

    # Build a decompression object for a raw deflate stream (no zlib header) 
    decomp = zlib.decompressobj(-15) 

    # Read the compressed data in blocks so the whole member is never in memory at once 
    out = open(m.filename, "wb") 
    remaining = m.compress_size 
    while remaining > 0: 
        block = src.read(min(remaining, 65536)) 
        remaining -= len(block) 
        out.write(decomp.decompress(block)) 
    out.write(decomp.flush()) 
    out.close() 

zf.close() 
src.close() 
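The `read(30)` above skips the fixed part of the local file header; as the comment suggests, it can be unpacked with `struct`. A sketch, with the field layout taken from the ZIP specification (APPNOTE.TXT) and a helper name of my own choosing:

```python
import struct

# Fixed part of a ZIP local file header: little-endian, 30 bytes total
LOCAL_HEADER_FMT = '<IHHHHHIIIHH'
LOCAL_HEADER_SIZE = struct.calcsize(LOCAL_HEADER_FMT)  # 30

def parse_local_header(raw):
    """Unpack the 30-byte fixed part of a local file header."""
    (signature, version, flags, method, mod_time, mod_date,
     crc, compress_size, uncompress_size,
     name_len, extra_len) = struct.unpack(LOCAL_HEADER_FMT, raw)
    assert signature == 0x04034b50  # magic bytes 'PK\x03\x04'
    return {'method': method,
            'compress_size': compress_size,
            'uncompress_size': uncompress_size,
            'name_len': name_len,
            'extra_len': extra_len}
```

The `name_len` and `extra_len` fields tell you exactly how many bytes to read before the compressed data starts, instead of trusting the central-directory lengths.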

This is exactly what I was looking for – thanks! – 2008-12-04 03:55:34
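A quick, self-contained way to see why `decompressobj(-15)` is the right tool here: the streams stored inside a zip are raw deflate data with no zlib header, and a negative `wbits` value handles exactly that. A minimal round-trip sketch, feeding the data in small blocks the way the answer above does:

```python
import zlib

data = b'hello world ' * 1000

# Produce a raw deflate stream (wbits=-15: no zlib header or checksum),
# which is the same format stored inside a zip archive
comp = zlib.compressobj(9, zlib.DEFLATED, -15)
raw = comp.compress(data) + comp.flush()

# Decompress it incrementally, block by block
decomp = zlib.decompressobj(-15)
out = b''
for i in range(0, len(raw), 64):
    out += decomp.decompress(raw[i:i + 64])
out += decomp.flush()
```

If you passed a positive `wbits` instead, `decompress()` would raise an error looking for the zlib header that zip members don't have.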


With Python 2.6 you can use ZipFile.open() to open a file handle on each archive member and efficiently copy its contents to a destination file of your choice:

import errno 
import os 
import shutil 
import zipfile 

TARGETDIR = '/foo/bar/baz' 

with open(doc, "rb") as zipsrc: 
    zfile = zipfile.ZipFile(zipsrc) 
    for member in zfile.infolist(): 
        target_path = os.path.join(TARGETDIR, member.filename) 
        if member.filename.endswith('/'):   # folder entry, create it 
            try: 
                os.makedirs(target_path) 
            except (OSError, IOError) as err: 
                # Windows may complain if the folders already exist 
                if err.errno != errno.EEXIST: 
                    raise 
            continue 
        # The member handle is not a context manager on 2.6, and 2.6 does 
        # not allow two context managers in one 'with', so close explicitly 
        infile = zfile.open(member) 
        outfile = open(target_path, 'wb') 
        try: 
            shutil.copyfileobj(infile, outfile) 
        finally: 
            outfile.close() 
            infile.close() 

This uses shutil.copyfileobj() to efficiently read data from the open zipfile member object and copy it to the output file.
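`copyfileobj()` itself never holds more than one chunk in memory at a time, which is what makes this safe for multi-gigabyte members. A minimal illustration using in-memory files (`io.BytesIO` stands in here for the zip member handle and the output file):

```python
import io
import shutil

src = io.BytesIO(b'x' * 100000)     # stands in for zfile.open(member)
dst = io.BytesIO()                  # stands in for the output file

# Copy in explicit 8 KB chunks; only one chunk is buffered at a time
shutil.copyfileobj(src, dst, 8192)
```

The third argument is the chunk size; left out, copyfileobj falls back to its default buffer size.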
