
Downloading and zipping files from Amazon

I currently store all my photos on Amazon S3 and use Django for my website. I want a button that users can click to have all of their photos zipped up and returned to them.

I'm currently using boto to talk to Amazon, and I've found that I can iterate over the entire bucket, or use get_key to look up a specific file and download it.

After that, I need to store the files temporarily, then zip them up and return them.
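Roughly what I have in mind for the download step (the bucket name and local folder below are just placeholders):

import os
import boto

# Placeholders: my real bucket name and a scratch directory.
conn = boto.connect_s3()
bucket = conn.get_bucket('my-photos')
tmp_dir = '/tmp/photo-export/'

if not os.path.isdir(tmp_dir):
    os.makedirs(tmp_dir)

for key in bucket.list():
    # Flatten the key name so nested "folders" don't need extra directories.
    local_path = os.path.join(tmp_dir, key.name.replace('/', '_'))
    key.get_contents_to_filename(local_path)  # download each photo to disk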

What's the best way to do this?

Thanks

[Possible duplicate](http://stackoverflow.com/questions/908258/generating-file-to-download-with-django) – Lynob 2012-02-29 09:56:15

Close, except I would still need to download all the images first and then handle zipping/returning them – mirugai 2012-02-29 10:46:32

Are your images in a single folder on S3, or spread across multiple folders? Also, does it have to be Django? I was thinking of using a shell script – Lynob 2012-02-29 11:58:48

Answers

1

You can have a look at this question, or at this snippet, for downloading the files:

# This is not a full working example, just a starting point
# for downloading images in different formats.

import subprocess

from django.http import HttpResponse
from PIL import Image


def image_as_png_pdf(request):
    # path_to_image and filename are assumed to be defined elsewhere
    # (e.g. looked up from the request).
    output_format = request.GET.get('format')
    im = Image.open(path_to_image)  # any PIL Image object should work
    if output_format == 'png':
        response = HttpResponse(mimetype='image/png')
        response['Content-Disposition'] = 'attachment; filename=%s.png' % filename
        im.save(response, 'png')  # will call response.write()
    else:
        # Temporary disk space, server process needs write access
        tmp_path = '/tmp/'
        # Full path to ImageMagick convert binary
        convert_bin = '/usr/bin/convert'
        im.save(tmp_path + filename + '.png', 'png')
        response = HttpResponse(mimetype='application/pdf')
        response['Content-Disposition'] = 'attachment; filename=%s.pdf' % filename
        ret = subprocess.Popen([convert_bin,
                                "%s%s.png" % (tmp_path, filename), "pdf:-"],
                               stdout=subprocess.PIPE)
        response.write(ret.stdout.read())
    return response

To create the zip, follow the link that I gave you; you can also use zipimport as shown in the example here at the bottom of the page, and follow the documentation to get an updated version.
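For the zipping step itself, here is a minimal sketch that assumes the photos have already been downloaded to a local folder; it uses the standard-library zipfile module, and photo_dir and the attachment filename are placeholders rather than anything from the links above.

import os
import zipfile
from StringIO import StringIO

from django.http import HttpResponse


def download_all_photos(request):
    # Placeholder: directory the photos were downloaded into from S3.
    photo_dir = '/tmp/photo-export/'

    # Build the archive in memory, then hand it to the response.
    buf = StringIO()
    zf = zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED)
    for name in os.listdir(photo_dir):
        zf.write(os.path.join(photo_dir, name), arcname=name)
    zf.close()

    response = HttpResponse(buf.getvalue(), mimetype='application/zip')
    response['Content-Disposition'] = 'attachment; filename=photos.zip'
    return response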

You may also be interested in this, although it was done for Django 1.2 and may not work in 1.3.

1

Using python-zipstream patched with this pull request, you can make something like this work:

import boto
import io
import zipstream


def iterable_to_stream(iterable, buffer_size=io.DEFAULT_BUFFER_SIZE):
    """
    Lets you use an iterable (e.g. a generator) that yields bytestrings as a
    read-only input stream.

    The stream implements Python 3's newer I/O API (available in Python 2's io
    module). For efficiency, the stream is buffered.

    From: https://stackoverflow.com/a/20260030/729491
    """
    class IterStream(io.RawIOBase):
        def __init__(self):
            self.leftover = None

        def readable(self):
            return True

        def readinto(self, b):
            try:
                l = len(b)  # We're supposed to return at most this much
                chunk = self.leftover or next(iterable)
                output, self.leftover = chunk[:l], chunk[l:]
                b[:len(output)] = output
                return len(output)
            except StopIteration:
                return 0  # indicate EOF

    return io.BufferedReader(IterStream(), buffer_size=buffer_size)


def iterate_key():
    # Yield the contents of one S3 key, chunk by chunk.
    bucket = boto.connect_s3().get_bucket('lastage')
    key = bucket.get_key('README.markdown')
    for chunk in key:
        yield chunk


with open('/tmp/foo.zip', 'wb') as f:
    z = zipstream.ZipFile(mode='w')
    z.write(iterable_to_stream(iterate_key()), arcname='foo1')
    z.write(iterable_to_stream(iterate_key()), arcname='foo2')
    z.write(iterable_to_stream(iterate_key()), arcname='foo3')
    for chunk in z:
        print "CHUNK", len(chunk)
        f.write(chunk)

Basically, we iterate over the key's contents using boto, convert that iterator into a stream using the iterable_to_stream method from this answer, and then have python-zipstream create the zip file on the fly.
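If you would rather send the archive back from a Django view instead of writing it to /tmp/foo.zip, one possible variation (a sketch only; it reuses iterate_key and iterable_to_stream from above, and the archive member name and download filename are placeholders) is to hand the zipstream object to the response, since the old-style HttpResponse accepts an iterator as its content:

import zipstream

from django.http import HttpResponse


def download_photos_zip(request):
    # Reuses iterable_to_stream() and iterate_key() defined above.
    z = zipstream.ZipFile(mode='w')
    z.write(iterable_to_stream(iterate_key()), arcname='README.markdown')

    # The zip chunks are generated as the response body is consumed.
    response = HttpResponse(z, mimetype='application/zip')
    response['Content-Disposition'] = 'attachment; filename=photos.zip'
    return response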