2016-07-25

I have a problem when processing more than 2700 files. With only a few hundred files it works fine, so I am guessing it is related to the limit on open files, like the ulimit that can be set system-wide on Linux. I believe file handles are not being closed, and that is why I get this error: Too many open files (urllib).

I have a function that uploads a file via POST:

import requests

def upload_photos(url_photo, info, timeout):
    photo = info['photo']
    data_photo = info['data']
    name = info['name']
    response = requests.post(url_photo, data=data_photo, files=photo, timeout=timeout)
    return {'json': response.json(), 'name': name}

It is called from a loop over a directory listing:

for photo_path in [p.lower() for p in photos_path]:
    if ('jpg' in photo_path or 'jpeg' in photo_path) and "thumb" not in photo_path:
        nr_photos_upload += 1
print("Found " + str(nr_photos_upload) + " pictures to upload")
local_count = 0
list_to_upload = []
for photo_path in [p.lower() for p in photos_path]:
    local_count += 1
    if ('jpg' in photo_path or 'jpeg' in photo_path) and "thumb" not in photo_path and local_count > count:
        total_img = nr_photos_upload
        photo_name = os.path.basename(photo_path)
        try:
            photo = {'photo': (photo_name, open(path + photo_path, 'rb'), 'image/jpeg')}
            try:
                latitude, longitude, compas = get_gps_lat_long_compass(path + photo_path)
            except ValueError as e:
                if e is not None:
                    try:
                        tags = exifread.process_file(open(path + photo_path, 'rb'))
                        latitude, longitude = get_exif_location(tags)
                        compas = -1
                    except Exception:
                        continue
            if compas == -1:
                data_photo = {'coordinate': str(latitude) + "," + str(longitude),
                              'sequenceId': id_sequence,
                              'sequenceIndex': count
                              }
            else:
                data_photo = {'coordinate': str(latitude) + "," + str(longitude),
                              'sequenceId': id_sequence,
                              'sequenceIndex': count,
                              'headers': compas
                              }
            info_to_upload = {'data': data_photo, 'photo': photo, 'name': photo_name}
            list_to_upload.append(info_to_upload)
            count += 1
        except Exception as ex:
            print(ex)
count_uploaded = 0
with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
    # Upload feature called from here
    future_to_url = {executor.submit(upload_photos, url_photo, info, 100): info for info in list_to_upload}
    for future in concurrent.futures.as_completed(future_to_url):
        try:
            result = future.result()
            data = result['json']
            name = result['name']
            print("processing {}".format(name))
            if data['status']['apiCode'] == "600":
                percentage = float(count_uploaded * 100) / float(total_img)
                print("Uploaded - " + str(count_uploaded) + " of total: " + str(total_img) +
                      ", percentage: " + str(round(percentage, 2)) + "%")
            elif data['status']['apiCode'] == "610":
                print("skipping - a required argument is missing for upload")
            elif data['status']['apiCode'] == "611":
                print("skipping - image does not have GPS location metadata")
            elif data['status']['apiCode'] == "660":
                print("skipping - duplicate image")
            else:
                print("skipping - bad image")
            count_uploaded += 1
            with open(path + "count_file.txt", "w") as fis:
                fis.write(str(count_uploaded))
        except Exception as exc:
            print('generated an exception: %s' % exc)
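The likely root cause in the loop above is that every photo is opened up front (once for the upload tuple and sometimes again for exifread) and none of those handles are closed, so the number of open files grows with the number of photos. A minimal sketch of the idea of a fix, assuming the worker receives a file path and opens the file only for the duration of the upload; `upload_one` and `fake_post` below are illustrative stand-ins, not the original API:

```python
import concurrent.futures
import os
import tempfile

def upload_one(photo_path, post):
    # Open the file only for the duration of the request; the `with`
    # block guarantees the handle is closed even if `post` raises.
    with open(photo_path, 'rb') as fh:
        return post(fh)

# Demo with a stand-in for requests.post: write a few temp files and
# "upload" them through a bounded thread pool.
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(10):
    p = os.path.join(tmpdir, "photo_%d.jpg" % i)
    with open(p, 'wb') as f:
        f.write(b"fake jpeg bytes")
    paths.append(p)

seen_handles = []
def fake_post(fh):
    seen_handles.append(fh)
    return fh.read()

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(lambda p: upload_one(p, fake_post), paths))

print(len(results))                           # → 10
print(all(fh.closed for fh in seen_handles))  # → True
```

With this shape, the number of simultaneously open photo files is bounded by `max_workers` instead of by the total photo count.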

So the problem is too many files being processed simultaneously. Perhaps the simplest fix is the following: if anything fails, do not stop, but retry after a few milliseconds (taking care to avoid an infinite loop). That way every file gets processed. – Ilya
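The retry idea from this comment can be sketched generically; `flaky_upload` below is a stand-in for the real request, and the attempt count and delay are arbitrary:

```python
import time

def retry(fn, attempts=5, delay=0.01):
    # Retry fn up to `attempts` times, sleeping `delay` seconds between
    # tries; re-raise the last error so failures are not silently lost.
    last_err = None
    for _ in range(attempts):
        try:
            return fn()
        except OSError as err:       # e.g. "Too many open files"
            last_err = err
            time.sleep(delay)
    raise last_err

calls = {'n': 0}
def flaky_upload():
    calls['n'] += 1
    if calls['n'] < 3:               # fail twice, then succeed
        raise OSError("Too many open files")
    return "ok"

result = retry(flaky_upload)
print(result)  # → ok
```

Note that retrying only masks the symptom here: if the handles are never closed, retries will eventually hit the same limit.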


In general it is very beneficial to upload no more than a few files at a time, especially when you are hammering the same server. Limit the number of simultaneous connections. –
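Capping concurrency, as this comment suggests, mostly comes down to choosing a small `max_workers` for the pool. The sketch below tracks the in-flight count under a lock to demonstrate that no more than `max_workers` tasks ever run at once; the task body is a dummy sleep standing in for a real upload:

```python
import concurrent.futures
import threading
import time

max_workers = 3
in_flight = 0
peak = 0
lock = threading.Lock()

def task(i):
    global in_flight, peak
    with lock:
        in_flight += 1
        peak = max(peak, in_flight)
    time.sleep(0.01)                 # stand-in for the HTTP request
    with lock:
        in_flight -= 1
    return i

with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
    results = list(executor.map(task, range(20)))

print(peak <= max_workers)  # → True
print(len(results))         # → 20
```

A `ThreadPoolExecutor` queues submitted work internally, so a small worker count bounds both the simultaneous connections and (if files are opened inside the worker) the simultaneously open files.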

Answer


You can call _setmaxstdio in C to change the number of files that can be open at once.

For Python, you have to use win32file from pywin32:

import win32file
win32file._setmaxstdio(1024)  # set the max number of open files to 1024

The default is 512. Make sure to check that the maximum you set is supported by your platform.
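This answer is Windows-specific. On POSIX systems the analogous per-process limit on open file descriptors can be read, and raised up to the hard limit, with the standard `resource` module; a minimal sketch (raising by 1024 is an arbitrary choice, and exceeding the hard limit would require privileges, which is not attempted here):

```python
import resource

# Read the current soft and hard limits on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)

# Raise the soft limit, but never past the hard limit.
if hard == resource.RLIM_INFINITY:
    new_soft = soft + 1024
else:
    new_soft = min(soft + 1024, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))

print(resource.getrlimit(resource.RLIMIT_NOFILE)[0] == new_soft)  # → True
```

As the comments below note, though, raising the limit treats the symptom; closing handles promptly is the real fix.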

Reference: https://msdn.microsoft.com/en-us/library/6e3b887c.aspx


The "maximum number of open files" is the kind of limit that nowadays is high enough that hitting it should make you question your algorithm, not look for a way to raise the limit... –


This option does not work, and as Matteo said, it is not best practice to do something like this anyway. – James