
Python multiprocessing: shared memory (numpy) array not modified as expected

I've written a small multiprocessing program in Python which reads an array of values and runs multiple processes asynchronously to operate on parts of the data array. Each worker process should get its own one-dimensional section of the two-dimensional array, with no overlap between processes. Once all processes have completed, the shared memory array is to be written out to a file, but at that point in my code the expected/computed values are not present in the shared memory array; the original values are still there. It appears that the new values assigned within the worker processes are not persisting to the shared memory object. Perhaps there is something I'm misunderstanding (e.g. pass-by-reference vs. pass-by-value) that is causing my trouble?

I have a Processor class which creates a number of worker processes and instantiates a JoinableQueue. The function called by each worker process operates on an indexed slice of the two-dimensional shared memory array and updates those array values in place, so the input (shared memory) array should end up with all of its values replaced by the computed results, and there should be no need for a second array to hold the results. The main function passes the shared memory array and an index value as arguments to the compute function; these are added to the queue, from which the process objects consume work items. The code is below:

class Processor:

    def __init__(self, number_of_workers=1):

        # create a joinable queue
        self.queue = JoinableQueue()

        # create and start the worker processes
        self.processes = [Process(target=self.compute) for _ in range(number_of_workers)]
        for p in self.processes:
            p.start()

    def add_work_item(self, item):

        # add the parameters list to the parameters queue
        self.queue.put(item)

    def compute(self):

        while True:

            # get a list of arguments from the queue
            arguments = self.queue.get()

            # a None value is the signal to stop
            if arguments is None:
                break

            # unpack the arguments
            data = arguments[0]
            index = arguments[1]

            # only process non-empty grid cells, i.e. the data array contains at least some non-NaN values
            if (isinstance(data[:, index], np.ma.MaskedArray) and data[:, index].mask.all()) \
                    or np.isnan(data[:, index]).all():

                pass

            else:  # we have some valid values to work with

                logger.info('Processing latitude: {}'.format(index))

                # perform a fitting to gamma
                results = do_something(data[:, index])

                # update the shared array
                data[:, index] = results

            # indicate that the task has completed
            self.queue.task_done()

    def terminate(self):

        # terminate all processes
        for p in self.processes:
            p.terminate()

    def wait_on_all(self):

        # wait until the queue is empty
        self.queue.join()

#-----------------------------------------------------------------------------------------------------------------------
if __name__ == '__main__':

    try:

        # log some timing info, used later for elapsed time
        start_datetime = datetime.now()
        logger.info("Start time: {}".format(start_datetime, '%x'))

        # get the command line arguments
        input_file = sys.argv[1]
        input_var_name = sys.argv[2]
        output_file_base = sys.argv[3]
        month_scale = int(sys.argv[4])

        # create the variable name from the indicator, distribution, and month scale
        variable_name = 'spi_gamma_{}'.format(str(month_scale).zfill(2))

        # open the NetCDF files
        with netCDF4.Dataset(input_file) as input_dataset, \
                netCDF4.Dataset(output_file_base + '_' + variable_name + '.nc', 'w') as output_dataset:

            # read info from the input dataset and initialize the output for writing

            # create a processor with a number of worker processes
            number_of_workers = 1
            processor = Processor(number_of_workers)

            # for each longitude slice
            for lon_index in range(lon_size):

                logger.info('\n\nProcessing longitude: {}\n'.format(lon_index))

                # read the longitude slice into a data array
                longitude_slice = input_dataset.variables[input_var_name][:, lon_index, :]

                # reshape into a 1-D array
                original_shape = longitude_slice.shape
                flat_longitude_slice = longitude_slice.flatten()

                # convert the array into a shared memory array which can be accessed from within another process
                shared_array_base = Array(ctypes.c_double, flat_longitude_slice)
                shared_array = np.ctypeslib.as_array(shared_array_base.get_obj())
                shared_array = shared_array.reshape(original_shape)

                # loop over each latitude point in the longitude slice
                for lat_index in range(lat_size):

                    # have the processor process the shared array at this index
                    arguments = [shared_array, lat_index]
                    processor.add_work_item(arguments)

                # join to the processor and don't continue until all work items have completed
                processor.wait_on_all()

                # write the fitted longitude slice values into the output NetCDF
                output_dataset.variables[variable_name][:, lon_index, :] = np.reshape(shared_array, (time_size, 1, lat_size))

            # all processes have completed
            processor.terminate()

    except Exception:
        logger.error('Failed to complete', exc_info=True)
        raise
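
For reference, here is a stripped-down sketch (illustrative names, no NetCDF) that shows the same symptom I am seeing: an array sent to a worker through a multiprocessing queue, modified there, and yet unchanged in the parent afterwards:

import numpy as np
from multiprocessing import Process, Queue

def worker(queue):
    array = queue.get()
    array[:] = 42.0           # this assignment never shows up in the parent

if __name__ == '__main__':
    data = np.zeros(4)
    work_queue = Queue()
    process = Process(target=worker, args=(work_queue,))
    process.start()
    work_queue.put(data)
    process.join()
    print(data)               # prints the original zeros, not 42.0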

Can anyone see where I'm going wrong, i.e. why the values in the shared memory array are not updating as I expect?

Thanks in advance for any comments or suggestions.

UPDATE: I now have this working for a single process, but when I try to spawn multiple processes I get a pickle error:

pickle.PicklingError: Can't pickle '_subprocess_handle' object: <_subprocess_handle object at 0x00000000021CF9F0> 

This occurs when the second process starts, inside the Processor's __init__() function. If I run the code below with a single worker process (number_of_workers=1) then I don't get this error and my code runs as expected, although it doesn't use multiple processors, which is the goal.

class Processor:

    def __init__(self, shared_array, data_shape, number_of_workers=1):

        # create a joinable queue
        self.queue = JoinableQueue()

        # keep references to the shared memory array and its dimensions
        self.shared_array = shared_array
        self.data_shape = data_shape

        # create and start the worker processes
        self.processes = [Process(target=self.compute_indicator) for _ in range(number_of_workers)]
        for p in self.processes:
            p.start()

    def add_work_item(self, item):

        # add the parameters list to the parameters queue
        self.queue.put(item)

    def compute_indicator(self):

        while True:

            # get a list of arguments from the queue
            arguments = self.queue.get()

            # a None value is the signal to stop
            if arguments is None:
                break

            # unpack the arguments
            index = arguments[0]

            # turn the shared array into a numpy array
            data = np.ctypeslib.as_array(self.shared_array)
            data = data.reshape(self.data_shape)

            # only process non-empty grid cells, i.e. the data array contains at least some valid values
            if (isinstance(data[:, index], np.ma.MaskedArray) and data[:, index].mask.all()) \
                    or np.isnan(data[:, index]).all() or (data[:, index] < 0).all():

                pass

            else:  # we have some valid values to work with

                logger.info('Processing latitude: {}'.format(index))

                # perform computation
                fitted_values = do_something(data[:, index])

                # update the shared array
                data[:, index] = fitted_values

            # indicate that the task has completed
            self.queue.task_done()

    def terminate(self):

        # terminate all processes
        for p in self.processes:
            p.terminate()

    def wait_on_all(self):

        # wait until the queue is empty
        self.queue.join()

#-----------------------------------------------------------------------------------------------------------------------
if __name__ == '__main__':

    # NOT SHOWN
    # get the time_size, lat_size, and lon_size values, and open the
    # input and output NetCDF datasets as in the listing above

    # create a shared memory array which can be accessed from within another process
    shared_array_base = Array(ctypes.c_double, time_size * lat_size, lock=False)

    # create a processor with a number of worker processes
    number_of_workers = 4
    data_shape = (time_size, lat_size)
    processor = Processor(shared_array_base, data_shape, number_of_workers)

    # for each longitude slice
    for lon_index in range(lon_size):

        logger.info('\n\nProcessing longitude: {}\n'.format(lon_index))

        # get the shared memory array and convert it into a numpy array with the proper dimensions
        longitude_array = np.ctypeslib.as_array(shared_array_base)
        longitude_array = np.reshape(longitude_array, data_shape)

        # read the longitude slice into the shared memory array
        longitude_array[:] = input_dataset.variables[input_var_name][:, lon_index, :]

        # loop over each latitude point in the longitude slice
        for lat_index in range(lat_size):

            # have the processor process the shared array at this index
            processor.add_work_item([lat_index])

        # join to the processor and don't continue until all work items have completed
        processor.wait_on_all()

        # get the longitude slice of fitted values from the shared memory array and convert it
        # into a numpy array with the proper dimensions which we can then use to write to NetCDF
        fitted_array = np.ctypeslib.as_array(shared_array_base)
        fitted_array = np.reshape(fitted_array, (time_size, 1, lat_size))

        # write the longitude slice of computed values into the output NetCDF
        output_dataset.variables[variable_name][:, lon_index, :] = fitted_array

    # all work has completed
    processor.terminate()
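
I suspect the worker target being a bound method is the culprit: pickling target=self.compute_indicator for the second child drags the whole Processor instance along, including the already-started Process object from the first worker. A minimal sketch of what I believe is the same failure mode (hypothetical class name, assuming the Windows 'spawn' start method):

from multiprocessing import Process

class Broken:

    def __init__(self, number_of_workers=2):
        self.processes = []
        for _ in range(number_of_workers):
            # target=self.run pickles `self`; by the second iteration,
            # self.processes already holds a started Process whose OS
            # handle cannot be pickled
            p = Process(target=self.run)
            p.start()
            self.processes.append(p)

    def run(self):
        pass

if __name__ == '__main__':
    Broken()    # fails with a pickling error on Windows when the second worker starts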

Why not use the 'map()' method from 'multiprocessing.Pool' instead of writing this yourself? –


Just a thought, but you are creating the 'Process'-es *before* creating the shared array. Are you sure the 'Process'-es can actually access the shared array? It could be that they just get a copy... –


'shared_array_base' should be passed as an argument to the target 'compute' method. Actually, on POSIX systems it only needs to be inherited via 'fork', but for Windows support it needs to be an argument, to allow the name and state of the 'mmap' shared memory to be pickled and passed to the child process. You can then wrap the shared array as a NumPy array in each worker process; don't pickle it and distribute copies to each worker, as you're doing now. – eryksun
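
A minimal sketch of what this comment suggests (illustrative names, using the default locking Array): pass the raw shared Array to the worker as an argument and wrap it as a NumPy array inside the worker, so the data itself is never pickled and copied:

import ctypes
import numpy as np
from multiprocessing import Process, Array

def compute(shared_base, shape, index):
    # wrap the shared memory inside the worker; this is a view, not a copy
    data = np.ctypeslib.as_array(shared_base.get_obj()).reshape(shape)
    data[:, index] = data[:, index] * 2.0   # the update is visible to the parent

if __name__ == '__main__':
    shape = (3, 4)
    shared_base = Array(ctypes.c_double, shape[0] * shape[1])
    np.ctypeslib.as_array(shared_base.get_obj())[:] = 1.0
    process = Process(target=compute, args=(shared_base, shape, 0))
    process.start()
    process.join()
    # column 0 now holds 2.0, written by the child process
    print(np.ctypeslib.as_array(shared_base.get_obj()).reshape(shape))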

Answers


I have now managed to implement a working solution, although it still exhibits unexpected behavior: 1) it runs on all CPUs in a Windows environment, but the total elapsed time is no faster than running the job as a single process (i.e. the same code without any multiprocessing usage), and 2) when I run the code in a Linux environment (a virtual container), I see only one of the four CPUs being used. In any event, I now have working code that uses the shared memory array, which is what the original question was about. If anyone can see where I've gone wrong, leading to the two issues above, please follow up in the comments.

def compute(args):

    # extract the arguments
    lon_index = args[0]
    lat_index = args[1]

    # NOT SHOWN
    # get the data_shape value

    # turn the shared array into a numpy array
    data = np.ctypeslib.as_array(shared_array)
    data = data.reshape(data_shape)

    # perform the computation, update the indexed array slice in place
    data[:, lon_index, lat_index] = perform_computation(data[:, lon_index, lat_index])

def init_process(array):

    # put the shared array into the worker's global namespace
    global shared_array
    shared_array = array


if __name__ == '__main__':

    # NOT SHOWN
    # get the lat_size, lon_size, time_size, lon_stride, and data_shape values

    # create a shared memory array which can be accessed from within another process
    shared_array = Array(ctypes.c_double, time_size * lon_stride * lat_size, lock=False)
    data_shape = (time_size, lon_stride, lat_size)

    # use a worker process per CPU
    number_of_workers = cpu_count()

    # create a Pool, essentially forking with copies of the shared array going to each pooled/forked process
    # (note the trailing comma: initargs must be a tuple)
    pool = Pool(processes=number_of_workers,
                initializer=init_process,
                initargs=(shared_array,))

    # for each slice
    for lon_index in range(0, lon_size, lon_stride):

        # convert the shared memory array into a numpy array with the proper dimensions
        slice_array = np.ctypeslib.as_array(shared_array)
        slice_array = np.reshape(slice_array, data_shape)

        # read the longitude slice into the shared memory array
        slice_array[:] = read_data(data_shape)

        # a list of arguments we'll map to the processes of the pool
        arguments_iterable = []

        # loop over each latitude point and each longitude within the slice
        for lat_index in range(lat_size):
            for i in range(lon_stride):
                arguments_iterable.append([i, lat_index])

        # map the arguments iterable to the compute function
        pool.map(compute, arguments_iterable)

        # get the slice of fitted values from the shared memory array and convert it
        # into a numpy array with the proper dimensions which we can then use to write to NetCDF
        fitted_array = np.ctypeslib.as_array(shared_array)
        fitted_array = np.reshape(fitted_array, (time_size, lon_stride, lat_size))

        # NOT SHOWN
        # write the slice of computed values to file

    # all slices have been processed, close the pool
    pool.close()
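
For completeness, a self-contained sketch of the same Pool/initializer pattern (illustrative names, lock=False as above) that can be run standalone to confirm that worker writes land in the parent's shared array:

import ctypes
import numpy as np
from multiprocessing import Pool, Array, cpu_count

shared_array = None
data_shape = None

def init_process(array, shape):
    # stash the inherited shared memory in module globals for this worker
    global shared_array, data_shape
    shared_array = array
    data_shape = shape

def compute(index):
    # wrap the shared memory (no copy) and write into one column
    data = np.ctypeslib.as_array(shared_array).reshape(data_shape)
    data[:, index] = index

if __name__ == '__main__':
    shape = (3, 5)
    base = Array(ctypes.c_double, shape[0] * shape[1], lock=False)
    pool = Pool(processes=cpu_count(),
                initializer=init_process,
                initargs=(base, shape))
    pool.map(compute, range(shape[1]))
    pool.close()
    pool.join()
    # each column now holds its own index, written by the worker processes
    print(np.ctypeslib.as_array(base).reshape(shape))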