
Looking for a few extra pairs of eyes to check whether the pseudo-Python block below makes sense. I'm hoping to spawn a number of threads to run some in-process functions as fast as possible. The idea is to spawn the threads inside the main loop, so the app runs them concurrently/in parallel -- basically a simple queue/thread process implemented in Python.

chunk of code 
-get the filenames from a dir 
-write each filename to a queue 
-spawn a thread for each filename, where each thread 
    waits/reads value/data from the queue 
-the threadParse function then handles the actual processing 
    based on the file that's included via the "execfile" function... 


# System modules 
import os 
import time 
from Queue import Queue 
from threading import Thread 

# Local modules 
#import feedparser 

# Set up some global variables 
appqueue = Queue() 

# more than the app will need 
# this matches the number of files that will ever be in the 
# urldir 
# 
num_fetch_threads = 200 


def threadParse(q): 
    # decompose the packet to get the various elements 
    line = q.get() 
    college, level, packet = decompose(line) 

    # build the name of the included file and run it 
    fname = college + "_" + level + "_Parse.py" 
    execfile(fname) 
    q.task_done() 


# set up the master loop 
while True: 
    time.sleep(2) 

    # get the files from the dir, then set up the threads 
    filelist = os.listdir("/urldir") 
    if filelist: 
        for file_ in filelist: 
            worker = Thread(target=threadParse, args=(appqueue,)) 
            worker.start() 

    # again, get the files from the dir and set up the queue 
    filelist = os.listdir("/urldir") 
    for file_ in filelist: 
        # stuff the filename in the queue 
        appqueue.put(file_) 


    # Now wait for the queue to be empty, indicating that we have 
    # processed all of the downloads. 

    #don't care about this part 

    #print '*** Main thread waiting' 
    #appqueue.join() 
    #print '*** Done' 
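
For clarity, here's a tighter sketch of the worker-pool shape I'm aiming at: start a fixed pool of daemon workers once, then just keep feeding the shared queue from the master loop. decompose() is stubbed out here, and the generated *_Parse.py includes are the same placeholders as above.

# minimal sketch: fixed pool of daemon workers draining one shared queue
import os
import time
from Queue import Queue
from threading import Thread

appqueue = Queue()
num_fetch_threads = 200


def decompose(fname):
    # stub: split the filename into its parts (college, level, packet)
    college, level, packet = fname.split("_", 2)
    return college, level, packet


def threadParse(q):
    while True:
        fname = q.get()        # blocks until a filename shows up
        try:
            college, level, packet = decompose(fname)
            execfile(college + "_" + level + "_Parse.py")
        finally:
            q.task_done()


# start the workers once, before the master loop
for _ in range(num_fetch_threads):
    worker = Thread(target=threadParse, args=(appqueue,))
    worker.setDaemon(True)
    worker.start()

while True:
    for file_ in os.listdir("/urldir"):
        appqueue.put(file_)
    appqueue.join()            # wait until this batch has been processed
    time.sleep(2)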

Thoughts/comments/pointers appreciated...

Thanks

Answer


If I understand this right: you're spawning a lot of threads to get things done faster.

This only helps if the main part of the work each thread does is done without holding the GIL. So if there's a lot of waiting on the network, the disk, or something similar, it may be a good idea. If each task uses a lot of CPU, it will behave much like running on a single-core, one-CPU machine, and you might as well execute the tasks sequentially.
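To make the distinction concrete, here is a minimal sketch; the one-second sleep stands in for a blocking network/disk wait, and io_bound_task() is just a made-up name for illustration.

# A blocking wait releases the GIL, so four waits overlap and finish in ~1s.
# If io_bound_task() were pure-Python number crunching instead, the same four
# threads would take roughly as long as running the work sequentially.
import time
from threading import Thread

def io_bound_task(n):
    time.sleep(1)              # stand-in for a network/disk wait
    print "task %d done" % n

start = time.time()
threads = [Thread(target=io_bound_task, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print "elapsed: %.1fs" % (time.time() - start)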

I should add that what I wrote above is true for CPython, but not necessarily for Jython/IronPython. I should also add that if you need to use more CPUs/cores, the multiprocessing module may help.
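
If the work does turn out to be CPU-bound, a minimal multiprocessing sketch of the same fan-out could look roughly like this; parse_file() is just a placeholder for the real per-file work, and /urldir is taken from the question.

# Each worker is a separate process with its own interpreter, so CPU-heavy
# parsing is not limited by the GIL.
import os
from multiprocessing import Pool

def parse_file(fname):
    # placeholder for the real per-file processing
    return fname, len(fname)

if __name__ == "__main__":
    pool = Pool(processes=4)
    results = pool.map(parse_file, os.listdir("/urldir"))
    pool.close()
    pool.join()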