2016-08-16


I want to make my actual crawler multithreaded.
But when I enable multithreading, several instances of the function are started and produce duplicate results. How do I avoid duplicate results with multithreading in Python?

Example:

If in my function I use print range(5), with 2 threads I will get 1,1,2,2,3,3,4,4,5,5.

How can I get the result 1,2,3,4,5 with multithreading?

My actual code is a crawler, as you can see below:

import requests 
from bs4 import BeautifulSoup 

def trade_spider(max_pages): 
    page = 1 
    while page <= max_pages: 
        url = "http://stackoverflow.com/questions?page=" + str(page) 
        source_code = requests.get(url) 
        plain_text = source_code.text 
        soup = BeautifulSoup(plain_text, "html.parser") 
        for link in soup.findAll('a', {'class': 'question-hyperlink'}): 
            href = link.get('href') 
            title = link.string 
            print(title) 
            # href already starts with a slash, so no separator is needed 
            get_single_item_data("http://stackoverflow.com" + href) 
        page += 1 

def get_single_item_data(item_url): 
    source_code = requests.get(item_url) 
    plain_text = source_code.text 
    soup = BeautifulSoup(plain_text, "html.parser") 
    res = soup.find('span', {'class': 'vote-count-post'}) 
    print("UpVote : " + res.string) 

trade_spider(1) 

How can I call trade_spider() with multiple threads without duplicating the links?


Have you tried using a shared [`multiprocessing.Value`](https://docs.python.org/2/library/multiprocessing.html#sharing-state-between-processes)? –


Not yet, I will try it. – Pixel


@DavidCullen Could you please give me an example? I don't understand from the documentation how shared multiprocessing state works. Thanks. – Pixel

Answers


Make the page number a parameter of the trade_spider function.

Call the function with a different page number in each process, so that each process gets a unique page.

For example:

import multiprocessing 

import requests 
from bs4 import BeautifulSoup 

def trade_spider(page): 
    url = "http://stackoverflow.com/questions?page=%s" % (page,) 
    source_code = requests.get(url) 
    plain_text = source_code.text 
    soup = BeautifulSoup(plain_text, "html.parser") 
    for link in soup.findAll('a', {'class': 'question-hyperlink'}): 
        href = link.get('href') 
        title = link.string 
        print(title) 
        get_single_item_data("http://stackoverflow.com" + href) 

# Pool of 10 processes 
max_pages = 100 
num_pages = range(1, max_pages + 1) 
pool = multiprocessing.Pool(10) 
# Run and wait for completion. 
# pool.map returns the results of the trade_spider calls, 
# but the function returns nothing, so the return value is ignored. 
pool.map(trade_spider, num_pages) 
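If actual threads rather than processes are preferred (the question asks about multithreading), the same map-based partitioning works with the thread-backed Pool from `multiprocessing.dummy`, which is a drop-in for the process Pool API. A minimal runnable sketch, with a stub standing in for the real crawler (an assumption, not part of the answer above):

```python
from multiprocessing.dummy import Pool as ThreadPool  # thread-backed Pool


def trade_spider(page):
    # Stub in place of the real crawler: just record the page it was given.
    # The real fetch/parse logic from the answer above would go here.
    return page


max_pages = 10
pool = ThreadPool(4)
# Each page number is handed to exactly one thread, so no page is
# crawled twice; map preserves the input order in its result list.
results = pool.map(trade_spider, range(1, max_pages + 1))
pool.close()
pool.join()
print(results)
```

Because each worker receives a distinct page number from the pool, no two threads process the same page, which is exactly the deduplication the question asks for.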

Can I have an example? – Pixel


Updated with an example. – danny


Try this:

from multiprocessing import Process, Value 
import time 

max_pages = 100 
shared_page = Value('i', 1) 
arg_list = (max_pages, shared_page) 
process_list = list() 
for x in range(2): 
    spider_process = Process(target=trade_spider, args=arg_list) 
    spider_process.daemon = True 
    spider_process.start() 
    process_list.append(spider_process) 
for spider_process in process_list: 
    while spider_process.is_alive(): 
        time.sleep(1.0) 
    spider_process.join() 

Change trade_spider's parameter list to

def trade_spider(max_pages, page): 

and remove

page = 1 

This creates two processes that work through the list of pages by sharing the page value.
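Putting these pieces together, a minimal sketch of how trade_spider could claim pages from the shared counter might look like the following. The locking detail and the body of the loop are assumptions filling in what the answer leaves implicit, not the answerer's exact code:

```python
from multiprocessing import Process, Value


def trade_spider(max_pages, page):
    # Repeatedly claim the next unclaimed page from the shared counter.
    while True:
        with page.get_lock():           # atomic read-and-increment
            current = page.value
            if current > max_pages:
                return                  # all pages claimed; stop
            page.value = current + 1
        # The real fetch/parse logic would go here.
        print("crawling page", current)


if __name__ == "__main__":
    shared_page = Value('i', 1)
    process_list = [Process(target=trade_spider, args=(5, shared_page))
                    for _ in range(2)]
    for p in process_list:
        p.start()
    for p in process_list:
        p.join()
```

Because the read and the increment happen under the Value's lock, each page number is claimed by exactly one process, so the two workers never crawl the same page.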