
My particular use case is this: I have a scraper crawling a site, and once an item is produced, a signal handler I have bound sets a key in Redis with an expiry. The next time the scraper runs, it should ignore all URLs whose keys still exist in Redis. Is it possible to have Scrapy dequeue such requests by default?
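
A rough sketch of the "set a key in Redis with an expiry" side, for context (the extension class name, the deferred: key prefix and the 24-hour TTL below are illustrative placeholders, not the exact code):

from redis import Redis
from scrapy import signals

class DeferredUrlMarker(object):
    """Marks scraped URLs in Redis so the next run can skip them."""

    def __init__(self, crawler):
        self.client = Redis()
        crawler.signals.connect(self.item_scraped, signals.item_scraped)

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def item_scraped(self, item, response, spider):
        # Key expires after 24 hours, after which the URL gets crawled again
        self.client.set('deferred:%s' % response.url, 1, ex=86400)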

I have the first part working fine. For the second part, I created a DownloaderMiddleware with a process_request function that inspects the incoming request object and checks whether its URL exists in Redis. If it does, it raises an IgnoreRequest exception.

What I want to know is: is there a way to quietly dequeue the request instead of raising an exception? This is an aesthetic thing rather than a hard requirement; I just don't want to see these errors in my error logs - I only want to see real errors.

I took a look at the Scrapy source to see what they use for duplicate filtering in the main scheduler (scrapy/core/scheduler.py):

def enqueue_request(self, request):
    if not request.dont_filter and self.df.request_seen(request):
        self.df.log(request, self.spider)
        return False

Can you share what you are seeing in the logs, and what you would want instead? Regarding duplicate filtering, the default is indeed quite simple, based on [the request's fingerprint (built from the canonicalized URL, HTTP method, body and headers)](https://github.com/scrapy/scrapy/blob/75cd056223a5a8da87a361aee42a541afcf27553/scrapy/utils/request.py#L19), but [the dupefilter is customizable](http://doc.scrapy.org/en/latest/topics/settings.html?#std:setting-DUPEFILTER_CLASS). If you have ideas on how to improve, simplify or clarify the design, feel free to open a discussion on Github. –
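
Building on that comment, one way to get the quiet behaviour is a custom dupefilter that treats deferred URLs as already seen, so the scheduler drops them without any error. A minimal sketch, assuming Scrapy 1.0+ (where the module is scrapy.dupefilters), a plain redis-py client and a hypothetical deferred: key scheme; the class and module names are placeholders:

from redis import Redis
from scrapy.dupefilters import RFPDupeFilter

class RedisDeferredDupeFilter(RFPDupeFilter):
    """Drop requests whose URL was marked as deferred in Redis."""

    def __init__(self, *args, **kwargs):
        super(RedisDeferredDupeFilter, self).__init__(*args, **kwargs)
        self.client = Redis()

    def request_seen(self, request):
        # Reporting the request as "seen" makes the scheduler discard it
        # quietly (see enqueue_request above); only the dupefilter stats
        # and its one-off debug line register the drop.
        if self.client.exists('deferred:%s' % request.url):
            return True
        return super(RedisDeferredDupeFilter, self).request_seen(request)

It would be enabled with DUPEFILTER_CLASS = 'myproject.dupefilters.RedisDeferredDupeFilter' in settings.py (the path is hypothetical).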


ERROR: Error caught on signal handler: <...>
Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/scrapy/utils/signal.py", line 26, in send_catch_log
    *arguments, **named)
  File "/Library/Python/2.7/site-packages/scrapy/xlib/pydispatch/robustapply.py", line 57, in robustApply
    return receiver(*arguments, **named)
  File ".../deferred.py", line 22, in process_request
    raise IgnoreRequest('URL is deferred')
IgnoreRequest: URL is deferred
– infomaniac


My middleware looks like this:

def __init__(self, crawler):
    self.client = Redis()
    self.crawler = crawler
    self.crawler.signals.connect(self.process_request, signals.request_scheduled)

def process_request(self, request, spider):
    if not self.client.is_deferred(request.url):  # URL is not deferred, proceed as normal
        return None
    raise IgnoreRequest('URL is deferred')

– infomaniac

Answers


Scrapy uses the Python logging module to log things. Since what you want is really just cosmetic, you could write a logging filter to filter out the messages you don't want to see.
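
A minimal sketch of such a filter, assuming (as the OP's traceback suggests) that the noisy records are emitted by the scrapy.utils.signal logger with the IgnoreRequest attached as exception info:

import logging
from scrapy.exceptions import IgnoreRequest

class IgnoreRequestLogFilter(logging.Filter):
    """Hide "Error caught on signal handler" records caused by IgnoreRequest."""

    def filter(self, record):
        exc = record.exc_info
        if exc and isinstance(exc[1], IgnoreRequest):
            return False  # drop this record
        return True       # keep everything else

# Install it somewhere early, e.g. in the spider's __init__ or an extension:
logging.getLogger('scrapy.utils.signal').addFilter(IgnoreRequestLogFilter())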

Middleware code from the OP's comment:

def __init__(self, crawler):
    self.client = Redis()
    self.crawler = crawler
    self.crawler.signals.connect(self.process_request, signals.request_scheduled)

def process_request(self, request, spider):
    if not self.client.is_deferred(request.url):  # URL is not deferred, proceed as normal
        return None
    raise IgnoreRequest('URL is deferred')

The problem is the signal handler you attach to signals.request_scheduled: if it raises an exception, it will appear in the logs.

I believe registering process_request as a signal handler here is incorrect (or pointless).
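
(A regular downloader middleware does not need any signal wiring: once the class is listed in DOWNLOADER_MIDDLEWARES, Scrapy calls its process_request itself. The priority value below is just an example.)

# settings.py
DOWNLOADER_MIDDLEWARES = {
    'mwtest.middlewares.TestMiddleware': 543,
}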

I was able to reproduce your console errors with this similar (and not correct) test middleware, which ignores every other request it sees:

from scrapy import log, signals
from scrapy.exceptions import IgnoreRequest

class TestMiddleware(object):

    def __init__(self, crawler):
        self.counter = 0

    @classmethod
    def from_crawler(cls, crawler):
        o = cls(crawler)
        crawler.signals.connect(o.open_spider, signals.spider_opened)

        # this raises an exception always and will trigger errors in the console
        crawler.signals.connect(o.process, signals.request_scheduled)
        return o

    def open_spider(self, spider):
        spider.logger.info('TestMiddleware.open_spider()')

    def process_request(self, request, spider):
        spider.logger.info('TestMiddleware.process_request()')
        self.counter += 1
        if (self.counter % 2) == 0:
            raise IgnoreRequest("ignoring request %d" % self.counter)

    def process(self, *args, **kwargs):
        raise Exception

See what the console says when running the spider with this middleware:

2016-04-06 00:16:58 [scrapy] ERROR: Error caught on signal handler: <bound method ?.process of <mwtest.middlewares.TestMiddleware object at 0x7f83d4a73f50>> 
Traceback (most recent call last): 
    File "/home/paul/.virtualenvs/scrapy11rc3.py27/local/lib/python2.7/site-packages/scrapy/utils/signal.py", line 30, in send_catch_log 
    *arguments, **named) 
    File "/home/paul/.virtualenvs/scrapy11rc3.py27/local/lib/python2.7/site-packages/pydispatch/robustapply.py", line 55, in robustApply 
    return receiver(*arguments, **named) 
    File "/home/paul/tmp/mwtest/mwtest/middlewares.py", line 26, in process 
    raise Exception 
Exception 

The code is here

Compare with this one:

$ cat middlewares.py
from scrapy import log, signals
from scrapy.exceptions import IgnoreRequest

class TestMiddleware(object):

    def __init__(self, crawler):
        self.counter = 0

    @classmethod
    def from_crawler(cls, crawler):
        o = cls(crawler)
        crawler.signals.connect(o.open_spider, signals.spider_opened)
        return o

    def open_spider(self, spider):
        spider.logger.info('TestMiddleware.open_spider()')

    def process_request(self, request, spider):
        spider.logger.info('TestMiddleware.process_request()')
        self.counter += 1
        if (self.counter % 2) == 0:
            raise IgnoreRequest("ignoring request %d" % self.counter)

The IgnoreRequest is not printed in the logs, but you do get the exception counts in the stats at the end:

$ scrapy crawl httpbin 
2016-04-06 00:27:24 [scrapy] INFO: Scrapy 1.1.0rc3 started (bot: mwtest) 
(...) 
2016-04-06 00:27:24 [scrapy] INFO: Enabled downloader middlewares: 
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 
'scrapy.downloadermiddlewares.retry.RetryMiddleware', 
'mwtest.middlewares.TestMiddleware', 
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 
'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware', 
'scrapy.downloadermiddlewares.stats.DownloaderStats'] 
(...) 
2016-04-06 00:27:24 [scrapy] INFO: Spider opened 
2016-04-06 00:27:24 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2016-04-06 00:27:24 [httpbin] INFO: TestMiddleware.open_spider() 
2016-04-06 00:27:24 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023 
2016-04-06 00:27:24 [httpbin] INFO: TestMiddleware.process_request() 
2016-04-06 00:27:24 [httpbin] INFO: TestMiddleware.process_request() 
2016-04-06 00:27:24 [httpbin] INFO: TestMiddleware.process_request() 
2016-04-06 00:27:24 [httpbin] INFO: TestMiddleware.process_request() 
2016-04-06 00:27:24 [httpbin] INFO: TestMiddleware.process_request() 
2016-04-06 00:27:24 [scrapy] DEBUG: Crawled (200) <GET http://www.httpbin.org/user-agent> (referer: None) 
2016-04-06 00:27:25 [scrapy] DEBUG: Crawled (200) <GET http://www.httpbin.org/> (referer: None) 
2016-04-06 00:27:25 [scrapy] DEBUG: Crawled (200) <GET http://www.httpbin.org/headers> (referer: None) 
2016-04-06 00:27:25 [scrapy] INFO: Closing spider (finished) 
2016-04-06 00:27:25 [scrapy] INFO: Dumping Scrapy stats: 
{'downloader/exception_count': 2, 
'downloader/exception_type_count/scrapy.exceptions.IgnoreRequest': 2, 
'downloader/request_bytes': 665, 
'downloader/request_count': 3, 
'downloader/request_method_count/GET': 3, 
'downloader/response_bytes': 13006, 
'downloader/response_count': 3, 
'downloader/response_status_count/200': 3, 
'finish_reason': 'finished', 
'finish_time': datetime.datetime(2016, 4, 5, 22, 27, 25, 596652), 
'log_count/DEBUG': 4, 
'log_count/INFO': 13, 
'log_count/WARNING': 1, 
'response_received_count': 3, 
'scheduler/dequeued': 5, 
'scheduler/dequeued/memory': 5, 
'scheduler/enqueued': 5, 
'scheduler/enqueued/memory': 5, 
'start_time': datetime.datetime(2016, 4, 5, 22, 27, 24, 661345)} 
2016-04-06 00:27:25 [scrapy] INFO: Spider closed (finished) 