
I've just started using Scrapy. I'm trying to go through an entire database page by page, like a general search engine, and grab certain links that I need, but I get this error when I try to move to the next page. I'm not entirely sure of the right way to go to the next page and would appreciate any help! Scrapy crawl of multiple pages fails with duplicate requests being filtered.

Here is my code:

import scrapy


class TestSpider(scrapy.Spider):

    name = "PLC"
    allowed_domains = ["exploit-db.com"]

    start_urls = [
        "https://www.exploit-db.com/local/"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2] + '.html'
        # grab the exploit links and their descriptions from the table
        links = response.xpath('//tr/td[5]/a/@href').extract()
        description = response.xpath('//tr/td[5]/a[@href]/text()').extract()

        # keep only the entries whose description mentions "PLC"
        for data, link in zip(description, links):
            if "PLC" in data:
                with open(filename, "a") as f:
                    f.write(data + '\n')
                    f.write(link + '\n\n')
            else:
                pass

        # follow the first link in the pagination block
        next_page = response.xpath('//div[@class="pagination"][1]//a/@href').extract()
        if next_page:
            url = response.urljoin(next_page[0])
            yield scrapy.Request(url, callback=self.parse)

但我在控制檯上收到此錯誤

2016-06-08 16:05:21 [scrapy] INFO: Enabled item pipelines: 
[] 
2016-06-08 16:05:21 [scrapy] INFO: Spider opened 
2016-06-08 16:05:21 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2016-06-08 16:05:21 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023 
2016-06-08 16:05:22 [scrapy] DEBUG: Crawled (200) <GET https://www.exploit-db.com/robots.txt> (referer: None) 
2016-06-08 16:05:22 [scrapy] DEBUG: Crawled (200) <GET https://www.exploit-db.com/local/> (referer: None) 
2016-06-08 16:05:23 [scrapy] DEBUG: Crawled (200) <GET https://www.exploit-db.com/local/?order_by=date&order=desc&pg=2> (referer: https://www.exploit-db.com/local/) 
2016-06-08 16:05:23 [scrapy] DEBUG: Crawled (200) <GET https://www.exploit-db.com/local/?order_by=date&order=desc&pg=1> (referer: https://www.exploit-db.com/local/?order_by=date&order=desc&pg=2) 
2016-06-08 16:05:23 [scrapy] DEBUG: Filtered duplicate request: <GET https://www.exploit-db.com/local/?order_by=date&order=desc&pg=2> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates) 
2016-06-08 16:05:23 [scrapy] INFO: Closing spider (finished) 
2016-06-08 16:05:23 [scrapy] INFO: Dumping Scrapy stats: 
{'downloader/request_bytes': 1162, 
'downloader/request_count': 4, 
'downloader/request_method_count/GET': 4, 
'downloader/response_bytes': 40695, 
'downloader/response_count': 4, 
'downloader/response_status_count/200': 4, 
'dupefilter/filtered': 1, 
'finish_reason': 'finished', 
'finish_time': datetime.datetime(2016, 6, 8, 8, 5, 23, 514161), 
'log_count/DEBUG': 6, 
'log_count/INFO': 7, 
'request_depth_max': 3, 
'response_received_count': 4, 
'scheduler/dequeued': 3, 
'scheduler/dequeued/memory': 3, 
'scheduler/enqueued': 3, 
'scheduler/enqueued/memory': 3, 
'start_time': datetime.datetime(2016, 6, 8, 8, 5, 21, 561678)} 
2016-06-08 16:05:23 [scrapy] INFO: Spider closed (finished) 

It never retrieves the next page, and I'd really like to understand why. T_T

Answer

You can use the parameter dont_filter=True on your request:

if next_page: 
    url = response.urljoin(next_page[0]) 
    yield scrapy.Request(url, callback=self.parse, dont_filter=True) 
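
As a side note, the dupefilter log line above points at DUPEFILTER_DEBUG. If you want to see every request the filter drops while you are debugging, a minimal sketch (added to your project's settings.py, assuming the default Scrapy dupefilter) would be:

# settings.py -- assumption: default RFPDupeFilter is in use
DUPEFILTER_DEBUG = True  # log every filtered duplicate request, not just the first one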

But then you will run into an infinite loop, because it looks like your XPath retrieves the same link twice (check the pager on each page, since the second element inside .pagination may not always be the "next page" link).

next_page = response.xpath('//div[@class="pagination"][1]//a/@href').extract() 

Also, what if they start using Bootstrap or something similar and add classes like btn btn-default to those links?

I would suggest using

selector.css(".pagination").xpath('.//a/@href') 

instead.
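
For illustration only, here is a sketch of what the pagination handling could look like with that selector. It assumes the last link inside .pagination points to the next page, which would need to be checked against the site's actual markup:

    def parse(self, response):
        # ... item extraction as before ...

        # Assumption (not verified against the live HTML): the last <a> inside
        # .pagination is the "next page" link; the first one may be "previous".
        pagination = response.css(".pagination").xpath('.//a/@href').extract()
        if pagination:
            next_url = response.urljoin(pagination[-1])
            # With the default dupefilter left on, a link back to an already
            # visited page is simply dropped instead of looping forever.
            yield scrapy.Request(next_url, callback=self.parse)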