
I am learning Scrapy and want to scrape a few items from this page: https://www.gumtree.com/search?sort=date&search_category=flats-houses&q=box&search_location=Vale+of+Glamorgan, but Scrapy reports "Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)".

To get around the robots.txt policy and the like, I saved the page to my hard drive and tested my XPaths against it with scrapy shell. They seem to work as expected. However, when I run the spider with the scrapy crawl basic command (as the book I am reading suggests), I get the following output:

2017-09-27 12:05:02 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: properties) 
2017-09-27 12:05:02 [scrapy.utils.log] INFO: Overridden settings: {'USER_AGENT': 'Mozila/5.0', 'SPIDER_MODULES': ['properties.spiders'], 'BOT_NAME': 'properties', 'NEWSPIDER_MODULE': 'properties.spiders'} 
2017-09-27 12:05:03 [scrapy.middleware] INFO: Enabled extensions: 
['scrapy.extensions.logstats.LogStats', 
'scrapy.extensions.memusage.MemoryUsage', 
'scrapy.extensions.telnet.TelnetConsole', 
'scrapy.extensions.corestats.CoreStats'] 
2017-09-27 12:05:03 [scrapy.middleware] INFO: Enabled downloader middlewares: 
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware', 
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware', 
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware', 
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware', 
'scrapy.downloadermiddlewares.retry.RetryMiddleware', 
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware', 
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware', 
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware', 
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware', 
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware', 
'scrapy.downloadermiddlewares.stats.DownloaderStats'] 
2017-09-27 12:05:03 [scrapy.middleware] INFO: Enabled spider middlewares: 
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware', 
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware', 
'scrapy.spidermiddlewares.referer.RefererMiddleware', 
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware', 
'scrapy.spidermiddlewares.depth.DepthMiddleware'] 
2017-09-27 12:05:03 [scrapy.middleware] INFO: Enabled item pipelines: 
[] 
2017-09-27 12:05:03 [scrapy.core.engine] INFO: Spider opened 
2017-09-27 12:05:03 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 
2017-09-27 12:05:03 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6026 
2017-09-27 12:05:03 [scrapy.core.engine] DEBUG: Crawled (200) <GET file:///home/albert/Documents/programming/python/scrapy/properties/properties/tests/test_page.html> (referer: None) 
2017-09-27 12:05:04 [basic] DEBUG: title: 
2017-09-27 12:05:04 [basic] DEBUG: price: 
2017-09-27 12:05:04 [basic] DEBUG: description: 
2017-09-27 12:05:04 [basic] DEBUG: address: 
2017-09-27 12:05:04 [basic] DEBUG: image_urls: 
2017-09-27 12:05:04 [scrapy.core.engine] INFO: Closing spider (finished) 
2017-09-27 12:05:04 [scrapy.statscollectors] INFO: Dumping Scrapy stats: 
{'downloader/request_bytes': 262, 
'downloader/request_count': 1, 
'downloader/request_method_count/GET': 1, 
'downloader/response_bytes': 270547, 
'downloader/response_count': 1, 
'downloader/response_status_count/200': 1, 
'finish_reason': 'finished', 
'finish_time': datetime.datetime(2017, 9, 27, 9, 5, 4, 91741), 
'log_count/DEBUG': 7, 
'log_count/INFO': 7, 
'memusage/max': 50790400, 
'memusage/startup': 50790400, 
'response_received_count': 1, 
'scheduler/dequeued': 1, 
'scheduler/dequeued/memory': 1, 
'scheduler/enqueued': 1, 
'scheduler/enqueued/memory': 1, 
'start_time': datetime.datetime(2017, 9, 27, 9, 5, 3, 718976)} 
2017-09-27 12:05:04 [scrapy.core.engine] INFO: Spider closed (finished) 

Here is my items.py:

from scrapy.item import Item, Field 


class PropertiesItem(Item): 
    title = Field() 
    price = Field() 
    description = Field() 
    address = Field() 
    image_urls = Field() 

    images = Field() 
    location = Field() 

    url = Field() 
    project = Field() 
    spider = Field() 
    server = Field() 
    date = Field() 
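
As background (a minimal sketch, not part of the original post): a Scrapy Item behaves like a dict restricted to its declared fields, which is easy to verify in a Python session:

from properties.items import PropertiesItem

item = PropertiesItem()
item['title'] = 'Some listing'  # fine: 'title' is a declared Field
print(item['title'])            # -> Some listing
item['rooms'] = 3               # raises KeyError: 'rooms' is not declared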

And here is the spider, basic.py:

import scrapy


class BasicSpider(scrapy.Spider):
    name = 'basic'
    start_urls = ['file:///home/albert/Documents/programming/python/scrapy/properties/properties/site/test_page.html']

    def parse(self, response):
        self.log('title: '.format(response.xpath(
            "//h2[@class='listing-title' and not(span)]/text()").extract()))
        self.log('price: '.format(response.xpath(
            "//meta[@itemprop='price']/@content").extract()))
        self.log("description: ".format(response.xpath(
            "//p[@itemprop='description' and not(span)]/text()").extract()))
        self.log('address: '.format(response.xpath(
            "//span[@class='truncate-line']/text()[2]").re('\|(\s+\w+.+)')))
        self.log('image_urls: '.format(response.xpath(
            "//noscript/img/@src").extract()))

The XPaths are a bit clumsy, but they do work (see the scrapy shell check below). Still, no items are collected, and I would like to know why.
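
For reference (an illustrative reconstruction, not from the original post), the scrapy shell check looked roughly like this; a non-empty list confirms the XPath matches the saved page. When crawling the live site instead, the ROBOTSTXT_OBEY setting in settings.py is what controls whether Scrapy honours robots.txt:

$ scrapy shell file:///home/albert/Documents/programming/python/scrapy/properties/properties/tests/test_page.html
>>> response.xpath("//h2[@class='listing-title' and not(span)]/text()").extract()
# a non-empty list of titles here confirms the XPath matches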


Add 'print(response.body)' and 'print(type(response))' in the parse function to see whether you are getting an HtmlResponse and the correct body with all the expected HTML? – Tarun Lalwani


@TarunLalwani Let me check that. But I did try this saved page in scrapy shell and ran the XPaths there, and they worked fine, which I took as a sign that the HTML body is correct. – Albert


@TarunLalwani 'print(type(response))' yields '' and 'print(response.body)' prints the body of the HTML document. At first glance everything seems fine. – Albert

Answers


Your problem is that you never tell format() where to insert its output in the string. You need to change 'title: ' to 'title: {}' so that format() actually inserts the value. Also, use extract_first() instead of extract(), so you get a single string as output instead of a list.
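
For a quick illustration of the bug (a standalone Python snippet, not from the original post), str.format() simply ignores arguments that have no matching placeholder:

>>> 'title: '.format('Flat in Box')
'title: '
>>> 'title: {}'.format('Flat in Box')
'title: Flat in Box'

With the placeholders added and extract_first() in place, the spider becomes: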

import scrapy


class BasicSpider(scrapy.Spider):
    name = 'basic'
    start_urls = ['file:///home/albert/Documents/programming/python/scrapy/properties/properties/site/test_page.html']

    def parse(self, response):
        self.log('title: {}'.format(response.xpath(
            "//h2[@class='listing-title' and not(span)]/text()").extract_first()))
        self.log('price: {}'.format(response.xpath(
            "//meta[@itemprop='price']/@content").extract_first()))
        self.log("description: {}".format(response.xpath(
            "//p[@itemprop='description' and not(span)]/text()").extract_first()))
        self.log('address: {}'.format(response.xpath(
            "//span[@class='truncate-line']/text()[2]").re('\|(\s+\w+.+)')))
        self.log('image_urls: {}'.format(response.xpath(
            "//noscript/img/@src").extract_first()))

Ooooh my... I didn't notice that... I can't believe I missed it... I feel so stupid... Thanks! Yes, now it works the way it should! – Albert


I have not tried Scrapy against local files, but if you want to scrape something, you must first initialize the Item, then assign to it like a dict in Python, and finally yield the item so it reaches the pipeline:

import scrapy
from properties.items import PropertiesItem


class BasicSpider(scrapy.Spider):
    name = 'basic'
    start_urls = ['file:///home/albert/Documents/programming/python/scrapy/properties/properties/site/test_page.html']

    def parse(self, response):
        item = PropertiesItem()  # initialize the Item
        # assign each field from its own XPath (expressions taken from the
        # question; the original answer repeated the title XPath as a
        # placeholder for every field)
        item['title'] = response.xpath(
            "//h2[@class='listing-title' and not(span)]/text()").extract()
        item['price'] = response.xpath(
            "//meta[@itemprop='price']/@content").extract()
        item['description'] = response.xpath(
            "//p[@itemprop='description' and not(span)]/text()").extract()
        item['address'] = response.xpath(
            "//span[@class='truncate-line']/text()[2]").re('\|(\s+\w+.+)')
        item['image_urls'] = response.xpath(
            "//noscript/img/@src").extract()
        # yield the item so pipelines and exporters can pick it up
        yield item
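
As a usage note (standard Scrapy CLI, shown for illustration): once items are yielded, they can also be written out with the feed exporter, e.g.

$ scrapy crawl basic -o items.json

which saves every yielded PropertiesItem to items.json.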

Well, this code makes more sense than what I wrote from the tutorial. Actually, they scraped some other website, and I just repeated their code with everything needed to scrape the site I wanted. I don't understand how their example worked that way... Thanks for the clarification! By the way, it should be 'from properties.items import PropertiesItem'. – Albert


Yes, it should be 'from properties.items import PropertiesItem'; I have edited it accordingly. – zhongjiajie