
Error running scrapyd project

I deployed my project (sogou) successfully, but when I run this:

curl http://localhost:6800/schedule.json -d project=sogou -d spider=sogou 

it fails:

2017-02-13 10:44:51 [scrapy] INFO: Scrapy 1.2.1 started (bot: sogou) 

2017-02-13 10:44:51 [scrapy] INFO: Overridden settings: 
{'NEWSPIDER_MODULE': 'sogou.spiders', 'CONCURRENT_REQUESTS': 5, 
'SPIDER_MODULES': ['sogou.spiders'], 'RETRY_HTTP_CODES': [500, 502, 
503, 504, 400, 403, 408], 'BOT_NAME': 'sogou', 'DOWNLOAD_TIMEOUT': 10, 
'RETRY_TIMES': 10, 'LOG_FILE': 
'logs/sogou/sogou/63a0bbacf19611e69eea240a644f1626.log'} 

2017-02-13 10:44:51 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']

2017-02-13 10:44:51 [twisted] CRITICAL: Unhandled error in Deferred:

2017-02-13 10:44:51 [twisted] CRITICAL:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1299, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 90, in crawl
    six.reraise(*exc_info)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 71, in crawl
    self.spider = self._create_spider(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 94, in _create_spider
    return self.spidercls.from_crawler(self, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/spiders/__init__.py", line 50, in from_crawler
    spider = cls(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument '_job'
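
For what it's worth, the same TypeError shows up without scrapyd if a spider argument is passed on the command line, since scrapy crawl -a hands keyword arguments to the spider constructor the same way (here _job just stands in for the job id scrapyd would send):

scrapy crawl sogou -a _job=test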

The log says it failed due to 'CRITICAL: Unhandled error in Deferred'. It also mentions a few source file names. Have you tried looking into them to find the root cause? If so, what did you find? Without more information, don't expect good answers.

Answer


It's hard to pinpoint the problem without seeing the source code, but most likely you override your spider's __init__ so that it no longer accepts arbitrary **kwargs, while scrapyd passes the job identifier to the spider as the _job argument. In that case you should add **kwargs to the spider's constructor and forward it to the base class, like this:

class Spider(scrapy.Spider):
    name = 'spider'

    def __init__(self, param1, param2, **kwargs):
        # forward the extra kwargs (such as scrapyd's _job) to the base class
        super(Spider, self).__init__(**kwargs)
        ...
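
For completeness, a minimal sketch of how the fixed spider might look for this project (the spider name matches the log; the start URL and the category argument are illustrative assumptions, not taken from the question):

import scrapy

class SogouSpider(scrapy.Spider):
    name = 'sogou'
    start_urls = ['https://www.sogou.com/']  # assumed start URL, for illustration

    def __init__(self, category=None, **kwargs):
        # scrapy.Spider.__init__ stores unrecognized kwargs (including the
        # _job id that scrapyd sends) as attributes on the spider instance.
        super(SogouSpider, self).__init__(**kwargs)
        self.category = category  # hypothetical spider argument

    def parse(self, response):
        self.logger.info('scrapyd job id: %s', getattr(self, '_job', 'n/a'))

With **kwargs forwarded, the schedule.json call above should start the job, and the job id becomes available as self._job inside the spider.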