
I am running the example from the Scrapy tutorial on Python 2.7.8. I used pip to install Scrapy and the other required packages. I believe I followed the tutorial correctly, but I cannot run the spider. I have read previous posts from others who ran into the same problem, but I still cannot resolve it. The error is an import error when running the Scrapy tutorial example (scrapy crawl dmoz / scrapy.core.downloader.handlers.s3.S3DownloadHandler).

I would appreciate any help.

C:\tutorial>scrapy crawl dmoz 
2014-10-22 02:14:56-0400 [scrapy] INFO: Scrapy 0.24.4 started (bot: tutorial) 
2014-10-22 02:14:56-0400 [scrapy] INFO: Optional features available: ssl, http11 
2014-10-22 02:14:56-0400 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
2014-10-22 02:14:58-0400 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
Traceback (most recent call last): 
    File "C:\Python27\lib\runpy.py", line 162, in _run_module_as_main 
    "__main__", fname, loader, pkg_name) 
    File "C:\Python27\lib\runpy.py", line 72, in _run_code 
    exec code in run_globals 
    File "C:\Python27\Scripts\scrapy.exe\__main__.py", line 9, in <module> 
    File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 143, in execute 
    _run_print_help(parser, _run_command, cmd, args, opts) 
    File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 89, in _run_print_help 
    func(*a, **kw) 
    File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 150, in _run_command 
    cmd.run(args, opts) 
    File "C:\Python27\lib\site-packages\scrapy\commands\crawl.py", line 60, in run 
    self.crawler_process.start() 
    File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 92, in start 
    if self.start_crawling(): 
    File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 124, in start_crawling 
    return self._start_crawler() is not None 
    File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 139, in _start_crawler 
    crawler.configure() 
    File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 47, in configure 
    self.engine = ExecutionEngine(self, self._spider_closed) 
    File "C:\Python27\lib\site-packages\scrapy\core\engine.py", line 64, in __init__ 
    self.downloader = downloader_cls(crawler) 
    File "C:\Python27\lib\site-packages\scrapy\core\downloader\__init__.py", line 73, in __init__ 
    self.handlers = DownloadHandlers(crawler) 
    File "C:\Python27\lib\site-packages\scrapy\core\downloader\handlers\__init__.py", line 22, in __in 
it__ 
    cls = load_object(clspath) 
    File "C:\Python27\lib\site-packages\scrapy\utils\misc.py", line 42, in load_object 
    raise ImportError("Error loading object '%s': %s" % (path, e)) 
ImportError: Error loading object 'scrapy.core.downloader.handlers.s3.S3DownloadHandler': No module named win32api 
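
To pin down which import is actually failing, here is a minimal diagnostic sketch (my own addition, not part of the tutorial) that tries the two modules named in the traceback from the same Python 2.7 interpreter that Scrapy uses:

import sys
print(sys.version)  # should report 2.7.8 in this setup

# win32api is provided by the pywin32 package; the traceback says it cannot be found
try:
    import win32api
    print("win32api imported OK")
except ImportError as err:
    print("win32api missing: %s" % err)

# the download handler Scrapy was loading when the ImportError was raised
try:
    from scrapy.core.downloader.handlers.s3 import S3DownloadHandler
    print("S3DownloadHandler imported OK")
except ImportError as err:
    print("S3DownloadHandler failed: %s" % err)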

Since you are running it on Windows, did you install OpenSSL? – 2014-10-22 06:50:47


@TanveerAlam I believe pip installed OpenSSL automatically when I installed Scrapy with pip. When I open site-packages in my Python directory, I see an OpenSSL folder. – user2990246 2014-10-23 03:36:52
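
Seeing an OpenSSL folder under site-packages only shows that pyOpenSSL is present; a quick check like the sketch below (my own, assuming the package imports as OpenSSL) confirms it actually loads, and also shows which OpenSSL build Python itself was linked against:

import OpenSSL
print(OpenSSL.__version__)  # pyOpenSSL version, e.g. 0.14

import ssl
print(ssl.OPENSSL_VERSION)  # the OpenSSL library Python was built with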

Answer


Per the Scrapy documentation, you should install OpenSSL before installing Scrapy, using the steps below (a quick way to verify the PATH step follows the list):

install OpenSSL by following these steps: 
1. go to Win32 OpenSSL page 
2. download Visual C++ 2008 redistributables for your Windows and architecture 
3. download OpenSSL for your Windows and architecture (the regular version, not the light one) 
4. add the c:\openssl-win32\bin (or similar) directory to your PATH, the same way you added python27 in the first step 
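
As a quick way to confirm step 4 took effect, here is a small sketch (my own addition, assuming the default c:\openssl-win32 install location) that lists any OpenSSL-looking entries on the PATH seen by a freshly started Python process:

import os

# every PATH entry that looks like an OpenSSL directory
entries = os.environ.get("PATH", "").split(os.pathsep)
openssl_dirs = [p for p in entries if "openssl" in p.lower()]
print(openssl_dirs or "no OpenSSL directory found on PATH")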

See the platform-specific installation steps: Scrapy for Windows.