Crawlera not working with Scrapy, downloader not functioning

Asked: 2013-12-23 14:45:27

Tags: python python-2.7 scrapy web-crawler

I'm trying to implement the Common Practices from the Scrapy docs, so I'm trying out the crawlera library.

I installed and set up Crawlera following the instructions here. (I can see scrapylib on my system via help('modules').)
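As a sanity check beyond help('modules'), the exact middleware class referenced in the settings below should also import cleanly; a minimal check, assuming a standard scrapylib install:

    # Verify the exact class that settings.py points at is importable.
    import scrapylib.crawlera
    print(scrapylib.crawlera.CrawleraMiddleware)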

Here is my settings.py for Scrapy:

BOT_NAME = 'cnn'

SPIDER_MODULES = ['cnn.spiders']
NEWSPIDER_MODULE = 'cnn.spiders'
COOKIES_ENABLED = False
DOWNLOADER_MIDDLEWARES = {
    'scrapylib.crawlera.CrawleraMiddleware': 600,
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': None,
}
CRAWLERA_ENABLED = True
CRAWLERA_USER = 'abc'
CRAWLERA_PASS = 'abc@abc'  
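For reference, Crawlera's proxy authentication is HTTP Basic built from CRAWLERA_USER and CRAWLERA_PASS; a sketch of what the resulting header looks like for the placeholder values above (assuming standard Basic encoding, and that the middleware passes the two values through unchanged):

    import base64

    # The user and password from settings.py are joined with ':' and
    # base64-encoded into the Proxy-Authorization header sent to the proxy.
    user, password = 'abc', 'abc@abc'  # placeholder credentials from above
    token = base64.b64encode(('%s:%s' % (user, password)).encode()).decode()
    print('Proxy-Authorization: Basic ' + token)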

But when I run the spider, nothing happens.

I can see in the Scrapy log that CrawleraMiddleware is loaded:

2013-12-23 20:12:54+0530 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CrawleraMiddleware, ChunkedTransferMiddleware, DownloaderStats  

Why isn't it crawling?

Here is the log with Crawlera enabled:
2013-12-23 21:58:14+0530 [scrapy] INFO: Scrapy 0.20.2 started (bot: cnn)
2013-12-23 21:58:14+0530 [scrapy] DEBUG: Optional features available: ssl, http11
2013-12-23 21:58:14+0530 [scrapy] DEBUG: Overridden settings: {'NEWSPIDER_MODULE': 'cnn.spiders', 'FEED_URI': 'news.json', 'MEMDEBUG_ENABLED': True, 'RETRY_ENABLED': False, 'SPIDER_MODULES': ['cnn.spiders'], 'BOT_NAME': 'cnn', 'DOWNLOAD_TIMEOUT': 240, 'COOKIES_ENABLED': False, 'FEED_FORMAT': 'json', 'MEMUSAGE_REPORT': True, 'REDIRECT_ENABLED': False, 'MEMUSAGE_ENABLED': True}
2013-12-23 21:58:14+0530 [scrapy] DEBUG: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, MemoryDebugger, SpiderState
2013-12-23 21:58:14+0530 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, CrawleraMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-12-23 21:58:14+0530 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-12-23 21:58:14+0530 [scrapy] DEBUG: Enabled item pipelines: 
2013-12-23 21:58:14+0530 [cnn] INFO: Spider opened
2013-12-23 21:58:14+0530 [cnn] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-12-23 21:58:14+0530 [cnn] INFO: Using crawlera at http://proxy.crawlera.com:8010 (user: xmpirate)
2013-12-23 21:58:14+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-12-23 21:58:14+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-12-23 21:58:15+0530 [cnn] DEBUG: Crawled (407) <GET http://www.example1.com> (referer: None)
2013-12-23 21:58:15+0530 [cnn] DEBUG: Crawled (407) <GET http://www.example2.com> (referer: None)
2013-12-23 21:58:15+0530 [cnn] INFO: Closing spider (finished)
2013-12-23 21:58:15+0530 [cnn] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 464,
     'downloader/request_count': 2,
     'downloader/request_method_count/GET': 2,
     'downloader/response_bytes': 364,
     'downloader/response_count': 2,
     'downloader/response_status_count/407': 2,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2013, 12, 23, 16, 28, 15, 679961),
     'log_count/DEBUG': 8,
     'log_count/INFO': 4,
     'memusage/max': 30236737536,
     'memusage/startup': 30236737536,
     'response_received_count': 2,
     'scheduler/dequeued': 2,
     'scheduler/dequeued/memory': 2,
     'scheduler/enqueued': 2,
     'scheduler/enqueued/memory': 2,
     'start_time': datetime.datetime(2013, 12, 23, 16, 28, 14, 853975)}
2013-12-23 21:58:15+0530 [cnn] INFO: Spider closed (finished)  

And here is the log with Crawlera disabled:

2013-12-23 22:00:45+0530 [scrapy] INFO: Scrapy 0.20.2 started (bot: cnn)
2013-12-23 22:00:45+0530 [scrapy] DEBUG: Optional features available: ssl, http11
2013-12-23 22:00:45+0530 [scrapy] DEBUG: Overridden settings: {'NEWSPIDER_MODULE': 'cnn.spiders', 'FEED_URI': 'news.json', 'MEMDEBUG_ENABLED': True, 'RETRY_ENABLED': False, 'SPIDER_MODULES': ['cnn.spiders'], 'BOT_NAME': 'cnn', 'DOWNLOAD_TIMEOUT': 240, 'COOKIES_ENABLED': False, 'FEED_FORMAT': 'json', 'MEMUSAGE_REPORT': True, 'REDIRECT_ENABLED': False, 'MEMUSAGE_ENABLED': True}
2013-12-23 22:00:46+0530 [scrapy] DEBUG: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, MemoryDebugger, SpiderState
2013-12-23 22:00:46+0530 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, CrawleraMiddleware, ChunkedTransferMiddleware, DownloaderStats
2013-12-23 22:00:46+0530 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2013-12-23 22:00:46+0530 [scrapy] DEBUG: Enabled item pipelines: 
2013-12-23 22:00:46+0530 [cnn] INFO: Spider opened
2013-12-23 22:00:46+0530 [cnn] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2013-12-23 22:00:46+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2013-12-23 22:00:46+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2013-12-23 22:00:46+0530 [cnn] DEBUG: Crawled (200) <GET http://www.example1.com> (referer: None)
2013-12-23 22:00:47+0530 [cnn] DEBUG: Crawled (200) <GET http://www.example2.com> (referer: None)
**Pages are crawled here**
2013-12-23 22:01:00+0530 [cnn] INFO: Closing spider (finished)
2013-12-23 22:01:00+0530 [cnn] INFO: Stored json feed (7 items) in: news.json
2013-12-23 22:01:00+0530 [cnn] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 10151,
     'downloader/request_count': 36,
     'downloader/request_method_count/GET': 36,
     'downloader/response_bytes': 762336,
     'downloader/response_count': 36,
     'downloader/response_status_count/200': 35,
     'downloader/response_status_count/404': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2013, 12, 23, 16, 31, 0, 376888),
     'item_scraped_count': 7,
     'log_count/DEBUG': 49,
     'log_count/INFO': 4,
     'memusage/max': 30157045760,
     'memusage/startup': 30157045760,
     'request_depth_max': 1,
     'response_received_count': 36,
     'scheduler/dequeued': 36,
     'scheduler/dequeued/memory': 36,
     'scheduler/enqueued': 36,
     'scheduler/enqueued/memory': 36,
     'start_time': datetime.datetime(2013, 12, 23, 16, 30, 46, 61019)}
2013-12-23 22:01:00+0530 [cnn] INFO: Spider closed (finished)

1 answer:

Answer 0 (score: 0):

A 407 response code from Crawlera is an authentication error: there may be a typo in the APIKEY, or you may not be using the right credentials.
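One way to rule out a settings problem is to test the credentials against the proxy outside of Scrapy; a minimal sketch using the requests library, with the proxy endpoint from the log above and the placeholder credentials from settings.py (note that the '@' in the password has to be percent-encoded as %40 when it is embedded in a proxy URL):

    import requests

    # Proxy endpoint taken from the Scrapy log (http://proxy.crawlera.com:8010);
    # user/pass are the placeholders from settings.py, with '@' escaped as %40.
    proxies = {'http': 'http://abc:abc%40abc@proxy.crawlera.com:8010'}

    resp = requests.get('http://www.example1.com', proxies=proxies)
    print(resp.status_code)  # 200 = credentials accepted, 407 = rejected

A 407 here as well points at the credentials themselves rather than the Scrapy configuration.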

Source
