Getting a "Too many requests" error when scraping a particular website with Scrapy

Date: 2017-11-03 10:23:38

Tags: python web-scraping scrapy python-requests

I wrote a spider to fetch event details from http://allevents.in. Whenever I try to scrape it, I get the following response body:

Too many requests, please try after some time or report this problem at contact@allevents.in

I also tried using the Scrapy shell:

 scrapy shell 'http://allevents.in/new%20delhi/all'

But I still get the same reply in response.body. I have tried other websites such as Amazon, and they work fine. Furthermore, the URL above can be fetched successfully with requests as well as with urllib.urlopen().
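For reference, this is roughly the check I mean; the standard-library call is written here as urllib.request.urlopen, the Python 3 spelling of urllib.urlopen():

import requests
import urllib.request

url = "http://allevents.in/new%20delhi/all"

# requests returns the normal page here, not the 'Too many requests' text.
resp = requests.get(url)
print(resp.status_code, len(resp.text))

# Same check with the standard library.
with urllib.request.urlopen(url) as f:
    print(len(f.read()))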

Here is my settings.py file:

# -*- coding: utf-8 -*-

# Scrapy settings for tutorial project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'tutorial'

SPIDER_MODULES = ['tutorial.spiders']
NEWSPIDER_MODULE = 'tutorial.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'tutorial (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 1

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 5
# The download delay setting will honor only one of:
CONCURRENT_REQUESTS_PER_DOMAIN = 1
CONCURRENT_REQUESTS_PER_IP = 1

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False

# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
# }

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'tutorial.middlewares.TutorialSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
# #    'tutorial.middlewares.MyCustomDownloaderMiddleware': 543,
#      'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': None,
#      # 'tutorial.middlewares.ProxyMiddleware': 100,
# }

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'tutorial.pipelines.TutorialPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
AUTOTHROTTLE_ENABLED = True
# The initial download delay
AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

I am a beginner with Scrapy. Please help.

4 answers:

Answer 0 (score: 4):

Scrapy uses multiple concurrent requests (8 per domain by default) to scrape the website you specify. It seems that allevents.in does not like being hit that frequently.

Most likely, the solution is to set one of the following configuration options:

  • CONCURRENT_REQUESTS_PER_DOMAIN (defaults to 8; try a smaller number)
  • CONCURRENT_REQUESTS_PER_IP (defaults to 0; if set to a non-zero value it overrides the previous option)

Alternatively, you can also use the AutoThrottle extension, as in the sketch below.
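A minimal sketch of applying these options per spider, assuming a spider for the listing URL from the question; the spider name, the 10-second delay, and the use of custom_settings rather than settings.py are illustrative assumptions:

import scrapy

class AllEventsSpider(scrapy.Spider):
    # Hypothetical spider name; the start URL is taken from the question.
    name = "allevents"
    start_urls = ["http://allevents.in/new%20delhi/all"]

    # Per-spider overrides; Scrapy applies these on top of settings.py.
    custom_settings = {
        "CONCURRENT_REQUESTS_PER_DOMAIN": 1,
        "CONCURRENT_REQUESTS_PER_IP": 1,   # overrides the per-domain limit when non-zero
        "DOWNLOAD_DELAY": 10,              # seconds between requests to the same site
        "AUTOTHROTTLE_ENABLED": True,      # let Scrapy back off automatically
    }

    def parse(self, response):
        # Placeholder callback; real extraction logic goes here.
        self.logger.info("Fetched %s (%d bytes)", response.url, len(response.body))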

Answer 1 (score: 1):

You can try setting CONCURRENT_REQUESTS = 1 in settings.py and increasing it gradually if you see that it works. If you still receive the same warning, try setting a higher DOWNLOAD_DELAY.
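A minimal settings.py sketch of this suggestion; the concrete delay value is only an assumption to tune:

# settings.py
CONCURRENT_REQUESTS = 1    # start with a single request in flight
DOWNLOAD_DELAY = 10        # seconds; increase this further if the error persists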

Answer 2 (score: 0):

Try putting a delay between requests so that you do not overload the website. Too many requests in such a short time can lead the site to block them.

import time
time.sleep(x)  # x is the number of seconds to wait
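Note that a bare time.sleep() inside a Scrapy callback blocks the whole engine, so within Scrapy the DOWNLOAD_DELAY setting is the usual way to space requests out. As a standalone sketch of this idea using the requests library mentioned in the question (the URL list and the 10-second pause are assumptions):

import time
import requests

# Hypothetical list of listing pages; only the first URL comes from the question.
urls = [
    "http://allevents.in/new%20delhi/all",
]

for url in urls:
    resp = requests.get(url)
    print(url, resp.status_code, len(resp.text))
    time.sleep(10)  # wait between requests so the site is not overloaded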

Answer 3 (score: 0):

Use scrapy-random-proxies instead of applying AutoThrottle. There is no fun in limiting your crawler when you can reach your target at a much higher speed. Trust me, if you use hundreds of proxies they will never know where you are (more is always better).
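The exact configuration of scrapy-random-proxies is not shown in this answer; as a generic sketch of the same idea, a request can be sent through a proxy with Scrapy's built-in HttpProxyMiddleware by setting the 'proxy' key in request.meta (the proxy addresses below are placeholders):

import random
import scrapy

# Placeholder proxy addresses; a real pool would come from a proxy provider.
PROXIES = [
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
]

class ProxiedSpider(scrapy.Spider):
    name = "allevents_proxied"  # hypothetical spider name

    def start_requests(self):
        url = "http://allevents.in/new%20delhi/all"
        # The built-in HttpProxyMiddleware honours the 'proxy' meta key.
        yield scrapy.Request(url, meta={"proxy": random.choice(PROXIES)})

    def parse(self, response):
        self.logger.info("Fetched %s via proxy", response.url)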
