How to set the referer URL in Scrapy

Date: 2012-10-25 13:36:41

Tags: screen-scraping scrapy

I need to set the referer URL before scraping a site. The site validates the referer URL, so if the referer is invalid it won't let me log in.

Can someone tell me how to do that in Scrapy?

4 Answers:

Answer 0 (score: 11)

If you want to change the referer in your spider's requests, you can set DEFAULT_REQUEST_HEADERS in the settings.py file.

Example:

DEFAULT_REQUEST_HEADERS = {
    'Referer': 'http://www.google.com'
}
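As a fuller sketch of that settings.py approach (the Accept and Accept-Language values below are illustrative, modeled on Scrapy's stock defaults, and the Referer value is a placeholder):

```python
# settings.py (fragment) -- default headers sent with every request the spider makes.
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'Referer': 'http://www.google.com',  # the referer the target site should see
}
```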

Answer 1 (score: 10)

You should do exactly what @warwaruk said. Below is my elaboration of it in an example crawl spider:

from scrapy.contrib.spiders import CrawlSpider
from scrapy.http import Request

class MySpider(CrawlSpider):
    name = "myspider"
    allowed_domains = ["example.com"]
    start_urls = [
        'http://example.com/foo',
        'http://example.com/bar',
        'http://example.com/baz',
    ]
    rules = [(...)]

    def start_requests(self):
        requests = []
        # Attach the Referer header to every start request.
        for item in self.start_urls:
            requests.append(Request(url=item, headers={'Referer': 'http://www.example.com/'}))
        return requests

    def parse_me(self, response):
        (...)

This should produce log lines like the following in your terminal:

(...)
[myspider] DEBUG: Crawled (200) <GET http://example.com/foo> (referer: http://www.example.com/)
(...)
[myspider] DEBUG: Crawled (200) <GET http://example.com/bar> (referer: http://www.example.com/)
(...)
[myspider] DEBUG: Crawled (200) <GET http://example.com/baz> (referer: http://www.example.com/)
(...)

The same works with a BaseSpider. In the end, start_requests is a BaseSpider method that CrawlSpider inherits.

The documentation explains more options that can be set on a Request besides the headers, such as: cookies, a callback function, request priority, etc.

Answer 2 (score: 4)

Simply set the Referer URL in the request headers:

class scrapy.http.Request(url[, method='GET', body, headers, ...

headers (dict) – the headers of this request. The dict values can be strings (for single valued headers) or lists (for multi-valued headers).

Example:

return Request(url=your_url, headers={'Referer': 'http://your_referer_url'})

Answer 3 (score: 3)

Override BaseSpider.start_requests and create your custom Requests there, passing them your referer header.