Scrapy SgmlLinkExtractor referer none

Date: 2013-02-28 06:33:53

Tags: python scrapy

I am trying to get my spider to work. This is the code in my spider:

start_urls = ["http://www.khmer24.com/"]

rules = (
    Rule(SgmlLinkExtractor(allow=(r'ad/\w+/67-\d+\.html',)),
         callback='parse_items'),
)

An example URL looks like this: http://www.khmer24.com/ad/honda-click-2012-98/67-258149.html

I want to keep the "ad" and "67-" parts of the URL.
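
As a quick sanity check, the allow pattern can be tried against that example URL with a plain regex search outside of Scrapy. This is only an illustration, not part of the spider; the link extractor applies allow patterns to the extracted link URLs in essentially this way:

import re

# Standalone check: search the example URL with the same allow pattern.
pattern = re.compile(r'ad/\w+/67-\d+\.html')
url = "http://www.khmer24.com/ad/honda-click-2012-98/67-258149.html"

# Prints the match object if the pattern is found in the URL, or None.
print(pattern.search(url))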

The output of scrapy crawl khmer24 is:

Crawled (200) <GET http://www.khmer24.com/> (referer: None)

I can't figure out why. Here is my full code:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector


class MySpider(CrawlSpider):
    name = "khmer24"
    allowed_domains = ["khmer24.com"]
    start_urls = ["http://www.khmer24.com/"]   

    rules = (
        Rule(SgmlLinkExtractor(allow=(r'ad/\w+/67-\d+\.html',)),
             callback='parse_items'),
    )

    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select("//div[@class='innerbox']/h1/text()")
        return(titles)

1 answer:

Answer 0 (score: 1):

So your question is, "Why is the referer None?"

This line in the log output

Crawled (200) <GET http://www.khmer24.com/> (referer: None)

comes from the start_urls, not from the link extractor. By default, the requests generated from start_urls do not carry a Referer header. You can add the header manually by issuing those requests yourself.
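
A minimal sketch of that approach, assuming the same Scrapy 0.x-era API used in the question; overriding start_requests() lets you set headers on the initial requests before the crawl rules take over (the Referer value below is an arbitrary placeholder):

from scrapy.http import Request
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor


class MySpider(CrawlSpider):
    name = "khmer24"
    allowed_domains = ["khmer24.com"]
    start_urls = ["http://www.khmer24.com/"]

    rules = (
        Rule(SgmlLinkExtractor(allow=(r'ad/\w+/67-\d+\.html',)),
             callback='parse_items'),
    )

    def start_requests(self):
        # Issue the initial requests ourselves so we can attach headers;
        # by default the start_urls requests are sent with no Referer.
        for url in self.start_urls:
            # The "Referer" value here is an arbitrary placeholder.
            yield Request(url, headers={"Referer": "http://www.khmer24.com/"})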