Extracting from links with Scrapy

Asked: 2015-06-10 10:05:08

Tags: python scrapy scrapy-spider

I am trying to extract information from certain links, but the spider never follows them — it extracts from the start_url instead, and I am not sure why.

Here is my code:

import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from tutorial.items import DmozItem
from scrapy.selector import HtmlXPathSelector

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python"
    ]
    rules = [Rule(SgmlLinkExtractor(allow=[r'Books']), callback='parse')] 


    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        item = DmozItem()

        # Extract links
        item['link'] = hxs.select("//li/a/text()").extract()  # Xpath selector for tag(s)

        print item['title']

        for cont, i in enumerate(item['link']):
            print "link: ", cont, i

I am not getting the links from "http://www.dmoz.org/Computers/Programming/Languages/Python/Books"; instead, I get the links from "http://www.dmoz.org/Computers/Programming/Languages/Python".

Why?

1 Answer:

Answer 0 (score: 4)

For rules to take effect, you need to use a CrawlSpider, not a plain scrapy Spider.

Additionally, you need to rename your callback to something other than parse. Otherwise, you override an essential method of CrawlSpider and it will not work. See the warning in the documentation: http://doc.scrapy.org/en/0.24/topics/spiders.html?highlight=rules#crawlspider
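The reason for that warning can be illustrated without Scrapy at all. The sketch below (plain Python, with hypothetical class and method names) mimics the structure: CrawlSpider's own parse() is the entry point that applies the rules and dispatches matched links to your callback, so naming your callback parse replaces the dispatcher itself:

```python
# Minimal sketch of the override pitfall (no Scrapy required).
# BaseCrawler stands in for CrawlSpider: its parse() is the
# framework-owned entry point that applies the rules.
class BaseCrawler:
    def parse(self, response):
        # Framework logic: match rules, then route to the callback.
        return "rule dispatch for " + response


class GoodSpider(BaseCrawler):
    # Callback has a custom name, so BaseCrawler.parse() stays intact.
    def parse_item(self, response):
        return "item from " + response


class BrokenSpider(BaseCrawler):
    # Callback named parse() shadows the dispatcher: rules never run.
    def parse(self, response):
        return "item from " + response


print(GoodSpider().parse("page"))    # rule dispatch for page
print(BrokenSpider().parse("page"))  # item from page
```

The framework always calls parse() on each downloaded start page; only GoodSpider still reaches the rule-dispatching logic, which is exactly why Scrapy tells you not to override parse in a CrawlSpider.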

Your code is scraping the links from "http://www.dmoz.org/Computers/Programming/Languages/Python" because a plain Spider ignores the rules attribute.

This code should work:

import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from dmoz.items import DmozItem
from scrapy.selector import HtmlXPathSelector

class DmozSpider(CrawlSpider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python"
    ]
    rules = [Rule(SgmlLinkExtractor(allow=[r'Books']), callback='parse_item')] 


    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        item = DmozItem()

        # Extract links
        item['link'] = hxs.select("//li/a/text()").extract()  # Xpath selector for tag(s)

        print item['link']

        for cont, i in enumerate(item['link']):
            print "link: ", cont, i