Scrapy only scrapes the first result on each page

Date: 2013-11-17 08:58:18

Tags: python web-scraping screen-scraping scrapy

I am currently trying to run the following code, but it only scrapes the first result on each page. Any idea what the problem might be?

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from firstproject.items import xyz123Item
import urlparse
from scrapy.http.request import Request

class MySpider(CrawlSpider):
    name = "xyz123"
    allowed_domains = ["www.xyz123.com.au"]
    start_urls = ["http://www.xyz123.com.au/",]

    rules = (
        Rule(SgmlLinkExtractor(allow=("",),
                               restrict_xpaths=('//*[@id="1234headerPagination_hlNextLink"]',)),
             callback="parse_xyz", follow=True),
    )

    def parse_xyz(self, response):
        hxs = HtmlXPathSelector(response)
        xyz = hxs.select('//div[@id="1234SearchResults"]//div/h2')
        items = []
        for xyz in xyz:
            item = xyz123Item()
            item ["title"] = xyz.select('a/text()').extract()[0]
            item ["link"] = xyz.select('a/@href').extract()[0]
            items.append(item)
            return items

The BaseSpider version scrapes all of the required data on the first page just fine:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from firstproject.items import xyz123Item

class MySpider(BaseSpider):
    name = "xyz123test"
    allowed_domains = ["xyz123.com.au"]
    start_urls = ["http://www.xyz123.com.au/"]


    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select('//div[@id="1234SearchResults"]//div/h2')
        items = []
        for titles in titles:
            item = xyz123Item()
            item ["title"] = titles.select("a/text()").extract()
            item ["link"] = titles.select("a/@href").extract()
            items.append(item)
        return items

Apologies for the redactions; I had to censor the site for privacy reasons.

The first spider crawls the pages the way I want, but it only extracts the title and link of the first item. Note: in Google Chrome's "inspect element", the XPath of the first title is
//*[@id="xyz123SearchResults"]/div[1]/h2/a
the second is //*[@id="xyz123SearchResults"]/div[2]/h2/a
the third is //*[@id="xyz123SearchResults"]/div[3]/h2/a, and so on.

I'm not sure whether the div[n] part is what's breaking it. I'm hoping this is an easy fix.
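
A quick way to check whether div[n] is the issue (a sketch; the URL below is the redacted start URL, so a real results page would go there instead) is to count the selector's matches in the Scrapy shell. If it returns more than one node, the XPath is already matching every div[n] and the problem lies elsewhere:

scrapy shell "http://www.xyz123.com.au/"
>>> from scrapy.selector import HtmlXPathSelector
>>> hxs = HtmlXPathSelector(response)
>>> len(hxs.select('//div[@id="1234SearchResults"]//div/h2'))

Each node in that list corresponds to one of the div[1], div[2], ... entries, and the relative sub-queries a/text() and a/@href run once per node.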

Thanks

1 answer:

Answer 0 (score: 2)

for xyz in xyz:
    item = xyz123Item()
    item ["title"] = xyz.select('a/text()').extract()[0]
    item ["link"] = xyz.select('a/@href').extract()[0]
    items.append(item)
    return items

Are you sure about the indentation of the return? It should be one level less.
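
To make the fix concrete, here is the same callback with the return dedented (a sketch against the code in the question; the loop variable is also renamed so it no longer shadows the list it iterates over):

def parse_xyz(self, response):
    hxs = HtmlXPathSelector(response)
    results = hxs.select('//div[@id="1234SearchResults"]//div/h2')
    items = []
    for result in results:
        item = xyz123Item()
        # relative XPaths run against each matched h2, so every div[n] is covered
        item["title"] = result.select('a/text()').extract()[0]
        item["link"] = result.select('a/@href').extract()[0]
        items.append(item)
    # dedented one level: return only after all results have been collected
    return items

With return inside the loop body, the method returns during the first iteration, which is why only the first title and link come back from each page.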