Why does Scrapy skip iterations of the loop?

Asked: 2013-01-15 20:54:53

Tags: python scrapy

This spider is supposed to iterate over http://www.saylor.org/site/syllabus.php?cid=NUMBER, where NUMBER runs from 1 to 404, and scrape each page. But for some reason it skips pages in the loop, and many of them. For example, it skips 1 through 16. Can someone tell me what is going on?

Here is the code:

 from scrapy.spider import BaseSpider
 from scrapy.http import Request
 from opensyllabi.items import OpensyllabiItem

 import boto

 class OpensyllabiSpider(BaseSpider):
      name = 'saylor'
      allowed_domains = ['saylor.org']
      max_cid = 405
      i = 1

      def start_requests(self):
          for self.i in range(1, self.max_cid):
              yield Request('http://www.saylor.org/site/syllabus.php?cid=%d' % self.i, callback=self.parse_Opensyllabi)

      def parse_Opensyllabi(self, response):
          Opensyllabi = OpensyllabiItem()
          Opensyllabi['url'] = response.url
          Opensyllabi['body'] = response.body

          filename = ("/root/opensyllabi/data/saylor" + '%d' % self.i)
          syllabi = open(filename, "w")
          syllabi.write(response.body)

          return Opensyllabi

1 Answer:

Answer 0 (score: 3):

Try this. The problem is that `self.i` is a single attribute shared by every request. Scrapy downloads pages asynchronously, so by the time a response reaches `parse_Opensyllabi`, the `start_requests` loop has already advanced `self.i` (often all the way to 404). Every callback then computes a filename from whatever `self.i` happens to be at that moment, so later pages overwrite earlier ones, which looks like skipped pages. Passing the index through the request's `meta` dict ties each response to the `cid` it was requested with:
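The failure can be reproduced without Scrapy at all. In this minimal sketch (a hypothetical plain-Python `Spider` class, not the real one), the generator loop is drained before any callback runs, just as Scrapy schedules all requests before responses arrive, so every callback sees the final value of `self.i`:

```python
class Spider:
    max_cid = 5

    def start_requests(self):
        # "for self.i in ..." rebinds a shared instance attribute on every iteration
        for self.i in range(1, self.max_cid):
            yield 'http://example.invalid/syllabus.php?cid=%d' % self.i

    def parse(self, response):
        # by the time this runs, the loop above has already finished
        return 'saylor%d' % self.i

spider = Spider()
requests = list(spider.start_requests())   # drain the generator, like Scrapy's scheduler
filenames = [spider.parse(r) for r in requests]
print(filenames)  # every response maps to the same file: ['saylor4', 'saylor4', 'saylor4', 'saylor4']
```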

    class OpensyllabiSpider(BaseSpider):
        name = 'saylor'
        allowed_domains = ['saylor.org']
        max_cid = 405

        def start_requests(self):
            for i in range(1, self.max_cid):
                # carry the loop index along with the request instead of
                # storing it on the spider, where concurrent requests clobber it
                yield Request('http://www.saylor.org/site/syllabus.php?cid=%d' % i,
                              meta={'index': i},
                              callback=self.parse_Opensyllabi)

        def parse_Opensyllabi(self, response):
            Opensyllabi = OpensyllabiItem()
            Opensyllabi['url'] = response.url
            Opensyllabi['body'] = response.body

            # response.meta is a shortcut for response.request.meta
            filename = "/root/opensyllabi/data/saylor%d" % response.meta['index']
            with open(filename, "w") as syllabi:  # also closes the file when done
                syllabi.write(response.body)

            return Opensyllabi
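Since the item already stores `response.url`, the index could also be recovered from the URL's query string instead of being passed through `meta`. A sketch using only the standard library (`urllib.parse`; the helper name `cid_from_url` is my own, not part of Scrapy):

```python
from urllib.parse import urlparse, parse_qs

def cid_from_url(url):
    # pull the cid query parameter back out of the URL that was requested
    query = parse_qs(urlparse(url).query)
    return int(query['cid'][0])

print(cid_from_url('http://www.saylor.org/site/syllabus.php?cid=17'))  # 17
```

Either way, the key point is the same: each response must derive its filename from data attached to that request, never from mutable state on the spider.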