Scrapy stops scraping but keeps crawling

Date: 2018-01-23 16:45:25

Tags: python scrapy web-crawler scrapy-spider

I am trying to scrape different pieces of information from several pages of a website. Up to page 16 everything works: the pages are crawled, scraped, and the information is stored in my database. After page 16, however, the spider stops scraping but keeps crawling. I checked the website and it has 470 pages of information. The HTML tags are the same, so I don't understand why it stops scraping.

My code:

import scrapy
from scrapy.loader import ItemLoader
from scrapy.loader.processors import MapCompose, Join

from myproject.items import ActiveItem  # hypothetical path; adjust to your project's items module


# Build the full list of page URLs up front (pages 1 to 479).
def url_lister():
    url_list = []
    page_count = 1
    while page_count < 480:
        url = 'https://www.active.com/running?page=%s' % page_count
        url_list.append(url)
        page_count += 1
    return url_list

class ListeCourse_level1(scrapy.Spider):
    name = 'ListeCAP_ACTIVE'
    allowed_domains = ['www.active.com']
    start_urls = url_lister()

    def parse(self, response):
        # one <a itemprop="url"> per event card on the listing page
        for uneCourse in response.xpath('//*[@id="lpf-tabs2-a"]/article/div/div/div/a[@itemprop="url"]'):
            loader = ItemLoader(ActiveItem(), selector=uneCourse)
            loader.default_input_processor = MapCompose(str)
            loader.default_output_processor = Join()
            loader.add_xpath('nom_evenement', './/div[2]/div/h5[@itemprop="name"]/text()')
            yield loader.load_item()

Shell output:

    2018-01-23 17:22:29 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.active.com/running?page=15>
    {'nom_evenement': 'Enniscrone 10k run & 5k run/walk'}
    2018-01-23 17:22:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.active.com/running?page=16> (referer: None)
    --------------------------------------------------
                    SCRAPING DES ELEMENTS EVENTS
    --------------------------------------------------
    2018-01-23 17:22:34 [scrapy.extensions.logstats] INFO: Crawled 17 pages (at 17 pages/min), scraped 155 items (at 155 items/min)
    2018-01-23 17:22:36 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.active.com/running?page=17> (referer: None)
    --------------------------------------------------
                    SCRAPING DES ELEMENTS EVENTS
    --------------------------------------------------
    2018-01-23 17:22:40 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.active.com/running?page=18> (referer: None)
    --------------------------------------------------
                    SCRAPING DES ELEMENTS EVENTS
    --------------------------------------------------
    2018-01-23 17:22:43 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.active.com/running?page=19> (referer: None)

1 answer:

Answer 0 (score: 4):

This is probably because there are only 17 pages with the content you are looking for, while you are instructing Scrapy to visit all 480 pages of the form https://www.active.com/running?page=NNN. A better approach is to check, on each page you visit, whether a next page exists, and only in that case yield a Request to it.

So I would refactor your code to something like this (untested):

class ListeCourse_level1(scrapy.Spider):
    name = 'ListeCAP_ACTIVE'
    allowed_domains = ['www.active.com']
    base_url = 'https://www.active.com/running'
    start_urls = [base_url]

    def parse(self, response):
        for uneCourse in response.xpath('//*[@id="lpf-tabs2-a"]/article/div/div/div/a[@itemprop="url"]'):
            loader = ItemLoader(ActiveItem(), selector=uneCourse)
            loader.default_input_processor = MapCompose(str)
            loader.default_output_processor = Join()
            loader.add_xpath('nom_evenement', './/div[2]/div/h5[@itemprop="name"]/text()')
            yield loader.load_item()
        # follow the pagination only while a "next page" link is present
        if response.xpath('//a[contains(@class, "next-page")]'):
            next_page = response.meta.get('page_number', 1) + 1
            next_page_url = '{}?page={}'.format(self.base_url, next_page)
            yield scrapy.Request(next_page_url, callback=self.parse,
                                 meta={'page_number': next_page})
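
As a usage note: instead of rebuilding the URL from a page counter, the spider can follow the next-page link's own href. This is a minimal sketch, assuming the anchor matched above actually carries an href (the "next-page" class is the answer's guess about the site's markup, not confirmed); response.follow (available since Scrapy 1.4) resolves relative links against the current page. It would replace the last three lines of parse:

    # Minimal sketch: follow the next-page link directly.
    # Assumes the <a> matched by the "next-page" class guess has an href.
    next_href = response.xpath('//a[contains(@class, "next-page")]/@href').extract_first()
    if next_href:
        # response.follow resolves relative hrefs against response.url
        yield response.follow(next_href, callback=self.parse)

This also removes the need to carry the page number through response.meta, since the target URL comes straight from the page itself.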