Scrapy - running a spider multiple times

Date: 2017-09-08 20:28:38

Tags: python scrapy

I have set up a crawler like this:

import json

from twisted.internet import reactor
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

def crawler(mood):
    process = CrawlerProcess(get_project_settings())
    # crawl music selected by critics on the web
    process.crawl('allmusic_{}_tracks'.format(mood), domain='allmusic.com')
    # the script will block here until the crawling is finished
    process.start()
    # create containers for scraped data
    allmusic = []
    allmusic_tracks = []
    allmusic_artists = []
    # process pipelined files
    with open('blogs/spiders/allmusic_data/{}_tracks.jl'.format(mood), 'r') as t:
        for line in t:
            allmusic.append(json.loads(line))
    # fetch artists and their corresponding tracks
    for item in allmusic:
        allmusic_artists.append(item['artist'])
        allmusic_tracks.append(item['track'])
    return (allmusic_artists, allmusic_tracks)

I can run it like this:

artist_list, song_list = crawler('bitter')
print(artist_list)

It works fine.
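The file-parsing step at the end of the function can also be factored into a small standalone helper, which makes that part easy to test without running a crawl. A sketch; the helper name and the sample records are illustrative, not from the original post:

```python
import json

def parse_tracks(lines):
    """Split JSON-lines records into parallel (artists, tracks) lists."""
    artists, tracks = [], []
    for line in lines:
        item = json.loads(line)
        artists.append(item['artist'])
        tracks.append(item['track'])
    return artists, tracks

# Works on any iterable of lines, e.g. an open .jl file or a list:
sample = [
    '{"artist": "Nirvana", "track": "Lithium"}',
    '{"artist": "Portishead", "track": "Roads"}',
]
artists, tracks = parse_tracks(sample)
print(artists)  # ['Nirvana', 'Portishead']
```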

But if I want to run it several times in a row:

artist_list, song_list = crawler('bitter')
artist_list2, song_list2 = crawler('harsh')

I get:

twisted.internet.error.ReactorNotRestartable

Is there a simple way to set up a wrapper for this spider so I can run it multiple times?

1 answer:

Answer 0 (score: 0)

It's quite simple.

I had defined a single process inside the function, and that one process can schedule more than one crawl.

So I can do this:

def crawler(mood1, mood2):
    process = CrawlerProcess(get_project_settings())
    # crawl music selected by critics on the web
    process.crawl('allmusic_{}_tracks'.format(mood1), domain='allmusic.com')
    process.crawl('allmusic_{}_tracks'.format(mood2), domain='allmusic.com')
    # the script will block here until the crawling is finished
    process.start()

This works as long as you have a spider class defined for each crawl you schedule.
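The same idea generalizes to any number of moods: schedule one crawl per mood on a single `CrawlerProcess`, then start the reactor exactly once. A sketch based on the answer's approach; `spider_name` is a hypothetical helper, and the Scrapy imports are kept inside the function so the module can be imported without a Scrapy project on hand:

```python
def spider_name(mood):
    # Hypothetical helper: maps a mood to the spider name used in the project.
    return 'allmusic_{}_tracks'.format(mood)

def crawler(*moods):
    # Imported lazily so defining this function does not require Scrapy.
    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings

    process = CrawlerProcess(get_project_settings())
    for mood in moods:
        # Schedule every crawl first; nothing runs yet.
        process.crawl(spider_name(mood), domain='allmusic.com')
    # Start the reactor once; this blocks until all scheduled crawls finish.
    process.start()
```

Usage would then be a single call, e.g. `crawler('bitter', 'harsh')`, instead of calling the wrapper once per mood and tripping over the non-restartable reactor.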