How to use different pipelines for different spiders in a single Scrapy project

Asked: 2011-12-04 02:08:26

Tags: python scrapy web-crawler

I have a scrapy project which contains multiple spiders. Is there any way I can define which pipelines to use for which spider? Not all of the pipelines I have defined are applicable to every spider.

Thanks

10 Answers:

Answer 0 (score: 87)

Just remove all pipelines from the main settings and set them inside the spider instead.

This will define the pipeline per spider:

class testSpider(InitSpider):
    name = 'test'
    custom_settings = {
        'ITEM_PIPELINES': {
            'app.MyPipeline': 400
        }
    }

Answer 1 (score: 31)

Building on the solution from Pablo Hoffman, you can use the following decorator on the process_item method of a Pipeline object so that it checks the pipeline attribute of your spider to decide whether or not it should be executed. For example:

import functools

from scrapy import log  # pre-1.0 Scrapy logging API, used by spider.log() below

def check_spider_pipeline(process_item_method):

    @functools.wraps(process_item_method)
    def wrapper(self, item, spider):

        # message template for debugging
        msg = '%%s %s pipeline step' % (self.__class__.__name__,)

        # if class is in the spider's pipeline, then use the
        # process_item method normally.
        if self.__class__ in spider.pipeline:
            spider.log(msg % 'executing', level=log.DEBUG)
            return process_item_method(self, item, spider)

        # otherwise, just return the untouched item (skip this step in
        # the pipeline)
        else:
            spider.log(msg % 'skipping', level=log.DEBUG)
            return item

    return wrapper

For this decorator to work correctly, the spider must have a pipeline attribute containing a container of the Pipeline objects that you want to use to process its items, for example:

from scrapy.spider import BaseSpider  # pre-1.0 import path, matching the original answer

import pipelines

class MySpider(BaseSpider):

    pipeline = set([
        pipelines.Save,
        pipelines.Validate,
    ])

    def parse(self, response):
        # insert scrapy goodness here
        return item

And then in your pipelines.py file:

class Save(object):

    @check_spider_pipeline
    def process_item(self, item, spider):
        # do saving here
        return item

class Validate(object):

    @check_spider_pipeline
    def process_item(self, item, spider):
        # do validating here
        return item

All Pipeline objects should still be defined in ITEM_PIPELINES in the settings (in the correct order; it would be nice to change this so that the order could be specified on the spider).
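
As a rough sketch (assuming the project is named myproject and the pipelines live in myproject/pipelines.py), the corresponding settings could look like this; note that older Scrapy versions used a plain list for ITEM_PIPELINES instead of a dict of priorities:

# settings.py (sketch) - both pipelines stay registered project-wide;
# the decorator above decides per spider whether each one actually runs
ITEM_PIPELINES = {
    'myproject.pipelines.Validate': 100,
    'myproject.pipelines.Save': 200,
}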

Answer 2 (score: 12)

The other solutions given here are good, but I think they could be slow, because we are not really using one pipeline per spider; instead we are checking whether a pipeline applies every time an item is returned (and in some cases this could reach millions of checks).

A good way to completely disable (or enable) a feature per spider is using custom_settings and from_crawler, which works for all extensions, like this:

pipelines.py

from scrapy.exceptions import NotConfigured

class SomePipeline(object):
    def __init__(self):
        pass

    @classmethod
    def from_crawler(cls, crawler):
        if not crawler.settings.getbool('SOMEPIPELINE_ENABLED'):
            # if this isn't specified in settings, the pipeline will be completely disabled
            raise NotConfigured
        return cls()

    def process_item(self, item, spider):
        # change my item
        return item

settings.py

ITEM_PIPELINES = {
   'myproject.pipelines.SomePipeline': 300,
}
SOMEPIPELINE_ENABLED = True # you could have the pipeline enabled by default

spider1.py

from scrapy import Spider

class Spider1(Spider):

    name = 'spider1'

    start_urls = ["http://example.com"]

    custom_settings = {
        'SOMEPIPELINE_ENABLED': False
    }

As you can see, we have specified custom_settings that will override the values in settings.py, and we are disabling SOMEPIPELINE_ENABLED for this spider.

Now when you run this spider, check for something like:

[scrapy] INFO: Enabled item pipelines: []

Now Scrapy has completely disabled the pipeline and does not bother with its existence for the whole run. Note that this also works for Scrapy extensions and middlewares.
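
As a rough sketch of the same pattern applied to a downloader middleware (the class and setting names here are made up for illustration):

# middlewares.py (sketch)
from scrapy.exceptions import NotConfigured

class SomeDownloaderMiddleware(object):
    @classmethod
    def from_crawler(cls, crawler):
        # raise NotConfigured unless the spider (via custom_settings)
        # or settings.py enables this middleware
        if not crawler.settings.getbool('SOMEMIDDLEWARE_ENABLED'):
            raise NotConfigured
        return cls()

    def process_request(self, request, spider):
        # returning None lets Scrapy continue processing the request normally
        return None

It would still be registered in DOWNLOADER_MIDDLEWARES and toggled per spider with 'SOMEMIDDLEWARE_ENABLED' in custom_settings, just like the pipeline above.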

Answer 3 (score: 10)

I can think of at least four approaches:

  1. Use a different scrapy project per set of spiders + pipelines (this might be appropriate if your spiders are different enough to warrant being in different projects anyway)
  2. On the scrapy tool command line, change the pipeline setting with scrapy settings in between each invocation of your spider
  3. Isolate your spiders into their own scrapy tool commands, and define default_settings['ITEM_PIPELINES'] on your command class as the pipeline list you want for that command; see line 6 of this example and the sketch after this list
  4. In the pipeline classes themselves, have process_item() check which spider it is running against, and do nothing if it should be ignored for that spider; see the example using resources per spider to get you started. (This seems like an ugly solution because it tightly couples spiders and item pipelines; you probably shouldn't use this one.)
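
A minimal sketch of option 3, assuming a myproject/commands package registered through the COMMANDS_MODULE setting (all module, command, and pipeline names here are illustrative):

# myproject/commands/crawl_products.py (sketch) - run as: scrapy crawl_products <spider>
from scrapy.commands.crawl import Command as CrawlCommand

class Command(CrawlCommand):
    # per-command settings: only these pipelines run for spiders
    # started through this command
    default_settings = {
        'ITEM_PIPELINES': {
            'myproject.pipelines.ProductPipeline': 300,
        },
    }

with COMMANDS_MODULE = 'myproject.commands' added to settings.py so the scrapy tool can find the command.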

Answer 4 (score: 8)

You can use the name attribute of the spider in your pipeline:

class CustomPipeline(object):

    def process_item(self, item, spider):
        if spider.name == 'spider1':
            # do something
            return item
        return item

Defining all your pipelines this way can accomplish what you want.

Answer 5 (score: 2)

You can set the item pipeline settings inside the spider like this:

class CustomSpider(Spider):
    name = 'custom_spider'
    custom_settings = {
        'ITEM_PIPELINES': {
            '__main__.PagePipeline': 400,
            '__main__.ProductPipeline': 300,
        },
        'CONCURRENT_REQUESTS_PER_DOMAIN': 2
    }

Then I can split the flow (or even use multiple pipelines) by adding a value to the loader/returned item that identifies which part of the spider sent it. That way I won't get any KeyError exceptions, and I know which items should actually be available.

    ...
    def scrape_stuff(self, response):
        pageloader = PageLoader(
                PageItem(), response=response)

        pageloader.add_xpath('entire_page', '/html//text()')
        pageloader.add_value('item_type', 'page')
        yield pageloader.load_item()

        productloader = ProductLoader(
                ProductItem(), response=response)

        productloader.add_xpath('product_name', '//span[contains(text(), "Example")]')
        productloader.add_value('item_type', 'product')
        yield productloader.load_item()

class PagePipeline:
    def process_item(self, item, spider):
        if item['item_type'] == 'product':
            pass  # do product stuff

        if item['item_type'] == 'page':
            pass  # do page stuff

        return item

Answer 6 (score: 1)

The simplest and most effective solution is to set custom settings in each spider itself.

custom_settings = {'ITEM_PIPELINES': {'project_name.pipelines.SecondPipeline': 300}}

After that, you need to set them in the settings.py file:

ITEM_PIPELINES = {
   'project_name.pipelines.FirstPipeline': 300,
   'project_name.pipelines.SecondPipeline': 400
}

That way, each spider will use its respective pipeline.
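
Put together, a sketch of two spiders that each pick their own pipeline might look like this (the spider names are placeholders):

import scrapy

class FirstSpider(scrapy.Spider):
    name = 'first_spider'
    custom_settings = {
        'ITEM_PIPELINES': {'project_name.pipelines.FirstPipeline': 300}
    }

class SecondSpider(scrapy.Spider):
    name = 'second_spider'
    custom_settings = {
        'ITEM_PIPELINES': {'project_name.pipelines.SecondPipeline': 300}
    }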

Answer 7 (score: 0)

I am using two pipelines, one for image downloading (MyImagesPipeline) and one for saving data in MongoDB (MongoPipeline).

Suppose we have many spiders (spider1, spider2, ...); in my example, spider1 and spider5 cannot use MyImagesPipeline.

settings.py

ITEM_PIPELINES = {
    'scrapycrawler.pipelines.MyImagesPipeline': 1,
    'scrapycrawler.pipelines.MongoPipeline': 2,
}
IMAGES_STORE = '/var/www/scrapycrawler/dowload'

And the complete pipeline code:

import pymongo
from scrapy.pipelines.images import ImagesPipeline

class MyImagesPipeline(ImagesPipeline):
    def process_item(self, item, spider):
        # only let spiders other than spider1 and spider5 download images
        if spider.name not in ['spider1', 'spider5']:
            return super(MyImagesPipeline, self).process_item(item, spider)
        else:
            return item

    def file_path(self, request, response=None, info=None):
        # store images under <first char>/<second char>/<image file name>
        image_name = request.url.split('/')[-1]
        dir1 = image_name[0]
        dir2 = image_name[1]
        return dir1 + '/' + dir2 + '/' + image_name

class MongoPipeline(object):

    collection_name = 'scrapy_items'
    collection_url='snapdeal_urls'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'scraping')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # save the item into its collection (the item may override the default
        # collection via a 'collection_name' field)
        collection_name = item.get('collection_name', self.collection_name)
        self.db[collection_name].insert(dict(item))
        # mark the matching URL records as having their image downloaded
        self.db[self.collection_url].update({
            'base_id': item['base_id']
        }, {
            '$set': {
                'image_download': 1
            }
        }, upsert=False, multi=True)
        return item

Answer 8 (score: 0)

We can use some conditions in the pipeline, like this:

# -*- coding: utf-8 -*-
from scrapy_app.items import x

class SaveItemPipeline(object):
    def process_item(self, item, spider):
        if isinstance(item, x):
            item.save()
        return item
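
For the isinstance() check to make sense, the item classes need to be declared somewhere, e.g. in scrapy_app/items.py. A sketch is below; the class name x is taken from the snippet above, and the save() call suggests it is actually something like a DjangoItem rather than a plain scrapy.Item, so treat this as illustrative only:

# scrapy_app/items.py (sketch)
import scrapy

class x(scrapy.Item):
    title = scrapy.Field()

class OtherItem(scrapy.Item):
    url = scrapy.Field()

Each spider then yields only the item classes it produces, and each pipeline handles only the classes it recognizes.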

Answer 9 (score: 0)

A simple but still useful solution.

Spider code:

    def parse(self, response):
        item = {}
        ... do parse stuff
        item['info'] = {'spider': 'Spider2'}
        yield item

Pipeline code:

    def process_item(self, item, spider):
        if item['info']['spider'] == 'Spider1':
            logging.error('Spider1 pipeline works')
        elif item['info']['spider'] == 'Spider2':
            logging.error('Spider2 pipeline works')
        elif item['info']['spider'] == 'Spider3':
            logging.error('Spider3 pipeline works')
        return item

Hope this saves you some time!
