Why am I not getting any results with this scrapy crawler?

Asked: 2016-06-18 09:42:21

Tags: python scrapy

Here is my test project tree:

├── test11
│   ├── __init__.py
│   ├── items.py
│   ├── pipelines.py
│   ├── settings.py
│   └── spiders
│       ├── __init__.py
│       ├── basic.py
│       └── easy.py
└── scrapy.cfg

In my items.py file:

from scrapy.item import Item, Field

class Test11Item(Item):

    name = Field()
    price = Field()

In my easy.py file:

import scrapy
import urlparse
from scrapy.loader import ItemLoader
from scrapy.loader.processors import MapCompose, Join
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from test11.items import Test11Item


class EasySpider(CrawlSpider):
    name = 'easy'
    allowed_domains = ['web']

    start_urls = ['https://www.amazon.cn/b?ie=UTF8&node=2127529051']

    rules = (
            Rule(LinkExtractor(restrict_xpaths='//*[@id="pagnNextLink"]')),
            Rule(LinkExtractor(restrict_xpaths='//*[contains(@class,"s-access-detail-page")]'),
                callback='parse_item')
    )

    def parse_item(self, response):
        l = ItemLoader(item=Test11Item(), response=response)

        l.add_xpath('name', '//*[@id="productTitle"]/text()', MapCompose(unicode.strip))
        l.add_xpath('//*[@id="priceblock_ourprice"]/text()', MapCompose(lambda i: i.replace(',', ''), float), re='[,.0-9]+')

        return l.load_item()

In my basic.py file:

import scrapy
import urlparse
from scrapy.loader import ItemLoader
from scrapy.loader.processors import MapCompose, Join
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from test11.items import Test11Item

class BasicSpider(scrapy.Spider):
    name = 'basic'
    allowed_domains = ['web']

    start_urls = ['https://www.amazon.cn/b?ie=UTF8&node=2127529051']

    def parse(self, response):
        l = ItemLoader(item=Test11Item(), response=response)

        l.add_xpath('name', '//*[@id="productTitle"]/text()', MapCompose(unicode.strip))
        l.add_xpath('//*[@id="priceblock_ourprice"]/text()', MapCompose(lambda i: i.replace(',', ''), float), re='[,.0-9]+')

        return l.load_item()

When I run the basic spider (scrapy crawl basic), I get the results I want. But when I run the easy spider (scrapy crawl easy), I get no results at all!

What am I missing here?

1 answer:

Answer 0 (score: 2):

You just need to set allowed_domains appropriately:

allowed_domains = ['amazon.cn']
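
Why this fixes it: Scrapy's OffsiteMiddleware silently drops every request whose host does not match allowed_domains. The initial requests built from start_urls are created with dont_filter=True, so they slip past the filter; that is why the basic spider, which only parses the start page, still produces items. The easy CrawlSpider, by contrast, relies on its rules to generate follow-up requests to amazon.cn pages, and with allowed_domains = ['web'] every one of those is filtered out before it is downloaded.

Below is a minimal sketch of the corrected easy.py (Python 2, as in the question), assuming the rest of the project stays as posted; the XPaths are the asker's and may no longer match Amazon's current markup:

import scrapy
from scrapy.loader import ItemLoader
from scrapy.loader.processors import MapCompose
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from test11.items import Test11Item


class EasySpider(CrawlSpider):
    name = 'easy'
    # Must cover the crawled site: OffsiteMiddleware drops any
    # rule-generated request whose host is not listed here.
    allowed_domains = ['amazon.cn']

    start_urls = ['https://www.amazon.cn/b?ie=UTF8&node=2127529051']

    rules = (
        # Follow pagination links; no callback, so the spider just keeps crawling.
        Rule(LinkExtractor(restrict_xpaths='//*[@id="pagnNextLink"]')),
        # Parse each product detail page with parse_item.
        Rule(LinkExtractor(restrict_xpaths='//*[contains(@class,"s-access-detail-page")]'),
             callback='parse_item'),
    )

    def parse_item(self, response):
        l = ItemLoader(item=Test11Item(), response=response)
        l.add_xpath('name', '//*[@id="productTitle"]/text()',
                    MapCompose(unicode.strip))
        l.add_xpath('price', '//*[@id="priceblock_ourprice"]/text()',
                    MapCompose(lambda i: i.replace(',', ''), float),
                    re='[,.0-9]+')
        return l.load_item()

With this change, running scrapy crawl easy should show the rule-extracted requests being downloaded in the log instead of "Filtered offsite request" debug messages.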