Scrapy: CrawlSpider does not generate all the links and data from the given links

Date: 2014-01-29 22:22:30

Tags: python web-scraping scrapy

I am unable to scrape the data from the following URLs. When I try, my machine returns some irrelevant data.

URL 1: http://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3Daps&field-keywords=samsung%20appliances&sprefix=samsung+applia%2Caps&rh=i%3Aaps%2Ck%3Asamsung%20appliances

URL 2: http://www.amazon.com/s/ref=sr_pg_2?rh=i%3Aaps%2Ck%3Asamsung+appliances&page=2&keywords=samsung+appliances&ie=UTF8&qid=1391033912

Code:

Line 1: hxs.select('//h3[@class="newaps"]/span/text()').extract()

Line 2: hxs.select('//h3[@class="newaps"]/a/@href').extract()

Expected output:

For URL 1 & Line 1:

Samsung RF4289HARS
Samsung Heating Element DC47-00019A
Samsung WIS12ABGNX Wireless LAN Adapter
Samsung SMH1816S 1.8 Cu. Ft. Stainless Steel Over-the-Range Microwave
Samsung RF4287 28 Cu. Ft. French Door Refrigerator with 4 Doors and Integrated Water & Ice, Real Stainless Steel
... etc.

I need the output of the Line 2 code above as well, and then I also need the same for URL 2.
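
As a sanity check, the two selectors do work once the stray spaces are removed. Here is a minimal standalone sketch run against a simplified sample of the result-page markup; the HTML below (including the /dp/ paths) is an assumption of Amazon's layout, not a copy of the real page:

    # Standalone check of the two selectors against assumed result-grid markup
    from scrapy.selector import HtmlXPathSelector

    sample_html = """
    <div><ul class="rsltGridList grey">
      <li><h3 class="newaps">
        <a href="/Samsung-RF4289HARS/dp/B004XLDE5A"><span>Samsung RF4289HARS</span></a>
      </h3></li>
      <li><h3 class="newaps">
        <a href="/dp/B001AQA1DS"><span>Samsung Heating Element DC47-00019A</span></a>
      </h3></li>
    </ul></div>
    """

    hxs = HtmlXPathSelector(text=sample_html)
    print hxs.select('//h3[@class="newaps"]/span/text()').extract()  # Line 1: titles
    print hxs.select('//h3[@class="newaps"]/a/@href').extract()      # Line 2: relative URLs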

See my code:

    from scrapy.spider import BaseSpider
    from scrapy.http import Request
    from urlparse import urljoin
    from scrapy.selector import HtmlXPathSelector
    from amazon.items import AmazonItem

    class amzspider(BaseSpider):
        name = "amz"

        start_urls = ["http://www.amazon.com/s/ref=sr_pg_2?rh=i%3Aaps%2Ck%3Asamsung+appliances&page=2&keywords=samsung+appliances&ie=UTF8&qid=1386153209"]

        def parse(self, response):
            hxs = HtmlXPathSelector(response)

            # Relative product URLs from the result grid
            hrefs = [h.encode('utf-8').strip() for h in
                     hxs.select('//h3[@class="newaps"]/a/@href').extract()]
            print "URLs parsed"

            # Follow each product page
            for href in hrefs:
                if href:
                    yield Request(urljoin(response.url, href),
                                  callback=self.parse_sub)

            # extract() returns a list; guard against a missing "Next" link
            # instead of indexing [0], which raises IndexError on the last page
            next_page = hxs.select('//a[@id="pagnNextLink"]/@href').extract()
            if next_page:
                yield Request(urljoin(response.url, next_page[0].encode('utf-8')),
                              callback=self.parse)

        def parse_sub(self, response):
            print "sub called"
            # item = response.meta.get('item')
            item = AmazonItem()
            hxs = HtmlXPathSelector(response)
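
Note that parse_sub stops right after creating the selector, so the spider never yields any items. Since the AmazonItem definition is not shown, the following is only a sketch of how the method could finish; the field names ('title', 'url') and the btAsinTitle XPath are assumptions about the item schema and the 2014-era product-page markup, not the actual code:

        def parse_sub(self, response):
            print "sub called"
            hxs = HtmlXPathSelector(response)
            item = AmazonItem()
            # 'title' and 'url' are assumed field names, and btAsinTitle is a
            # guess at the product-page title element -- adjust both to match
            # the real AmazonItem fields and page structure
            title = hxs.select('//span[@id="btAsinTitle"]/text()').extract()
            item['title'] = title[0].strip() if title else None
            item['url'] = response.url
            yield item

With a yield in place, the scraped items should appear in the crawl output alongside the followed requests.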

0 Answers:

No answers yet.