Scrapy only scrapes the first start URL in a list of 15 start URLs

Time: 2015-07-30 22:56:01

Tags: python web-scraping scrapy scrapy-spider

I am new to Scrapy and am trying to teach myself the basics. I have put together code that goes to the Louisiana Department of Natural Resources website to retrieve the serial numbers of certain oil wells.

I have the link for each well listed in start_urls, but only the first URL downloads data. What am I doing wrong?

import scrapy
from scrapy import Spider
from scrapy.selector import Selector
from mike.items import MikeItem


class SonrisSpider(Spider):
    name = "sspider"

    start_urls = [
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=207899",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=971683",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=214206",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=159420",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=243671",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=248942",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=156613",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=972498",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=215443",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=248463",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=195136",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=179181",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=199930",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=203419",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=220454",
    ]

    def parse(self, response):
        item = MikeItem()
        item['serial'] = response.xpath('/html/body/table[1]/tr[2]/td[1]/text()').extract()[0]
        yield item
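
For reference, the spider imports MikeItem from mike.items, which is not shown in the question. Assuming a standard Scrapy project layout, a minimal mike/items.py declaring the serial field might look like:

import scrapy

class MikeItem(scrapy.Item):
    # holds the well serial number extracted from the page
    serial = scrapy.Field()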

Thanks for any help you can provide. If I haven't explained my problem thoroughly, please let me know and I will try to clarify.

2 Answers:

Answer 0 (score: 2)

I think this code may help.

By default, Scrapy filters out duplicate requests. Since the start URLs differ only in a query parameter, Scrapy treats the remaining URLs in start_urls as duplicates of the first one; that is why your spider stops after fetching the first URL. To crawl the rest of the URLs, we enable the dont_filter flag on each request (see start_requests() below).

# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import Request
from mike.items import MikeItem


class SonrisSpider(scrapy.Spider):
    name = "sspider"
    allowed_domains = ["sonlite.dnr.state.la.us"]
    start_urls = [
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=207899",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=971683",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=214206",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=159420",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=243671",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=248942",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=156613",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=972498",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=215443",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=248463",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=195136",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=179181",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=199930",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=203419",
        "http://sonlite.dnr.state.la.us/sundown/cart_prod/cart_con_wellinfo2?p_WSN=220454",
    ]

    def start_requests(self):
        # Issue every start URL explicitly with dont_filter=True so the
        # duplicate-request filter does not drop any of them.
        for url in self.start_urls:
            yield Request(url=url, callback=self.parse_data, dont_filter=True)

    def parse_data(self, response):
        item = MikeItem()
        serial = response.xpath(
            '/html/body/table[1]/tr[2]/td[1]/text()').extract()
        # Fall back to 'n/a' when the XPath matches nothing, so a missing
        # table cell does not raise an IndexError.
        serial = serial[0] if serial else 'n/a'
        item['serial'] = serial
        yield item

Sample output returned by this spider looks like:

{'serial': u'207899'}
{'serial': u'971683'}
{'serial': u'214206'}
{'serial': u'159420'}
{'serial': u'248942'}
{'serial': u'243671'}
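
To reproduce output like this, the spider can be run from the project root with something like scrapy crawl sspider -o serials.json, which also writes the yielded items to a file.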

Answer 1 (score: 0)

Your code looks fine; try adding this function:

class SonrisSpider(Spider):
    def start_requests(self):
        for url in self.start_urls:
            print(url)
            yield self.make_requests_from_url(url)
    # the rest of your code goes here

The URLs should be printed now. Test it, and if they are not, say so.
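
For reference, make_requests_from_url is the stock Spider helper that wraps a URL in a Request. In Scrapy versions of this era its default implementation was roughly:

def make_requests_from_url(self, url):
    # default Spider helper: builds a request that skips the duplicate filter
    return Request(url, dont_filter=True)

Because it sets dont_filter=True, requests built this way also bypass the duplicate filter discussed in the other answer.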