Scrapy: parsing a list of URLs after logging in

Date: 2015-12-03 11:27:59

Tags: python scrapy scrapy-spider

I'm not very familiar with Python, so please bear with me. I have a Scrapy crawler that works, but now I need to build a new one, and this time it should crawl inside a logged-in session. My spider uses start_urls as a list of URLs taken from a sitemap; it should first submit a request to the login form and then, once logged in, start parsing my list...

Here is my code so far:

import os
import logging
from time import strftime, gmtime

from scrapy import Spider, Request, FormRequest, Selector

from products.items import MyPrices  # item class, defined in my items module


class StockPricesSpider(Spider):
    name = "logged-in"
    allowed_domains = ["example.com"]
    d = strftime("%Y-%m-%d", gmtime())
    start_urls = ['https://www.example.com/customer/account/login/']

    def parse(self, response):
        # Submit the login form found on the start page
        return [FormRequest.from_response(response,
                    formdata={'username': 'myuser', 'password': 'mypass'},
                    callback=self.after_login)]

    def after_login(self, response):
        # Check that the login succeeded before going on
        if "Invalid login or password." in response.body:
            self.log("Login failed", level=logging.ERROR)
            return
        else:
            logging.log(logging.INFO, 'Logged in and start parsing')
            return Request("http://www.example.com/", callback=self.parse_products)

    def parse_products(self, response):
        # Read the URL list dumped from the sitemap
        f = open("data/sitemaps/urls04102015.txt")
        start_urls = [url.strip() for url in f.readlines()]
        f.close()
        d = strftime("%Y-%m-%d", gmtime())
        if os.path.exists("data/results/stock_" + d + ".csv"):
            os.remove("data/results/stock_" + d + ".csv")

        sel = Selector(response)
        separator = ";"
        items = []

        item = MyPrices()
        sku = sel.xpath('.//strong[@itemprop="productID"]/text()').extract()
        logging.log(logging.INFO, sku)
        if len(sku) > 0:
            item['sku'] = "med_" + sku[0].strip()
            ...
        items.append(item)
        return items

So this doesn't work, because I'm not invoking the parser correctly. Basically, I get no errors, but the URLs never get parsed either. The login part works and I sign in successfully, but after that (after login), how do I get Scrapy to do its job (parse the list of URLs)?

EDIT: I found a new approach to the problem, but it doesn't work properly either. Please help me debug it (or the first approach).

import os
import logging
from time import strftime, gmtime

from scrapy import Request, FormRequest, Selector
from scrapy.spiders.init import InitSpider

from products.items import StockPrices  # item class, defined in my items module


class StockPricesSpiderX(InitSpider):
    name = "logged-in"
    allowed_domains = ["example.com"]
    login_page = 'https://www.example.com/ro/customer/account/login/'
    d = strftime("%Y-%m-%d", gmtime())
    f = open("data/sitemaps/urls04102015.txt")
    start_urls = [url.strip() for url in f.readlines()]
    f.close()
    if os.path.exists("data/results/stock_" + d + ".csv"):
        os.remove("data/results/stock_" + d + ".csv")

    def init_request(self):
        """ Called before the crawler starts """
        logging.log(logging.INFO, 'before crawler starts...')
        return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        """ Generate the login request """
        logging.log(logging.INFO, 'do login...')
        return FormRequest.from_response(response,
                                         formdata={'name': 'myuser', 'password': 'mypass'},
                                         callback=self.check_login_response)

    def check_login_response(self, response):
        """ Check the response returned by the login request to see if we are logged in """
        if "Invalid login or password." in response.body:
            logging.log(logging.INFO, '... BAD LOGIN ...')
        else:
            logging.log(logging.INFO, 'GOOD LOGIN... initialize')
            self.initialized()

    def parse_item(self, response):
        sel = Selector(response)
        separator = ";"
        items = []
        item = StockPrices()
        sku = sel.xpath('.//strong[@itemprop="productID"]/text()').extract()
        logging.log(logging.INFO, sku)
        ...
        items.append(item)
        return items

The execution log shows:

2015-12-03 14:54:16 [scrapy] INFO: Scrapy 1.0.3 started (bot: scrapybot)
2015-12-03 14:54:16 [scrapy] INFO: Optional features available: ssl, http11
2015-12-03 14:54:16 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'products.spiders', 'FEED_URI': 'calinxautomat.csv', 'LOG_LEVEL': 'INFO', 'DUPEFILTER_CLASS': 'scrapy.dupefilter.RFPDupeFilter', 'SPIDER_MODULES': ['products.spiders'], 'DEFAULT_ITEM_CLASS': 'products.items.Subcategories', 'FEED_FORMAT': 'csv'}
2015-12-03 14:54:21 [scrapy] INFO: Enabled extensions: CloseSpider, FeedExporter, TelnetConsole, LogStats, CoreStats, SpiderState
2015-12-03 14:54:23 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-12-03 14:54:23 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-12-03 14:54:23 [scrapy] INFO: Enabled item pipelines: myWriteToCsv
2015-12-03 14:54:23 [root] INFO: before crawler starts...
2015-12-03 14:54:23 [scrapy] INFO: Spider opened
2015-12-03 14:54:24 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-12-03 14:54:25 [root] INFO: do login...
2015-12-03 14:54:26 [scrapy] INFO: Closing spider (finished)
2015-12-03 14:54:26 [scrapy] INFO: Dumping Scrapy stats:

...

So this one doesn't seem to get past the login phase... It's as if the callback of the FormRequest is never reached... What am I doing wrong?

1 Answer:

Answer 0 (score: 0)

In parse_products(), the assignment to start_urls creates a variable local to that method; it does not touch the class attribute you set at the top of the spider. In any case, I don't think assigning to start_urls would do what you want: Scrapy won't notice the change and go off to parse them. What you need to do is queue the new URLs to be parsed:

for url in f.readlines():
    yield Request(url.strip(), callback=self.parse_products)
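
Put together, a minimal sketch of how your first spider's after_login() could queue the list (this is just the suggestion above applied to the question's code; the file path and the parse_products callback are taken from the question):

def after_login(self, response):
    # Check that the login succeeded before going on
    if "Invalid login or password." in response.body:
        self.log("Login failed", level=logging.ERROR)
        return
    logging.log(logging.INFO, 'Logged in and start parsing')
    # Queue every URL from the sitemap dump; Scrapy schedules each
    # request and hands each response to parse_products() individually.
    with open("data/sitemaps/urls04102015.txt") as f:
        for url in f.readlines():
            yield Request(url.strip(), callback=self.parse_products)

Note that after_login() becomes a generator here (it yields requests instead of returning one), which Scrapy handles the same way.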

Update: Regarding your update: Scrapy has a URL filter, so it won't revisit pages. See this; tl;dr: set dont_filter=True in the FormRequest.
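
Applied to your second spider, that means passing dont_filter=True when building the login request. A sketch of the modified login(), everything else unchanged from the question:

def login(self, response):
    """ Generate the login request """
    logging.log(logging.INFO, 'do login...')
    # The form posts back to a URL the scheduler has already visited,
    # so without dont_filter=True the duplicate filter silently drops
    # the request and the spider closes, which matches the log above.
    return FormRequest.from_response(response,
                                     formdata={'name': 'myuser', 'password': 'mypass'},
                                     callback=self.check_login_response,
                                     dont_filter=True)

This works because FormRequest.from_response() forwards extra keyword arguments to the Request constructor, which is where dont_filter is defined.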