Catch errors inside a generator, then continue

Date: 2012-11-30 12:18:55

Tags: python exception-handling try-catch iteration

I have an iterator that is supposed to run for a few days. I want errors to be caught and reported, and then I want the iterator to continue. Alternatively, the whole process can start over.

Here is the function:

def get_units(self, scraper):
    units = scraper.get_units()
    i = 0
    while True:
        try:
            unit = units.next()
        except StopIteration:
            # iterator exhausted; warn if it never produced anything
            if i == 0:
                log.error("Scraper returned 0 units", {'scraper': scraper})
            break
        except:
            # report the error, then try to keep pulling units
            traceback.print_exc()
            log.warning("Exception occurred in get_units", extra={'scraper': scraper, 'iteration': i})
        else:
            yield unit
        i += 1

Since the scraper can be any one of many code variants, it cannot be trusted, and I don't want to handle the errors inside it.

But when an error occurs in units.next(), the whole thing stops. I suspect that's because the iterator raises StopIteration when one of its iterations fails.

Here is the output (last lines only):

[2012-11-29 14:11:12 /home/amcat/amcat/scraping/scraper.py:135 DEBUG] Scraping unit <Element div at 0x4258c710>
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article
[2012-11-29 14:11:13 /home/amcat/amcat/scraping/scraper.py:138 DEBUG] .. yields article Counter-Strike: Global Offensive Update Released
Traceback (most recent call last):
  File "/home/amcat/amcat/scraping/controller.py", line 101, in get_units
    unit = units.next()
  File "/home/amcat/amcat/scraping/scraper.py", line 114, in get_units
    for unit in self._get_units():
  File "/home/amcat/scraping/games/steamcommunity.py", line 90, in _get_units
    app_doc = self.getdoc(url,urlencode(form))
  File "/home/amcat/amcat/scraping/scraper.py", line 231, in getdoc
    return self.opener.getdoc(url, encoding)
  File "/home/amcat/amcat/scraping/htmltools.py", line 54, in getdoc
    response = self.opener.open(url, encoding)
  File "/usr/lib/python2.7/urllib2.py", line 406, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 519, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 444, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 527, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 500: Internal Server Error
[2012-11-29 14:11:14 /home/amcat/amcat/scraping/controller.py:110 WARNING] Exception occurred in get_units

...code ends...

So how can I keep the iteration from stopping when an error occurs?

EDIT: here is the code of get_units():

def get_units(self):
    """
    Split the scraping job into a number of 'units' that can be processed
    independently of each other.

    @return: a sequence of arbitrary objects to be passed to scrape_unit
    """
    self._initialize()
    for unit in self._get_units():
        yield unit

And here is a simplified _get_units():

INDEX_URL = "http://www.steamcommunity.com"

def _get_units(self):
  doc = self.getdoc(INDEX_URL)  # returns an lxml.etree document

  for a in doc.cssselect("div.discussion a"):
    link = a.get('href')
    yield link

EDIT: follow-up question: Alter each for-loop in a function to have error handling executed automatically after each failed iteration

1 Answer:

Answer 0 (score: 2):

StopIteration is raised by a generator's next() method when there are no more items. It has nothing to do with errors inside the generator/iterator.

Another thing to note is that, depending on the kind of iterator, it may not be able to resume after an exception. If the iterator is an object with a next method, it will work; if it is actually a generator, however, it will not.
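For example (an illustrative sketch; the class and every name in it are made up for the demo): an iterator object keeps its position in instance attributes, so a next() call that raises does not prevent the following call from succeeding.

class FlakyIterator(object):
    """Iterator object: state lives in self.i, so next() can be retried."""
    def __init__(self):
        self.i = 0

    def __iter__(self):
        return self

    def next(self):  # Python 2 protocol; __next__ in Python 3
        self.i += 1
        if self.i == 2:
            raise ValueError("transient failure")
        if self.i > 3:
            raise StopIteration
        return self.i

it = FlakyIterator()
print it.next()      # 1
try:
    it.next()        # item 2 raises ValueError...
except ValueError:
    pass
print it.next()      # ...but item 3 still arrives: the iterator resumed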

As far as I can tell, that is the only reason the iteration cannot continue after an error in units.next(): units.next() failed, and the next time you call it, it cannot resume, so it signals that it is finished by raising a StopIteration exception.

Basically, you would have to show us the code inside scraper.get_units() for us to understand why the loop cannot continue after an error in a single iteration. If get_units() is implemented as a generator function, then it's clear why. If not, something else may be preventing it from resuming.

UPDATE: to explain what a generator function is:

class Scraper(object):
    def get_units(self):
        for i in some_stuff:
            bla = do_some_processing()
            bla *= 2  # random stuff
            yield bla

Now, when you call Scraper().get_units(), instead of running the whole function it returns a generator object. Calling next() on it executes up to the first yield, and so on. If an error now occurs anywhere inside get_units, the generator is tainted, so to speak: the next time you call next(), it raises StopIteration as if it had run out of items to give you.
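To see that tainting in action, here is a hypothetical demonstration that fills in the undefined names from the sketch above (some_stuff becomes a list, do_some_processing() becomes a step that fails on the second item):

class Scraper(object):
    def get_units(self):
        for i in [1, 2, 3]:               # stand-in for some_stuff
            if i == 2:
                raise ValueError("boom")  # do_some_processing() blowing up
            yield i * 2

units = Scraper().get_units()
print units.next()   # 2
try:
    units.next()     # the ValueError propagates out of the generator
except ValueError:
    pass
units.next()         # raises StopIteration: the generator frame is dead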

I strongly recommend reading http://www.dabeaz.com/generators/ (and http://www.dabeaz.com/coroutines/).

UPDATE 2: a possible solution: https://gist.github.com/4175802
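The gist is not reproduced here, but the general shape of the fix is to catch exceptions inside the generator, around the per-unit work, so a failure skips one unit instead of killing the whole generator frame. A sketch against the simplified _get_units() from the question (the placement of the risky work and the logging are assumptions, not necessarily the gist's actual code):

import traceback

def _get_units(self):
    doc = self.getdoc(INDEX_URL)  # if this fails, there is nothing to scrape anyway

    for a in doc.cssselect("div.discussion a"):
        try:
            link = a.get('href')
            # any other per-unit work that can fail (in the real scraper,
            # the getdoc(url, urlencode(form)) call from the traceback)
            # belongs inside this try block
        except Exception:
            traceback.print_exc()
            continue  # skip just this unit; the generator stays alive
        yield link

Because the except sits inside the loop, no exception ever escapes the generator's frame, so the controller's while True loop in the question keeps receiving units instead of hitting a dead generator.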