Scrapy problem: SSLError: ('The read operation timed out',)

Date: 2019-05-17 11:48:13

Tags: python error-handling scrapy

When I try to run a spider, either with scrapy crawl spider_name or with scrapy runspider spider_name.py, I get the following error:

Traceback (most recent call last):
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/scrapy/commands/runspider.py", line 88, in run
    self.crawler_process.crawl(spidercls, **opts.spargs)
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/scrapy/crawler.py", line 168, in crawl
    return self._crawl(crawler, *args, **kwargs)
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/scrapy/crawler.py", line 172, in _crawl
    d = crawler.crawl(*args, **kwargs)
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1274, in unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/scrapy/crawler.py", line 95, in crawl
    six.reraise(*exc_info)
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/scrapy/crawler.py", line 77, in crawl
    self.engine = self._create_engine()
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/scrapy/crawler.py", line 102, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/scrapy/core/engine.py", line 69, in __init__
    self.downloader = downloader_cls(crawler)
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/scrapy/core/downloader/__init__.py", line 88, in __init__
    self.middleware = DownloaderMiddlewareManager.from_crawler(crawler)
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/scrapy/middleware.py", line 58, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/scrapy/middleware.py", line 36, in from_settings
    mw = mwcls.from_crawler(crawler)
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/scrapy/downloadermiddlewares/useragent.py", line 14, in from_crawler
    o = cls(crawler.settings['USER_AGENT'])
  File "/home/zijie/alatest/alaScrapy/alascrapy/middleware/user_agent_middleware.py", line 17, in __init__
    self.ua = UserAgent(fallback=self.user_agent)
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/fake_useragent/fake.py", line 69, in __init__
    self.load()
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/fake_useragent/fake.py", line 78, in load
    verify_ssl=self.verify_ssl,
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/fake_useragent/utils.py", line 250, in load_cached
    update(path, use_cache_server=use_cache_server, verify_ssl=verify_ssl)
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/fake_useragent/utils.py", line 245, in update
    write(path, load(use_cache_server=use_cache_server, verify_ssl=verify_ssl))
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/fake_useragent/utils.py", line 189, in load
    verify_ssl=verify_ssl,
  File "/home/zijie/.virtualenvs/alascrapy/local/lib/python2.7/site-packages/fake_useragent/utils.py", line 67, in get
    context=context,
  File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 429, in open
    response = self._open(req, data)
  File "/usr/lib/python2.7/urllib2.py", line 447, in _open
    '_open', req)
  File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 1241, in https_open
    context=self._context)
  File "/usr/lib/python2.7/urllib2.py", line 1201, in do_open
    r = h.getresponse(buffering=True)
  File "/usr/lib/python2.7/httplib.py", line 1121, in getresponse
    response.begin()
  File "/usr/lib/python2.7/httplib.py", line 438, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python2.7/httplib.py", line 394, in _read_status
    line = self.fp.readline(_MAXLINE + 1)
  File "/usr/lib/python2.7/socket.py", line 480, in readline
    data = self._sock.recv(self._rbufsize)
  File "/usr/lib/python2.7/ssl.py", line 772, in recv
    return self.read(buflen)
  File "/usr/lib/python2.7/ssl.py", line 659, in read
    v = self._sslobj.read(len)
ssl.SSLError: ('The read operation timed out',)

Does anyone know what is going on here? Two days ago every spider ran fine, but today the whole project is completely broken and every run fails with the error pasted above.
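Judging from the traceback, the crash happens before any spider code runs: the custom middleware in alascrapy/middleware/user_agent_middleware.py constructs fake_useragent's UserAgent(), which downloads its browser data over HTTPS at start-up, and that request times out. Below is a minimal, hypothetical sketch of a more defensive version of such a middleware. It assumes the fake_useragent 0.1.x keyword arguments (fallback, use_cache_server); the class name, the FALLBACK_UA string, and the process_request logic are illustrative only, not the project's actual code.

```python
# Hypothetical defensive user-agent middleware; assumes fake_useragent 0.1.x.
from fake_useragent import UserAgent

# Illustrative static fallback user-agent string.
FALLBACK_UA = (
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/74.0.3729.131 Safari/537.36"
)


class RandomUserAgentMiddleware(object):
    def __init__(self, user_agent=FALLBACK_UA):
        self.user_agent = user_agent
        try:
            # fallback= tells fake_useragent to return this string instead of
            # raising when its browser data cannot be downloaded;
            # use_cache_server=False skips the extra remote cache-server fetch.
            self.ua = UserAgent(fallback=self.user_agent,
                                use_cache_server=False)
        except Exception:
            # Low-level errors (such as the ssl.SSLError above) may still
            # escape during the initial download, so fall back to a static UA
            # rather than killing the whole crawl at engine start-up.
            self.ua = None

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings.get('USER_AGENT', FALLBACK_UA))

    def process_request(self, request, spider):
        ua = self.ua.random if self.ua is not None else self.user_agent
        request.headers.setdefault('User-Agent', ua)
```

With the data fetch guarded this way, a transient timeout on the fake_useragent data server would no longer prevent the Scrapy engine from starting.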

0 Answers
