PhantomJS raises OSError: [Errno 9] Bad file descriptor

Time: 2017-07-09 09:44:42

Tags: python selenium scrapy phantomjs

When I use PhantomJS in a Scrapy downloader middleware, it sometimes raises:

Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/middleware.py", line 37, in process_request
    response = yield method(request=request, spider=spider)
  File "/home/ttc/ruyi-scrapy/saibolan/saibolan/hz_webdriver_middleware.py", line 47, in process_request
    driver.quit()
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/phantomjs/webdriver.py", line 76, in quit
    self.service.stop()
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/common/service.py", line 149, in stop
    self.send_remote_shutdown_command()
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/phantomjs/service.py", line 67, in send_remote_shutdown_command
    os.close(self._cookie_temp_file_handle)
OSError: [Errno 9] Bad file descriptor

It does not happen on every request: out of about 80 pages crawled, it appeared roughly 30 times. This is the PhantomJS middleware:

import time

from scrapy.exceptions import IgnoreRequest
from scrapy.http import HtmlResponse
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.support import expected_conditions as ec
from selenium.webdriver.support.ui import WebDriverWait
# from pyvirtualdisplay import Display  # only needed when the display block below is enabled


class HZPhantomjsMiddleware(object):

    def __init__(self, settings):
        self.phantomjs_driver_path = settings.get('PHANTOMJS_DRIVER_PATH')
        self.cloud_mode = settings.get('CLOUD_MODE')

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings)

    def process_request(self, request, spider):
        # A virtual display is needed in production; comment out for local debugging.
        # if self.cloud_mode:
        #     display = Display(visible=0, size=(800, 600))
        #     display.start()
        dcap = dict(DesiredCapabilities.PHANTOMJS)
        dcap["phantomjs.page.settings.userAgent"] = (
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36")
        driver = webdriver.PhantomJS(
            self.phantomjs_driver_path, desired_capabilities=dcap)
        # chrome_options = webdriver.ChromeOptions()
        # prefs = {"profile.managed_default_content_settings.images": 2}
        # chrome_options.add_experimental_option("prefs", prefs)
        # driver = webdriver.Chrome(self.chrome_driver_path, chrome_options=chrome_options)
        driver.get(request.url)
        try:
            # Wait until the article content has rendered before taking the page source.
            element = WebDriverWait(driver, 15).until(
                ec.presence_of_element_located(
                    (By.XPATH, '//div[@class="txt-box"]|//h4[@class="weui_media_title"]|//div[@class="rich_media_content "]'))
            )
            body = driver.page_source
            time.sleep(1)
            driver.quit()
            return HtmlResponse(request.url, body=body, encoding='utf-8', request=request)
        except Exception:
            driver.quit()
            spider.logger.error('Ignore request, url: {}'.format(request.url))
            raise IgnoreRequest()

I do not know what could be causing this error.

2 Answers:

Answer 0 (score: 4)

As of July 2016, driver.close() and driver.quit() were not enough for me. They killed the node process, but not the phantomjs child process it had spawned.

Following the discussion on this GitHub issue, the only solution that worked for me was to run:

import signal

driver.service.process.send_signal(signal.SIGTERM)  # kill the phantomjs child process
driver.quit()                                        # quit the node process
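
Applied to the middleware in the question, one way to package this is a small teardown helper. This is only a sketch, assuming the same driver object as above; the helper name shutdown_driver is ours, not something from the original answer:

import signal

def shutdown_driver(driver):
    # Send SIGTERM to the phantomjs child process first, then quit the
    # webdriver itself, per the workaround above (hypothetical helper).
    try:
        driver.service.process.send_signal(signal.SIGTERM)
    except OSError:
        pass  # the child process may already have exited
    driver.quit()

Both driver.quit() calls in process_request could then be replaced with shutdown_driver(driver).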

Answer 1 (score: 0)

The issue is described here: https://github.com/SeleniumHQ/selenium/issues/3216. The workaround suggested there (explicitly specifying the cookies file) worked for me:

driver = webdriver.PhantomJS(self.phantomjs_driver_path, desired_capabilities=dcap, service_args=['--cookies-file=/tmp/cookies.txt'])
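
If several PhantomJS instances run concurrently, pointing them all at the same /tmp/cookies.txt may let them overwrite each other's cookies. Below is a minimal sketch of a per-driver variant, assuming a throwaway temporary file is acceptable; the tempfile handling is our addition, not part of the original answer:

import os
import tempfile

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

dcap = dict(DesiredCapabilities.PHANTOMJS)

# Give each driver its own cookies file so concurrent instances do not collide.
fd, cookies_path = tempfile.mkstemp(prefix='phantomjs-cookies-', suffix='.txt')
os.close(fd)  # PhantomJS only needs the path, not an open handle

driver = webdriver.PhantomJS(
    desired_capabilities=dcap,
    service_args=['--cookies-file={}'.format(cookies_path)])
try:
    driver.get('http://example.com')
finally:
    driver.quit()
    os.remove(cookies_path)  # remove the per-driver cookies file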