BeautifulSoup find_all does not always work

Date: 2018-10-25 22:36:51

Tags: python selenium-webdriver web-scraping beautifulsoup

I am fetching a page with Selenium and PhantomJS and scraping the data with BeautifulSoup. The code works sometimes, but most of the time it does not. The URL I am using is a Google Flights URL. I cannot figure out what exactly is causing the driver to fail: the driver returns the HTML content, but no screenshot. Here is the code:

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from bs4 import BeautifulSoup

def update_ds():
    print("Called")
    url = "https://www.google.fr/flights#flt=DEL.r/m/02j9z.2018-11-10*r/m/02j9z.DEL.2018-11-14;c:USD;e:1;ls:1w;sd:0;t:e"

    # Spoof a desktop Chrome user agent and ignore SSL errors.
    dcap = dict(DesiredCapabilities.PHANTOMJS)
    dcap["phantomjs.page.settings.userAgent"] = (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36")
    driver = webdriver.PhantomJS(
        executable_path='C:\\phantomjs-2.1.1-windows\\bin\\phantomjs',
        desired_capabilities=dcap,
        service_args=['--ignore-ssl-errors=true'])
    driver.implicitly_wait(120)

    driver.get(url)
    driver.save_screenshot('flight.png')
    html_content = driver.page_source
    #print(html_content)
    print("Connected")

    s = BeautifulSoup(html_content, "lxml")
    best_price_tags = s.find_all('span', class_=['uKOpFp4SF2X__price flt-subhead2',
                                                 'uKOpFp4SF2X__price flt-subhead2 uKOpFp4SF2X__deal'])
    print("tags ", len(best_price_tags))
    best_price = []
    for tag in best_price_tags:
        best_price.append(int(tag.string.replace('US$', '').replace(',', '')))

1 Answer:

Answer 0 (score: 0)

Try using By together with WebDriverWait. That way you can wait, up to a given time, until the class is found.

By: https://selenium-python.readthedocs.io/locating-elements.html

WebDriverWait: https://selenium-python.readthedocs.io/waits.html

from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import TimeoutException

# By.CLASS_NAME only accepts a single class name, so use a CSS selector
# for the compound class instead.
locator = (By.CSS_SELECTOR, 'span.uKOpFp4SF2X__price.flt-subhead2')
try:
    # Poll every 0.5 s, for up to 120 s, until the price element is present in the DOM.
    WebDriverWait(driver, 120, 0.5).until(EC.presence_of_element_located(locator))
except TimeoutException:
    print("Could not find class")