Unable to scrape titles from a website while clicking the next page button

Date: 2019-08-01 10:19:02

Tags: python python-3.x selenium selenium-webdriver web-scraping

I've written a script using Python in combination with Selenium to scrape the links of different articles from different pages while clicking the next page button, and to get the title of each article from its inner page. Although the content I'm dealing with here is static, I used Selenium to see how it parses items while clicking the next page button. I'm only after a solution related to Selenium.

Website address

If I define a blank list and extend all the links to it, I can eventually parse all the titles by reusing those links from their inner pages after clicking the next page button, but that is not what I want.
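That two-phase workaround boils down to accumulating the hrefs from every listing page into one flat list before visiting any of them. A minimal sketch of just the accumulation step, with the Selenium calls stubbed out as plain lists of hrefs (the helper name and inputs are illustrative, not from the question):

```python
def collect_links(pages):
    """Flatten the hrefs gathered from each listing page into one list,
    dropping duplicates while preserving order -- the 'blank list and
    extend' approach, minus the browser calls."""
    seen, all_links = set(), []
    for hrefs in pages:  # each item stands in for one scraped listing page
        for href in hrefs:
            if href not in seen:
                seen.add(href)
                all_links.append(href)
    return all_links
```

Only after this list is complete would the titles be parsed, which is why the question treats it as a workaround rather than the intended behavior.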


What I intend to do, however, is collect all the links on each page and parse the title of each post from its inner page while clicking the next page button. In short, I want to do the two things simultaneously.

This is what I have tried:

import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

link = "https://stackoverflow.com/questions/tagged/web-scraping"

def get_links(url):
    driver.get(url)
    while True:
        items = [item.get_attribute("href") for item in wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR,".summary .question-hyperlink")))]
        yield from get_info(items)

        try:
            elem = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR,".pager > a[rel='next']")))
            driver.execute_script("arguments[0].scrollIntoView();",elem)
            elem.click()
            time.sleep(2)
        except Exception:
            break

def get_info(links):
    for link in links:
        driver.get(link)
        name = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "a.question-hyperlink"))).text
        yield name

if __name__ == '__main__':
    driver = webdriver.Chrome()
    wait = WebDriverWait(driver,10)
    for item in get_links(link):
        print(item)

When I run the above script, it parses the titles of different posts by reusing the links from the first page, but it throws the error raise TimeoutException(message, screen, stacktrace) when it reaches this line: elem = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR,".pager > a[rel='next']"))).

How can I scrape the title of each post from the links collected on the first page, then click the next page button and repeat the process until it is done?

1 Answer:

Answer 0 (score: 1):

The reason there is no next button is that, by the time the loop has traversed each inner link, the driver is no longer on the listing page, so the next button cannot be found.

You need to build each next-page URL as shown below and load it with driver.get.


urlnext = 'https://stackoverflow.com/questions/tagged/web-scraping?tab=newest&page={}&pagesize=30'.format(pageno)  # where page will start from 2
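Since the page and pagesize query parameters fully determine each listing page, the next-page URLs can be generated without clicking anything. A short sketch (the function name and the stop parameter are illustrative, not part of the answer):

```python
URL_TEMPLATE = "https://stackoverflow.com/questions/tagged/web-scraping?tab=newest&page={}&pagesize=30"

def paginated_urls(start=2, stop=5):
    """Yield the listing URL for each page number from start up to,
    but not including, stop (page 1 is the URL already loaded)."""
    for pageno in range(start, stop):
        yield URL_TEMPLATE.format(pageno)
```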

Try the code below.

import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

link = "https://stackoverflow.com/questions/tagged/web-scraping"

def get_links(url):
    urlnext = 'https://stackoverflow.com/questions/tagged/web-scraping?tab=newest&page={}&pagesize=30'
    npage = 2
    driver.get(url)
    while True:
        items = [item.get_attribute("href") for item in wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR,".summary .question-hyperlink")))]
        yield from get_info(items)
        driver.get(urlnext.format(npage))
        try:
            # confirm the listing page loaded by locating its next button;
            # a TimeoutException here means the last page has been reached
            elem = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR,".pager > a[rel='next']")))
            npage += 1
            time.sleep(2)
        except Exception:
            break

def get_info(links):
    for link in links:
        driver.get(link)
        name = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "a.question-hyperlink"))).text
        yield name

if __name__ == '__main__':
    driver = webdriver.Chrome()
    wait = WebDriverWait(driver,10)

    for item in get_links(link):
        print(item)
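The answer's stop condition (break out of the paging loop as soon as a page has no next link) can be isolated from the browser. A sketch, with has_next standing in for the wait.until check (both names are illustrative):

```python
def walk_pages(has_next, start=2):
    """Yield successive page numbers, stopping after the first page
    whose next-button check fails -- mirroring the try/except break
    around wait.until in the answer's loop."""
    pageno = start
    while True:
        yield pageno
        if not has_next(pageno):
            break
        pageno += 1
```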