Scraping pages with a generator

Date: 2014-06-24 11:16:51

Tags: python web-scraping beautifulsoup generator

I'm scraping a website with Beautiful Soup. The problem I'm running into is that parts of the site are paginated with JS, with an unknown (varying) number of pages. I'm trying to solve this with a generator, but it's the first time I've written one, and I'm having a hard time wrapping my head around it and figuring out whether what I'm doing makes sense.

Code:

from bs4 import BeautifulSoup
import jabba_webkit as jw  # custom helper module used below to fetch a page's rendered HTML
import csv
import re

# Input: list of TLDs; output: one row per (tld, domain) pair scraped.
tlds = csv.reader(open("top_level_domains.csv", 'r'), delimiter=';')
sites = csv.writer(open("websites_to_scrape.csv", "w"), delimiter=',')

tld = "uz"

def create_link(tld, page):
    # The first page has no /page/ suffix; subsequent pages do.
    if page == 0:
        link = "https://domaintyper.com/top-websites/most-popular-websites-with-" + tld + "-domain"
    else:
        link = "https://domaintyper.com/top-websites/most-popular-websites-with-" + tld + "-domain/page/" + str(page)

    return link

def check_for_next(soup):
    # On the last page the "Next" link is greyed out (class pagingDivDisabled).
    disabled_nav = soup.find(class_="pagingDivDisabled")

    # Test membership against the tag's text, not the tag object itself.
    if disabled_nav and "Next" in disabled_nav.get_text():
        return False
    return True


def make_soup(link):
    html = jw.get_page(link)
    soup = BeautifulSoup(html, "lxml")

    return soup

def all_the_pages(counter):
    # Yield page numbers until the disabled "Next" link marks the last page.
    while True:
        link = create_link(tld, counter)
        soup = make_soup(link)
        yield counter  # yield before checking, so the last page is still scraped
        if not check_for_next(soup):
            break
        counter += 1

def scrape_page(soup):
    table = soup.find('table', {'class': 'rankTable'})
    body = table.find('tbody')
    cells = body.find_all("td")

    # The domain name sits in every third cell, starting at index 1.
    correct_cells = range(1, len(cells), 3)
    for cell in correct_cells:
        url = repr(cells[cell])
        content = re.sub("<[^>]*>", "", url)  # strip the HTML tags, leaving the domain text
        sites.writerow([tld] + [content])


def main():
    for page in all_the_pages(0):
        print page
        link = create_link(tld, page)
        print link
        soup = make_soup(link)
        scrape_page(soup)


main()

My thinking behind the code:
The scraper should fetch a page, determine whether another page follows, scrape the current page, and move on to the next one, repeating the process. If there is no next page, it should stop. Does my approach make sense?
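
For reference, here is a minimal sketch of what I think the generator should look like (reusing the create_link, make_soup, check_for_next and scrape_page helpers from above). It yields the parsed soup itself, so each page is fetched exactly once rather than once in the generator and again in main:

def all_the_soups(tld, counter=0):
    # Yield each page's parsed soup; fetch every page exactly once.
    while True:
        soup = make_soup(create_link(tld, counter))
        yield soup  # hand the page over before deciding whether to stop
        if not check_for_next(soup):
            break  # the disabled "Next" link marks the last page
        counter += 1

def main():
    for soup in all_the_soups(tld):
        scrape_page(soup)

Because the soup is yielded before the check runs, the final page is still processed, and main no longer re-downloads every page.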

1 Answer:

Answer 0 (score: 0):

As I told you, you could use selenium to programmatically click the Next button, but since that isn't an option for you, here is an approach I can think of for getting the number of pages using pure BS4:

import requests
from bs4 import BeautifulSoup

def page_count():
    pages = 1
    url = "https://domaintyper.com/top-websites/most-popular-websites-with-uz-domain/page/{}"

    while True:
        html = requests.get(url.format(pages)).content
        soup = BeautifulSoup(html, "lxml")  # pass a parser explicitly

        # Past the last page, the rank table is missing or contains only its header row.
        table = soup.find('table', {'class': 'rankTable'})
        if table is None or len(table.find_all('tr')) <= 1:
            return pages - 1  # the previous page was the last one with data
        pages += 1
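
As a usage sketch (my addition, not tested against the site), you could combine this with your own helpers, assuming the base URL serves the same content as /page/1 so that create_link's numbering lines up:

n = page_count()
for page in range(n):  # page 0 is the base URL; pages 1..n-1 use the /page/ suffix
    soup = make_soup(create_link(tld, page))
    scrape_page(soup)

The trade-off is one extra request past the last page, but you avoid parsing the pager widget entirely.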