Beautiful Soup - scraping multiple pages

Date: 2018-06-15 07:26:43

Tags: python web-scraping beautifulsoup

How can I scrape multiple pages from a website? This code only works for the first page. Any advice would be appreciated. Thanks.

import csv
import requests
from bs4 import BeautifulSoup

import datetime

filename = "azet_" + datetime.datetime.now().strftime("%Y-%m-%d-%H-%M")+".csv"
with open(filename, "w+") as f:
    writer = csv.writer(f)
    writer.writerow(["Descriere","Pret","Data"])

    r = requests.get("https://azetshop.ro/12-extensa?page=1")

    soup = BeautifulSoup(r.text, "html.parser")
    x = soup.find_all("div", "thumbnail")

    for thumbnail in x:
        descriere = thumbnail.find("h3").text.strip()
        pret = thumbnail.find("price").text.strip()

        writer.writerow([descriere, pret, datetime.datetime.now()]) 

2 answers:

Answer 0 (score: 1)

For scraping multiple pages with BeautifulSoup, many people use a while loop over the page number:
import csv
import requests
from bs4 import BeautifulSoup
import datetime

end_page_num = 50

filename = "azet_" + datetime.datetime.now().strftime("%Y-%m-%d-%H-%M") + ".csv"
with open(filename, "w+") as f:
    writer = csv.writer(f)
    writer.writerow(["Descriere", "Pret", "Data"])

    i = 1
    while i <= end_page_num:
        # Request each page by passing the page number as a query parameter
        r = requests.get("https://azetshop.ro/12-extensa?page={}".format(i))

        # The html5lib parser requires the html5lib package to be installed
        soup = BeautifulSoup(r.text, "html5lib")
        x = soup.find_all("div", {"class": "thumbnail-container"})

        for thumbnail in x:
            descriere = thumbnail.find("h1", {"class": "h3 product-title"}).text.strip()
            pret = thumbnail.find("span", {"class": "price"}).text.strip()
            writer.writerow([descriere, pret, datetime.datetime.now()])

        i += 1

Here i is incremented by 1 each time a page has been scraped, and scraping continues until the end_page_num you defined.
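
A variation on the same loop (my own sketch, assuming the shop keeps serving the same markup and returns an empty listing once you go past the last page) stops automatically instead of relying on a hard-coded end_page_num:

import csv
import requests
from bs4 import BeautifulSoup
import datetime

filename = "azet_" + datetime.datetime.now().strftime("%Y-%m-%d-%H-%M") + ".csv"
with open(filename, "w+") as f:
    writer = csv.writer(f)
    writer.writerow(["Descriere", "Pret", "Data"])

    i = 1
    while True:
        r = requests.get("https://azetshop.ro/12-extensa?page={}".format(i))
        soup = BeautifulSoup(r.text, "html.parser")
        thumbnails = soup.find_all("div", {"class": "thumbnail-container"})

        # Assumption: past the last page the listing contains no products,
        # so an empty result means we can stop.
        if not thumbnails:
            break

        for thumbnail in thumbnails:
            descriere = thumbnail.find("h1", {"class": "h3 product-title"}).text.strip()
            pret = thumbnail.find("span", {"class": "price"}).text.strip()
            writer.writerow([descriere, pret, datetime.datetime.now()])

        i += 1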

Answer 1 (score: 0)

This code also works, using bs4's class attribute:
import csv
import requests
from bs4 import BeautifulSoup
import datetime

filename = "azet_" + datetime.datetime.now().strftime("%Y-%m-%d-%H-%M") + ".csv"
with open(filename, "w+") as f:
    writer = csv.writer(f)
    writer.writerow(["Descriere", "Pret", "Data"])

    for i in range(1, 50):
        r = requests.get("https://azetshop.ro/12-extensa?page=" + str(i))

        soup = BeautifulSoup(r.text, "html.parser")
        # Collect prices and descriptions by their CSS classes, then pair them by index
        array_price = soup.find_all('span', class_='price')
        array_desc = soup.find_all('h1', class_='h3 product-title', text=True)
        for iterator in range(0, len(array_price)):
            descriere = array_desc[iterator].text.strip()
            pret = array_price[iterator].text.strip()

            writer.writerow([descriere, pret, datetime.datetime.now()])
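
As a small aside (my own suggestion, not part of the original answer), the two result lists can be paired with zip() instead of indexing, which keeps the loop safe if a page ever returns fewer titles than prices. The inner loop above would then read:

        # zip pairs items positionally and stops at the shorter of the two lists
        for desc_tag, price_tag in zip(array_desc, array_price):
            writer.writerow([desc_tag.text.strip(),
                             price_tag.text.strip(),
                             datetime.datetime.now()])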