Python / Pandas - scraping web search results across multiple pages

Date: 2017-11-21 16:44:27

Tags: python pandas web-scraping beautifulsoup

A friend and I are trying to pull the results from several web pages into a dataframe (https://motos.coches.net/ocasion/barcelona/?pg=1&fi=oTitle&or=1&Tops=1, where the page number increments). I haven't done any web scraping before; I've tried pandas read_html and BeautifulSoup, but I'm struggling to figure out where to start.

Ideally, we'd like to get all 5,000+ results into a CSV showing title, date posted, kilometres, year, cc, and location.

Is something like this easy to do with pandas and a web-scraping library? Thanks for your help!

2 Answers:

Answer 0: (score: 0)

You haven't shown what you've tried yourself, but you could do something like this:

offset = 0
pg = 1
rows = 20  # results per page; adjust to however many the site returns
base_url = 'https://url?start={0}&pg={1}'

url = base_url.format(offset, pg)
# scrape_page is a placeholder for your requests.get + BeautifulSoup parsing
results = scrape_page(url)
all_results = results

while results:
    # Rebuild the url based on the current offset and page.
    offset += rows
    pg += 1
    url = base_url.format(offset, pg)
    results = scrape_page(url)  # an empty result list ends the loop
    all_results += results
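
For the site in the question, a concrete version of this loop could look like the following; this is a minimal sketch, assuming each listing sits in a div with class col2-grid (the selector used in the answer below) and that a missing page returns a non-200 status:

import requests
from bs4 import BeautifulSoup

base_url = 'https://motos.coches.net/ocasion/barcelona/?pg={}&fi=oTitle&or=1&Tops=1'
all_results = []
page = 1

while True:
    response = requests.get(base_url.format(page))
    if response.status_code != 200:  # assumption: a missing page returns an error status
        break
    soup = BeautifulSoup(response.content, 'lxml')
    listings = soup.find_all('div', class_='col2-grid')  # selector assumed, as in the answer below
    if not listings:  # an empty page means we have run out of results
        break
    all_results.extend(listings)
    page += 1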

Answer 1: (score: 0)

I came up with a solution, though it's probably not the most elegant:

import requests
from bs4 import BeautifulSoup
import pandas as pd
from time import sleep

base_url = 'https://motos.coches.net/ocasion/barcelona/?pg={}&fi=CreationDate&or=-1'

# the page number is left out of base_url and filled in for each request below
res = []

for page in range(1, 300): # the last page number is unknown, so use a generous upper bound

    response = requests.get(base_url.format(page), headers={'User-Agent':'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'}) # fill the page number into the url
    if response.status_code == 404: # stop as soon as a page no longer exists
        break
    soup = BeautifulSoup(response.content, 'lxml')

    for listing in soup.find_all('div', class_ = 'col2-grid'):
        # str() rather than .encode('utf8'): on Python 3, encoding would yield
        # bytes and break the string cleanup applied to the dataframe below
        res.append([
            str(listing.find('h2', class_ = 'floatleft').contents[0])
            ,str(listing.find('p', class_ = 'data floatright').contents[0])
            ,str(listing.find('p', class_ = 'preu').contents[0])
            ,str(listing.find('span', class_ = 'd1').contents[0])
            ,str(listing.find('span', class_ = 'd2').contents[0])
            ,str(listing.find('span', class_ = 'd3').contents[0])
            ,str(listing.find('span', class_ = 'lloc').contents[0])
        ])
    sleep(2) # pause between requests to avoid hammering the server

# once all pages have been scraped, build the dataframe
df = pd.DataFrame(data=res, columns=['title', 'date_posted', 'price_in_euros', 'km', 'year', 'engine_size', 'location'])
df = df.replace({'<span>|</span>': ''}, regex=True) # remove stray span tags

df['engine_size_metric'] = None
df.loc[df['engine_size'].str.contains(' cc'), 'engine_size_metric'] = 'cc'
df.loc[df['engine_size'].str.contains(' kw'), 'engine_size_metric'] = 'kw'

df['price_in_euros'] = df['price_in_euros'].replace({r'\.|€': ''}, regex=True) # drop thousands separators and the euro sign
df['price_in_euros'] = df['price_in_euros'].astype(float)

df['km'] = df['km'].replace({r'\.| km': ''}, regex=True)
df['km'] = df['km'].replace({'N/D': None}, regex=True) # 'N/D' marks unknown mileage
df['km'] = df['km'].astype(float)

df['engine_size'] = df['engine_size'].str.split(' ').str[0].replace({r'\.|cc|kw': ''}, regex=True)
df.loc[df['engine_size'] == '', 'engine_size'] = None
df['engine_size'] = df['engine_size'].astype(float)

df.to_csv('output.csv', index=False)
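
To sanity-check the output, the CSV can be read straight back into pandas; a quick sketch using the file and column names defined above:

import pandas as pd

df = pd.read_csv('output.csv')
print(df.shape)   # number of listings scraped x 8 columns
print(df.dtypes)  # price_in_euros, km and engine_size should come back as float64
print(df.head())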