Hello, I'm trying to web-scrape a website for its daily deals, but find_all is throwing an error

Date: 2019-07-17 23:47:37

Tags: python web-scraping

I'm trying to web-scrape a website (Newegg) for its daily deals, but I've run into a problem with the find_all call.

#imported Modules and Libraries
from bs4 import BeautifulSoup
import requests
import pandas as pd


#Website to be scraped
website = requests.get('https://www.newegg.com/DailyDeal?icid=368517')
soup = BeautifulSoup(website.content, 'html.parser')


#Getting container with all featured items
all_deals = soup.find(class_='items-view.is-grid')

#Featured Items in container
item = all_deals.find_all(class_= 'item-container')

#putting values in a list
product_title = [item.find(class_='item-title').text() for item in item]
maker = [item.find(class_='item-brand').text() for item in item]
price_before = [item.find(class_='price-was').text() for item in item]
price_now = [item.find(class_='price-current').text() for item in item]
price_saved = [item.find(class_='price-save').text() for item in item]
shipping = [item.find(class_='price-ship').text() for item in item]
product_link = [item.find('a', ['href']) for item in item]

Deals_of_the_day = pd.DataFrame(
    {
    'item-title': product_title,
    'item-brand': maker,
    'price-was': price_before,
    'price-current': price_now,
    'price-save': price_saved,
    'price-ship': shipping,
   # 'a',['href']: product_link,
     })

print(Deals_of_the_day)
Deals_of_the_day.to_csv('New_Egg_Daily_Deals.csv')

I want them to end up in a CSV file.

Error: 
Traceback (most recent call last):
  File "C:/Users/elise/PycharmProjectsPython Boring Projects/Web Scraping New_Egg Website .py", line 16, in <module>
    item = all_deals.find_all(class_= 'item-container')
AttributeError: 'NoneType' object has no attribute 'find_all'

1 Answer:

Answer 0 (score: 0):

You can't match multiple classes with soup.find(); the class_ argument is treated as a single, literal class name, not a CSS selector. No element has a class literally named "items-view.is-grid", so find() returns None, which is why the later find_all call raises AttributeError. Use soup.select(), which accepts a full CSS selector, instead:

all_deals = soup.select(".items-view.is-grid")

You can combine this with the .item-container class to select the featured items directly:

item = soup.select(".items-view.is-grid .item-container")
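
For completeness, here is a minimal sketch of how the rest of the script could be wired up on top of select(). It assumes the class names from the question are still what the page serves (Newegg may render or block content depending on the request), and the text_of() helper is just introduced here for illustration. Also note that in BeautifulSoup .text is a property, so you want .get_text() or .text rather than .text():

from bs4 import BeautifulSoup
import requests
import pandas as pd

website = requests.get('https://www.newegg.com/DailyDeal?icid=368517')
soup = BeautifulSoup(website.content, 'html.parser')

# One CSS selector for every featured item inside the grid container
items = soup.select('.items-view.is-grid .item-container')

def text_of(tag, selector):
    # Illustrative helper: stripped text of the first match, or '' if the element is missing
    found = tag.select_one(selector)
    return found.get_text(strip=True) if found else ''

Deals_of_the_day = pd.DataFrame({
    'item-title': [text_of(i, '.item-title') for i in items],
    'item-brand': [text_of(i, '.item-brand') for i in items],
    'price-was': [text_of(i, '.price-was') for i in items],
    'price-current': [text_of(i, '.price-current') for i in items],
    'price-save': [text_of(i, '.price-save') for i in items],
    'price-ship': [text_of(i, '.price-ship') for i in items],
})

print(Deals_of_the_day)
Deals_of_the_day.to_csv('New_Egg_Daily_Deals.csv', index=False)

Guarding each lookup with select_one() keeps the script from crashing when a deal card is missing one of the price fields; the empty string simply shows up as a blank cell in the CSV.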