Locating elements in bs4

Date: 2019-03-01 06:03:37

Tags: python selenium beautifulsoup

I'm trying to scrape all the information for each dozer on this page. I'm just getting started and only have a rough idea of how scraping works, so I'm not sure how to go about it.

driver=webdriver.Firefox()
driver.get('https://www.rbauction.com/dozers?keywords=&category=21261693092')    
soup=BeautifulSoup(driver.page_source,'html.parser')

# tried several different approaches, but I only get NoneType or no element
get = soup.findAll('div', attrs={'class': 'sc-gisBJw eHFfwj'})
get2 = soup.findAll('div', attrs={'id': 'searchResultsList'})
get3 = soup.find('div.searchResultsList').find_all('a')  # find() treats this as a literal tag name, not a CSS selector, so it returns None

I need to get into each class/id, then loop over the a['href'] values and pull each dozer's details. Please help.

2 Answers:

Answer 0 (score: 0)

You need to wait for the data to load before reading the page source into the BeautifulSoup object. Use WebDriverWait in Selenium to wait for the results element to be present, since the page takes a moment to render fully:

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get('https://www.rbauction.com/dozers?keywords=&category=21261693092')
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, 'searchResultsList')))
soup = BeautifulSoup(driver.page_source,'html.parser')

This line should then return the hrefs on the page:

hrefs = [el.attrs.get('href') for el in soup.find('div', attrs={'id': 'searchResultsList'}).find_all('a')]
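The hrefs scraped this way are typically site-relative paths, so as a follow-up sketch (not part of the original answer, and the example paths below are hypothetical) you can join them against the site root with urljoin before fetching each listing:

```python
from urllib.parse import urljoin

BASE_URL = 'https://www.rbauction.com'

# Hypothetical relative hrefs, as they might appear in the search results list
hrefs = ['/cp/dozers/listing-1', '/cp/dozers/listing-2']

# Build absolute URLs that driver.get() or requests.get() can fetch directly;
# urljoin leaves already-absolute URLs unchanged.
listing_urls = [urljoin(BASE_URL, h) for h in hrefs]
print(listing_urls)
```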

Answer 1 (score: 0)

You could just use requests:

import requests

headers = {'Referrer': 'https://www.rbauction.com/dozers?keywords=&category=21261693092'}
data = requests.get('https://www.rbauction.com/rba-msapi/search?keywords=&searchParams=%7B%22category%22%3A%2221261693092%22%7D&page=0&maxCount=48&trackingType=2&withResults=true&withFacets=true&withBreadcrumbs=true&catalog=ci&locale=en_US', headers=headers).json()

for item in data['response']['results']:
    print(item['name'],item['url'])
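The endpoint above is paginated via its page and maxCount query parameters. As a sketch (assuming those parameters behave as they appear in the URL, which is not confirmed by the answer), subsequent pages can be built with urlencode instead of hand-concatenating the percent-encoded string:

```python
from urllib.parse import urlencode

API_BASE = 'https://www.rbauction.com/rba-msapi/search'

def build_search_url(page, max_count=48):
    # searchParams is itself a JSON string; urlencode percent-encodes it
    # into the same form seen in the original URL.
    params = {
        'keywords': '',
        'searchParams': '{"category":"21261693092"}',
        'page': page,
        'maxCount': max_count,
        'trackingType': 2,
        'withResults': 'true',
        'catalog': 'ci',
        'locale': 'en_US',
    }
    return API_BASE + '?' + urlencode(params)

print(build_search_url(1))
```

Each page's results could then be fetched in a loop until `data['response']['results']` comes back empty.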