Putting data from a web page into a list (Splinter)

Date: 2017-07-25 07:12:31

Tags: python browser splinter

I'm building a small bot that should pull information from a website (eBay) and put it into a list, using Splinter and Python. My first lines of code:

from splinter import Browser

with Browser() as browser:
    url = "http://www.ebay.com"
    browser.visit(url)
    browser.fill('_nkw', 'levis')
    button = browser.find_by_id('gh-btn')
    button.click()

How can I take the information shown in the red boxes on the eBay results page and put it into a list?

Like this: [["Levi Strauss & Co. 513 Slim Straight Jean Ivory Men's SZ", 12.99, 0], ["Levi 501 Jeans Mens Original Levi's Strauss Denim Straight", 71.44, "Now"], ["Levis 501 Button Fly Jeans Shrink To Fit Multiple Sizes", ["$29.99", "$39.99"]]]

2 answers:

Answer 0 (score: 1)

I agree with @Aki003, something like this:

import requests
from bs4 import BeautifulSoup

def get_links(ebay_url):
    # Fetch the page and collect every href on it
    page = requests.get(ebay_url).text
    soup = BeautifulSoup(page, 'html.parser')
    links = []
    for item in soup.find_all('a'):
        links.append(item.get('href'))
    return links

You can scrape any other element on the page the same way. Check the BeautifulSoup documentation.
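To see the pattern without a network call, here is a minimal sketch that applies the same `find_all('a')` idea to a static string (the HTML fragment below is invented for the example, not real eBay markup):

```python
from bs4 import BeautifulSoup

# A made-up HTML fragment standing in for a fetched page
html = '''
<ul>
  <li><a href="/itm/1" class="vip">Levi 501 Jeans</a></li>
  <li><a href="/itm/2" class="vip">Levi 513 Slim</a></li>
</ul>
'''

soup = BeautifulSoup(html, 'html.parser')
# Same extraction as get_links(), just on local HTML
links = [a.get('href') for a in soup.find_all('a')]
print(links)  # ['/itm/1', '/itm/2']
```

Swapping `find_all('a')` for a class-based lookup such as `find_all('a', {"class": "vip"})` is how the second answer narrows the search to listing titles.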

Answer 1 (score: 1)

This is not a perfect answer, but it should work. First install these two modules, requests and beautifulsoup4:

pip install requests

pip install beautifulsoup4

import requests
from bs4 import BeautifulSoup

# Set up browser-like request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Referer': 'https://www.ebay.com/',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'en-US,en;q=0.8',
    'Host': 'www.ebay.com',
    'Connection': 'keep-alive',
    'Cache-Control': 'max-age=0',
}

# My local debugging proxy; you can disable it
proxy = {
    'https': '127.0.0.1:8888'
}

# Search term
search_term = 'armani'

# Start a request session
ses = requests.session()

# First fetch the home page so cookies get set
resp = ses.get('https://www.ebay.com/', headers=headers, proxies=proxy, verify=False)

# Then fetch the search-results page to parse
resp = ses.get('https://www.ebay.com/sch/i.html?_from=R40&_trksid=p2374313.m570.l1313.TR12.TRC2.A0.H0.X' + search_term + '.TRS0&_nkw=' + search_term + '&_sacat=0',
               headers=headers, proxies=proxy, verify=False)

soup = BeautifulSoup(resp.text, 'html.parser')
items = soup.find_all('a', {"class": "vip"})
price_items = soup.find_all('span', {"class": "amt"})

final_list = []

for item, price in zip(items, price_items):
    try:
        title = item.getText()
        price_val = price.find('span', {"class": "bold"}).getText()
        final_list.append((title, price_val))
    except Exception:
        # Skip listings whose price markup doesn't match
        pass

print(final_list)

This is the output I got (the original answer included a screenshot of the printed list).
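To get from the (title, price) tuples above to the nested-list format the question asks for, the price strings still need to be parsed. Here is a hedged sketch (the sample pairs and the "$X to $Y" range format are assumptions for illustration):

```python
def to_rows(pairs):
    """Reshape (title, price_string) pairs into [title, price] rows,
    keeping both bounds when the price is a range."""
    rows = []
    for title, price in pairs:
        if ' to ' in price:
            # Price range, e.g. "$29.99 to $39.99": keep both bounds as strings
            rows.append([title, price.split(' to ')])
        else:
            # Single price: strip the currency sign and convert to float
            rows.append([title, float(price.lstrip('$'))])
    return rows

# Invented sample data, shaped like final_list from the answer above
sample = [("Levi 501 Jeans", "$71.44"),
          ("Levis 501 Button Fly", "$29.99 to $39.99")]
print(to_rows(sample))
# [['Levi 501 Jeans', 71.44], ['Levis 501 Button Fly', ['$29.99', '$39.99']]]
```

Real eBay price markup varies (bids, "Buy It Now", shipping), so a production version would need more cases than this.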