Scraping a URL in Python

Posted: 2018-05-29 07:35:05

Tags: python ajax web-scraping beautifulsoup python-requests

I am trying to get an adidas shoe link from a search page and cannot figure out what I am doing wrong.

I tried tags = soup.find("section", {"class": "productList"}).findAll("a"), but it does not work :(

I also tried printing out all the hrefs (see the quick check after my code below), and the link I need is not among them :(

This is what I would like it to print:

https://www.tennisexpress.com/adidas-mens-adizero-ubersonic-50-yrs-ltd-tennis-shoes-off-white-and-signal-blue-62138


from bs4 import BeautifulSoup
import requests

url = "https://www.tennisexpress.com/search.cfm?searchKeyword=BB6892"

# Getting the webpage, creating a Response object.
response = requests.get(url)

# Extracting the source code of the page.
data = response.text

# Passing the source code to BeautifulSoup to create a BeautifulSoup object for it.
soup = BeautifulSoup(data, 'lxml')

# Extracting all the <a> tags into a list.
tags = soup.find("section", {"class": "productList"}).findAll("a")

# Extracting URLs from the attribute href in the <a> tags.
for tag in tags:
    print(tag.get('href'))
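
For reference, the "printing all the hrefs" check mentioned above was roughly this; it dumps every href present in the raw, non-rendered HTML:

from bs4 import BeautifulSoup
import requests

data = requests.get("https://www.tennisexpress.com/search.cfm?searchKeyword=BB6892").text
soup = BeautifulSoup(data, 'lxml')

# Print every href in the HTML as served (before any JavaScript runs)
for a in soup.find_all("a", href=True):
    print(a["href"])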

Here is the HTML that contains the link:

<section class="productList">
  <article class="productListing">
    <a class="product" href="//www.tennisexpress.com/adidas-mens-adizero-ubersonic-50-yrs-ltd-tennis-shoes-off-white-and-signal-blue-62138" title="Men`s Adizero Ubersonic 50 Yrs LTD Tennis Shoes Off White and Signal Blue" onmousedown="return nxt_repo.product_x('38698770','1');">
      <span class="sale">SALE</span>
      <span class="image">
        <img src="//www.tennisexpress.com/prodimages/78091-DEFAULT-m.jpg" alt="Men`s Adizero Ubersonic 50 Yrs LTD Tennis Shoes Off White and Signal Blue">
      </span>
      <span class="brand"> Adidas </span>
      <span class="name"> Men`s Adizero Ubersonic 50 Yrs LTD Tennis Shoes Off White and Signal Blue </span>
      <span class="pricing">
        <strong class="listPrice">$140.00</strong>
        <strong class="percentOff">0% OFF</strong>
        <strong class="salePrice">$139.95</strong>
      </span>
      <br>
    </a>
  </article>
</section>

3 Answers:

Answer 0 (Score: 2)

By inspecting the Network tab in Chrome DevTools, you can see that the products you searched for are only fetched after a request is made to https://tennisexpress-com.ecomm-nav.com/search.js. You can see an example response here. As you can see, it is a mess, so I will not follow that approach.

With your code you cannot see the products, because the request is made by JavaScript (running in the browser) after the initial page load. Neither urllib nor requests on its own can render that content. However, you can do it with Requests-HTML, which supports JavaScript (it uses Chromium under the hood).

Code:

from itertools import chain
from requests_html import HTMLSession

session = HTMLSession()
url = 'https://www.tennisexpress.com/search.cfm?searchKeyword=adidas+boost'
r = session.get(url)
r.html.render()  # executes the page's JavaScript (downloads Chromium on first use)

# Each '.product' element exposes the URLs it links to as a set of absolute links
links = list(chain(*[prod.absolute_links for prod in r.html.find('.product')]))

I use chain to join together all the sets of absolute links, and then build a list from the result.

>>> links
['https://www.tennisexpress.com/adidas-mens-barricade-2018-boost-tennis-shoes-black-and-night-metallic-62110',
 'https://www.tennisexpress.com/adidas-mens-barricade-2018-boost-tennis-shoes-white-and-matte-silver-62109',
 ...
 'https://www.tennisexpress.com/adidas-mens-supernova-glide-7-running-shoes-black-and-white-41636',
 'https://www.tennisexpress.com/adidas-womens-adizero-boston-6-running-shoes-solar-yellow-and-midnight-gray-45268']

Do not forget to install Requests-HTML with pip install requests-html.
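
As an aside, the same flattening can be written as a nested list comprehension if you would rather not import chain:

# Equivalent to the chain() version above: flatten each product's set of absolute links
links = [link for prod in r.html.find('.product') for link in prod.absolute_links]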

Answer 1 (Score: 1)

soup = BeautifulSoup(data, "html.parser")
markup = soup.find("section", class_="productList")  # find() returns a single Tag (or None)
markupContent = markup.get_text()

So your code would look like this:

import urllib.request
from bs4 import BeautifulSoup

url = "https://www.tennisexpress.com/search.cfm?searchKeyword=BB6892"

# Fetch the raw HTML and parse it
r = urllib.request.urlopen(url).read()
soup = BeautifulSoup(r, "html.parser")
productMarkup = soup.find("section", class_="productList")
product = productMarkup.get_text()
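
Note that if the productList section is not in the HTML that urllib receives (the first answer explains that the products are injected by JavaScript after the page loads), find() returns None and the get_text() call raises an AttributeError, so a small guard helps:

productMarkup = soup.find("section", class_="productList")
if productMarkup is None:
    print("No productList section in the raw HTML; it is probably rendered by JavaScript")
else:
    print(productMarkup.get_text())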

Answer 2 (Score: 0)

This is the solution:


This will give you a list of links to all the adidas tennis shoes! I am sure you can take it from there.
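
For illustration only, here is a minimal Selenium sketch that renders the page and collects the product links. This is an assumption about one workable approach, not necessarily the solution originally meant here; it needs pip install selenium plus Chrome with a matching chromedriver:

from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
options.add_argument("--headless")  # run Chrome without opening a window
driver = webdriver.Chrome(options=options)

driver.get("https://www.tennisexpress.com/search.cfm?searchKeyword=BB6892")
driver.implicitly_wait(10)  # give the JavaScript time to inject the product list

# Grab the href of every product anchor inside the rendered productList section
# (selectors taken from the HTML snippet shown in the question)
links = [a.get_attribute("href")
         for a in driver.find_elements(By.CSS_SELECTOR, "section.productList a.product")]
print(links)

driver.quit()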