Getting the href of an anchor tag with Beautiful Soup

Date: 2019-01-04 11:48:07

Tags: python-3.x python-2.7 beautifulsoup

I am trying to use Beautiful Soup to get the href of the anchor tag for the first video in a YouTube search. I am searching with "a" and class_="yt-simple-endpoint style-scope ytd-video-renderer", but I am having no luck.

Please assist. :)

from bs4 import BeautifulSoup
import requests    

source = requests.get("https://www.youtube.com/results?search_query=MP+election+results+2018%3A+BJP+minister+blames+conspiracy+as+reason+while+losing").text

soup = BeautifulSoup(source, 'lxml')

# print(soup.prettify())

a = soup.findAll("a", class_="yt-simple-endpoint style-scope ytd-video-renderer")

a_fin = soup.find("a", class_="compact-media-item-image")

print(a)

5 Answers:

Answer 0 (score: 1)

from bs4 import BeautifulSoup
import requests    

source = requests.get("https://www.youtube.com/results?search_query=MP+election+results+2018%3A+BJP+minister+blames+conspiracy+as+reason+while+losing").text

soup = BeautifulSoup(source, 'lxml')
first_search_result_link = soup.findAll('a', attrs={'class': 'yt-uix-tile-link'})[0]['href']

Inspired by this answer.
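
As a usage note, a slightly more defensive variant (continuing from the soup object above; the empty-list guard and the urljoin call are my additions and assume the relative /watch?v=... hrefs shown in the other answers):

from urllib.parse import urljoin

results = soup.findAll('a', attrs={'class': 'yt-uix-tile-link'})
if results:
    # join the relative href (e.g. /watch?v=...) onto the site root
    print(urljoin("https://www.youtube.com", results[0]['href']))
else:
    print("no anchors with class yt-uix-tile-link were found")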

Answer 1 (score: 1)

Another option is to render the page first with Selenium.

import bs4
from selenium import webdriver

url = 'https://www.youtube.com/results?search_query=MP+election+results+2018%3A+BJP+minister+blames+conspiracy+as+reason+while+losing'

# point this at your local chromedriver executable (raw string avoids backslash escapes)
browser = webdriver.Chrome(r'C:\chromedriver_win32\chromedriver.exe')
browser.get(url)

source = browser.page_source

soup = bs4.BeautifulSoup(source, 'html.parser')

hrefs = soup.find_all("a", class_="yt-simple-endpoint style-scope ytd-video-renderer")
for a in hrefs:
    print(a['href'])

Output:

/watch?v=Jor09n2IF44
/watch?v=ym14AyqJDTg
/watch?v=g-2V1XJL0kg
/watch?v=eeVYaDLC5ik
/watch?v=StI92Bic3UI
/watch?v=2W_4LIAhbdQ
/watch?v=PH1WZPT5IKw
/watch?v=Au2EH3GsM7k
/watch?v=q-j1HEnDn7w
/watch?v=Usjg7IuUhvU
/watch?v=YizmwHibomQ
/watch?v=i2q6Fm0E3VE
/watch?v=OXNAMyEvcH4
/watch?v=vdcBtAeZsCk
/watch?v=E4v2StDdYqs
/watch?v=x7kCuRB0f7E
/watch?v=KERtHNoZrF0
/watch?v=TenbA4wWIJA
/watch?v=Ey9HfjUyUvY
/watch?v=hqsuOT0URJU
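
If you only want the first result and no visible browser window, a headless variant is possible (a sketch only; the --headless flag and the options= keyword assume a reasonably recent Selenium and Chrome, and chromedriver being on PATH is an assumption, not part of the original answer):

import bs4
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument('--headless')           # run Chrome without opening a window
browser = webdriver.Chrome(options=options)  # assumes chromedriver is on PATH

browser.get(url)  # url as defined in the snippet above
soup = bs4.BeautifulSoup(browser.page_source, 'html.parser')
browser.quit()

first = soup.find("a", class_="yt-simple-endpoint style-scope ytd-video-renderer")
if first:
    print("https://www.youtube.com" + first['href'])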

Answer 2 (score: 1)

You can either use Selenium to render the dynamic HTML, or use a Googlebot user agent to get the static HTML:

import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Googlebot/2.1 (+http://www.google.com/bot.html)'}
source = requests.get("https://.......", headers=headers).text

soup = BeautifulSoup(source, 'lxml')

links = soup.findAll("a", class_="yt-uix-tile-link")
for link in links:
    print(link['href'])
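
Putting this together with the search URL from the question (a sketch; whether YouTube still serves static markup with yt-uix-tile-link anchors to a Googlebot user agent may vary, and the first-result handling is my addition):

import requests
from bs4 import BeautifulSoup

headers = {'User-Agent': 'Googlebot/2.1 (+http://www.google.com/bot.html)'}
url = "https://www.youtube.com/results?search_query=MP+election+results+2018%3A+BJP+minister+blames+conspiracy+as+reason+while+losing"

source = requests.get(url, headers=headers).text
soup = BeautifulSoup(source, 'lxml')

links = soup.findAll("a", class_="yt-uix-tile-link")
if links:
    # first search result as an absolute URL
    print("https://www.youtube.com" + links[0]['href'])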

Answer 3 (score: 1)

Try iterating over the matches:

# Python 2 example (urllib2 and the print statement are Python 2 only)
import urllib2
from bs4 import BeautifulSoup

data = urllib2.urlopen("some_url")
html_data = data.read()
soup = BeautifulSoup(html_data, 'html.parser')

for a in soup.findAll('a', href=True):
    print a['href']
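
Since the question is also tagged python-3.x, here is an equivalent sketch under Python 3 (urllib2 was split into urllib.request; passing an explicit parser avoids the BeautifulSoup warning):

from urllib.request import urlopen
from bs4 import BeautifulSoup

html_data = urlopen("some_url").read()
soup = BeautifulSoup(html_data, 'html.parser')

for a in soup.findAll('a', href=True):
    print(a['href'])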

Answer 4 (score: 0)

The class you are searching for does not exist in the scraped HTML. You can confirm this by printing the soup variable. For example,

a = soup.findAll("a", class_="sign-in-link")

gives the output

[<a class="sign-in-link" href="https://accounts.google.com/ServiceLogin?passive=true&amp;continue=https%3A%2F%2Fwww.youtube.com%2Fsignin%3Faction_handle_signin%3Dtrue%26app%3Ddesktop%26feature%3Dplaylist%26hl%3Den%26next%3D%252Fresults%253Fsearch_query%253DMP%252Belection%252Bresults%252B2018%25253A%252BBJP%252Bminister%252Bblames%252Bconspiracy%252Bas%252Breason%252Bwhile%252Blosing&amp;uilel=3&amp;hl=en&amp;service=youtube">Sign in</a>]
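
To see which anchor classes are actually present in the HTML you fetched, a quick diagnostic sketch (continuing from the soup object built in the question; this listing step is my addition, not part of the original answer):

# collect the distinct class attributes of all anchor tags in the fetched HTML
classes = set()
for a in soup.findAll('a', class_=True):
    classes.add(" ".join(a['class']))  # a['class'] is a list of class names
print(classes)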