beautiful soup parser can't find links

Time: 2016-04-21 22:45:02

Tags: python beautifulsoup html-parsing

I was trying to parse an HTML document to find links using Beautiful Soup and found a weird behavior. The page is http://people.csail.mit.edu/gjtucker/ . Here's my code:

from bs4 import BeautifulSoup
import requests

user_agent = {'User-agent': 'Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.52 Safari/537.17'}

url = 'http://people.csail.mit.edu/gjtucker/'
t = requests.get(url, headers=user_agent).text

soup=BeautifulSoup(t, 'html.parser')
for link in soup.findAll('a'):
    print link['href']

This prints only two links: http://www.amazon.jobs/team/speech-amazon and https://scholar.google.com/citations?user=-gJkPHIAAAAJ&hl=en, whereas the page clearly contains many more.

Can anyone reproduce this? Is there a specific reason this happens with this URL? A few other URLs worked just fine.

1 answer:

Answer 0 (score: 0)

The page's HTML is not well-formed; you should use a more lenient parser, such as html5lib:

soup = BeautifulSoup(t, 'html5lib')
for link in soup.find_all('a'):
    print(link['href'])

Prints:

http://www.amazon.jobs/team/speech-amazon
https://scholar.google.com/citations?user=-gJkPHIAAAAJ&hl=en
http://www.linkedin.com/pub/george-tucker/6/608/3ba
...
http://www.hsph.harvard.edu/alkes-price/
...
http://www.nature.com/ng/journal/v47/n3/full/ng.3190.html
http://www.biomedcentral.com/1471-2105/14/299
pdfs/journal.pone.0029095.pdf
pdfs/es201187u.pdf
pdfs/sigtrans.pdf
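A side note on the answer above: html5lib is not part of Beautiful Soup and must be installed separately (pip install html5lib); if it is missing, BeautifulSoup raises bs4.FeatureNotFound. A minimal, self-contained sketch of the same extraction, using a small inline snippet as a stand-in for the live page and falling back to the stdlib parser when html5lib is unavailable:

```python
from bs4 import BeautifulSoup

# Stand-in for the fetched page; the real document is larger and malformed.
html = """
<html><body>
<a href="http://www.amazon.jobs/team/speech-amazon">Amazon</a>
<a href="https://scholar.google.com/citations?user=-gJkPHIAAAAJ&amp;hl=en">Scholar</a>
<a href="pdfs/sigtrans.pdf">Signal Transduction paper</a>
</body></html>
"""

# html5lib builds the same tree a browser would, which is why it recovers
# links that stricter parsers drop on malformed markup. Fall back to the
# stdlib parser so this sketch runs even without html5lib installed.
try:
    soup = BeautifulSoup(html, "html5lib")
except Exception:
    soup = BeautifulSoup(html, "html.parser")

# href=True skips anchor tags that have no href attribute at all.
hrefs = [a["href"] for a in soup.find_all("a", href=True)]
print(hrefs)
```

Note that the parser decodes HTML entities, so the `&amp;` in the Scholar link comes back as a plain `&` in the extracted href.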