BeautifulSoup cannot extract data from this website

Asked: 2016-08-01 08:45:02

Tags: python requests beautifulsoup

import csv
import requests
from bs4 import BeautifulSoup

f = open('ala2009link.csv', 'r')
s = open('2009alanews.csv', 'w')
for row in csv.reader(f):
    url = row[0]
    print url
    res = requests.get(url)
    soup = BeautifulSoup(res.content, 'lxml')
    data = soup.find_all("article", {"class": "article-wrapper news"})
    #data = soup.find_all("main", {"class": "main-content"})
    for item in data:
        title = item.find_all("h2", {"class": "article-headline"})[0].text
        s.write("%s \n" % title)
    content = soup.find_all("p")
    for main in content:
        k = main.text.encode('utf-8')
        s.write("%s \n" % k)
s.close()
f.close()

This is my code to extract data from the website, but I don't know why it cannot extract anything. Is the site's ad-blocker warning blocking BeautifulSoup? Here is a sample link: http://www.rolltide.com/news/2009/6/23/Bert_Bank_Passes_Away.aspx?path=football

1 answer:

Answer 0 (score: 0):

No results are returned because this site requires the request to include a User-Agent header.

To fix this, pass a headers argument containing a User-Agent to requests.get(), like this:

url = 'http://www.rolltide.com/news/2009/6/23/Bert_Bank_Passes_Away.aspx?path=football'
headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/29.0.1547.65 Chrome/29.0.1547.65 Safari/537.36',
    }
res = requests.get(url, headers=headers)
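If you want to confirm that the header will actually be sent before making any network calls, you can build a prepared request and inspect it. This is just a sketch using the same URL and User-Agent string as above; without an explicit header, requests would identify itself as python-requests instead of a browser:

```python
import requests

# Same URL and User-Agent as in the answer above
url = 'http://www.rolltide.com/news/2009/6/23/Bert_Bank_Passes_Away.aspx?path=football'
headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Ubuntu Chromium/29.0.1547.65 Chrome/29.0.1547.65 Safari/537.36',
}

# Build the request without sending it, then check what would go on the wire
req = requests.Request('GET', url, headers=headers).prepare()
print(req.headers['User-Agent'])
```

The printed value is the browser string above, so the server will see a browser-like client rather than the default python-requests identifier.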