Parsing webpages with BeautifulSoup: skipping 404 error pages

Date: 2014-06-20 07:46:05

Tags: python web-scraping beautifulsoup

I am using the code below to fetch the titles of a list of websites.

from bs4 import BeautifulSoup
import urllib2

line_in_list = ['www.dailynews.lk','www.elpais.com','www.dailynews.co.zw']

for websites in line_in_list:
    url = "http://" + websites
    page = urllib2.urlopen(url)
    soup = BeautifulSoup(page.read())
    site_title = soup.find_all("title")
    print site_title

If the list contains a "bad" (nonexistent) site, or a site returns some kind of error such as "404 Page Not Found", the script breaks and stops.

How can I make the script ignore/skip the "bad" (nonexistent) and problematic sites/pages?

1 Answer:

Answer 0 (score: 7):

line_in_list = ['www.dailynews.lk','www.elpais.com',"www.no.dede",'www.dailynews.co.zw']

for websites in line_in_list:
    url = "http://" + websites
    try:
        page = urllib2.urlopen(url)
    except Exception, e:
        print e
        continue

    soup = BeautifulSoup(page.read())
    site_title = soup.find_all("title")
    print site_title

[<title>Popular News Items | Daily News Online : Sri Lanka's National News</title>]
[<title>EL PAÍS: el periódico global</title>]
<urlopen error [Errno -2] Name or service not known>
[<title>
DailyNews - Telling it like it is
</title>]
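The code above is Python 2 (urllib2 and the print statement no longer exist in Python 3). For readers on Python 3, a rough equivalent might look like the sketch below; the fetch_title helper name, the timeout value, and catching the specific HTTPError/URLError classes (rather than a bare Exception) are my own additions, not part of the original answer:

```python
# Python 3 sketch: urllib2 was split into urllib.request / urllib.error.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

from bs4 import BeautifulSoup


def fetch_title(url):
    """Return the page <title> text, or None if the site is unreachable
    or responds with an HTTP error such as 404."""
    try:
        page = urlopen(url, timeout=10)
    except (HTTPError, URLError) as e:
        # Skip "bad" sites instead of crashing, as in the answer above.
        print(url, "skipped:", e)
        return None
    soup = BeautifulSoup(page.read(), "html.parser")
    return soup.title.string if soup.title else None


# Example usage (network access required):
# for site in ["www.dailynews.lk", "www.elpais.com", "www.no.dede"]:
#     title = fetch_title("http://" + site)
#     if title:
#         print(title)
```

Catching HTTPError and URLError specifically (instead of Exception) avoids silently swallowing unrelated bugs such as typos in the parsing code.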