Scraping multiple pages to parse with Beautiful Soup

Asked: 2011-11-30 23:57:38

Tags: python web-scraping urllib2

I'm trying to scrape multiple pages off a website and have BeautifulSoup parse them. So far I've tried doing this with urllib2, but I've run into some problems. What I tried is:

import urllib2,sys
from BeautifulSoup import BeautifulSoup

for numb in ('85753', '87433'):
    address = ('http://www.presidency.ucsb.edu/ws/index.php?pid=' + numb)
html = urllib2.urlopen(address).read()
soup = BeautifulSoup(html)

title = soup.find("span", {"class":"paperstitle"})
date = soup.find("span", {"class":"docdate"})
span = soup.find("span", {"class":"displaytext"})  # span.string gives you the first bit
paras = [x for x in span.findAllNext("p")]

first = title.string
second = date.string
start = span.string
middle = "\n\n".join(["".join(x.findAll(text=True)) for x in paras[:-1]])
last = paras[-1].contents[0]

print "%s\n\n%s\n\n%s\n\n%s\n\n%s" % (first, second, start, middle, last)

This only gives a result for the second number in the numb sequence, i.e. http://www.presidency.ucsb.edu/ws/index.php?pid=87433. I also tried using mechanize, without success. Ideally what I'd like to be able to do is have a page with a list of links, then automatically pick each link, pass its HTML to BeautifulSoup, and move on to the next link in the list.
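The "page with a list of links" workflow can be sketched with only the standard library, so it runs without a network connection or a particular BeautifulSoup version. The inline HTML below is a made-up stand-in for a real index page; in practice you would fetch the index, collect the hrefs, then fetch and parse each one inside the loop:

```python
try:
    from html.parser import HTMLParser   # Python 3
except ImportError:
    from HTMLParser import HTMLParser    # Python 2

# Stand-in for a real index page listing the documents to scrape.
index_html = """
<ul>
  <li><a href="http://www.presidency.ucsb.edu/ws/index.php?pid=85753">Paper 1</a></li>
  <li><a href="http://www.presidency.ucsb.edu/ws/index.php?pid=87433">Paper 2</a></li>
</ul>
"""

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag seen while parsing."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

collector = LinkCollector()
collector.feed(index_html)

for link in collector.links:
    # Here you would urlopen(link) and hand the HTML to BeautifulSoup,
    # doing all the per-page work inside this loop.
    print(link)
```

The key point, as the answers below stress, is that every per-page step happens inside the loop over the links.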

3 Answers:

Answer 0 (score: 1)

You need to put the rest of your code inside the loop. Right now you iterate over both items in the tuple, but by the end of the iteration only the last one is still assigned to address, and it is only parsed afterwards, outside the loop.
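The effect can be seen in miniature without any network access (the URLs are treated as plain strings here): work done inside the loop runs once per id, while code after the loop sees only the last value of the loop's variables.

```python
results = []
for numb in ('85753', '87433'):
    address = 'http://www.presidency.ucsb.edu/ws/index.php?pid=' + numb
    results.append(address)  # inside the loop: runs for every id

# Outside the loop, `address` still exists but holds only the last URL,
# which is why only pid=87433 was ever parsed in the original code.
print(address)       # ...pid=87433
print(len(results))  # 2
```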

Answer 1 (score: 1)

Here's a tidier solution (using lxml):

import lxml.html as lh

root_url = 'http://www.presidency.ucsb.edu/ws/index.php?pid='
page_ids = ['85753', '87433']

def scrape_page(page_id):
    url = root_url + page_id
    tree = lh.parse(url)

    title = tree.xpath("//span[@class='paperstitle']")[0].text
    date = tree.xpath("//span[@class='docdate']")[0].text
    text = tree.xpath("//span[@class='displaytext']")[0].text_content()

    return title, date, text

if __name__ == '__main__':
    for page_id in page_ids:
        title, date, text = scrape_page(page_id)
        print "%s\n%s\n%s\n" % (title, date, text)
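If lxml isn't available, the standard library's xml.etree.ElementTree supports a small subset of XPath that covers these class-based lookups. A self-contained sketch on an inline snippet (the HTML below is made up for illustration, and must be well-formed XML for ElementTree to accept it):

```python
import xml.etree.ElementTree as ET

# Made-up, well-formed stand-in for the relevant part of a document page.
snippet = """<div>
  <span class="paperstitle">A Title</span>
  <span class="docdate">November 30, 2011</span>
  <span class="displaytext">Body text.</span>
</div>"""

root = ET.fromstring(snippet)

# ElementTree understands the [@attr='value'] predicate used above.
title = root.find(".//span[@class='paperstitle']").text
date = root.find(".//span[@class='docdate']").text
text = root.find(".//span[@class='displaytext']").text

print(title)
print(date)
print(text)
```

Note that ElementTree's `.text` only returns the text up to the first child element, so for pages where displaytext contains nested tags, lxml's `text_content()` (as in the answer above) is the more robust choice.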

Answer 2 (score: 1)

I think you're missing the indentation in the loop:

import urllib2,sys
from BeautifulSoup import BeautifulSoup

for numb in ('85753', '87433'):
    address = ('http://www.presidency.ucsb.edu/ws/index.php?pid=' + numb)
    html = urllib2.urlopen(address).read()
    soup = BeautifulSoup(html)

    title = soup.find("span", {"class":"paperstitle"})
    date = soup.find("span", {"class":"docdate"})
    span = soup.find("span", {"class":"displaytext"})  # span.string gives you the first bit
    paras = [x for x in span.findAllNext("p")]

    first = title.string
    second = date.string
    start = span.string
    middle = "\n\n".join(["".join(x.findAll(text=True)) for x in paras[:-1]])
    last = paras[-1].contents[0]

    print "%s\n\n%s\n\n%s\n\n%s\n\n%s" % (first, second, start, middle, last)

I think that should fix the problem.