python: extract text from any website

Asked: 2015-01-31 07:58:04

Tags: python beautifulsoup

So far my code works, but it only extracts text successfully from these two websites:

However, I don't know where I am going wrong: it does not extract text from other websites, and when I put in another link it gives me this error:

Error:

    Traceback (most recent call last):
      File "C:\Users\DELL\Desktop\python\s\fyp\data extraction.py", line 20, in <module>
        text = soup.select('.C_doc')[0].get_text()
    IndexError: list index out of range

My code:

import urllib
from bs4 import BeautifulSoup
url = "http://www.i-programmer.info/babbages-bag/477-trees.html" #unsuccessful
#url = "http://www.tutorialspoint.com/cplusplus/index.htm"   #doing successfully
#url = "http://www.cplusplus.com/doc/tutorial/program_structure/" #doing successfully
html = urllib.urlopen(url).read()
soup = BeautifulSoup(html)

# kill all script and style elements
for script in soup(["script", "style","a","<div id=\"bottom\" >"]):
    script.extract()    # rip it out

# get text
#text = soup.select('.C_doc')[0].get_text()
#text = soup.select('.content')[0].get_text()

if soup.select('.content'):
    text = soup.select('.content')[0].get_text()
else:
    text = soup.select('.C_doc')[0].get_text()

# break into lines and remove leading and trailing space on each
lines = (line.strip() for line in text.splitlines())
# break multi-headlines into a line each
chunks = (phrase.strip() for line in lines for phrase in line.split("  "))
# drop blank lines
text = '\n'.join(chunk for chunk in chunks if chunk)

print text
fo = open('foo.txt', 'w')
fo.seek(0, 2)
line = fo.writelines( text )
fo.close()
#writing done :)

2 Answers:

Answer 0 (score: 1)

You are assuming that every website you scrape has an element with class name `content` or `C_doc`. What if a website you scrape has neither of those class names?

Here is the fix:

text = ''
if soup.select('.content'):
    text = soup.select('.content')[0].get_text()
elif soup.select('.C_doc'):
    text = soup.select('.C_doc')[0].get_text()

if text:
    pass  # put the rest of the code here.
else:
    print 'text does not exist.'
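The same fallback idea generalizes to any number of candidate selectors. Here is a minimal sketch in Python 3 / current BeautifulSoup syntax; the extra selectors (`article`, `body`) are illustrative assumptions, not part of the original sites:

```python
from bs4 import BeautifulSoup

def first_matching_text(html, selectors=('.content', '.C_doc', 'article', 'body')):
    """Try each CSS selector in order; return the text of the first match, else ''."""
    soup = BeautifulSoup(html, 'html.parser')
    for sel in selectors:
        hits = soup.select(sel)
        if hits:
            return hits[0].get_text()
    return ''

html = '<html><body><div class="C_doc">Hello world</div></body></html>'
print(first_matching_text(html))  # '.content' misses, '.C_doc' matches
```

Because the loop returns as soon as one selector matches, adding a new site usually means appending one selector rather than nesting another `elif`.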

Answer 1 (score: 1)

Try using

Text = soup.findAll(text=True)

Update:

Here is a basic text stripper you can start from.

import urllib
from bs4 import BeautifulSoup
url = "http://www.i-programmer.info/babbages-bag/477-trees.html" 
html = urllib.urlopen(url).read()
soup = BeautifulSoup(html)

# note: "<div id=\"bottom\" >" is not a valid tag name, so it matched nothing and is dropped
for script in soup(["script", "style", "a"]):
    script.extract()

text = soup.findAll(text=True)
for p in text:
    print p
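One caveat with `findAll(text=True)`: it returns every text node, including whitespace-only strings between tags. A sketch of the same idea using BeautifulSoup's `.stripped_strings` generator (Python 3 syntax; the sample HTML is made up for illustration) avoids that noise:

```python
from bs4 import BeautifulSoup

html = """<html><body>
  <script>var x = 1;</script>
  <h1>Trees</h1>
  <p>A tree is a data structure.</p>
</body></html>"""

soup = BeautifulSoup(html, 'html.parser')
for tag in soup(['script', 'style']):  # remove non-visible text first
    tag.extract()

# .stripped_strings yields each text node with surrounding whitespace removed,
# skipping nodes that are whitespace only
pieces = list(soup.stripped_strings)
print(pieces)  # ['Trees', 'A tree is a data structure.']
```

Joining `pieces` with `'\n'` reproduces roughly what the question's splitlines/strip pipeline was doing by hand.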