Extracting text from nested divs into a CSV file

Asked: 2013-09-14 05:02:03

Tags: python beautifulsoup

I managed to isolate this part of the HTML file:

<div class="item_display_label"><strong>Name of Office: </strong></div>
<div class="item_display_field">Dacotah</div>
<div class="item_display_label"><strong>Federal Electoral District: </strong></div>
<div class="item_display_field">
St. Boniface
(Manitoba)
</div>
<div class="item_display_label"><strong>Dates: </strong></div>
<div class="item_display_field">
<table border="0" cellpadding="0" cellspacing="0" class="data_table">
<tr>
<th class="th_heading" valign="top">Establishment Re-openings</th>
<th class="th_heading" valign="top">Closings</th>
</tr>
<tr>
<td class="index" valign="top">1903-05-01</td>
<td class="index" valign="top">1970-05-01</td>
</tr>
</table>
</div>
<div class="item_display_label"><strong>Postmaster Information: </strong></div>
<div class="item_display_label"><strong>Additional Information: </strong></div>
<div class="item_display_field">
Closed due to Rural Mail Delivery service via Headingley, R.R. 1<br/><br/>
Sec. 25, Twp. 10, R. 2, WPM - 1903-05-01<br/><br/>
Sec. 34, Twp. 10, R. 2, WPM<br/><br/>
SW 1/4 Sec. 35, Twp. 10, R. 2, WPM<br/><br/>
</div>

Using:

    from bs4 import BeautifulSoup
    soup = BeautifulSoup(open("post2.html"))
    with open("post2.txt", "wb") as file:
        for link in soup.find_all('div', ['item_display_label', 'item_display_field']):
            print link

I need to export the bold fields to a CSV with Beautiful Soup. I've tried different approaches with no result. The CSV columns should be: "Name of Office", "Federal Electoral District", "Opening", "Closing", "Info". Any clues? Many thanks.

Edit:

I'm trying to write the CSV with this:

    from bs4 import BeautifulSoup
    import csv
    soup = BeautifulSoup(open("post2.html"))
    f = csv.writer(open("post2.csv", "w"))
    f.writerow(["Name", "District", "Open", "Close", "Info"])
    for link in soup.find_all('div', ['item_display_label', 'item_display_field'].__contains__):
        print link.text.strip()
        Name = link.contents[0]
        District = link.contents[1]
        Open = link.contents[2]
        Close = link.contents[3]
        Info = link.contents[4]
        f.writerow([Name, District, Open, Close, Info])

But I only get the last field (Info).

1 Answer:

Answer 0: (score: 0)

Please try the following:

    from bs4 import BeautifulSoup
    soup = BeautifulSoup(open("post2.html"))
    #for link in soup.find_all('div', lambda cls: cls in ['item_display_label', 'item_display_field']):
    for link in soup.find_all('div', ['item_display_label', 'item_display_field'].__contains__):
        print link.text.strip()

Or using lxml with XPath:

    import lxml.html

    tree = lxml.html.parse('post2.html')
    for x in tree.xpath('.//div[@class="item_display_label"]//text()|.//div[@class="item_display_field"]//text()'):
        print x.strip()
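Since the label and field divs alternate in document order, one way to go from printed text to CSV rows is to walk the divs in order and pair each label with the field div that follows it. A minimal sketch (Python 3 / bs4 syntax; it uses an inline sample standing in for post2.html, and the dates table would need similar extra handling):

```python
import csv
import io
from bs4 import BeautifulSoup

# Inline sample mirroring the structure of post2.html (an assumption
# for illustration; in practice pass open("post2.html") instead).
html = """
<div class="item_display_label"><strong>Name of Office: </strong></div>
<div class="item_display_field">Dacotah</div>
<div class="item_display_label"><strong>Federal Electoral District: </strong></div>
<div class="item_display_field">
St. Boniface
(Manitoba)
</div>
"""

soup = BeautifulSoup(html, "html.parser")

# Pair each label div with the field div that follows it.
record = {}
label = None
for div in soup.find_all("div", ["item_display_label", "item_display_field"]):
    if "item_display_label" in div["class"]:
        label = div.get_text(strip=True).rstrip(":")
    elif label is not None:
        # Collapse internal whitespace (the district spans two lines).
        record[label] = " ".join(div.get_text().split())
        label = None

# Write one CSV row per record.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Name of Office", "Federal Electoral District"])
writer.writerow([record.get("Name of Office", ""),
                 record.get("Federal Electoral District", "")])
print(buf.getvalue())
```

Note that "Postmaster Information" has a label but no following field div, which is why positional indexing like `link.contents[0]` drifts out of step; pairing by class, as above, sidesteps that.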