Export to CSV in a clean format

Asked: 2019-02-12 05:15:14

Tags: html pandas csv beautifulsoup screen-scraping

I want to write this data to a CSV file so that I can loop over and scrape many companies from this website.

I put this code together with help from Stack Overflow itself, and I would like to convert this printed output to Excel or CSV, with or without the "₨ 149 Each" column.

import pandas as pd
import requests
from bs4 import BeautifulSoup as bs

url = 'https://www.zaubacorp.com/documents/KAKDA/U01122MP1985PTC002857'
res = requests.get(url)
soup = bs(res.content, 'lxml')

# One <h3 class="pull-left"> section title per table on the page
headers = [header.text for header in soup.select('h3.pull-left')]
# pd.read_html returns a list of DataFrames, one per <table> element
tables = pd.read_html(url)

for header, table in zip(headers, tables):
    print(header)
    print(table)

Output:

Certificates
         Date                         Title   ₨ 149 Each
0  2006-04-24  Certificate of Incorporation  Add to Cart
1  2006-04-24  Certificate of Incorporation  Add to Cart
Other Documents Attachment
         Date Title   ₨ 149 Each
0  2006-04-24   AOA  Add to Cart
1  2006-04-24   AOA  Add to Cart
2  2006-04-24   MOA  Add to Cart
3  2006-04-24   MOA  Add to Cart
Annual Returns and balance sheet Eform
         Date                    Title   ₨ 149 Each
0  2006-04-24  Annual Return 2002_2003  Add to Cart
1  2006-04-24  Annual Return 2003_2004  Add to Cart


1 answer:

Answer 0 (score: 0)

It is not entirely clear what you want the output to look like. However, once you have merged the DataFrames, you can write them to CSV with pandas.

import pandas as pd
import requests
from bs4 import BeautifulSoup as bs

url = 'https://www.zaubacorp.com/documents/KAKDA/U01122MP1985PTC002857'
res = requests.get(url)
soup = bs(res.content, 'lxml')
headers = [header.text for header in soup.select('h3.pull-left')]
tables = pd.read_html(url)

# Skip the first row of each table
tables = [table[1:] for table in tables]

df = pd.concat(tables)
# Use the <h3> section titles as column names; this assumes the number of
# columns in the merged frame equals the number of titles found
df.columns = headers
df = df.reset_index(drop=True)

df.to_csv('path/to/filename.csv', index=False)
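If the goal is instead to keep the page's own column headings (Date, Title, ...) and remember which section each row came from, one option is to tag each table with its section title before concatenating, and optionally drop the "₨ 149 Each" column. A minimal sketch, using hypothetical stand-in DataFrames in place of the live `pd.read_html` result and a hypothetical `documents.csv` output path:

```python
import pandas as pd

# Stand-ins for the scraped (header, table) pairs; in the real script these
# come from soup.select('h3.pull-left') and pd.read_html(url)
headers = ['Certificates', 'Other Documents Attachment']
tables = [
    pd.DataFrame({'Date': ['2006-04-24'],
                  'Title': ['Certificate of Incorporation'],
                  '₨ 149 Each': ['Add to Cart']}),
    pd.DataFrame({'Date': ['2006-04-24', '2006-04-24'],
                  'Title': ['AOA', 'MOA'],
                  '₨ 149 Each': ['Add to Cart', 'Add to Cart']}),
]

frames = []
for header, table in zip(headers, tables):
    table = table.copy()
    table['Section'] = header          # record which <h3> the rows came from
    frames.append(table)

df = pd.concat(frames, ignore_index=True)
df = df.drop(columns=['₨ 149 Each'])   # drop the price column if unwanted
df.to_csv('documents.csv', index=False)
```

This keeps one row per document with its original columns, so the CSV stays usable when looping over many companies.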