Scraping HTML data with Python BeautifulSoup and writing it to a CSV file

Posted: 2017-01-02 14:34:05

Tags: python csv

Please see the following link:

http://www.bseindia.com/stock-share-price/stockreach_financials.aspx?scripcode=505200&expandable=0

I tried the following:

from bs4 import BeautifulSoup as soup
import csv
from pandas import read_csv
import requests

file_path=r'C:\Users\PreciseT3\Desktop\EicherStockDetails.csv'
eicher_stock_url='http://www.bseindia.com/stock-share-price/stockreach_financials.aspx?scripcode=505200&expandable=0'
get_url=requests.get(eicher_stock_url)
target_table=soup(get_url.text,'lxml')
extracted_table_data=target_table.find('table',id='acr')
datasets=[]
col_names=[]
count=1

with open(file_path, 'r+') as file:
    writer = csv.writer(file)
    col_names.append('Years')
    for years_row in extracted_table_data.find('tr').find_all('td', class_='TTHeader'):
        if not (years_row.get_text() == '(in Cr.)'):
            print(years_row.get_text())
            col_names.append(years_row.get_text())

    writer.writerow(col_names)

with open(file_path, 'r+') as file:
    writer = csv.writer(file)
    for row_headings in extracted_table_data.find('tr').find('td', class_='TTRow_left'):
        col_names.append(row_headings)
        for row_values in extracted_table_data.find('tr').find_all('td', class_='TTRow_right', text=lambda x: '6,188.03' in x or '3,031.22' in x or '1,702.47' in x or '1,049.26' in x or '670.95' in x):
            col_names.append(row_values.get_text())

    writer.writerow(col_names)

My result is as follows:

Years,2016,2014,2013,2012,2011,Revenue,"6,188.03","3,031.22","1,702.47","1,049.26",670.95

My requirements are:

  • Instead of the '(in Cr.)' column name, I need to change it to 'Year'

  • I need to write the data out as a CSV file (in a format csv.writer supports), and I want to transpose (T) the rows and columns

  • I need to add extra columns from another HTML page (some examples would help)
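The first two requirements (renaming '(in Cr.)' to 'Year' and transposing rows and columns) can be sketched with pandas. This assumes the scraped data has already been loaded into a DataFrame; the values below are copied from the result line shown above, and the output filename is illustrative:

```python
import pandas as pd

# Hypothetical frame matching the scraped row from the question
df = pd.DataFrame(
    [["6,188.03", "3,031.22", "1,702.47", "1,049.26", "670.95"]],
    columns=["2016", "2014", "2013", "2012", "2011"],
    index=["Revenue"],
)

# Transpose so the years become rows and the metrics become columns
transposed = df.T
# The '(in Cr.)' header position becomes the index name 'Year'
transposed.index.name = "Year"
transposed.to_csv("EicherStockDetails.csv")
```

`DataFrame.T` swaps rows and columns in one step, and `to_csv` writes the index (the years) as the first column under the 'Year' header.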

Please help me; I can't get any further on my own. Thanks in advance.

1 Answer:

Answer 0 (score: 0)

I've made some modifications to this code, but the logic should be easy to follow. I've used the 'Cr' and 'Year' rows as the splitter for this basic financial data, but you can also pull out the millions/qtr table by adjusting the 'main_split' part of the code.

from bs4 import BeautifulSoup
import urllib.request  # urllib2 in the original Python 2 version
import pandas as pd

url = 'http://www.bseindia.com/stock-share-price/stockreach_financials.aspx?scripcode=505200&expandable=0'
html = urllib.request.urlopen(url).read()
soup = BeautifulSoup(html, "html.parser")

# Collect every table row on the page as a list of cell texts
main = []
for tr in soup.findAll('tr'):
    main.append([td.text for td in tr.findAll('td')])

# Rows containing '--' act as separators between the page's sub-tables
splitter = [any('--' in x for x in row) for row in main]
split_index = [i for i, flag in enumerate(splitter) if flag]

# Slice out the block of rows between the 8th and 9th separator rows
main_split = main[(split_index[7] + 2):(split_index[8] - 2)]

# Transpose the block, pop the first column as headers, build the frame
main_zip = list(zip(*main_split))
DF = pd.DataFrame(main_zip, columns=[x.replace(' ', '_') for x in main_zip.pop(0)])
print(DF)
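For the question's third requirement (adding columns from another HTML page), one option is to scrape the second page into another DataFrame the same way and merge the two on a shared key such as the year. A minimal sketch with hypothetical in-memory frames; the `Net_Profit` column name and its figures are placeholders, not values from the actual page:

```python
import pandas as pd

# Frame built from the financials page (values from the question's output)
financials = pd.DataFrame({
    "Year": ["2016", "2014", "2013"],
    "Revenue": ["6,188.03", "3,031.22", "1,702.47"],
})

# A second frame scraped from another page, keyed on the same 'Year'
extra = pd.DataFrame({
    "Year": ["2016", "2014", "2013"],
    "Net_Profit": ["1,304.02", "615.71", "324.32"],  # placeholder figures
})

# Left-merge keeps every financials row and appends the matching columns
combined = financials.merge(extra, on="Year", how="left")
combined.to_csv("EicherStockDetails.csv", index=False)
```

A left merge (`how="left"`) keeps all rows from the first frame even if the second page is missing a year; the unmatched cells simply come out empty.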

Hope this helps. Cheers.