Python BeautifulSoup table scraping

Time: 2017-11-09 11:36:16

Tags: python json beautifulsoup

I'm looking to scrape an HTML table from an online oil-production SSRS feed. I've managed to learn enough BeautifulSoup/Python to get this far, but I think I need a little help to finish it off.

The aim is to scrape the tagged table and output the data as JSON. I have JSON-formatted output for the 10 headers, but it repeats the same data-row cell value for every header. I think the problem is the way the iteration over the cells assigns them to the headers. I'm sure it will make sense when you run it.
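To illustrate what I'm aiming for, each data row should end up keyed by its header, roughly like this (a simplified sketch; the header names and cell values below are made up, not from my real output):

# Hypothetical sketch of the mapping I want: one dict per table row,
# pairing each column header with that row's cell value (names/values invented).
header_names = ['Field', 'Year', 'Month', 'Oil']
row_cells = ['EKOFISK', '2009', '1', '1.234']
item = dict(zip(header_names, row_cells))
# -> {'Field': 'EKOFISK', 'Year': '2009', 'Month': '1', 'Oil': '1.234'}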

Any help would be much appreciated; I'm trying to understand what I'm doing wrong, as this is all quite new to me.

Cheers

import json
from bs4 import BeautifulSoup
import urllib.request
import boto3
import botocore

#Url to scrape

url = ('http://factpages.npd.no/ReportServer?/FactPages/TableView/'
    'field_production_monthly&rs:Command=Render&rc:Toolbar='
    'false&rc:Parameters=f&Top100=True&IpAddress=108.171.128.174&'
    'CultureCode=en')


#Agent detail to prevent scraping bot detection 
user_agent = ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) '
    'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 '
    'Safari/537.36')

header = {'User-Agent': user_agent}

#Request url from list above, assign headers from criteria above
req = urllib.request.Request(url, headers = header)

#Open url from the previous request and assign
npddata = urllib.request.urlopen(req, timeout = 20)

#Start soup on url request data
soup = BeautifulSoup(npddata, 'html.parser')

# Scrape the html table variable from selected website 
table = soup.find('table')


headers = {}

col_headers = soup.findAll('tr')[3].findAll('td')

for i in range(len(col_headers)):
    headers[i] = col_headers[i].text.strip()

# print(json.dumps(headers, indent = 4))


cells = {}

rows = soup.findAll('td', {
    'class': ['a61cl', 'a65cr', 'a69cr', 'a73cr', 'a77cr', 'a81cr', 'a85cr', 
    'a89cr', 'a93cr', 'a97cr']})

for row in rows[i]: #remove index!(###ISSUE COULD BE HERE####)

# findall function was original try (replace getText with FindAll to try)

    cells = row.getText('div')


# Attempt to fix, can remove and go back to above
#for i in range(len(rows)): #cells[i] = rows[i].text.strip()


#print(cells)# print(json.dumps(cells, indent = 4))


data = []

item = {}

for index in headers:
    item[headers[index]] = cells#[index]

# if no getText on line 47 then use .text here ### ISSUE COULD BE HERE ####

data.append(item)


#print(data)
print(json.dumps(data, indent = 4))
# print(item)# 
print(json.dumps(item, indent = 4))

1 answer:

Answer 0 (score: 1):

There are a few errors in your code; I fixed them and modified your code slightly:

This is what you want:

import requests
from bs4 import BeautifulSoup
import json

# Webpage connection
html = "http://factpages.npd.no/ReportServer?/FactPages/TableView/field_production_monthly&rs:Command=Render&rc:Toolbar=false&rc:Parameters=f&Top100=True&IpAddress=108.171.128.174&CultureCode=en"
r = requests.get(html)
c = r.content
soup = BeautifulSoup(c, "html.parser")


# Data cells: every <td> whose class marks it as a value cell
rows = soup.findAll('td', {
    'class': ['a61cl', 'a65cr', 'a69cr', 'a73cr', 'a77cr', 'a81cr', 'a85cr',
    'a89cr', 'a93cr', 'a97cr']})

# Header cells: the column headings use a different set of classes
headers = soup.findAll('td', {
    'class': ['a20c','a24c', 'a28c', 'a32c', 'a36c', 'a40c', 'a44c', 'a48c',
    'a52c']})

# Pull the text out of each header and data cell
headers_list = [item.getText('div') for item in headers]

rows_list = [item.getText('div') for item in rows]

# Chunk the flat list of cell values into groups of 9 (one chunk per table row)
final = [rows_list[item:item + 9] for item in range(0, len(rows_list), 9)]

# Build a dict keyed by header: each key collects that column's values
row_header = {}
for item in final:
    for indices in range(0, 9):
        if headers_list[indices] not in row_header:
            row_header[headers_list[indices]] = [item[indices]]
        else:
            row_header[headers_list[indices]].append(item[indices])

result = json.dumps(row_header, indent=4)
print(result)
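Note that row_header is column-oriented (one list of values per header), which is what the sample output below reflects. If you would rather have one JSON object per table row, a minimal variant built from the same headers_list and final lists could look like this (a sketch, not verified against the live feed):

# Row-oriented variant: one dict per table row instead of one list per column.
rows_as_dicts = [dict(zip(headers_list, row)) for row in final]
print(json.dumps(rows_as_dicts, indent=4))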

Sample output:

{
    "Year": [
        "2009",
        "2009",
        "2009",
        "2009",
        "2009",
        "2009",
        "2010",
        "2010",
        "2010",
        "2010",
        "2010",