Extracting multiple tables from a webpage containing hyperlinks, using R

Asked: 2017-12-15 17:08:16

Tags: html r xml xpath web-scraping

This is my first attempt at web scraping. I am trying to extract a column of tables (column name: Oil and Gas table) from this webpage: Oil and Gas Data. Extracting data for a single state is easy by using that state's link, e.g. Alabama Data. However, I want a program that extracts the data for all states shown in the HTML, keeping the per-state, per-year structure. Based on similar posts I found earlier, I have loaded the RCurl, XML, rlist and purrr packages.

How can I use R (Rcurl/XML packages ?!) to scrape this webpage? This solution looks complete, but the webpage in that question has probably changed since it was posted (I tried to imitate it, but could not).

R: XPath expression returns links outside of selected element. How can I use XPath to extract the tables I need, given that all the state links follow the pattern "stateinitials_table.html", e.g. "al_table.html" for Alabama? (view source)

library(RCurl)
library(XML)
library(rlist)

theurl <- getURL("https://www.eia.gov/naturalgas/archive/petrosystem/al_table.html",.opts = list(ssl.verifypeer = FALSE) )
tables <- readHTMLTable(theurl)
tables <- list.clean(tables, fun = is.null, recursive = FALSE)
berilium<-tables[seq(3,length(tables),2)]

This is the output for "al_table.html": a list of 15 data frames, one for each year, each with 26 rows and 17 columns.

So I need to:

Create a function (XPath vs. readHTMLTable; XPath seems better) that extracts all the tables from the main web link, labelled by state and year as they appear on the webpage. (I'm not worried about cleaning up useless columns and rows for now.)
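A rough sketch of that first step, using the packages mentioned above, might look like the following (an assumption on my part, not tested against the live site); the XPath just keeps every link whose href contains "_table.html":

library(RCurl)
library(XML)

main_url  <- "https://www.eia.gov/naturalgas/archive/petrosystem/petrosysog.html"
main_html <- getURL(main_url, .opts = list(ssl.verifypeer = FALSE))
doc       <- htmlParse(main_html, asText = TRUE)

# every <a> whose href contains "_table.html" points to a state's yearly-data page
state_links <- xpathSApply(doc, "//a[contains(@href, '_table.html')]",
                           xmlGetAttr, "href")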

2 Answers:

Answer 0 (score: 2)

This is more of a blog post or tutorial than an SO answer, but I appreciate the desire to learn, and I also happen to be writing a book on this topic, so this seemed like a good example.

library(rvest)
library(xml2)      # for xml_find_first()/xml_remove() used below
library(tidyverse)

We'll start with the top-level page:

pg <- read_html("https://www.eia.gov/naturalgas/archive/petrosystem/petrosysog.html")

Now we'll use an XPath expression that grabs only the table rows containing state data. Compare the XPath with the tags in the HTML and it should make sense: find all <tr>s that have no colspan attribute, and keep only the remaining <tr>s that have both the right class and a state link:

states <- html_nodes(pg, xpath=".//tr[td[not(@colspan) and 
                     contains(@class, 'links_normal') and a[@name]]]") 

data_frame(
  state = html_text(html_nodes(states, xpath=".//td[1]")),
  link = html_attr(html_nodes(states, xpath=".//td[2]/a"), "href")
) -> state_tab

We put this in a data frame to keep things tidy and convenient.

You'll need the function that comes after this next block of code, but I need to explain the iteration before showing that function.

We need to iterate over each link. On each iteration, we:

  • pause, because your need for the data is not more important than the load on EIA's servers
  • find all the "branch" <div>s, since they hold the two pieces of information we need (the state + year label, and the data table for that state and year)
  • wrap it all up in a tidy data frame

Rather than clutter up the anonymous function, we'll put that functionality in a separate function (which, again, needs to be defined before this iterator will work).

pb <- progress_estimated(nrow(state_tab))
map_df(state_tab$link, ~{

  pb$tick()$print()

  pg <- read_html(sprintf("https://www.eia.gov/naturalgas/archive/petrosystem/%s", .x))

  Sys.sleep(5) # scrape responsibly

  html_nodes(pg, xpath=".//div[@class='branch']") %>% 
    map_df(extract_table)

}) -> og_df

This is where the hard work gets done. We need to find all the State + Year labels on the page (each sits in a <table>), and then find the tables that contain the data. I take the liberty of removing the explanatory blurb at the bottom of each table and of turning each into a tibble (but that's just my class preference):

extract_table <- function(pg) {

  # t1: the "State Year" label row; t2: the data table inside this 'branch' <div>
  t1 <- html_nodes(pg, xpath=".//../tr[td[contains(@class, 'SystemTitle')]][1]")
  t2 <- html_nodes(pg, xpath=".//table[contains(@summary, 'Report')]")

  state_year <- (html_text(t1, trim=TRUE) %>% strsplit(" "))[[1]]

  # remove the explanatory blurb at the bottom of the table
  xml_find_first(t2, "td[@colspan]") %>% xml_remove()

  html_table(t2, header=FALSE)[[1]] %>% 
    mutate(state=state_year[1], year=state_year[2]) %>% 
    tbl_df()

}
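If you want to sanity-check the helper before running the full loop, you can try it on a single "branch" <div> from one state page; a small check, using the Alabama page from the question as an example:

# Optional: run extract_table() on the first 'branch' <div> of al_table.html
test_pg  <- read_html("https://www.eia.gov/naturalgas/archive/petrosystem/al_table.html")
branches <- html_nodes(test_pg, xpath=".//div[@class='branch']")
extract_table(branches[[1]])  # should yield one tibble with state and year columns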

Re-pasting the iteration code from above just to make sure you know it has to come after the function:

pb <- progress_estimated(nrow(state_tab))
map_df(state_tab$link, ~{

  pb$tick()$print()

  pg <- read_html(sprintf("https://www.eia.gov/naturalgas/archive/petrosystem/%s", .x))

  Sys.sleep(5) # scrape responsibly

  html_nodes(pg, xpath=".//div[@class='branch']") %>% 
    map_df(extract_table)

}) -> og_df

And, it works (you said you'd do the final cleaning separately):

glimpse(og_df)
## Observations: 14,028
## Variables: 19
## $ X1    <chr> "", "Prod.RateBracket(BOE/Day)", "0 - 1", "1 - 2", "2 - 4", "4 - 6", "...
## $ X2    <chr> "", "||||", "|", "|", "|", "|", "|", "|", "|", "|", "|", "|", "|", "|"...
## $ X3    <chr> "Oil Wells", "# ofOilWells", "26", "19", "61", "61", "47", "36", "250"...
## $ X4    <chr> "Oil Wells", "% ofOilWells", "5.2", "3.8", "12.1", "12.1", "9.3", "7.1...
## $ X5    <chr> "Oil Wells", "AnnualOilProd.(Mbbl)", "4.1", "7.8", "61.6", "104.9", "1...
## $ X6    <chr> "Oil Wells", "% ofOilProd.", "0.1", "0.2", "1.2", "2.1", "2.2", "2.3",...
## $ X7    <chr> "Oil Wells", "OilRateper Well(bbl/Day)", "0.5", "1.4", "3.0", "4.9", "...
## $ X8    <chr> "Oil Wells", "AnnualGasProd.(MMcf)", "1.5", "3.5", "16.5", "19.9", "9....
## $ X9    <chr> "Oil Wells", "GasRateper Well(Mcf/Day)", "0.2", "0.6", "0.8", "0.9", "...
## $ X10   <chr> "", "||||", "|", "|", "|", "|", "|", "|", "|", "|", "|", "|", "|", "|"...
## $ X11   <chr> "Gas Wells", "# ofGasWells", "365", "331", "988", "948", "867", "674",...
## $ X12   <chr> "Gas Wells", "% ofGasWells", "5.9", "5.4", "16.0", "15.4", "14.1", "10...
## $ X13   <chr> "Gas Wells", "AnnualGasProd.(MMcf)", "257.6", "1,044.3", "6,360.6", "1...
## $ X14   <chr> "Gas Wells", "% ofGasProd.", "0.1", "0.4", "2.6", "4.2", "5.3", "5.4",...
## $ X15   <chr> "Gas Wells", "GasRateper Well(Mcf/Day)", "2.2", "9.2", "18.1", "30.0",...
## $ X16   <chr> "Gas Wells", "AnnualOilProd.(Mbbl)", "0.2", "0.6", "1.6", "2.0", "2.4"...
## $ X17   <chr> "Gas Wells", "OilRateper Well(bbl/Day)", "0.0", "0.0", "0.0", "0.0", "...
## $ state <chr> "Alabama", "Alabama", "Alabama", "Alabama", "Alabama", "Alabama", "Ala...
## $ year  <chr> "2009", "2009", "2009", "2009", "2009", "2009", "2009", "2009", "2009"...
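Since the final cleaning is up to you, one possible first pass (just a suggestion, not part of the pipeline above) is to drop the "|" spacer columns and the repeated header rows visible in the glimpse output:

og_df %>%
  select(-X2, -X10) %>%                                 # "|" spacer columns
  filter(!X1 %in% c("", "Prod.RateBracket(BOE/Day)"))   # repeated header rows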

Answer 1 (score: 1)

Hope this helps!

library(rvest)
library(dplyr)

main_page <- read_html("https://www.eia.gov/naturalgas/archive/petrosystem/petrosysog.html") 
# state names listed on the main page
state <- main_page %>%
  html_nodes(xpath='//td[contains(@width, "110")]') %>%
  html_children() %>%
  html_text()
# relative links to each state's yearly-data page
state_link <- main_page %>%
  html_nodes(xpath='//td[contains(@width, "160")]') %>%
  html_children() %>%
  html_attr('href')

final_data <- list()
for (i in 1:length(state)){
  child_page <- read_html(paste0("https://www.eia.gov/naturalgas/archive/petrosystem/",state_link[i]))
  Sys.sleep(5)

  # "State Year" headings, one per yearly table on the state's page
  child_page_stateAndYear <- child_page %>%
    html_nodes(xpath = '//td[@class="c SystemTitle" and @style=" font-size: 14pt; color: #CC0000;"]') %>%
    html_text
  # the yearly data tables; drop the last row of each (explanatory blurb)
  child_page_table <- lapply(
    (child_page %>%
       html_nodes(xpath = '//table[contains(@class, "Table")]') %>%
       html_table()), 
    function(x) x[-nrow(x),])
  final_data[[state[i]]] <- setNames(child_page_table, child_page_stateAndYear)

  print(paste('Scraped data for', state[i], '...'))
  flush.console()
}
print('Congratulations! You have finished scraping the required data!')

names(final_data)
names(final_data[[1]])

final_data has 34 elements (the states available on the main webpage), and each element is a list of tables (that state's yearly table data). So if you want the data for Alabama in 2009, just type

final_data[['Alabama']]['Alabama 2009']
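Note that single brackets return a one-element list; use double brackets if you want the data frame itself:

final_data[['Alabama']][['Alabama 2009']]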

(Note: you may need to do some data cleaning.)
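For instance, a minimal sketch of flattening the nested list into one data frame (coercing columns to character first, since the yearly tables may not have identical column types):

library(dplyr)

# Bind each state's yearly tables together (keeping the "State Year" label),
# then bind all states together (keeping the state name).
flat_data <- bind_rows(
  lapply(final_data, function(state_tables) {
    bind_rows(lapply(state_tables, function(tbl) mutate_all(tbl, as.character)),
              .id = "state_year")
  }),
  .id = "state"
)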

EDIT: As suggested by @hrbrmstr, added logic to pause for a while before scraping the next webpage.