Scraping a div with a confirmation popup

Time: 2017-12-08 19:18:21

Tags: python html selenium-webdriver web-scraping href

I am trying to scrape files from this website:

https://data.gov.in/catalog/complete-towns-directory-indiastatedistrictsub-district-level-census-2011

I want to download the Excel sheet with the complete town directory for TRIPURA, the first item in the grid list.

My code is:

import requests
from bs4 import BeautifulSoup

URL = 'https://data.gov.in/catalog/complete-towns-directory-indiastatedistrictsub-district-level-census-2011'

with requests.Session() as session:
    session.headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/40.0.2214.115 Safari/537.36'}
    response = session.get(URL)

soup = BeautifulSoup(response.content, 'html.parser')

The element that points at the file we want is shown below. How do I actually download the specific Excel file? Clicking the link opens another window where a purpose and an email address have to be entered. It would be great if you could provide a solution for this.

<div class="view-content">
<div class="views-row views-row-1 views-row-odd views-row-first ogpl-grid-list">
<div class="views-field views-field-title"> <span class="field-content"><a href="/resources/complete-town-directory-indiastatedistrictsub-district-level-census-2011-tripura"><span class="title-content">Complete Town Directory by India/State/District/Sub-District Level, Census 2011 - TRIPURA</span></a></span> </div>
<div class="views-field views-field-field-short-name confirmation-popup-177303 download-confirmation-box file-container excel"> <div class="field-content"><a class="177303 data-extension excel" href="https://data.gov.in/resources/complete-town-directory-indiastatedistrictsub-district-level-census-2011-tripura" target="_blank" title="excel (Open in new window)">excel</a></div> </div>
<div class="views-field views-field-dms-allowed-operations-3 visual-access"> <span class="field-content">Visual Access: NA</span> </div>
<div class="views-field views-field-field-granularity"> <span class="views-label views-label-field-granularity">Granularity: </span> <div class="field-content">Decadal</div> </div>
<div class="views-field views-field-nothing-1 download-file"> <span class="field-content"><span class="download-filesize">File Size: 44.5 KB</span></span> </div>
<div class="views-field views-field-field-file-download-count"> <span class="field-content download-counts"> Download: 529</span> </div>
<div class="views-field views-field-field-reference-url"> <span class="views-label views-label-field-reference-url">Reference URL: </span> <div class="field-content"><a href="http://www.censusindia.gov.in/2011census/Listofvillagesandtowns.aspx">http://www.censusindia.gov.in/2011census...</a></div> </div>
<div class="views-field views-field-dms-allowed-operations-1 vote_request_data_api"> <span class="field-content"><a class="api-link" href="https://data.gov.in/resources/complete-town-directory-indiastatedistrictsub-district-level-census-2011-tripura/api" title="View API">Data API</a></span> </div>
<div class="views-field views-field-field-note"> <span class="views-label views-label-field-note">Note: </span> <div class="field-content ogpl-more">NA</div> </div>
<div class="views-field views-field-dms-allowed-operations confirmationpopup-177303 data-export-cont"> <span class="views-label views-label-dms-allowed-operations">EXPORT IN: </span> <span class="field-content"><ul></ul></span> </div> </div>
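As an aside, the numeric id embedded in those class attributes (confirmation-popup-177303, and the first class of the anchor) can be read with BeautifulSoup. A minimal sketch, assuming the markup is exactly as quoted above:

```python
from bs4 import BeautifulSoup

# The "excel" div from the listing, as quoted above.
html = '''
<div class="views-field views-field-field-short-name confirmation-popup-177303 download-confirmation-box file-container excel">
<div class="field-content"><a class="177303 data-extension excel" href="https://data.gov.in/resources/complete-town-directory-indiastatedistrictsub-district-level-census-2011-tripura" target="_blank" title="excel (Open in new window)">excel</a></div>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')
# BeautifulSoup exposes the class attribute as a list of class names;
# the numeric id is the first one.
node_id = soup.find('a')['class'][0]
print(node_id)  # → 177303
```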

1 Answer:

Answer 0 (score: 0):

When you click the Excel link, it opens a page of the form:

https://data.gov.in/node/ID/download

where ID appears to be the first class name of the link's anchor element, e.g. 177303, so it can be read with t.find('a')['class'][0]. There may be a cleaner way to get the id, but using the class name works. That download page then redirects to the final URL of the file via a meta tag.

Here is the code to collect all of the file URLs in the list; downloading them under their default filenames builds on this post:

import requests
from bs4 import BeautifulSoup

URL = 'https://data.gov.in/catalog/complete-towns-directory-indiastatedistrictsub-district-level-census-2011'

src = requests.get(URL)
soup = BeautifulSoup(src.content, 'html.parser')

# The numeric node id is stored as the first class of the <a> inside
# each "excel" div, e.g. <a class="177303 data-extension excel" ...>.
node_list = [
    t.find('a')['class'][0]
    for t in soup.findAll("div", {"class": "excel"})
]

url_list = []

for node_id in node_list:
    # Each /node/<id>/download page points at the real file through a
    # meta refresh tag; split its content attribute on "=" to get the URL.
    node = requests.get("https://data.gov.in/node/{0}/download".format(node_id))
    soup = BeautifulSoup(node.content, 'html.parser')
    file_url = soup.find_all("meta")[1]["content"].split("=")[1]
    url_list.append(file_url)

print(url_list)
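The code above stops at printing the URLs. The download step the answer mentions could look like the sketch below, which streams each file to disk and derives the filename from the last segment of the URL path; the helper name download_file is mine, not from the original post:

```python
import os
import requests
from urllib.parse import urlparse

def download_file(url, out_dir="."):
    """Stream a file to disk, naming it after the last URL path segment."""
    filename = os.path.basename(urlparse(url).path) or "download.bin"
    path = os.path.join(out_dir, filename)
    with requests.get(url, stream=True) as resp:
        resp.raise_for_status()
        with open(path, "wb") as fh:
            # Write in chunks so large files are not held in memory at once.
            for chunk in resp.iter_content(chunk_size=8192):
                fh.write(chunk)
    return path

# Usage, given the url_list collected above:
# for url in url_list:
#     download_file(url)
```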