Web crawling with Python and Selenium

Date: 2017-09-18 13:32:56

Tags: python selenium

I am trying to scrape data from a website, but the problem is that there is a "load more" button to reveal the next 50 records, and I have to keep clicking it until the records run out.

Right now I can only fetch the first 50 names and addresses; I need all of them, up to the end of "load more".

To click the button dynamically I am using Selenium with Python.

I want to find the names, addresses, and contact numbers of all retailers in a city.

My attempt:

import time
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException, TimeoutException

url = "https://www.test.in/chemists/medical-store/gujarat/surat"
browser = webdriver.Chrome()
browser.get(url)

time.sleep(1)
html = browser.page_source
soup = BeautifulSoup(html, "lxml")

try:
    for row in soup.find_all("div", {"class": "listing"}):
        name = row.h3.a.string
        address = row.p.get_text()
        # contact number only appears after clicking the retailer's name
        print(name)
        print(address)

    # this only clicks "load more" once; it needs to run in a loop
    button = browser.find_element_by_id("loadmore")
    button.click()

except (NoSuchElementException, TimeoutException):
    isrunning = 0

#browser.close()
#browser.quit()
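For reference, a minimal sketch of how the click could be repeated until the button stops appearing, assuming the button keeps its loadmore id after every click (the answer below sidesteps Selenium entirely):

from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Chrome()
browser.get("https://www.test.in/chemists/medical-store/gujarat/surat")

while True:
    try:
        # wait until the "load more" button is clickable, then click it
        button = WebDriverWait(browser, 10).until(
            EC.element_to_be_clickable((By.ID, "loadmore")))
        button.click()
    except TimeoutException:
        # the button never became clickable again: everything is loaded
        break

html = browser.page_source  # now contains all listings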

1 Answer:

Answer 0 (score: 1)

If you inspect the network calls made when you click load more, you can see that it is a POST request whose parameters are the city, the state, and a page number. So instead of driving the page in Selenium, you can do this with the plain requests module. For example, the following function performs the load-more for you as you iterate through the pages:

import requests

def hitter(page):
    url = "https://www.healthfrog.in/importlisting.html"

    # same form data the "load more" button posts
    payload = "page="+str(page)+"&mcatid=chemists&keyword=medical-store&state=gujarat&city=surat"
    headers = {
        'content-type': "application/x-www-form-urlencoded",
        'connection': "keep-alive",
        'cache-control': "no-cache"
    }

    response = requests.request("POST", url, data=payload, headers=headers)
    return response.text

The function above fetches the HTML of a page containing the names and addresses. You can now iterate through the pages until you hit one that returns no content. For example, if you try state Karnataka and city Mysore, you will notice the difference between the third and fourth pages; that tells you where to stop.
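As a minimal sketch of that stopping rule, assuming (as the working scraper below does) that an exhausted page comes back as whitespace only:

import re

page = 1
while True:
    html = hitter(page)
    if re.match(r'\A\s*\Z', html):  # empty page: no more records
        break
    # ... parse names and addresses from html here ...
    page += 1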

To get the phone numbers, you can request the HTML that the <h3> tag of each listing links to (in the previous response). Sample HTML:

<div class="listing">
    <h3>
        <a href="https://www.healthfrog.in/chemists/sunny-medical-store-surat-v8vcr3alr.html">Sunny Medical Store</a>
    </h3>
    <p>
        <i class="fa fa-map-marker"></i>155 - Shiv Shakti Society,  Punagam, , Surat, Gujarat- 394210,India
    </p>
</div>

You need to parse that HTML and work out where the phone number sits; then you can fill it in. You can request this sample page with:

html = requests.get('https://www.healthfrog.in/chemists/sunny-medical-store-surat-v8vcr3alr.html').text

You can now parse the HTML with BeautifulSoup as before.
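For example, a minimal sketch that pulls a 10-digit number out of the detail page; where exactly the number sits is an assumption here (the working scraper below locates it next to the fa fa-mobile icon instead):

import re
import requests
from bs4 import BeautifulSoup

detail_url = 'https://www.healthfrog.in/chemists/sunny-medical-store-surat-v8vcr3alr.html'
soup = BeautifulSoup(requests.get(detail_url).text, "html.parser")

# assumption: the number appears as a plain 10-digit run in the page text
match = re.search(r'\d{10}', soup.get_text())
phone = match.group(0) if match else None
print(phone)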

Using requests instead of Selenium has many advantages here: you do not need to open and close a browser window for every phone number, and you avoid stale elements every time load more fires. It is also much faster.

Please note: when scraping like this, follow the rules the website lays down, and do not bring it down by sending too many requests.
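One simple way to stay polite, sketched under the assumption that a fixed delay is acceptable (the 3-second pause mirrors the one in the scraper below; polite_get is a hypothetical helper, not part of the answer's code):

import time
import requests

session = requests.Session()  # one connection reused across all requests

def polite_get(url, delay=3):
    time.sleep(delay)  # fixed pause so the server is not flooded
    return session.get(url).text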

Edit: a working scraper.

import requests, time, re
from bs4 import BeautifulSoup

def hitter(page, state="Gujarat", city="Surat"):
    url = "https://www.healthfrog.in/importlisting.html"

    payload = "page="+str(page)+"&mcatid=chemists&keyword=medical-store&state="+state+"&city="+city
    headers = {
        'content-type': "application/x-www-form-urlencoded",
        'connection': "keep-alive",
        'cache-control': "no-cache"
    }

    response = requests.request("POST", url, data=payload, headers=headers)
    return response.text

def getPhoneNo(link):
    time.sleep(3)  # pause between detail-page requests
    soup1 = BeautifulSoup(requests.get(link).text, "html.parser")
    try:
        # the number sits in the text right after the mobile icon
        f = soup1.find('i', class_='fa fa-mobile').next_element
        phone = re.search(r'(\d{10})', f).group(1)
    except AttributeError:
        # icon missing, or no 10-digit number next to it
        phone = None
    return phone

def getChemists(soup):
    stores = []
    for row in soup.find_all("div", {"class": "listing"}):
        dummy = {
            'name': row.h3.a.string,
            'address': row.p.get_text(),
            'phone': getPhoneNo(row.h3.a.get_attribute_list('href')[0])
        }
        print(dummy)
        stores.append(dummy)

    return stores

if __name__ == '__main__':
    page, chemists = 1, []
    city, state = 'Gulbarga', 'Karnataka'
    html = hitter(page, state, city)
    # keep going until a page comes back empty (whitespace only)
    while not re.match(r'\A\s*\Z', html):
        soup = BeautifulSoup(html, 'html.parser')
        chemists += getChemists(soup)
        page += 1
        html = hitter(page, state, city)
    print(chemists)
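If you want to keep the results, a short sketch of writing the list of dicts to CSV (the filename chemists.csv is arbitrary):

import csv

with open('chemists.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['name', 'address', 'phone'])
    writer.writeheader()
    writer.writerows(chemists)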