Webscraping an .ASPX website with Selenium and/or Scrapy

Asked: 2015-02-19 00:32:01

Tags: python-2.7 web-scraping scrapy

I'm new to Python/Selenium and am writing the code below (Python on Windows) to scrape the 5,484 physician profiles on the MA Board of Registration website.

My problem: the site is .aspx, so I initially chose Selenium. However, I would greatly appreciate any insight/advice on how to write the next steps (see below). More specifically: is it more efficient to keep using Selenium, or to bring in Scrapy? Any insight is much appreciated!

  1. On the "ChooseAPhysician" page, click each physician's hyperlink ("PhysicianProfile.aspx?PhysicianID=XXXX", 1-10 per page).
  2. Follow each link and extract the "Demographic Information" fields: "phy_name", "lic_issue_date", "prim_worksetting", etc.
  3. Return to the "ChooseAPhysician" page and click "Next".
  4. Repeat for the remaining 5,474 physicians.
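Whichever framework drives the browser, the four steps above amount to one paginated loop. As a minimal sketch, the helper below just enumerates the search-result page URLs; the `Page=N` query parameter is an assumption based on the `ChooseAPhysician.aspx?Page=1` URL used in the code below, so verify it against the live site:

```python
# Sketch of the crawl plan from steps 1-4: enumerate every search-result
# page, then (elsewhere) visit the 1-10 profile links on each page.
BASE = "http://profiles.ehs.state.ma.us/Profiles/Pages/ChooseAPhysician.aspx"

def search_page_url(page):
    """URL of the Nth page of search results (assumed pagination scheme)."""
    return "%s?Page=%d" % (BASE, page)

def pages_needed(total_physicians, per_page=10):
    """How many result pages cover all physicians, at 1-10 links per page."""
    return -(-total_physicians // per_page)  # ceiling division

# 5,484 physicians at up to 10 per page -> 549 result pages to visit
urls = [search_page_url(p) for p in range(1, pages_needed(5484) + 1)]
```

Visiting each URL directly (rather than clicking "Next" 548 times) also makes it easy to resume the crawl after a failure.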

    from selenium import webdriver
    from selenium.webdriver.support.ui import Select
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.common.by import By
    
    driver = webdriver.Chrome()
    driver.get('http://profiles.ehs.state.ma.us/Profiles/Pages/ChooseAPhysician.aspx?Page=1')
    
    #Locate the elements
    zip = driver.find_element_by_xpath("//*[@id=\"ctl00_ContentPlaceHolder1_txtZip\"]")
    select = Select(driver.find_element_by_xpath("//select[@id=\"ctl00_ContentPlaceHolder1_cmbDistance\"]"))
    print select.options
    print [o.text for o in select.options]
    select.select_by_visible_text("15")
    prim_care_checkbox = driver.find_element_by_xpath("//*[@id=\"ctl00_ContentPlaceHolder1_SpecialtyGroupsCheckbox_6\"]")
    find_phy_button = driver.find_element_by_xpath("//*[@id=\"ctl00_ContentPlaceHolder1_btnSearch\"]")
    
    
    #Input zipcode, check the "primary care" box, and click the "find physician" button
    zip.send_keys("02109")
    prim_care_checkbox.click()
    find_phy_button.click()
    
    #wait for the search results to load, then open the first physician profile
    wait = WebDriverWait(driver, 10)
    
    open_phy_bio = wait.until(EC.element_to_be_clickable((By.XPATH, "//*[@id=\"PhysicianSearchResultGrid\"]/tbody/tr[2]/td[1]/a")))
    open_phy_bio.click()
    
    #Collect every profile href on the page first: navigating away would
    #invalidate the WebElement references
    links = [a.get_attribute("href") for a in driver.find_elements_by_xpath("//*[@id=\"PhysicianSearchResultGrid\"]/tbody/tr/td[1]/a")]
    for link in links:
        driver.get(link)
    
    def parse(self, response):
        item = SummaryItem()
        self.driver.get(response.url)
        time.sleep(4)  #crude fixed wait; an explicit WebDriverWait is preferable
        item["phy_name"] = self.driver.find_element_by_xpath("//*[@id=\"content\"]/center/p[1]").text
        item["lic_status"] = self.driver.find_element_by_xpath("//*[@id=\"content\"]/center/table[2]/tbody/tr[3]/td/table/tbody/tr/td[1]/table/tbody/tr[2]/td[2]/a[1]").text
        item["lic_issue_date"] = self.driver.find_element_by_xpath("//*[@id=\"content\"]/center/table[2]/tbody/tr[3]/td/table/tbody/tr/td[1]/table/tbody/tr[3]/td[2]").text
        item["prim_worksetting"] = self.driver.find_element_by_xpath("//*[@id=\"content\"]/center/table[2]/tbody/tr[3]/td/table/tbody/tr/td[1]/table/tbody/tr[5]/td[2]").text
        item["npi"] = self.driver.find_element_by_xpath("//*[@id=\"content\"]/center/table[2]/tbody/tr[3]/td/table/tbody/tr/td[2]/table/tbody/tr[6]/td[2]").text
        item["Med_sch_grad_date"] = self.driver.find_element_by_xpath("//*[@id=\"content\"]/center/table[3]/tbody/tr[3]/td/table/tbody/tr[2]/td[2]").text
        item["Area_of_speciality"] = self.driver.find_element_by_xpath("//*[@id=\"content\"]/center/table[4]/tbody/tr[3]/td/table/tbody/tr/td[2]").text
        item["link"] = response.url
        return item
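
Independently of the Selenium-vs-Scrapy choice, the raw `.text` values scraped in `parse()` tend to carry stray whitespace and empty strings. A small normalization helper, sketched below as a plain-dict stand-in for `SummaryItem` (the helper names are illustrative, not part of the code above), keeps the exported records clean:

```python
def clean_field(raw):
    """Collapse runs of whitespace in a scraped .text value; empty -> None."""
    if raw is None:
        return None
    value = " ".join(raw.split())
    return value or None

def build_item(raw_fields):
    """Normalize every scraped field into a plain dict ready to export."""
    return dict((k, clean_field(v)) for k, v in raw_fields.items())

item = build_item({
    "phy_name": "  JOHN A.\n SMITH, M.D. ",
    "lic_issue_date": "01/15/1998",
    "npi": "",
})
```

Running the cleanup once, just before returning each item, also makes later steps (CSV export, matching across the 5,484 profiles) less error-prone.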
    

0 Answers:

No answers yet.