How to increment a link

Asked: 2017-08-23 13:24:45

Tags: python web-scraping scrapy scrapy-spider scrapy-splash

I have a link: https://www.glassdoor.ca/Job/canada-data-jobs-SRCH_IL.0,6_IN3_KE7,11_IP1.htm

I want to increment the link like this: https://www.glassdoor.ca/Job/canada-data-jobs-SRCH_IL.0,6_IN3_KE7,11_IP2.htm

and then 3, 4, 5, and so on. My code is:

# -*- coding: utf-8 -*-
import scrapy


class GlassdoorSpider(scrapy.Spider):

    name = 'glassdoor'
    #allowed_domains = ['https://www.glassdoor.ca/Job/canada-data-jobs-SRCH_IL.0,6_IN3_KE7,11.htm']
    start_urls = ['https://www.glassdoor.ca/Job/canada-data-jobs-SRCH_IL.0,6_IN3_KE7,11_IP1.htm']

    def parse(self, response):
        #main_url = "https://www.glassdoor.ca"
        urls = response.css('li.jl > div > div.flexbox > div > a::attr(href)').extract()

        for url in urls:
            url = "https://www.glassdoor.ca" + url
            yield scrapy.Request(url=url, callback=self.parse_details)

        next_page_url = "https://www.glassdoor.ca/Job/canada-data-jobs-SRCH_IL.0,6_IN3_KE7,11_IP"
        if next_page_url:
            #next_page_url = response.urljoin(next_page_url)
            yield scrapy.Request(url=next_page_url, callback=self.parse)

    def parse_details(self, response):
        yield {
            'Job_Title': response.css('div.header.cell.info > h2::text').extract()
        }
        self.log("reached22: " + response.url)

I want to increment it in the variable next_page_url.
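In other words, the goal is roughly this (a sketch, not from the post; url_template, page and max_pages are illustrative names, and max_pages is a made-up stop condition):

import scrapy

class GlassdoorSpider(scrapy.Spider):
    name = 'glassdoor'
    page = 1
    max_pages = 30  # hypothetical upper bound; replace with the real last page
    url_template = 'https://www.glassdoor.ca/Job/canada-data-jobs-SRCH_IL.0,6_IN3_KE7,11_IP{}.htm'

    def start_requests(self):
        yield scrapy.Request(self.url_template.format(self.page), callback=self.parse)

    def parse(self, response):
        # ... extract and follow the job links here, as in the original parse ...
        if self.page < self.max_pages:
            self.page += 1
            yield scrapy.Request(self.url_template.format(self.page), callback=self.parse)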

3 Answers:

Answer 0 (Score: 2)

You're right that it can't be found in the page source at the same place where you inspect it. However, you can see in the page source that it appears under <head> as

<link rel="next" href="https://www.monster.ca/jobs/search/?q=data-analyst&amp;page=2" />

You can extract it with:
next_page_link = response.xpath('//head/link[@rel="next"]/@href').extract_first()
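As a usage sketch (an addition, not part of the answer), the extracted link can then be followed from parse:

next_page_link = response.xpath('//head/link[@rel="next"]/@href').extract_first()
if next_page_link:
    # follow the next page with the same callback
    yield scrapy.Request(response.urljoin(next_page_link), callback=self.parse)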

Answer 1 (Score: 2)

To get the second page you can do this:

import requests

headers = {
    'Pragma': 'no-cache',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Referer': 'https://www.monster.ca/jobs/search/?q=data-analyst',
    'Connection': 'keep-alive',
    'Cache-Control': 'no-cache',
}
# for other pages, change the page number
params = (
    ('q', 'data-analyst'),
    ('page', '2'),
)

r = requests.get('https://www.monster.ca/jobs/search/', headers=headers, params=params)
print(r.text)

To get all the pages, you should first get the number of the last page:

for page_number in range(2, last_page + 1):
    # put page_number in params, as sketched below
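Filled in, a sketch assuming last_page is already an integer and headers is the dict above:

# assumes the `headers` dict above and an integer `last_page`
for page_number in range(2, last_page + 1):
    params = (
        ('q', 'data-analyst'),
        ('page', str(page_number)),
    )
    r = requests.get('https://www.monster.ca/jobs/search/',
                     headers=headers, params=params)
    print(r.status_code, r.url)  # process r.text as needed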

Update 1

Another solution:


from scrapy import Request

def start_requests(self):
    request = Request("https://www.monster.ca/jobs/search/?q=data-analyst", callback=self.get_lastPage)
    yield request

def get_lastPage(self,response):
    headers = {
        'Pragma': 'no-cache',
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept-Language': 'fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
        'Referer': 'https://www.monster.ca/jobs/search/?q=data-analyst',
        'Connection': 'keep-alive',
        'Cache-Control': 'no-cache',
    }
    last_page = response.css('input#totalPages::attr(value)').extract_first()
    for page_number in range(2, int(last_page) + 1):
        link = "https://www.monster.ca/jobs/search/?q=data-analyst&page=" + str(page_number)
        yield Request(link,
                      headers=headers,
                      callback=self.parse_product)
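parse_product is referenced but not defined in the answer; a minimal placeholder might look like this (the h1 selector is an assumption, not from the original):

def parse_product(self, response):
    # hypothetical extraction; adjust the selector to the actual page markup
    yield {
        'Job_Title': response.css('h1.title::text').extract_first(),
        'url': response.url,
    }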

Answer 2 (Score: 0)

You need to express the XPath this way:

urls = response.xpath('//*[contains(@class,"next")]//@href')

Try it; it should work.
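For example, inside parse (a sketch; extract_first and the follow-up request go beyond the answer's one-liner):

next_href = response.xpath('//*[contains(@class,"next")]//@href').extract_first()
if next_href:
    yield scrapy.Request(response.urljoin(next_href), callback=self.parse)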