Scrapy - Simulating an AJAX request with headers and request payload

Date: 2019-06-10 11:32:18

Tags: python ajax python-3.x web-scraping scrapy

https://www.kralilan.com/liste/kiralik-bina

This is the website I am trying to scrape. When you open the site, the listings are generated by an AJAX request. Whenever you scroll down, the same request keeps populating the page. This is how they implement infinite scrolling...

[screenshot of the request]

While scrolling down, I found that this is the request being sent to the server, and I tried to mimic the same request with its headers and request payload. Here is my spider.

class MySpider(scrapy.Spider):

    name = 'kralilanspider'
    allowed_domains = ['kralilan.com']
    start_urls = [
        'https://www.kralilan.com/liste/satilik-bina'
    ]

    def parse(self, response):

        headers = {'Referer': 'https://www.kralilan.com/liste/kiralik-bina',
                   'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:67.0) Gecko/20100101 Firefox/67.0',
                   'Accept': 'application/json, text/javascript, */*; q=0.01',
                   'Accept-Language': 'en-US,en;q=0.5',
                   'Accept-Encoding': 'gzip, deflate, br',
                   #'Content-Type': 'application/json; charset=utf-8',
                   #'X-Requested-With': 'XMLHttpRequest',
                   #'Content-Length': 246,
                   #'Connection': 'keep-alive',
                   }

        yield scrapy.Request(
            url='https://www.kralilan.com/services/ki_operation.asmx/getFilter',
            method='POST',
            headers=headers,
            callback=self.parse_ajax
        )

    def parse_ajax(self, response):
        yield {'data': response.text}

  • If I uncomment the commented-out headers, the request fails with status code 400 or 500.
  • I tried sending the request payload as the body inside the parse method. That didn't work either.
  • If I try to yield response.body, I get TypeError: Object of type bytes is not JSON serializable (see the note after this list).
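
A note on that last error: Scrapy's JSON feed exporter cannot serialize raw bytes, which is what response.body returns. Decoding the body first, or simply yielding response.text as the spider above already does, avoids the TypeError. A minimal sketch of the decoded variant of the same callback:

    def parse_ajax(self, response):
        # response.body is bytes; decode it with the response's declared
        # encoding so the yielded item is JSON-serializable.
        yield {'data': response.body.decode(response.encoding)}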

What am I missing here?

2 Answers:

Answer 0 (score: 2)

The following implementation will fetch the response you are after. You missed the most important part: the data that has to be passed as the body of your POST request.

import json
import scrapy

class MySpider(scrapy.Spider):
    name = 'kralilanspider'
    data = {
        'incomestr': '["Bina","1",-1,-1,-1,-1,-1,5]',
        'intextstr': '{"isCoordinates":false,"ListDrop":[],"ListText":[{"id":"78","Min":"","Max":""},{"id":"107","Min":"","Max":""}],"FiyatData":{"Max":"","Min":""}}',
        'index': 0,
        'count': '10',
        'opt': '1',
        'type': '3',
    }

    def start_requests(self):
        yield scrapy.Request(
            url='https://www.kralilan.com/services/ki_operation.asmx/getFilter',
            method='POST',
            body=json.dumps(self.data),
            headers={"content-type": "application/json"}
        )

    def parse(self, response):
        items = json.loads(response.text)['d']
        yield {"data":items}

In case you want to parse the data from multiple pages (a new page index is recorded each time you scroll down), the following approach should work. The pagination lies within the index key of the data.

import json
import scrapy

class MySpider(scrapy.Spider):
    name = 'kralilanspider'
    data = {
        'incomestr': '["Bina","1",-1,-1,-1,-1,-1,5]',
        'intextstr': '{"isCoordinates":false,"ListDrop":[],"ListText":[{"id":"78","Min":"","Max":""},{"id":"107","Min":"","Max":""}],"FiyatData":{"Max":"","Min":""}}',
        'index': 0,
        'count': '10',
        'opt': '1',
        'type': '3',
    }
    headers = {"content-type": "application/json"}
    url = 'https://www.kralilan.com/services/ki_operation.asmx/getFilter'

    def start_requests(self):
        yield scrapy.Request(
            url=self.url,
            method='POST',
            body=json.dumps(self.data),
            headers=self.headers,
            meta={'index': 0}
        )

    def parse(self, response):
        items = json.loads(response.text)['d']
        res = scrapy.Selector(text=items)
        for item in res.css(".list-r-b-div"):
            title = item.css(".add-title strong::text").get()
            price = item.css(".item-price::text").get()
            yield {"title":title,"price":price}

        page = response.meta['index'] + 1
        self.data['index'] = page
        yield scrapy.Request(
            self.url,
            method='POST',
            body=json.dumps(self.data),
            headers=self.headers,
            meta={'index': page}
        )
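
One caveat about the snippet above: as written, it keeps requesting the next index indefinitely. A minimal sketch of a stop condition, under the (unverified) assumption that the returned fragment contains no .list-r-b-div items once the listings are exhausted:

        # Hypothetical stop condition: only schedule the next page while the
        # current fragment still contained listings (assumes an empty result
        # once the data runs out).
        if res.css(".list-r-b-div"):
            page = response.meta['index'] + 1
            self.data['index'] = page
            yield scrapy.Request(
                self.url,
                method='POST',
                body=json.dumps(self.data),
                headers=self.headers,
                meta={'index': page}
            )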

Answer 1 (score: 1)

Why did you ignore the POST body? You need to submit it too:

    def parse(self, response):

        headers = {'Referer': 'https://www.kralilan.com/liste/kiralik-bina',
                   'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:67.0) Gecko/20100101 Firefox/67.0',
                   'Accept': 'application/json, text/javascript, */*; q=0.01',
                   'Accept-Language': 'en-US,en;q=0.5',
                   'Accept-Encoding': 'gzip, deflate, br',
                   'Content-Type': 'application/json; charset=utf-8',
                   'X-Requested-With': 'XMLHttpRequest',
                   #'Content-Length': 246,
                   #'Connection': 'keep-alive',
                   }

        payload = """
{ incomestr:'["Bina","2",-1,-1,-1,-1,-1,5]', intextstr:'{"isCoordinates":false,"ListDrop":[],"ListText":[{"id":"78","Min":"","Max":""},{"id":"107","Min":"","Max":""}],"FiyatData":{"Max":"","Min":""}}', index:'0' , count:'10' , opt:'1' , type:'3'}
"""
        yield scrapy.Request(
            url='https://www.kralilan.com/services/ki_operation.asmx/getFilter',
            method='POST',
            body=payload,
            headers=headers,
            callback=self.parse_ajax
        )
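
A small, untested refinement of the above: instead of hand-writing the JSON string, the payload could be built as a Python dict and serialized with json.dumps, as the other answer does:

import json

# Hypothetical alternative to the hand-written payload string above:
# build a dict and let json.dumps produce valid JSON.
payload = json.dumps({
    'incomestr': '["Bina","2",-1,-1,-1,-1,-1,5]',
    'intextstr': '{"isCoordinates":false,"ListDrop":[],"ListText":[{"id":"78","Min":"","Max":""},{"id":"107","Min":"","Max":""}],"FiyatData":{"Max":"","Min":""}}',
    'index': '0',
    'count': '10',
    'opt': '1',
    'type': '3',
})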