Making POST requests with asyncio to log in to a website

Date: 2018-10-27 10:13:34

Tags: python python-3.x python-requests python-asyncio

I'm currently working on a script that first bypasses Cloudflare with cfscrape and then makes two POST requests with a payload to log in to the site. I'm getting some errors on the future1 and future2 posts. Here is my code:


import asyncio
import requests
import cfscrape

async def main():
    s = requests.Session()
    s.get('https://www.off---white.com/en/IT')

    headers = {
        'Referer': 'https://www.off---white.com/it/IT/login',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
    }

    payload1 = {
        'spree_user[email]': 'email',
        'spree_user[password]': 'password',
        'spree_user[remember_me]': '0',
    }

    payload2 = {
        'spree_user[email]': 'email',
        'spree_user[password]': 'password',
        'spree_user[remember_me]': '0',
    }

    scraper = cfscrape.create_scraper(s)
    scraper.get('https://www.off---white.com/en/IT', headers=headers)
    print('Done')

    loop = asyncio.get_event_loop()
    print('Starting loop')

    future1 = loop.run_in_executor(None, requests.post, 'https://www.off---white.com/it/IT/login', data=payload1, headers=headers)
    future2 = loop.run_in_executor(None, requests.post, 'https://www.off---white.com/it/IT/login', data=payload2, headers=headers)
    response1 = await future1
    response2 = await future2
    print(response1.text)
    print(response2.text)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())

1 Answer:

Answer 0 (score: 1):

  

BaseEventLoop.run_in_executor(executor, callback, *args)

I ran your code and hit a number of errors, so I rewrote it. Here is what you need to know:

  1. Use cfscrape rather than plain requests to post the data, unless you add the Cloudflare cookies to your POST request yourself
  2. await must be used inside an async def
  3. run_in_executor only accepts positional args, not kwargs (see the sketch after this list)
  4. Rule #9: never use requests inside async code - from @Brad Solomon
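
For point 3, if you do want to keep calling requests from an executor, the usual workaround is to bind the keyword arguments with functools.partial before handing the callable to run_in_executor. A minimal sketch, assuming the same url/payload/headers values as above (post_in_executor is just an illustrative name):

import asyncio
import functools
import requests

async def post_in_executor(url, payload, headers):
    loop = asyncio.get_event_loop()
    # run_in_executor forwards only positional arguments, so bind the
    # keyword arguments (data=, headers=) into the callable first
    func = functools.partial(requests.post, url, data=payload, headers=headers)
    return await loop.run_in_executor(None, func)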

Rewritten code:

import asyncio
import requests
import cfscrape

headers = {
    'Referer': 'https://www.off---white.com/it/IT/login',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
    }

payload1 = {
    'spree_user[email]': 'email',
    'spree_user[password]': 'password',
    'spree_user[remember_me]': '0',
}

payload2 = {
    'spree_user[email]': 'email',
    'spree_user[password]': 'password',
    'spree_user[remember_me]': '0',
}


def post(request_kwargs):
    # each call gets its own cfscrape session, so the Cloudflare
    # challenge cookies are handled by the scraper itself
    scraper = cfscrape.create_scraper(requests.Session())
    req = scraper.post(**request_kwargs)
    return req

async def get_data():
    datas = [dict(url='https://www.off---white.com/it/IT/login', data=payload1, headers=headers),
             dict(url='https://www.off---white.com/it/IT/login', data=payload2, headers=headers)]
    loop = asyncio.get_event_loop()
    # run_in_executor only takes positional args, so each kwargs dict is
    # passed as a single positional argument and unpacked inside post()
    response = [loop.run_in_executor(None, post, data) for data in datas]
    result = await asyncio.gather(*response)
    print(result)


loop = asyncio.get_event_loop()
loop.run_until_complete(get_data())
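
Note that the blocking requests/cfscrape calls still run in the default thread-pool executor here, which is the standard workaround behind point 4; a fully non-blocking client such as aiohttp would avoid the executor entirely, but cfscrape's Cloudflare handling is built on top of requests. On Python 3.7+ the last two lines can also be written as asyncio.run(get_data()), which creates and closes the event loop for you.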