I'm working on a script that first uses cfscrape to get past Cloudflare, then fires two POST requests with a payload to log in to the site. I'm getting errors from the future1 and future2 posts. Here is my code:
import asyncio
import requests
import cfscrape

async def main():
    s = requests.Session()
    s.get('https://www.off---white.com/en/IT')
    headers = {
        'Referer': 'https://www.off---white.com/it/IT/login',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
    }
    payload1 = {
        'spree_user[email]': 'email',
        'spree_user[password]': 'password',
        'spree_user[remember_me]': '0',
    }
    payload2 = {
        'spree_user[email]': 'email',
        'spree_user[password]': 'password',
        'spree_user[remember_me]': '0',
    }
    scraper = cfscrape.create_scraper(s)
    scraper.get('https://www.off---white.com/en/IT', headers=headers)
    print('Done')
    loop = asyncio.get_event_loop()
    print('Starting loop')
    future1 = loop.run_in_executor(None, requests.post, 'https://www.off---white.com/it/IT/login', data=payload1, headers=headers)
    future2 = loop.run_in_executor(None, requests.post, 'https://www.off---white.com/it/IT/login', data=payload2, headers=headers)
    response1 = await future1
    response2 = await future2
    print(response1.text)
    print(response2.text)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
Answer (score: 1)
I ran your code and hit a number of errors, so I rewrote it. A few things you need to understand:

- Post the data with cfscrape, not requests, so the Cloudflare cookies are actually used.
- await must appear inside an async def.
- run_in_executor only accepts positional args, not kwargs (credit to @Brad Solomon). From the docs:

  BaseEventLoop.run_in_executor(executor, callback, *args)

Rewritten code:
import asyncio
import requests
import cfscrape

headers = {
    'Referer': 'https://www.off---white.com/it/IT/login',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/67.0.3396.99 Safari/537.36'
}
payload1 = {
    'spree_user[email]': 'email',
    'spree_user[password]': 'password',
    'spree_user[remember_me]': '0',
}
payload2 = {
    'spree_user[email]': 'email',
    'spree_user[password]': 'password',
    'spree_user[remember_me]': '0',
}

def post(params):
    # cfscrape wraps the session so the Cloudflare challenge is solved
    # before the login POST is sent
    scraper = cfscrape.create_scraper(requests.Session())
    return scraper.post(**params)

async def get_data():
    datas = [dict(url='https://www.off---white.com/it/IT/login', data=payload1, headers=headers),
             dict(url='https://www.off---white.com/it/IT/login', data=payload2, headers=headers)]
    loop = asyncio.get_event_loop()
    # run_in_executor only forwards positional args, so each request's
    # kwargs are packed into a single dict and unpacked inside post()
    futures = [loop.run_in_executor(None, post, data) for data in datas]
    result = await asyncio.gather(*futures)
    print(result)

loop = asyncio.get_event_loop()
loop.run_until_complete(get_data())
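As an aside: if you would rather keep calling the request function directly instead of writing a wrapper, functools.partial can bind the keyword arguments up front, since run_in_executor only forwards positional args. A minimal sketch of the pattern (fake_post here is a hypothetical stand-in for requests.post, so the example runs without network access):

```python
import asyncio
from functools import partial

# Hypothetical stand-in for requests.post: just echoes its arguments back.
def fake_post(url, data=None, headers=None):
    return {'url': url, 'data': data, 'headers': headers}

async def main():
    loop = asyncio.get_running_loop()
    # partial bakes the kwargs in, so the executor only needs to call
    # a zero-extra-argument callable.
    call = partial(fake_post, 'https://example.com/login',
                   data={'spree_user[email]': 'email'},
                   headers={'Referer': 'https://example.com'})
    return await loop.run_in_executor(None, call)

response = asyncio.run(main())
print(response['url'])
```

The same partial trick works with the real scraper.post or requests.post; only the callable changes.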