Resuming async tasks before all tasks have been started

Date: 2019-03-11 23:04:20

Tags: python-3.x python-asyncio

In the example code below, all of the async tasks are started first. Each task is then resumed once its IO operation has completed.

The output looks like this: you can see the 6 result messages only after the first 6 start messages.

-- Starting https://jamanetwork.com/rss/site_3/67.xml...
-- Starting https://www.b-i-t-online.de/bitrss.xml...
-- Starting http://twitrss.me/twitter_user_to_rss/?user=cochranecollab...
-- Starting http://twitrss.me/twitter_user_to_rss/?user=cochranecollab...
-- Starting https://jamanetwork.com/rss/site_3/67.xml...
-- Starting https://www.b-i-t-online.de/bitrss.xml...
28337 size for http://twitrss.me/twitter_user_to_rss/?user=cochranecollab
28337 size for http://twitrss.me/twitter_user_to_rss/?user=cochranecollab
1938204 size for https://www.b-i-t-online.de/bitrss.xml
1938204 size for https://www.b-i-t-online.de/bitrss.xml
38697 size for https://jamanetwork.com/rss/site_3/67.xml
38697 size for https://jamanetwork.com/rss/site_3/67.xml
FINISHED with 6 results from 6 tasks.

But this is what I would expect instead, and it would speed up the processing in my case:

-- Starting https://jamanetwork.com/rss/site_3/67.xml...
-- Starting https://www.b-i-t-online.de/bitrss.xml...
-- Starting http://twitrss.me/twitter_user_to_rss/?user=cochranecollab...
1938204 size for https://www.b-i-t-online.de/bitrss.xml
-- Starting http://twitrss.me/twitter_user_to_rss/?user=cochranecollab...
28337 size for http://twitrss.me/twitter_user_to_rss/?user=cochranecollab
28337 size for http://twitrss.me/twitter_user_to_rss/?user=cochranecollab
-- Starting https://jamanetwork.com/rss/site_3/67.xml...
38697 size for https://jamanetwork.com/rss/site_3/67.xml
-- Starting https://www.b-i-t-online.de/bitrss.xml...
28337 size for http://twitrss.me/twitter_user_to_rss/?user=cochranecollab
28337 size for http://twitrss.me/twitter_user_to_rss/?user=cochranecollab
1938204 size for https://www.b-i-t-online.de/bitrss.xml
38697 size for https://jamanetwork.com/rss/site_3/67.xml
FINISHED with 6 results from 6 tasks.

In my real-world code I have hundreds of download tasks like this. Usually some of the downloads are finished before all of them have even started.

Is it possible to handle this with asyncio?

Here is a minimal working example:

#!/usr/bin/env python3
import random
import urllib.request
import asyncio
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor()
loop = asyncio.get_event_loop()
urls = ['https://www.b-i-t-online.de/bitrss.xml',
        'https://jamanetwork.com/rss/site_3/67.xml',
        'http://twitrss.me/twitter_user_to_rss/?user=cochranecollab']

async def parse_one_url(u):
    print('-- Starting {}...'.format(u))
    r = await loop.run_in_executor(executor,
                                   urllib.request.urlopen, u)
    r = '{} size for {}'.format(len(r.read()), u)
    print(r)
    return r  # without this, task.result() below would be None

async def do_async_parsing():
    tasks = [
        parse_one_url(u)
        for u in urls
    ]

    completed, pending = await asyncio.wait(tasks)
    results = [task.result() for task in completed]

    print('FINISHED with {} results from {} tasks.'
          .format(len(results), len(tasks)))

if __name__ == '__main__':
    # blow up the urls
    urls = urls * 2
    random.shuffle(urls)
    try:
        #loop.set_debug(True)
        loop.run_until_complete(do_async_parsing())
    finally:
        loop.close()
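For reference, `asyncio.as_completed` hands back each result as soon as its task finishes, instead of collecting the whole batch at the end the way `asyncio.wait` does. A minimal sketch (not from the original post; the download is simulated with `asyncio.sleep` so it runs without network, and the names and delays are made up):

```python
#!/usr/bin/env python3
# Sketch: process results in completion order with asyncio.as_completed.
# asyncio.sleep stands in for the blocking download; in the real code
# the body of fetch() would be the run_in_executor(urlopen) call above.
import asyncio

async def fetch(url, delay):
    print('-- Starting {}...'.format(url))
    await asyncio.sleep(delay)  # simulated download
    return '{} done after {}s'.format(url, delay)

async def main():
    tasks = [asyncio.ensure_future(fetch(u, d))
             for u, d in [('a', 0.3), ('b', 0.05), ('c', 0.1)]]
    results = []
    # as_completed yields results in completion order, so each one
    # can be handled as soon as it is available
    for fut in asyncio.as_completed(tasks):
        r = await fut
        print(r)
        results.append(r)
    return results

results = asyncio.run(main())
print('FINISHED with {} results.'.format(len(results)))
```

Note that all tasks are still scheduled up to their first `await` before any result arrives; what changes is that each result is processed the moment its task completes, rather than after the whole batch.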

Side question: isn't asyncio pointless for my use case? Wouldn't it be easier to just use multiple threads?
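For comparison, a sketch of the threads-only alternative raised in the side question, using plain `concurrent.futures` with no event loop (again with `time.sleep` simulating the download and made-up names/delays; in real code the worker would call `urllib.request.urlopen`):

```python
#!/usr/bin/env python3
# Threads-only variant: ThreadPoolExecutor plus as_completed from
# concurrent.futures.  With a bounded pool (max_workers=2) only two
# downloads run at once, so the first result can arrive before the
# last download has even started -- the interleaving asked for above.
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(url, delay):
    print('-- Starting {}...'.format(url))
    time.sleep(delay)  # simulated download
    return '{} done after {}s'.format(url, delay)

jobs = [('a', 0.3), ('b', 0.05), ('c', 0.05)]
results = []
with ThreadPoolExecutor(max_workers=2) as executor:
    futures = [executor.submit(fetch, u, d) for u, d in jobs]
    for fut in as_completed(futures):
        r = fut.result()
        print(r)  # handled as soon as each download finishes
        results.append(r)

print('FINISHED with {} results.'.format(len(results)))
```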

0 Answers