How can I download multiple large files simultaneously in Python?

Date: 2018-04-16 16:40:24

Tags: python python-3.x download urllib common-crawl

I'm trying to download a series of WARC files from the Common Crawl database, each around 25 MB. This is my script:

import json
import urllib.request
from urllib.error import HTTPError

from src.Util import rooted

with open(rooted('data/alexa.txt'), 'r') as alexa:
    for i, url in enumerate(alexa):
        # Only query every 1000th URL from the Alexa list
        if i % 1000 == 0:
            try:
                # Look the URL up in the Common Crawl index API
                request = 'http://index.commoncrawl.org/CC-MAIN-2018-13-index?url={search}*&output=json' \
                    .format(search=url.rstrip())
                page = urllib.request.urlopen(request)
                # Each line of the response is a JSON record pointing at a WARC file
                for line in page:
                    result = json.loads(line)
                    # Download the WARC file; this blocks until the download finishes
                    urllib.request.urlretrieve('https://commoncrawl.s3.amazonaws.com/%s' % result['filename'],
                                               rooted('data/warc/%s' % ''.join(c for c in result['url'] if c.isalnum())))
            except HTTPError:
                pass

What this currently does is request the links to the WARC files through the Common Crawl index REST API and then start downloading them into the 'data/warc' folder.

The problem is that each urllib.request.urlretrieve() call blocks until the file has been completely downloaded before the next download request is issued. Is there any way to fire off the urllib.request.urlretrieve() call and have the file download in the background, or to somehow spin up a new thread for each request so that all the files download at the same time?
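Something like this per-request thread approach is what I have in mind; a rough sketch only, where the fetch_warc helper and the downloads list of (filename, destination) pairs are just placeholders:

import threading
import urllib.request

def fetch_warc(filename, dest_path):
    # Each thread performs one blocking download independently
    urllib.request.urlretrieve(
        'https://commoncrawl.s3.amazonaws.com/%s' % filename, dest_path)

threads = []
for filename, dest_path in downloads:  # downloads: placeholder list of (filename, path) pairs
    t = threading.Thread(target=fetch_warc, args=(filename, dest_path))
    t.start()
    threads.append(t)

for t in threads:
    t.join()  # wait for every download to finish

Is this the right direction, or is there a cleaner way to do it?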

Thanks

1 Answer:

Answer 0 (score: 0):

Use threads, or even futures :)

from concurrent.futures import ThreadPoolExecutor

jobs = []
with ThreadPoolExecutor(max_workers=100) as executor:
    for line in page:
        result = json.loads(line)
        # Schedule each download on the pool instead of blocking on it
        future = executor.submit(urllib.request.urlretrieve,
                                 'https://commoncrawl.s3.amazonaws.com/%s' % result['filename'],
                                 rooted('data/warc/%s' % ''.join(c for c in result['url'] if c.isalnum())))
        jobs.append(future)
...
for f in jobs:
    print(f.result())

Read more here: https://docs.python.org/3/library/concurrent.futures.html
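If you also want to handle failures per file, a small variation (a sketch, assuming the jobs list from the snippet above) is to iterate with concurrent.futures.as_completed so each result or error is processed as soon as that download finishes:

from concurrent.futures import as_completed
from urllib.error import HTTPError

# Process downloads as they complete rather than in submission order
for future in as_completed(jobs):
    try:
        local_path, headers = future.result()  # urlretrieve returns (filename, headers)
        print('Downloaded', local_path)
    except HTTPError as e:
        # A failed request surfaces here instead of aborting the whole run
        print('Download failed:', e)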