Urllib2 Python - reconnecting and splitting the response

Date: 2015-03-20 16:03:40

Tags: python urllib2 urllib

I'm moving to Python from other languages and I'm not sure how to tackle this properly. With the urllib2 library it is quite easy to set up a proxy and fetch data from a site:

import urllib2

req = urllib2.Request('http://www.voidspace.org.uk')
response = urllib2.urlopen(req)
the_page = response.read()
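
For reference, the proxy part (omitted in the snippet above) can be set up roughly like this; the proxy URL below is only a placeholder:

import urllib2

# placeholder proxy address - substitute your real proxy here
proxy_support = urllib2.ProxyHandler({'http': 'http://proxy.example.com:8080'})
opener = urllib2.build_opener(proxy_support)
urllib2.install_opener(opener)  # all later urlopen() calls go through this proxy

req = urllib2.Request('http://www.voidspace.org.uk')
response = urllib2.urlopen(req)
the_page = response.read()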

The problem I have is that the text files being retrieved are very large (hundreds of MB) and the connection is often problematic. The code also needs to catch connection, server and transfer errors (it will be part of a small, widely used pipeline).

Could anyone suggest how to modify the code above so that it automatically reconnects n times (for example 100 times), and perhaps splits the response into chunks so that the data is downloaded faster and more reliably?

I have already split the requests up as much as possible, so now I need to make sure the retrieval code is as good as it can be. A solution based on the core Python libraries would be ideal.

Perhaps a library already does the above, in which case: is there any way to improve the downloading of large files? I'm on UNIX and need to handle a proxy.

Thanks for your help.

2 answers:

Answer 0 (score: 1)

Here is an example of how you might want to do this with the python-requests library. The script below checks whether the destination file already exists. If a partial destination file exists, it is assumed to be a partially downloaded copy and the script tries to resume the download. If the server claims to support HTTP partial requests (i.e. the response to a HEAD request contains an Accept-Ranges header), the script resumes from the size of the partially downloaded file; otherwise it just does a regular download and discards the parts that have already been downloaded. I think it should be fairly straightforward to convert this to use only urllib2 if you don't want to use python-requests; it will probably just be much more verbose.

Note that resuming a download may corrupt the file if the file on the server was modified between the initial download and the resume. This can be detected if the server supports a strong HTTP ETag header, so the downloader can check whether it is resuming the same file.
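
As a rough sketch of that idea (not part of the script below, and assuming the server actually sends an ETag header), the check could look something like this:

import requests

def same_remote_file(url, stored_etag):
    """Best-effort check that the remote file is unchanged before resuming.

    `stored_etag` is assumed to have been recorded when the download started.
    Only a strong, matching ETag counts; a missing or weak ETag returns False.
    """
    etag = requests.head(url).headers.get('ETag')
    if not etag or etag.startswith('W/'):
        return False
    return etag == stored_etag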

I make no claim that it is bug-free. You should add checksum logic around this script to detect download errors, and retry from scratch if the checksum doesn't match.
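
A sketch of such a wrapper, assuming a known SHA-256 checksum and using the download() function defined in the script below (expected_sha256 and max_attempts are made-up parameters):

import hashlib
import logging
import os

def download_verified(url, dest, expected_sha256, max_attempts=3):
    """Run download() and verify the result; delete and retry on checksum mismatch."""
    for attempt in range(1, max_attempts + 1):
        download(url, dest)
        digest = hashlib.sha256()
        with open(dest, 'rb') as f:
            for block in iter(lambda: f.read(1024 * 1024), b''):
                digest.update(block)
        if digest.hexdigest() == expected_sha256:
            return True
        logging.warning('checksum mismatch on attempt %d, retrying from scratch', attempt)
        os.remove(dest)
    return False

The script itself: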

import logging
import os
import re
import requests

CHUNK_SIZE = 5*1024 # 5KB
logging.basicConfig(level=logging.INFO)

def stream_download(input_iterator, output_stream):
    for chunk in input_iterator:
        output_stream.write(chunk)

def skip(input_iterator, output_stream, bytes_to_skip):
    total_read = 0
    while total_read <= bytes_to_skip:
        chunk = next(input_iterator)
        total_read += len(chunk)
    output_stream.write(chunk[bytes_to_skip - total_read:])
    assert total_read == output_stream.tell()
    return input_iterator

def resume_with_range(url, output_stream):
    dest_size = output_stream.tell()
    headers = {'Range': 'bytes=%s-' % dest_size}
    resp = requests.get(url, stream=True, headers=headers)
    input_iterator = resp.iter_content(CHUNK_SIZE)
    if resp.status_code != requests.codes.partial_content:
        logging.warning('server does not agree to do partial request, skipping instead')
        input_iterator = skip(input_iterator, output_stream, output_stream.tell())
        return input_iterator
    # Content-Range looks like "bytes 500-999/1234"; the total size may be "*"
    rng_unit, rng_start, rng_end, rng_size = re.match(
        r'(\w+) (\d+)-(\d+)/(\d+|\*)', resp.headers['Content-Range']).groups()
    rng_start, rng_end = int(rng_start), int(rng_end)
    assert rng_start <= dest_size
    if rng_start != dest_size:
        logging.warning('server returned different Range than requested')
        output_stream.seek(rng_start)
    return input_iterator

def download(url, dest):
    ''' Download `url` to `dest`, resuming if `dest` already exists
        If `dest` already exists it is assumed to be a partially 
        downloaded file for the url.
    '''
    output_stream = open(dest, 'ab+')

    output_stream.seek(0, os.SEEK_END)
    dest_size = output_stream.tell()

    if dest_size == 0:
        logging.info('STARTING download from %s to %s', url, dest)
        resp = requests.get(url, stream=True)
        input_iterator = resp.iter_content(CHUNK_SIZE)
        stream_download(input_iterator, output_stream)
        logging.info('FINISHED download from %s to %s', url, dest)
        return

    remote_headers = requests.head(url).headers
    remote_size = int(remote_headers['Content-Length'])
    if dest_size < remote_size:
        logging.info('RESUMING download from %s to %s', url, dest)
        support_range = 'bytes' in [s.strip() for s in remote_headers.get('Accept-Ranges', '').split(',')]
        if support_range:
            logging.debug('server supports Range request')
            logging.debug('downloading "Range: bytes=%s-"', dest_size)
            input_iterator = resume_with_range(url, output_stream)
        else:
            logging.debug('skipping %s bytes', dest_size)
            resp = requests.get(url, stream=True)
            input_iterator = resp.iter_content(CHUNK_SIZE)
            input_iterator = skip(input_iterator, output_stream, bytes_to_skip=dest_size)
        stream_download(input_iterator, output_stream)
        logging.info('FINISHED download from %s to %s', url, dest)
        return
    logging.debug('NOTHING TO DO')
    return

def main():
    TEST_URL = 'http://mirror.internode.on.net/pub/test/1meg.test'
    DEST = TEST_URL.split('/')[-1]
    download(TEST_URL, DEST)

if __name__ == '__main__':
    main()
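
To get the "reconnect n times" behaviour asked for in the question, download() can be wrapped in a retry loop along these lines (a sketch; the exception list, attempt count and backoff are just examples):

import logging
import time

import requests

def download_with_retries(url, dest, max_attempts=100):
    """Keep calling download() until it completes or the attempts run out.

    Because download() resumes from whatever is already in `dest`, each retry
    only has to fetch the part that is still missing.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            download(url, dest)
            return
        except (requests.exceptions.RequestException, IOError) as exc:
            logging.warning('attempt %d/%d failed: %s', attempt, max_attempts, exc)
            time.sleep(min(60, 2 ** min(attempt, 6)))  # crude exponential backoff
    raise RuntimeError('giving up on %s after %d attempts' % (url, max_attempts))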

Answer 1 (score: 0)

You could try something like this. It reads the file line by line and appends each line to a file. It also checks to make sure that you don't go over the same line twice. I'll write another script that does it in chunks as well.

import urllib2
file_checker = None
print("Please Wait...")
while True:
    try:
        req = urllib2.Request('http://www.voidspace.org.uk')
        response = urllib2.urlopen(req, timeout=20)
        print("Connected")
        with open("outfile.html", 'w+') as out_data:
            for data in response.readlines():
                file_checker = open("outfile.html")
                if data not in file_checker.readlines():
                    out_data.write(str(data))
        break
    except urllib2.URLError:
        print("Connection Error!")
        print("Connecting again...please wait")
file_checker.close()
print("done")

Here is how to read the data in chunks instead of line by line:

import urllib2

CHUNK = 16 * 1024
file_checker = None
print("Please Wait...")
while True:
    try:
        req = urllib2.Request('http://www.voidspace.org.uk')
        response = urllib2.urlopen(req, timeout=1)
        print("Connected")
        with open("outdata", 'wb+') as out_data:
            while True:
                chunk = response.read(CHUNK)
                file_checker = open("outdata")
                if chunk and chunk not in file_checker.readlines():
                 out_data.write(chunk)
                else:
                    break
        break
    except urllib2.URLError:
        print("Connection Error!")
        print("Connecting again...please wait")
file_checker.close()
print("done")