Boto - S3ResponseError: 403 Forbidden

Asked: 2015-02-27 17:55:02

Tags: python amazon-s3 multiprocessing boto

I know there are many questions here about this same problem, but I have gone through every one of them and tried the suggestions and answers given there, with no luck. That is why I am posting this question.

I am trying to upload a file to my bucket. Since the file is larger than 100 MB, I tried to upload it using the multipart upload that boto supports, and I was able to do that. I then tried to speed up the upload using the Pool class from the multiprocessing module, with the code given below. When I run the program, nothing happens at all. To debug it I switched to from multiprocessing.dummy import Pool, and the program raised

boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>55D423C42E8A9D94</RequestId><HostId>kxxX+UmBlGaT4X8adUAp9XQV/1jiiK83IZKQuKxAIMEmzdC3g9IRqDqIVXGLPAOe</HostId></Error>
at the raise exc line in _upload, which is marked with a # comment in the code below. I do not understand why I am getting this: I have full read and write access to the bucket, a standard upload works like a charm without any errors, and I can delete any file I want from the bucket. The only problem seems to arise when I try to upload in parallel. The code is pasted below and can also be found here.
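
For comparison, this is roughly the plain serial multipart upload (the pattern from the boto documentation, with placeholder credentials) that works for me without any errors:

import math
import os

from boto.s3.connection import S3Connection
from filechunkio import FileChunkIO

def serial_upload(bucketname, aws_key, aws_secret, source_path, keyname):
    """Plain serial multipart upload, one part at a time."""
    conn = S3Connection(aws_key, aws_secret)
    bucket = conn.get_bucket(bucketname)

    source_size = os.stat(source_path).st_size
    chunk_size = 52428800  # 50 MB per part; S3 requires >= 5 MB for all but the last part
    chunk_count = int(math.ceil(source_size / float(chunk_size)))

    mp = bucket.initiate_multipart_upload(keyname)
    for i in range(chunk_count):
        offset = chunk_size * i
        bytes = min(chunk_size, source_size - offset)
        with FileChunkIO(source_path, 'r', offset=offset, bytes=bytes) as fp:
            mp.upload_part_from_file(fp, part_num=i + 1)
    mp.complete_upload()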

My code (I have removed my keys and the bucket name):

import logging
import math
import os
from multiprocessing import Pool

from boto.s3.connection import S3Connection
from filechunkio import FileChunkIO


def _upload_part(bucketname, aws_key, aws_secret, multipart_id, part_num,
                 source_path, offset, bytes, amount_of_retries=10):
    """
    Uploads a part with retries.
    """
    def _upload(retries_left=amount_of_retries):
        try:
            logging.info('Start uploading part #%d ...' % part_num)
            conn = S3Connection(aws_key, aws_secret)
            bucket = conn.get_bucket(bucketname)
            for mp in bucket.get_all_multipart_uploads():
                if mp.id == multipart_id:
                    with FileChunkIO(source_path, 'r', offset=offset,
                                     bytes=bytes) as fp:
                        mp.upload_part_from_file(fp=fp, part_num=part_num)
                    break
        except Exception as exc:
            if retries_left:
                _upload(retries_left=retries_left - 1)
            else:
                logging.info('... Failed uploading part #%d' % part_num)
                raise exc  # this is the line that raises the error
        else:
            logging.info('... Uploaded part #%d' % part_num)

    _upload()


def upload(bucketname, aws_key, aws_secret, source_path, keyname,
           acl='private', headers={}, parallel_processes=4):
    """
    Parallel multipart upload.
    """
    conn = S3Connection(aws_key, aws_secret)
    bucket = conn.get_bucket(bucketname)

    mp = bucket.initiate_multipart_upload(keyname, headers=headers)

    source_size = os.stat(source_path).st_size
    # Part size: the geometric mean of 5 MB and the file size, but never
    # below S3's 5 MB minimum part size (5242880 bytes).
    bytes_per_chunk = max(int(math.sqrt(5242880) * math.sqrt(source_size)),
                          5242880)
    chunk_amount = int(math.ceil(source_size / float(bytes_per_chunk)))

    pool = Pool(processes=parallel_processes)
    for i in range(chunk_amount):
        offset = i * bytes_per_chunk
        remaining_bytes = source_size - offset
        bytes = min([bytes_per_chunk, remaining_bytes])
        part_num = i + 1
        pool.apply_async(_upload_part, [bucketname, aws_key, aws_secret, mp.id,
                                        part_num, source_path, offset, bytes])
    pool.close()
    pool.join()

    if len(mp.get_all_parts()) == chunk_amount:
        mp.complete_upload()
        key = bucket.get_key(keyname)
        key.set_acl(acl)
    else:
        mp.cancel_upload()

upload(default_bucket, acs_key, sec_key, '/path/to/folder/testfile.txt', 'testfile.txt')
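
For debugging outside the Pool, a minimal check like the one below exercises the same boto calls that each worker process makes before it uploads a part (placeholder credentials; just a sketch):

import logging

from boto.s3.connection import S3Connection

logging.basicConfig(level=logging.INFO)

def check_worker_access(bucketname, aws_key, aws_secret):
    """Re-create what each worker does before it starts uploading a part."""
    conn = S3Connection(aws_key, aws_secret)
    bucket = conn.get_bucket(bucketname)  # a 403 here would raise S3ResponseError
    # Listing in-progress multipart uploads is a separate, bucket-level
    # permission from a plain object PUT, so it can be denied even when
    # ordinary uploads work.
    for mp in bucket.get_all_multipart_uploads():
        logging.info('found multipart upload %s for key %s', mp.id, mp.key_name)

check_worker_access(default_bucket, acs_key, sec_key)

If the same 403 shows up here, the multiprocessing part of the code is not what is being denied.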

0 Answers:

No answers yet.