Python: How can I extract a zip file in Google Cloud Storage without running out of memory?

Asked: 2020-08-21 06:23:07

Tags: python memory zip google-cloud-storage dask

I need to extract the files contained in a zip file that lives in Google Cloud Storage. I am using a Python function to do this, but I keep running into memory problems even though I am using a Dask cluster and each Dask worker has a 20 GB memory limit.

How can I optimize my code so that it does not use so much memory? Perhaps I could read the zip file in chunks, stream it to a temporary file, and then send that file to Google Cloud Storage?

Any guidance would be appreciated.

Here is my code:

import io
from zipfile import ZipFile, is_zipfile

from google.cloud import storage

# @task comes from the workflow library in use (not shown); import it from there.


@task
def unzip_files(
    bucket_name,
    zip_data
):
    file_date = zip_data['file_date']
    gcs_folder_path = zip_data['gcs_folder_path']
    gcs_blob_name = zip_data['gcs_blob_name']

    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)

    destination_blob_pathname = f'{gcs_folder_path}/{gcs_blob_name}'
    blob = bucket.blob(destination_blob_pathname)
    # The entire zip archive is downloaded into memory here.
    zipbytes = io.BytesIO(blob.download_as_string())

    if is_zipfile(zipbytes):
        with ZipFile(zipbytes, 'r') as zipObj:
            extracted_file_paths = []
            for content_file_name in zipObj.namelist():
                # Each member is also read fully into memory before being uploaded.
                content_file = zipObj.read(content_file_name)
                extracted_file_path = f'{gcs_folder_path}/hgdata_{file_date}_{content_file_name}'
                blob = bucket.blob(extracted_file_path)
                blob.upload_from_string(content_file)
                extracted_file_paths.append(f'gs://{bucket_name}/{extracted_file_path}')
        return extracted_file_paths

    else:
        return []
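
For reference, this is roughly the chunked / temporary-file approach I had in mind, as an untested sketch: extract each member to a local temporary file in small chunks and then upload that file, so no single member is ever held fully in memory. stream_extract_member is just an illustrative helper name, and bucket is the same google.cloud.storage bucket object as above.

import shutil
import tempfile
from zipfile import ZipFile

def stream_extract_member(zip_obj: ZipFile, member_name: str, bucket, dest_path: str):
    # Untested sketch: copy one zip member to a temp file in 4 MiB chunks,
    # then upload that file to GCS instead of reading the member into memory.
    with tempfile.NamedTemporaryFile() as tmp:
        with zip_obj.open(member_name) as src:
            shutil.copyfileobj(src, tmp, length=4 * 1024 * 1024)
        tmp.flush()
        bucket.blob(dest_path).upload_from_filename(tmp.name)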

1 Answer:

Answer 0 (score: 1)

I don't completely follow your code, but in general dask handles complicated file operations like this well, via the fsspec and gcsfs libraries. For example (and you do not need Dask for this):

import fsspec

# Chain the zip filesystem on top of GCS so members are read on demand
# instead of downloading the whole archive up front.
with fsspec.open_files("zip://*::gcs://gcs_folder_path/gcs_blob_name") as open_files:
    for of in open_files:
        # Derive the output object name from the member (placeholder below),
        # then copy it across in 4 MiB chunks.
        with fsspec.open("gcs://{something from of}", "wb") as f:
            data = True
            while data:
                data = of.read(2**22)
                f.write(data)

You could instead do

open_files = fsspec.open_files(...)

and parallelize the loop with Dask.
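
A minimal sketch of that parallel version, assuming dask.delayed is available; copy_member and "my-bucket" are illustrative names, and of.path is only used to build a placeholder output path:

import dask
import fsspec

def copy_member(of, target_bucket):
    # Illustrative helper: stream one zip member into its own GCS object in 4 MiB chunks.
    target = f"gcs://{target_bucket}/extracted/{of.path.split('/')[-1]}"
    with of as src, fsspec.open(target, "wb") as dst:
        while True:
            chunk = src.read(2**22)
            if not chunk:
                break
            dst.write(chunk)

open_files = fsspec.open_files("zip://*::gcs://gcs_folder_path/gcs_blob_name")
tasks = [dask.delayed(copy_member)(of, "my-bucket") for of in open_files]
dask.compute(*tasks)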