Pyarrow read/write from S3

Date: 2018-03-27 12:42:15

Tags: python pyarrow

Is it possible to read parquet files from one folder in S3 and write them to another folder using pyarrow, without converting to pandas?

Here is my code:

import pyarrow.parquet as pq
import pyarrow as pa
import s3fs

s3 = s3fs.S3FileSystem()

bucket = 'demo-s3'

pd = pq.ParquetDataset('s3://{0}/old'.format(bucket), filesystem=s3).read(nthreads=4).to_pandas()
table = pa.Table.from_pandas(pd)
pq.write_to_dataset(table, 's3://{0}/new'.format(bucket), filesystem=s3, use_dictionary=True, compression='snappy')

2 Answers:

Answer 0 (score: 2)

If you don't want to copy the files directly, it does seem you can avoid pandas:

table = pq.ParquetDataset('s3://{0}/old'.format(bucket),
    filesystem=s3).read(nthreads=4)
pq.write_to_dataset(table, 's3://{0}/new'.format(bucket), 
    filesystem=s3, use_dictionary=True, compression='snappy')
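
Note that later pyarrow releases replaced the nthreads argument of read() with use_threads. A minimal sketch of the same pandas-free copy on a newer pyarrow (exact version assumed, not stated in the question):

import pyarrow.parquet as pq
import s3fs

s3 = s3fs.S3FileSystem()
bucket = 'demo-s3'

# read the whole source dataset into an Arrow table (no pandas involved);
# use_threads replaces the older nthreads argument
table = pq.ParquetDataset('s3://{0}/old'.format(bucket),
    filesystem=s3).read(use_threads=True)

# write it back out under the new prefix
pq.write_to_dataset(table, 's3://{0}/new'.format(bucket),
    filesystem=s3, use_dictionary=True, compression='snappy')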

Answer 1 (score: 0)

Why not copy directly (S3 -> S3) and save memory and I/O?

import awswrangler as wr

SOURCE_PATH = "s3://..."
TARGET_PATH = "s3://..."

wr.s3.copy_objects(
    source_path=SOURCE_PATH,
    target_path=TARGET_PATH
)
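
If you would rather not pull in another library, the s3fs filesystem already used in the question can also do the copy for you. A minimal sketch, assuming a reasonably recent s3fs (bucket and prefixes are placeholders from the question):

import s3fs

s3 = s3fs.S3FileSystem()
bucket = 'demo-s3'

# recursively copy every object under old/ to new/ using S3 copies,
# so the parquet data is never parsed or loaded into memory
s3.copy('s3://{0}/old'.format(bucket), 's3://{0}/new'.format(bucket), recursive=True)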
