How do I write multiple partitions from Spark?

Date: 2018-05-16 18:04:06

Tags: python apache-spark pyspark pyspark-sql

I have a small file, only about 1.5 KB, and it gets written to S3 as just one file. I want to write it to S3 as multiple part files to test partitioning, but I'm running into trouble. How do I set this up? What am I doing wrong?

from pyspark.sql import SparkSession
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession \
    .builder \
    .appName("Python Spark SQL basic example") \
    .config("spark.ui.enabled", "true") \
    .config("spark.default.parallelism", "4") \
    .config("spark.files.maxPartitionBytes", "500") \
    .master("yarn-client") \
    .getOrCreate()
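
# note: spark.files.maxPartitionBytes is a Spark core setting; the
# DataFrame reader's input-split size is governed by
# spark.sql.files.maxPartitionBytes instead, which may be the key
# intended here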

myschema = StructType([
    StructField("field1", TimestampType(), True),
    StructField("field2", TimestampType(), True),
    StructField("field3", StringType(), True),
    StructField("field4", StringType(), True),
    StructField("field5", StringType(), True)
])

mydf = spark.read.load("s3a://bucket/myfile.csv",
                       format="csv",
                       sep=",",
                       # inferSchema="true",
                       timestampFormat="MM/dd/yyyy HH:mm:ss",
                       header="true",
                       schema=myschema
                      )

mydf.coalesce(5) 
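# note: coalesce returns a new DataFrame, so the result on the line above
# is discarded; it also only ever lowers the partition count (see the
# sketch below)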

mydf.write.csv(path="s3a://bucket/output",
               header="true"
              )
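
For reference, here is a minimal sketch of one way to get several part files out of an input this small. The bucket paths and the target count of 5 are taken from the question; the app name and variable names are illustrative assumptions. Two details in the code above are worth noting: coalesce returns a new DataFrame rather than modifying mydf in place, so the result of mydf.coalesce(5) is discarded, and coalesce can only reduce the partition count, whereas repartition can also increase it:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partition-test").getOrCreate()

mydf = spark.read.csv("s3a://bucket/myfile.csv", header=True)

# repartition, unlike coalesce, can raise the partition count; the returned
# DataFrame must be captured because DataFrames are immutable
repartitioned = mydf.repartition(5)
print(repartitioned.rdd.getNumPartitions())  # expected: 5

# each non-empty partition becomes one part-* file under the output prefix
repartitioned.write.csv(path="s3a://bucket/output", header="true")

If the intent was instead for the read itself to produce multiple partitions, setting spark.sql.files.maxPartitionBytes (rather than spark.files.maxPartitionBytes) before the read is the relevant knob.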

0 Answers:

No answers yet.