403 error when accessing s3a from Spark

Date: 2020-03-02 13:32:30

Tags: apache-spark hadoop amazon-s3 pyspark

Question:

I can download the files successfully with both the AWS CLI and boto3. However, when reading them through Hadoop/Spark's S3A connector, I get the following error:

py4j.protocol.Py4JJavaError: An error occurred while calling o24.parquet.
: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: BCFFD14CB2939D68, AWS Error Code: null, AWS Error Message: Forbidden, S3 Extended Request ID: MfT8J6ZPlJccgHBXX+tX1fpX47V7dWCP3Dq+W9+IBUfUhsD4Nx+DcyqsbgbKsPn8NZzjc2U

Configuration (running on my local machine):

  1. Spark version 2.4.4

  2. Hadoop version 2.7

Jars added (see the sketch after this list for one way to supply them):

  1. hadoop-aws-2.7.3.jar

  2. aws-java-sdk-1.7.4.jar
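
For reference, a minimal sketch of making these jars visible to a local PySpark session; the paths are hypothetical and must point at wherever the jars were actually downloaded:

import  # noqa: E999 -- see below
from pyspark import SparkConf, SparkContext

# Hypothetical local paths; adjust to the real jar locations.
conf = SparkConf().set(
    "spark.jars",
    "/path/to/hadoop-aws-2.7.3.jar,/path/to/aws-java-sdk-1.7.4.jar",
)
sc = SparkContext.getOrCreate(conf=conf)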

Hadoop configuration:

hadoop_conf.set("fs.s3a.access.key", access_key)
hadoop_conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
hadoop_conf.set("fs.s3a.secret.key", secret_key)
hadoop_conf.set("fs.s3a.aws.credentials.provider","org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider")
hadoop_conf.set("fs.s3a.session.token", session_key)
hadoop_conf.set("fs.s3a.endpoint", "s3-us-west-2.amazonaws.com") # yes, I am using central eu server.
hadoop_conf.set("com.amazonaws.services.s3.enableV4", "true")

Code to read the file:

from pyspark import SparkConf, SparkContext, SQLContext

sc = SparkContext.getOrCreate()
hadoop_conf = sc._jsc.hadoopConfiguration()  # JVM-side Hadoop configuration; set keys before reading
sqlContext = SQLContext(sc)
df = sqlContext.read.parquet(path)  # path is the s3a:// URI of the parquet data
print(df.head())
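
Since the AWS CLI and boto3 both work, one quick sanity check is to confirm that the bucket's region matches the configured fs.s3a.endpoint; a 403 with "AWS Error Code: null" can stem from a region/endpoint or signing mismatch. A sketch using boto3 (the bucket name is a placeholder):

import boto3

# Placeholder bucket name; use the bucket from the parquet path.
s3 = boto3.client("s3")
region = s3.get_bucket_location(Bucket="my-bucket")["LocationConstraint"]
print(region)  # should match the endpoint region, e.g. "us-west-2" (None means us-east-1)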

1 answer:

Answer 0 (score 0):

Set the AWS credentials provider to configure the credentials:

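The answer's code snippet appears to have been lost in scraping. Below is a hedged reconstruction of what such a credentials-provider configuration usually looks like, mirroring the question's settings. Note that TemporaryAWSCredentialsProvider and fs.s3a.session.token support arrived in Hadoop 2.8, so with the hadoop-aws-2.7.3 jar used in the question those settings are likely ignored and requests are signed without the session token, which S3 rejects with 403; upgrading to matching Hadoop 2.8+ / hadoop-aws jars is the usual fix when using session credentials.

# Hedged sketch. With temporary (STS) credentials, Hadoop 2.8+ jars are required:
hadoop_conf.set(
    "fs.s3a.aws.credentials.provider",
    "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider",
)
hadoop_conf.set("fs.s3a.access.key", access_key)
hadoop_conf.set("fs.s3a.secret.key", secret_key)
hadoop_conf.set("fs.s3a.session.token", session_key)

# With long-lived keys on Hadoop 2.7.x, only the key pair is needed:
# hadoop_conf.set("fs.s3a.access.key", access_key)
# hadoop_conf.set("fs.s3a.secret.key", secret_key)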