pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating ..." when reading a csv

Asked: 2018-06-12 14:59:46

Tags: apache-spark pyspark apache-spark-sql

I am trying to read a csv file from S3 using a variable url:
>>> m = spark.read.csv( url, header="true",  sep=",")
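For context, the kind of setup being attempted looks roughly like the sketch below; the bucket name, key, and s3a credential settings are placeholders rather than my real values, and the hadoop-aws package is assumed to be on the classpath:

    from pyspark.sql import SparkSession

    # Placeholder values only: bucket, key, and credentials are illustrative
    spark = (
        SparkSession.builder
        .appName("read-csv-from-s3")
        .config("spark.hadoop.fs.s3a.access.key", "YOUR_ACCESS_KEY")
        .config("spark.hadoop.fs.s3a.secret.key", "YOUR_SECRET_KEY")
        .getOrCreate()
    )

    url = "s3a://some-bucket/path/to/file.csv"  # placeholder path
    m = spark.read.csv(url, header="true", sep=",")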

But I am getting the error shown below.

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/sw/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/session.py", line 565, in read
    return DataFrameReader(self._wrapped)
  File "/opt/sw/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/readwriter.py", line 70, in __init__
    self._jreader = spark._ssql_ctx.read()
  File "/opt/sw/spark-2.1.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/opt/sw/spark-2.1.0-bin-hadoop2.7/python/pyspark/sql/utils.py", line 79, in deco
    raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: u"Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState':"

Could someone tell me how to resolve this issue? I am using Spark 2.1.0.

0 Answers