How do I select an ambiguous column reference?

Asked: 2018-03-28 22:54:16

Tags: apache-spark pyspark apache-spark-sql spark-dataframe

Here is some sample code illustrating what I am trying to do. I have a DataFrame with columns named companyid and companyId. I want to select companyId, but the reference is ambiguous. How do I explicitly select the correct column?

>>> from pyspark.sql import Row
>>> data = [Row(companyId=1, companyid=2, company="Hello world industries")]
>>> df = sc.parallelize(data).toDF()
>>> df.createOrReplaceTempView('my_df')
>>> spark.sql("SELECT companyid FROM my_df")

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/spark22/python/pyspark/sql/session.py", line 603, in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
  File "/opt/spark22/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
  File "/opt/spark22/python/pyspark/sql/utils.py", line 69, in deco
    raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: u"Reference 'companyid' is ambiguous, could be: companyid#1L, companyid#2L.; line 1 pos 7"

1 Answer:

Answer 0 (score: 3)

The solution turned out to be very simple. Before running the SELECT statement, I ran the following command:

spark.sql('set spark.sql.caseSensitive=true')