Error: You must build Spark with Hive

Date: 2016-07-10 18:12:48

Tags: python apache-spark hive pyspark

I am running Spark 1.6.2 with Hive 0.13.1 and Hadoop 2.6.0.

I am trying to run this pyspark script:

import pyspark
from pyspark.sql import HiveContext

# create a local SparkContext, then wrap it in a HiveContext to run Hive SQL
sc = pyspark.SparkContext('local[*]')
hc = HiveContext(sc)
hc.sql("select col from table limit 3")

with this command line:

 ~/spark/bin/spark-submit script.py 

and I get this error message:

 File "/usr/local/hadoop/spark/python/pyspark/sql/context.py", line >552, in sql
 return DataFrame(self._ssql_ctx.sql(sqlQuery), self)
 File "/usr/local/hadoop/spark/python/pyspark/sql/context.py", line >660, in _ssql_ctx
 "build/sbt assembly", e)
 Exception: ("You must build Spark with Hive. Export 'SPARK_HIVE=true' and run build/sbt assembly", Py4JJavaError(u'An error occurred while >calling None.org.apache.spark.sql.hive.HiveContext.\n', JavaObject >id=o18))

When I followed those instructions, I got a warning saying that "export SPARK_HIVE is deprecated" and to use "-Phive -Phive-thriftserver" instead, so that is what I did:

 cd ~/spark/
 build/sbt -Pyarn -Phadoop-2.6 -Phive -Phive-thriftserver assembly
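
(To sanity-check that a rebuilt assembly actually contains the Hive classes, a stdlib-only probe like the one below can be run from the Spark source directory. This is just a sketch; the glob pattern for the assembly jar is an assumption based on the default sbt output layout in Spark 1.6.)

    import glob
    import zipfile

    # Spark 1.6 sbt builds usually place the assembly under
    # assembly/target/scala-2.10/ -- the pattern below is an assumption
    for jar in glob.glob("assembly/target/scala-*/spark-assembly-*.jar"):
        with zipfile.ZipFile(jar) as zf:
            has_hive = any(name.startswith("org/apache/spark/sql/hive/HiveContext")
                           for name in zf.namelist())
            print(jar, "contains HiveContext" if has_hive else "lacks HiveContext")

If this prints "lacks HiveContext", spark-submit is most likely still picking up an assembly that was built without the Hive profiles.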

But I got much the same error:

 [...]
 16/07/17 19:10:01 WARN metadata.Hive: Failed to access metastore. This class should not accessed in runtime.
 org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
     at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1236)
 [...]
 Traceback (most recent call last):
   File "/home/hadoop/spark3/./script.py", line 6, in <module>
     hc.sql("select timestats from logweb limit 3")
   File "/usr/local/hadoop/spark/python/lib/pyspark.zip/pyspark/sql/context.py",      line 552, in sql
   File "/usr/local/hadoop/spark/python/lib/pyspark.zip/pyspark/sql/context.py", line 660, in _ssql_ctx
 Exception: ("You must build Spark with Hive. Export 'SPARK_HIVE=true' and run build/sbt assembly", Py4JJavaError(u'An error occurred while calling None.org.apache.spark.sql.hive.HiveContext.\n', JavaObject id=o19))

I searched the web for this error, but none of the answers worked for me...

Can anyone help me?

I also tried the Spark version that is supposed to work with Hadoop (as Joss suggested), and I got this error:

 Traceback (most recent call last):
   File "/home/hadoop/spark3/./script.py", line 6, in <module>
     hc.sql("select timestats from logweb limit 3")
   File "/usr/local/hadoop/spark/python/lib/pyspark.zip/pyspark/sql/context.py", line 552, in sql
   File "/usr/local/hadoop/spark/python/lib/pyspark.zip/pyspark/sql/context.py", line 660, in _ssql_ctx
 Exception: ("You must build Spark with Hive. Export 'SPARK_HIVE=true' and run build/sbt assembly", Py4JJavaError(u'An error occurred while calling None.org.apache.spark.sql.hive.HiveContext.\n', JavaObject id=o19))

1 Answer:

Answer 0 (score: 0)

I have an Apache Spark build that comes with HiveContext by default; if you are interested, here is the download link:

As for the problem you are running into, it may be related to the Hadoop version you used to compile Spark. Check that the build parameters match the Hadoop version you need.
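
If it helps to confirm a mismatch, a small pyspark probe can print the Hadoop version the Spark assembly was actually compiled against (a sketch for a quick local check; org.apache.hadoop.util.VersionInfo is part of hadoop-common):

    import pyspark

    # Compare the Hadoop version bundled into the Spark assembly with the
    # Hadoop 2.6.0 install the cluster is actually running
    sc = pyspark.SparkContext('local[*]')
    print("Spark version: ", sc.version)
    print("Hadoop version:", sc._jvm.org.apache.hadoop.util.VersionInfo.getVersion())
    sc.stop()

If the printed Hadoop version is not 2.6.0, rebuilding with the matching profile (for example -Phadoop-2.6 -Dhadoop.version=2.6.0, per the Spark 1.6 build docs) would be the first thing to try.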