Spark import issues in Python

Asked: 2016-10-03 03:54:03

Tags: python apache-spark pyspark caffe pycaffe

We are running a spark-submit command on a Python script that uses Spark to parallelize object detection with Caffe in Python. The script itself runs perfectly fine as a Python-only script, but it returns an import error when used together with the Spark code. I know the Spark code is not the problem, because it runs perfectly fine on my home machine; it just does not work on AWS. I am not sure whether this is related to the environment variables, since it behaves as if they are not being detected.

The following environment variables are set:

SPARK_HOME=/opt/spark/spark-2.0.0-bin-hadoop2.7
PATH=$SPARK_HOME/bin:$PATH
PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
PYTHONPATH=/opt/caffe/python:${PYTHONPATH}
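
One thing worth noting: environment variables exported on the driver node are not automatically propagated to the Spark executors. A minimal sketch (an assumption for illustration, not part of the original post) of forwarding the Caffe path to the executors through the spark.executorEnv setting when the job builds its own SparkContext:

# Sketch: forward the Caffe path to every executor's PYTHONPATH,
# since driver-side exports are not propagated automatically.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("caffe-detection")
        .set("spark.executorEnv.PYTHONPATH", "/opt/caffe/python"))
sc = SparkContext(conf=conf)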

Error:

16/10/03 01:36:21 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 172.31.50.167): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
 File "/opt/spark/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 161, in main
   func, profiler, deserializer, serializer = read_command(pickleSer, infile)
 File "/opt/spark/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 54, in read_command
   command = serializer._read_with_length(file)
 File "/opt/spark/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
   return self.loads(obj)
 File "/opt/spark/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
   return pickle.loads(obj)
 File "/opt/spark/spark-2.0.0-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/cloudpickle.py", line 664, in subimport
   __import__(name)
ImportError: ('No module named caffe', <function subimport at 0x7efc34a68b90>, ('caffe',))

Does anyone know why this issue occurs?

This package from Yahoo accomplishes what we are trying to do by shipping Caffe as a jar dependency and then using it again on the Python side, but I have not found any resources on how to build it and import it ourselves.

https://github.com/yahoo/CaffeOnSpark

1 Answer:

Answer 0 (score: 4)

You probably have not compiled the Caffe Python wrapper in your AWS environment. For reasons that completely escape me (and several others, https://github.com/BVLC/caffe/issues/2440), pycaffe is not available as a pypi package; you have to compile it yourself. You should follow the compilation/make instructions here, or automate them with ebextensions if you are in an AWS EB environment: http://caffe.berkeleyvision.org/installation.html#python
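
As a quick way to confirm whether the compiled pycaffe module is actually visible to the workers, a small check like the one below can be run before the real job. This is only a sketch, not from the original answer, and it assumes pycaffe has been built and placed on the executors' PYTHONPATH (e.g. /opt/caffe/python) on every node:

# Sketch: run one task per partition so each executor attempts the import.
from pyspark import SparkContext

def check_caffe(_):
    try:
        import caffe  # succeeds only if pycaffe was compiled and is on PYTHONPATH
        return [("ok", caffe.__file__)]
    except ImportError as e:
        return [("missing", str(e))]

sc = SparkContext(appName="caffe-import-check")
print(sc.parallelize(range(8), 8).mapPartitions(check_caffe).collect())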