PySpark error with reduceByKey

Date: 2017-01-03 16:52:53

Tags: apache-spark mapreduce pyspark key-value reduce

I have a problem with reduceByKey(). I can't display the result... I have the keys and the values... but it seems impossible to use reduceByKey...

data_test_bis = data_textfile.map(lambda x: (x.split(",")[8].encode("utf-8").replace('"','').replace("'",''), 1)).filter(lambda x: x[0].startswith('Ru'))#.reduceByKey(lambda x, y: x + y)
#data_test_filter = data_test_bis.filter(lambda x: x[0].startswith('"R'))
print("TEST FILTER !")
print(type(data_test_bis))
print(data_test_bis.take(5))
print(data_test_bis.keys().take(10))
print(data_test_bis.values().take(10))

Result:

TEST FILTER !
<class 'pyspark.rdd.PipelinedRDD'> 
[('Rueil-Malmaison', 1), ('Ruse', 1), ('Rueil Malmaison', 1), ('Rueil-Malmaison', 1), ('Ruda Slaska', 1)]
['Rueil-Malmaison', 'Ruse', 'Rueil Malmaison', 'Rueil-Malmaison', 'Ruda Slaska', 'Ruisbroek (Belgique)', 'Ruda \xc3\x85\xc5\xa1l\xc3\x84\xe2\x80\xa6ska', 'Rueil malmaison', 'Rueil', 'Ruisbroek']
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]

When I try either of these, I get an error:

print(data_test_bis.reduceByKey(add).take(10))

print(data_test_bis.reduceByKey(lambda x, y: x + y).take(10))

Error:

17/01/03 17:47:09 ERROR scheduler.TaskSetManager: Task 18 in stage 3.0 failed 4 times; aborting job
Traceback (most recent call last):
  File "/home/spark/julien/Test_.py", line 89, in <module>
    test()
  File "/home/spark/julien/Test_.py", line 33, in test
    print(data_test_bis.reduceByKey(lambda x, y:x+y).take(10))
  File "/home/spark/opt/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1297, in take
  File "/home/spark/opt/spark/python/lib/pyspark.zip/pyspark/context.py", line 939, in runJob
  File "/home/spark/opt/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
  File "/home/spark/opt/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 45, in deco
  File "/home/spark/opt/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 18 in stage 3.0 failed 4 times, most recent failure: Lost task 18.3 in stage 3.0 (TID 67, 10.0.15.7): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/opt/spark/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
    process()
  File "/opt/spark/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/home/spark/opt/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2346, in pipeline_func
  File "/home/spark/opt/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 2346, in pipeline_func
  File "/home/spark/opt/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 317, in func
  File "/home/spark/opt/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1776, in combineLocally
  File "/opt/spark/python/lib/pyspark.zip/pyspark/shuffle.py", line 236, in mergeValues
    for k, v in iterator:
  File "/home/spark/julien/Test_.py", line 25, in <lambda>
IndexError: list index out of range

I don't understand why I am getting an IndexError...

1 Answer:

Answer 1 (score: 0)

Repeat after me: I will never assume that an unstructured data source is correctly formatted.

Something like this:

... .map(lambda x: (x.split(",")[8].encode("utf-8") ...)

is great for quick tutorials but useless in practice. In general, never rely on the assumptions that:

  • the data has a particular shape (for example, that there will always be 9 comma-separated fields);
  • encoding / decoding will succeed (here it actually can, but in general it won't always).

At the very least, include some minimal exception handling:

def parse_to_pair(line):
    try:
        # take the 9th comma-separated field and strip the quote characters
        key = (line
            .split(",")[8]
            .encode("utf-8")
            .replace('"', '')
            .replace("'", ''))

        return [(key, 1)]
    except:
        # malformed line (too few fields, encoding failure, ...): emit nothing
        return []

and use flatMap:

data_textfile.flatMap(parse_to_pair)
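
For context, a minimal sketch of how the parsed pairs could then be wired back into the pipeline from the question (the same filter, followed by the reduceByKey that was previously failing):

# rough sketch: malformed lines become [] and are silently dropped by flatMap,
# so reduceByKey no longer hits the IndexError
counts = (data_textfile
          .flatMap(parse_to_pair)
          .filter(lambda kv: kv[0].startswith('Ru'))
          .reduceByKey(lambda x, y: x + y))

print(counts.take(10))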

Notes:

  • You can skip the encode step by calling SparkContext.textFile with use_unicode set to False (see the sketch after these notes). It will:

    • use str instead of unicode in Python 2
    • use bytes in Python 3
  • You should check not only that each line contains at least 9 fields, but that it contains exactly the expected number of fields.

  • If your input happens to be CSV, use a CSV reader.
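
To illustrate the last two notes, here is a rough sketch combining use_unicode=False with the csv module, which handles quoted fields instead of stripping quotes by hand (the path and the expected field count are placeholders, and this assumes the Python 2 setup visible in the traceback):

import csv

EXPECTED_FIELDS = 12  # placeholder: the real number of columns in the input file

def parse_csv_line(line):
    # parse one line with the csv module and emit [(city, 1)], or [] if the line is malformed
    try:
        fields = next(csv.reader([line]))
        if len(fields) != EXPECTED_FIELDS:
            return []
        return [(fields[8], 1)]
    except (csv.Error, StopIteration, IndexError):
        return []

# use_unicode=False makes textFile return plain byte strings (str in Python 2),
# so the .encode("utf-8") step is no longer needed
counts = (sc.textFile("hdfs:///path/to/data.csv", use_unicode=False)
            .flatMap(parse_csv_line)
            .filter(lambda kv: kv[0].startswith('Ru'))
            .reduceByKey(lambda x, y: x + y))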