Why does PySpark randomly give a "Socket is closed" error?

Asked: 2016-09-21 15:41:45

Tags: apache-spark pyspark

I just finished a PySpark training course and I'm writing a script of example lines of code (which is why the code block doesn't really do anything useful). Every time I run the script I get this error once or twice, and the line that throws it changes from run to run. I have tried setting spark.executor.memory and spark.executor.heartbeatInterval, but the error persists. I have also tried putting .cache() at the end of various lines, with no change.
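
For reference, this is roughly how I set those options (the values below are only placeholders, not the exact numbers I tried):

from pyspark import SparkConf

# placeholder values -- I experimented with different numbers between runs
conf = SparkConf() \
    .setMaster("local[*]") \
    .setAppName("MyProcessName") \
    .set("spark.executor.memory", "2g") \
    .set("spark.executor.heartbeatInterval", "60s")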

The error:

16/09/21 10:29:32 ERROR Utils: Uncaught exception in thread stdout writer for python
java.net.SocketException: Socket is closed
        at java.net.Socket.shutdownOutput(Socket.java:1551)
        at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3$$anonfun$apply$4.apply$mcV$sp(PythonRDD.scala:344)
        at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3$$anonfun$apply$4.apply(PythonRDD.scala:344)
        at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3$$anonfun$apply$4.apply(PythonRDD.scala:344)
        at org.apache.spark.util.Utils$.tryLog(Utils.scala:1870)
        at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:344)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1857)
        at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:269)

The code:

from pyspark import SparkConf, SparkContext

def parseLine(line):
    fields = line.split(',')
    return (int(fields[0]), float(fields[2]))

def parseGraphs(line):
    fields = line.split()
    return (fields[0]), [int(n) for n in fields[1:]]

# putting the [*] after local makes Spark run locally with one worker thread per core of your PC
conf = SparkConf().setMaster("local[*]").setAppName("MyProcessName")

sc = SparkContext(conf = conf)

# parse the raw data and map it to an rdd.
# each item in this rdd is a tuple
# two methods to get the exact same data:
########## All of these methods can use lambda or full methods in the same way ##########
# read in a text file
customerOrdersLines = sc.textFile("file:///SparkCourse/customer-orders.csv")
customerOrdersRdd = customerOrdersLines.map(parseLine)
customerOrdersRdd = customerOrdersLines.map(lambda l: (int(l.split(',')[0]), float(l.split(',')[2])))
print customerOrdersRdd.take(1)

# countByValue groups identical values and counts them
salesByCustomer = customerOrdersRdd.map(lambda sale: sale[0]).countByValue()
print salesByCustomer.items()[0]

# use flatMap to cut everything up by whitespace
bookText = sc.textFile("file:///SparkCourse/Book.txt")
bookRdd = bookText.flatMap(lambda l: l.split())
print bookRdd.take(1)

# create key/value pairs that will allow for more complex uses
names = sc.textFile("file:///SparkCourse/marvel-names.txt")
namesRdd = names.map(lambda line: (int(line.split('\"')[0]), line.split('\"')[1].encode("utf8")))
print namesRdd.take(1)

graphs = sc.textFile("file:///SparkCourse/marvel-graph.txt")
graphsRdd = graphs.map(parseGraphs)
print graphsRdd.take(1)

# this will append "extra text" to each name.
# this is faster than a normal map because it doesn't give you access to the keys
extendedNamesRdd = namesRdd.mapValues(lambda heroName: heroName + "extra text")
print extendedNamesRdd.take(1)

# not the best example because the costars is already a list of integers
# but this should return a list, which will update the values
flattenedCostarsRdd = graphsRdd.flatMapValues(lambda costars: costars)
print flattenedCostarsRdd.take(1)

# put the heroes in ascending index order
sortedHeroes = namesRdd.sortByKey()
print sortedHeroes.take(1)

# to sort heroes by alphabetical order, we switch key/value to value/key, then sort
alphabeticalHeroes = namesRdd.map(lambda (key, value): (value, key)).sortByKey()
print alphabeticalHeroes.take(1)

# make sure that "spider" is in the name of the hero
spiderNames = namesRdd.filter(lambda (id, name): "spider" in name.lower())
print spiderNames.take(1)

# reduce by key keeps the key and performs aggregation methods on the values.  in this example, taking the sum
combinedGraphsRdd = flattenedCostarsRdd.reduceByKey(lambda value1, value2: value1 + value2)
print combinedGraphsRdd.take(1)

# broadcast: this is accessible from any executor
sentData = sc.broadcast(["this can be accessed by all executors", "access it using sentData"])

# accumulator:  this is synced across all executors
hitCounter = sc.accumulator(0)

1 Answer:

Answer (score: 0)

Disclaimer: I haven't spent enough time in Spark's codebase, but let me give you some hints that may lead to a solution. What follows is just an explanation of where to look for more information, not a fix for the issue.

The exception you are facing is due to some other issue in the code here (as you can see from the line java.net.Socket.shutdownOutput(Socket.java:1551), which is when worker.shutdownOutput() is executed).

16/09/21 10:29:32 ERROR Utils: Uncaught exception in thread stdout writer for python
java.net.SocketException: Socket is closed
        at java.net.Socket.shutdownOutput(Socket.java:1551)
        at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3$$anonfun$apply$4.apply$mcV$sp(PythonRDD.scala:344)
        at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3$$anonfun$apply$4.apply(PythonRDD.scala:344)
        at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3$$anonfun$apply$4.apply(PythonRDD.scala:344)
        at org.apache.spark.util.Utils$.tryLog(Utils.scala:1870)
        at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:344)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1857)
        at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:269)

That leads me to believe the ERROR is a follow-up to some other, earlier error.

The name stdout writer for python is the name of the thread (used by the EvalPythonExec physical operator) that is responsible for the communication between Spark and pyspark (so that you can execute Python code without much change).

In fact, the scaladoc of EvalPythonExec gives a lot of information about the underlying communication infrastructure that pyspark uses internally, which relies on sockets to talk to an external Python process.

  

Python evaluation works by sending the necessary (projected) input data via a socket to an external Python process, and combining the result from the Python process with the original row.

Moreover, python is used by default unless overridden with PYSPARK_DRIVER_PYTHON or PYSPARK_PYTHON (as you can see in the pyspark shell script here and here). This is the name that appears in the name of the thread that failed.
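
Not part of the original answer, but as a rough sketch: one way to pin the interpreter is to set those variables before the SparkContext is created (the paths below are placeholders for whatever interpreter you actually want Spark to launch):

import os

# placeholder paths -- point these at the interpreter Spark should use
os.environ["PYSPARK_PYTHON"] = "/usr/bin/python2.7"
os.environ["PYSPARK_DRIVER_PYTHON"] = "/usr/bin/python2.7"

# ...then build SparkConf / SparkContext exactly as in the question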

  

16/09/21 10:29:32 ERROR Utils: Uncaught exception in thread stdout writer for python

I recommend checking the Python version on your system with the following command:

python -c 'import sys; print(sys.version_info)'

It could be that you are running the very latest Python, which has not been tested with Spark. Just guessing...
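
Not from the answer itself, but a quick sanity check you can run inside the script to compare the driver's Python with the one the executors actually use (assuming sc is the SparkContext created above):

import sys

print(sys.version)  # Python used by the driver
# a trivial job so an executor reports the interpreter it runs the lambda with
print(sc.parallelize([0], 1).map(lambda _: __import__("sys").version).first())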

You should include the entire log of your pyspark application's run; that is where I would expect to find the answer.
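
Again just a suggestion on top of the answer: if the default output is too sparse, making the logging more verbose before reproducing the error should help surface the earlier, root-cause failure:

# more verbose logging; use "DEBUG" for even more detail
sc.setLogLevel("INFO")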
