Why does this example produce NaN?

Asked: 2016-04-19 20:27:03

Tags: hadoop apache-spark pyspark pearson-correlation

I was looking at the documentation for Statistics.corr in PySpark: https://spark.apache.org/docs/1.1.0/api/python/pyspark.mllib.stat.Statistics-class.html#corr

Why does the correlation here produce NaN?

>>> from pyspark.mllib.linalg import Vectors
>>> from pyspark.mllib.stat import Statistics
>>> rdd = sc.parallelize([Vectors.dense([1, 0, 0, -2]), Vectors.dense([4, 5, 0, 3]),
...                       Vectors.dense([6, 7, 0,  8]), Vectors.dense([9, 0, 0, 1])])
>>> pearsonCorr = Statistics.corr(rdd)
>>> print str(pearsonCorr).replace('nan', 'NaN')
[[ 1.          0.05564149         NaN  0.40047142]
 [ 0.05564149  1.                 NaN  0.91359586]
 [        NaN         NaN  1.                 NaN]
 [ 0.40047142  0.91359586         NaN  1.        ]]

1 Answer:

Answer 0 (score: 3)

It is quite simple. The Pearson correlation coefficient is defined as:

$$\rho_{X,Y} = \frac{\operatorname{cov}(X, Y)}{\sigma_X \, \sigma_Y}$$

Since the standard deviation of the third column ([0, 0, 0, 0]) is equal to 0, the denominator is 0 for every correlation involving that column, so those entries of the matrix evaluate to NaN.
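
To make this concrete, here is a minimal NumPy sketch (not part of the original answer; the variable names are illustrative) that reproduces the same NaN by dividing a zero covariance by a zero product of standard deviations:

import numpy as np

# The four vectors from the question, stacked as rows (one observation per row).
data = np.array([
    [1, 0, 0, -2],
    [4, 5, 0,  3],
    [6, 7, 0,  8],
    [9, 0, 0,  1],
], dtype=float)

col0 = data[:, 0]   # [1, 4, 6, 9]
col2 = data[:, 2]   # [0, 0, 0, 0] -- the constant column

cov = np.cov(col0, col2)[0, 1]               # covariance with a constant column is 0.0
denom = col0.std(ddof=1) * col2.std(ddof=1)  # 0.0 because std(col2) == 0

print(cov / denom)                           # 0.0 / 0.0 -> nan (with a RuntimeWarning)

# np.corrcoef shows the same NaN pattern as Statistics.corr:
print(np.corrcoef(data, rowvar=False))

Because every pairwise correlation with the constant column has a zero denominator, the NaN fills the entire third row and third column of the output matrix, exactly as in the PySpark result above.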