Generating random values with normal and uniform distributions

Date: 2020-08-05 15:20:11

Tags: scala apache-spark apache-spark-sql apache-spark-mllib

I have to test some algorithms with Spark MLlib, and I would like to know whether Spark has a built-in way to generate random double values drawn from normal and uniform distributions.

The size of the DataFrame can vary, from about a hundred rows up to several million.

Is there an efficient way to do this?

1 answer:

Answer 0 (score: 1)

With Spark SQL you can do this easily using the Random Data Generation SQL functions.

You can generate columns filled with random values drawn from a uniform or a normal distribution.

This is useful for randomized algorithms, prototyping, and performance testing.

For example:

import org.apache.spark.sql.functions.{rand, randn}

val dfr = sqlContext.range(0,20) // range can be what you want
val randomValues = dfr.select("id")
                      .withColumn("uniform", rand(10L)) // uniform distribution
                      .withColumn("normal", randn(10L)) // normal distribution

randomValues.show(truncate = false)

Output:

+---+-------------------+---------------------+
|id |uniform            |normal               |
+---+-------------------+---------------------+
|0  |0.41371264720975787|-0.5877482396744728  |
|1  |0.7311719281896606 |1.5746327759749246   |
|2  |0.9031701155118229 |-2.087434531229601   |
|3  |0.09430205113458567|1.0191385374853092   |
|4  |0.38340505276222947|-0.011306020094829757|
|5  |0.1982919638208397 |-0.256535324205377   |
|6  |0.12714181165849525|-0.31703264334668824 |
|7  |0.7604318153406678 |0.4977629425313746   |
|8  |0.83487085888236   |0.6400381760855594   |
|9  |0.3142596916968412 |-0.6157521958767469  |
|10 |0.12030715258495939|-0.506853671746243   |
|11 |0.12131363910425985|1.4250903895905769   |
|12 |0.4054302479603469 |0.1478840304856363   |
|13 |0.7658961595628857 |1.1431439803376258   |
|14 |0.5460182640666627 |1.4335019327105383   |
|15 |0.44292918521277047|-0.1413699193557902  |
|16 |0.8898784253886249 |0.9657665088756656   |
|17 |0.03650707717266999|-0.5021009082343131  |
|18 |0.5702126663185123 |0.07606123371426597  |
|19 |0.9212238921510436 |-0.3136534458701739  |
+---+-------------------+---------------------+
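
If you need a much larger DataFrame, or distributions other than uniform on [0, 1) and the standard normal, you can scale and shift the same columns with ordinary column arithmetic. Below is a minimal sketch, assuming a Spark 2.x+ SparkSession named `spark`; the row count, seed, and distribution parameters are illustrative:

import org.apache.spark.sql.functions.{rand, randn}

// Assumes a Spark 2.x+ SparkSession named `spark`; row count and seed are arbitrary examples.
val bigDf = spark.range(0, 5000000)                      // several million rows
  .withColumn("uniform_0_100", rand(42L) * 100)          // uniform on [0, 100)
  .withColumn("normal_mu10_sd2", randn(42L) * 2 + 10)    // normal with mean 10, stddev 2

bigDf.show(5, truncate = false)

Omitting the seed (rand() / randn()) gives a different sample on every run; fixing the seed makes the generated test data reproducible.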