An efficient way to generate large random datasets in Spark

Asked: 2019-03-09 23:57:21

Tags: scala apache-spark dataframe

I am trying to generate a large random dataset in Spark. I essentially want to start at 2018-12-01 09:00:00 and, for each new row, advance the timestamp by scala.util.Random.nextInt(3) seconds. (The timestamp column is the only meaningful column.)

I want this approach to remain efficient even when generating trillions of rows on a large cluster, so I am generating the data in batches of 100 elements at a time, since trillions of rows cannot fit in memory in a single Seq.

This code has some problems, such as the use of var and the repeated union calls, but I am not sure how to avoid them. I am wondering whether anyone has a better idea.

Here is the code, which produces a DataFrame containing 10015 rows:

import Math.{max, min}
import java.sql.Timestamp
import java.sql.Timestamp.valueOf

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.{DataFrame, Row, SaveMode}
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._

object DataGenerator extends SparkEnv {

  import spark.implicits._

  val batchSize = 100
  val rnd = scala.util.Random

  // randomly generates a DataFrame with n Rows
  def generateTimestampData(n: Int): DataFrame = {
    val timestampDataFields = Seq(StructField("timestamp", TimestampType, false))
    val initDF = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], StructType(timestampDataFields))
    def loop(data: DataFrame, lastTime: Long, _n: Int): DataFrame = {
      if (_n == 0) {
        val w = Window.orderBy("timestamp")
        data.withColumn("eventID", concat(typedLit("event"), row_number().over(w)))
      } else {
        var thisTime = lastTime
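        // NOTE: the lazy stream below mutates thisTime as its elements are
        // forced, so after the batch is materialized it holds the last
        // generated timestamp, which then seeds the next batch (whose first
        // element therefore repeats this timestamp).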
        def rts(ts: Long): Stream[Long] = ts #:: { thisTime = ts + rnd.nextInt(3) * 1000; rts(thisTime) }
        val thisBatch = rts(lastTime)
          .map(new Timestamp(_))
          .take(min(batchSize, _n))
          .toDF("timestamp")
        loop(data union thisBatch, thisTime, max(_n - batchSize, 0))
      }
    }
    loop(initDF, valueOf("2018-12-01 09:00:00").getTime(), n)
  }

  def main(args: Array[String]): Unit = {
    val w = Window.orderBy("timestamp")
    val df = generateTimestampData(10015)
      .withColumn("part", floor(row_number().over(w) / 100))
    df.repartition(27)
      .write
      .partitionBy("part")
      .option("compression", "snappy")
      .mode(SaveMode.Overwrite)
      .parquet("data/generated/ts_data")
  }

}

1 Answer:

Answer 0 (score: 0)

You can implement an RDD that performs the random data generation in parallel, as shown in the following example.

import scala.reflect.ClassTag
import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// Each random partition will hold `numValues` items
final class RandomPartition[A: ClassTag](val index: Int, numValues: Int, random: => A) extends Partition {
  def values: Iterator[A] = Iterator.fill(numValues)(random)
}

// The RDD will parallelize the workload across `numSlices`
final class RandomRDD[A: ClassTag](@transient private val sc: SparkContext, numSlices: Int, numValues: Int, random: => A) extends RDD[A](sc, deps = Seq.empty) {

  // Based on the item and slice counts, determine how many values each
  // partition computes, distributing any remainder evenly.
  private val valuesPerSlice = numValues / numSlices
  private val slicesWithExtraItem = numValues % numSlices

  // Just ask the partition for the data
  override def compute(split: Partition, context: TaskContext): Iterator[A] =
    split.asInstanceOf[RandomPartition[A]].values

  // Generate the partitions so that the load is as evenly spread as possible
  // e.g. 10 partitions and 22 items -> 2 slices with 3 items and 8 slices with 2
  override protected def getPartitions: Array[Partition] =
    ((0 until slicesWithExtraItem).view.map(new RandomPartition[A](_, valuesPerSlice + 1, random)) ++
      (slicesWithExtraItem until numSlices).view.map(new RandomPartition[A](_, valuesPerSlice, random))).toArray

}

Once you have that, you can use it, passing your own random data generator, to get an RDD[Int]:

val rdd = new RandomRDD(spark.sparkContext, 10, 22, scala.util.Random.nextInt(100) + 1)
rdd.foreach(println)
/*
 * outputs:
 * 30
 * 86
 * 75
 * 20
 * ...
 */

The same works for an RDD[(Int, Int, Int)] of random tuples:

def rand = scala.util.Random.nextInt(100) + 1
val rdd = new RandomRDD(spark.sparkContext, 10, 22, (rand, rand, rand))
rdd.foreach(println)
/*
 * outputs:
 * (33,22,15)
 * (65,24,64)
 * (41,81,44)
 * (58,7,18)
 * ...
 */

Of course, you can also wrap it in a DataFrame very easily:

spark.createDataFrame(rdd).show()
/*
 * outputs:
 * +---+---+---+
 * | _1| _2| _3|
 * +---+---+---+
 * |100| 48| 92|
 * | 34| 40| 30|
 * | 98| 63| 61|
 * | 95| 17| 63|
 * | 68| 31| 34|
 * .............
 */
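If you want meaningful column names instead of the default _1/_2/_3, a quick toDF rename works (a small usage sketch; the names here are illustrative):

spark.createDataFrame(rdd).toDF("a", "b", "c").show(3)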

Note that in this case the generated data will be different every time an action is performed on the RDD/DataFrame. By changing the implementation of RandomPartition to actually store the values instead of generating them on the fly, you can have a stable set of random items while still retaining the flexibility and scalability of this approach.
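For example, rather than literally storing the values (eagerly filling an array in the partition constructor would generate everything on the driver), one stateless way to make them stable is to derive each partition's pseudo-random sequence from a fixed seed. This is a minimal sketch of that swapped-in technique; the class name and seeding scheme are illustrative, not part of the answer above:

import scala.reflect.ClassTag
import org.apache.spark.Partition

// Each values() call rebuilds the generator from the same per-partition
// seed, so every action over the RDD replays the identical value sequence.
final class SeededRandomPartition[A: ClassTag](
    val index: Int,
    numValues: Int,
    gen: scala.util.Random => A,
    baseSeed: Long
) extends Partition {
  def values: Iterator[A] = {
    val rng = new scala.util.Random(baseSeed + index) // deterministic per partition
    Iterator.fill(numValues)(gen(rng))
  }
}

Plugging this into RandomRDD (and threading a baseSeed through its constructor) would make counts, samples, and shows agree across repeated actions without holding any data in memory.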

A nice property of the stateless approach is that you can generate huge datasets even locally. The following ran in a few seconds on my laptop:

new RandomRDD(spark.sparkContext, 10, Int.MaxValue, 42).count
// returns: 2147483647
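To connect this back to the question's timestamp column: truly sequential timestamps do not parallelize, but since the gaps average one second, each partition can start from its own staggered offset and accumulate random gaps locally. A sketch under those assumptions (baseMillis, numSlices, and rowsPerSlice are illustrative, and ordering across partition boundaries is only approximate):

import java.sql.Timestamp

val baseMillis = Timestamp.valueOf("2018-12-01 09:00:00").getTime
val numSlices = 10
val rowsPerSlice = 1000

// Random gaps of 0, 1 or 2 seconds, generated in parallel
val gaps = new RandomRDD(spark.sparkContext, numSlices, numSlices * rowsPerSlice,
  scala.util.Random.nextInt(3) * 1000L)

// Each partition accumulates its gaps from its own start time
val timestamps = gaps.mapPartitionsWithIndex { (i, it) =>
  var t = baseMillis + i.toLong * rowsPerSlice * 1000L
  it.map { gap => t += gap; Tuple1(new Timestamp(t)) }
}

spark.createDataFrame(timestamps).toDF("timestamp").show(5)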