Converting a Spark DataFrame (with WrappedArray) to RDD[LabeledPoint] in Scala

Asked: 2016-12-29 10:32:30

Tags: arrays scala dataframe rdd

I'm new to Scala and I want to convert a DataFrame to an RDD, turning the label and features columns into an RDD[LabeledPoint] to use as input for MLlib. But I can't find a way to deal with the WrappedArray.

scala> test.printSchema
root
 |-- user_id: long (nullable = true)
 |-- brand_store_sn: string (nullable = true)
 |-- label: integer (nullable = true)
 |-- money_score: double (nullable = true)
 |-- normal_score: double (nullable = true)
 |-- action_score: double (nullable = true)
 |-- features: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- flag: string (nullable = true)
 |-- dt: string (nullable = true)


scala> test.head
res21: org.apache.spark.sql.Row = [2533,10005072,1,2.0,1.0,1.0,WrappedArray(["d90_pv_1sec:1.4471580313422192", "d3_pv_1sec:0.9030899869919435", "d7_pv_1sec:0.9030899869919435", "d30_pv_1sec:1.414973347970818", "d90_pv_week_decay:1.4235871662780681", "d1_pv_1sec:0.9030899869919435", "d120_pv_1sec:1.4471580313422192"]),user_positive,20161130]

1 Answer:

Answer 0 (score: 1)

First, since LabeledPoint expects a Vector of Doubles, I assume you also want to split each element of the features array on the colon (:) and treat its right-hand side as the double value, e.g.:

 "d90_pv_1sec:1.4471580313422192" --> 1.4471580313422192

If so, here's the transformation:

import scala.collection.mutable
import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.regression.LabeledPoint

// sample data - DataFrame with label, features and other columns
// (in spark-shell the implicits needed for toDF are already in scope;
// in an application you'd also need e.g. spark.implicits._)
val df = Seq(
  (1, Array("d90_pv_1sec:1.4471580313422192", "d3_pv_1sec:0.9030899869919435"), 4.0),
  (2, Array("d7_pv_1sec:0.9030899869919435", "d30_pv_1sec:1.414973347970818"), 5.0)
).toDF("label", "features", "ignored")

// extract relevant fields from Row and convert WrappedArray[String] into Vector:
val result = df.rdd.map(r => {
  val label = r.getAs[Int]("label")
  val featuresArray = r.getAs[mutable.WrappedArray[String]]("features")
  val features: Vector = Vectors.dense(
    featuresArray.map(_.split(":")(1).toDouble).toArray
  )
  LabeledPoint(label, features)
})

result.foreach(println)
// (1.0,[1.4471580313422192,0.9030899869919435])
// (2.0,[0.9030899869919435,1.414973347970818])
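As an aside (a sketch, not part of the original answer): if some array entries might be malformed, a slightly more defensive variant of the same mapping can drop them instead of throwing:

// a defensive sketch: keep only well-formed "name:value" pairs with numeric values
val safeResult = df.rdd.map { r =>
  val label = r.getAs[Int]("label")
  val values = r.getAs[mutable.WrappedArray[String]]("features").flatMap { s =>
    s.split(":") match {
      case Array(_, v) => scala.util.Try(v.toDouble).toOption // drop non-numeric values
      case _           => None                                // drop entries without exactly one colon
    }
  }
  LabeledPoint(label, Vectors.dense(values.toArray))
}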

Edit: following the clarification, now assuming that each item in the input array holds the expected index into the resulting sparse vector:

"d90_pv_1sec:1.4471580313422192" --> index = 90; value = 1.4471580313422192

The modified code would be:

val vectorSize: Int = 100 // just a guess - should be the maximum index + 1

val result = df.rdd.map(r => {
  val label = r.getAs[Int]("label")
  val arr = r.getAs[mutable.WrappedArray[String]]("features").toArray
  // parse each item into (index, value) tuple to use in sparse vector
  val elements = arr.map(_.split(":")).map {
    case Array(s, d) => (s.replaceAll("d|_pv_1sec","").toInt, d.toDouble)
  }
  LabeledPoint(label, Vectors.sparse(vectorSize, elements))
})

result.foreach(println)
// (1.0,(100,[3,90],[0.9030899869919435,1.4471580313422192]))
// (2.0,(100,[7,30],[0.9030899869919435,1.414973347970818]))

Note: using s.replaceAll("d|_pv_1sec","") might be a bit slow, as it compiles the regular expression anew for each item. If that turns out to matter, it can be replaced by the faster (but uglier) s.replace("d", "").replace("_pv_1sec", ""), which doesn't use regular expressions at all.
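Another option, sketched here as an alternative, is to compile the pattern once outside the map and reuse it (scala.util.matching.Regex is serializable, so the closure ships to executors fine):

import scala.util.matching.Regex

// compile the pattern once instead of once per item
val indexPattern: Regex = "d|_pv_1sec".r

val result = df.rdd.map(r => {
  val label = r.getAs[Int]("label")
  val arr = r.getAs[mutable.WrappedArray[String]]("features").toArray
  val elements = arr.map(_.split(":")).map {
    case Array(s, d) => (indexPattern.replaceAllIn(s, "").toInt, d.toDouble)
  }
  LabeledPoint(label, Vectors.sparse(vectorSize, elements))
})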
