Converting an RDD from type `org.apache.spark.rdd.RDD[((String, String), Double)]` to `org.apache.spark.rdd.RDD[(String, List[Double])]`

Date: 2014-12-16 16:44:17

Tags: scala apache-spark

I have an RDD:

  val rdd: org.apache.spark.rdd.RDD[((String, String), Double)] =
    sc.parallelize(List(
      (("a", "b"), 1.0),
      (("a", "c"), 3.0),
      (("a", "d"), 2.0)
      )) 

I am trying to convert this RDD from type `org.apache.spark.rdd.RDD[((String, String), Double)]` to `org.apache.spark.rdd.RDD[(String, List[Double])]`.

Each key in the resulting RDD should be unique, and its values should be sorted.

So the rdd above should be transformed into:

val newRdd : RDD[(String, List[Double])] = RDD(("a", List(1.0, 2.0, 3.0)))

To get the list of unique keys I use:

val r2 : org.apache.spark.rdd.RDD[(String, Double)] =  rdd.map(m => (m._1._1 , m._2))

How can I map each key to a sorted List of Doubles?

The whole code:

import org.apache.spark.SparkContext;

object group {
  println("Welcome to the Scala worksheet")       //> Welcome to the Scala worksheet

  val conf = new org.apache.spark.SparkConf()
    .setMaster("local")
    .setAppName("distances")
    .setSparkHome("C:\\spark-1.1.0-bin-hadoop2.4\\spark-1.1.0-bin-hadoop2.4")
    .set("spark.executor.memory", "1g")           //> conf  : org.apache.spark.SparkConf = org.apache.spark.SparkConf@1bd0dd4

  val sc = new SparkContext(conf)                 //> 14/12/16 16:44:56 INFO spark.SecurityManager: Changing view acls to: a511381
                                                  //| ,
                                                  //| 14/12/16 16:44:56 INFO spark.SecurityManager: Changing modify acls to: a5113
                                                  //| 81,
                                                  //| 14/12/16 16:44:56 INFO spark.SecurityManager: SecurityManager: authenticatio
                                                  //| n disabled; ui acls disabled; users with view permissions: Set(a511381, ); u
                                                  //| sers with modify permissions: Set(a511381, )
                                                  //| 14/12/16 16:44:57 INFO slf4j.Slf4jLogger: Slf4jLogger started
                                                  //| 14/12/16 16:44:57 INFO Remoting: Starting remoting
                                                  //| 14/12/16 16:44:57 INFO Remoting: Remoting started; listening on addresses :[
                                                  //| akka.tcp://sparkDriver@LA342399.dmn1.fmr.com:51092]
                                                  //| 14/12/16 16:44:57 INFO Remoting: Remoting now listens on addresses: [akka.tc
                                                  //| p://sparkDriver@LA342399.dmn1.fmr.com:51092]
                                                  //| 14/12/16 16:44:57 INFO util.Utils: Successfully started service 'sparkDriver
                                                  //| ' on port 51092.
                                                  //| 14/12/16 16:44:57 INFO spark.SparkEnv: Registering MapOutputTracker
                                                  //| 14/12/16 16:44:57 INFO spark.SparkEnv:
                                                  //| Output exceeds cutoff limit.

  val rdd: org.apache.spark.rdd.RDD[((String, String), Double)] =
    sc.parallelize(List(
      (("a", "b"), 1.0),
      (("a", "c"), 3.0),
      (("a", "d"), 2.0)
      ))                                          //> rdd  : org.apache.spark.rdd.RDD[((String, String), Double)] = ParallelCollec
                                                  //| tionRDD[0] at parallelize at group.scala:15

     val r2 : org.apache.spark.rdd.RDD[(String, Double)] =  rdd.map(m => (m._1._1 , m._2))
                                                  //> r2  : org.apache.spark.rdd.RDD[(String, Double)] = MappedRDD[1] at map at gr
                                                  //| oup.scala:21

     val m1 = r2.collect                          //> 14/12/16 16:44:59 INFO spark.SparkContext: Starting job: collect at group.sc
                                                  //| ala:23
                                                  //| 14/12/16 16:44:59 INFO scheduler.DAGScheduler: Got job 0 (collect at group.s
                                                  //| cala:23) with 1 output partitions (allowLocal=false)
                                                  //| 14/12/16 16:44:59 INFO scheduler.DAGScheduler: Final stage: Stage 0(collect 
                                                  //| at group.scala:23)
                                                  //| 14/12/16 16:44:59 INFO scheduler.DAGScheduler: Parents of final stage: List(
                                                  //| )
                                                  //| 14/12/16 16:44:59 INFO scheduler.DAGScheduler: Missing parents: List()
                                                  //| 14/12/16 16:44:59 INFO scheduler.DAGScheduler: Submitting Stage 0 (MappedRDD
                                                  //| [1] at map at group.scala:21), which has no missing parents
                                                  //| 14/12/16 16:44:59 WARN util.SizeEstimator: Failed to check whether UseCompre
                                                  //| ssedOops is set; assuming yes
                                                  //| 14/12/16 16:44:59 INFO storage.MemoryStore: ensureFreeSpace(1584) called wit
                                                  //| h curMem=0, maxMem=140142182
                                                  //| 14/12/16 16:44:59 INFO storage.MemoryStore: Block broadcast_0 stored as valu
                                                  //| es in memory (estimated size 1584.0 B
                                                  //| Output exceeds cutoff limit.
     m1.foreach { case (e, i) => println(e + "," + i) }
                                                  //> a,1.0
                                                  //| a,3.0
                                                  //| a,2.0


}

2 answers:

Answer 0 (score: 2)

Hi, with @Imm's solution your values will not be sorted; if they do come out sorted, it is only by chance. To get a sorted list you just need to add:


val r4 = r3.mapValues(_.toList.sorted)

So r4 will be an RDD in which the list of values for each key is sorted.

I hope this helps.
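For reference, a minimal sketch of how the sorting step fits onto the grouped data, assuming r2: RDD[(String, Double)] from the question and r3 = r2.groupByKey from the other answer:

val r3: org.apache.spark.rdd.RDD[(String, Iterable[Double])] = r2.groupByKey   // group all values per key

val r4: org.apache.spark.rdd.RDD[(String, List[Double])] = r3.mapValues(_.toList.sorted)   // materialize each group as a List and sort it

r4.collect.foreach(println)   // expected: (a,List(1.0, 2.0, 3.0))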

Answer 1 (score: 1)

Use groupByKey:

val r3: RDD[(String, Iterable[Double])] = r2.groupByKey

If you really want the second element to be a List rather than a general Iterable, you can use mapValues:

val r4 = r3.mapValues(_.toList)

Make sure import org.apache.spark.SparkContext._ is at the top of your file so that these functions are available.
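Putting the two answers together, a minimal end-to-end sketch (the local-mode configuration and variable names here are illustrative, not taken from the question):

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._   // implicit conversion that adds groupByKey / mapValues to pair RDDs
import org.apache.spark.rdd.RDD

val sc = new SparkContext(new org.apache.spark.SparkConf().setMaster("local").setAppName("group-example"))

val rdd: RDD[((String, String), Double)] =
  sc.parallelize(List((("a", "b"), 1.0), (("a", "c"), 3.0), (("a", "d"), 2.0)))

// Drop the second part of the key, group the values per remaining key, and sort each group.
val result: RDD[(String, List[Double])] =
  rdd.map { case ((k, _), v) => (k, v) }
     .groupByKey
     .mapValues(_.toList.sorted)

result.collect.foreach(println)   // (a,List(1.0, 2.0, 3.0))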