Update a column within a group from a value in the same column

Time: 2018-10-31 18:50:49

Tags: scala grouping updates

I have a DataFrame:

+---+-----+----+----+
| id|group|pick|name|
+---+-----+----+----+
|  1|    1|   0|   a|
|  2|    1|   1|   b|
|  3|    2|   0|   c|
|  4|    2|   0|   d|
|  5|    2|   1|   e|
|  6|    3|   1|   f|
|  7|    3|   0|   g|
|  8|    4|   1|   h|
+---+-----+----+----+

Every group has exactly one row with pick = 1, and I want to propagate that row's name to every row of its group, like this:

+---+-----+----+----+-----------+
| id|group|pick|name|picked_name|
+---+-----+----+----+-----------+
|  1|    1|   0|   a|          b|
|  2|    1|   1|   b|          b|
|  3|    2|   0|   c|          e|
|  4|    2|   0|   d|          e|
|  5|    2|   1|   e|          e|
|  6|    3|   1|   f|          f|
|  7|    3|   0|   g|          f|
|  8|    4|   1|   h|          h|
+---+-----+----+----+-----------+

Can someone please help? Note that I am very conscious of performance, since I have to do this on a huge dataset. Thanks in advance.

1 Answer:

Answer 0 (score: 1)

Here is one solution using the DataFrame API and a window function. Note the two imports below, which the snippet relies on (in spark-shell, `spark.implicits._` is already in scope for `toDF` and the `'col` syntax):

scala> import org.apache.spark.sql.functions._
import org.apache.spark.sql.functions._

scala> import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.expressions.Window

scala> val df = Seq((1,1,0,"a"),(2,1,1,"b"),(3,2,0,"c"),(4,2,0,"d"),(5,2,1,"e"),(6,3,1,"f"),(7,3,0,"g"),(8,4,1,"h")).toDF("id","group","pick","name")
df: org.apache.spark.sql.DataFrame = [id: int, group: int ... 2 more fields]

scala> val df2=df.filter('pick===1).withColumnRenamed("pick","pick2").withColumnRenamed("name","name2")
df2: org.apache.spark.sql.DataFrame = [id: int, group: int ... 2 more fields]

scala> df.join(df2,Seq("id","group"),"leftOuter").withColumn("picked_name",max('name2).over(Window.partitionBy('group))).drop("pick2","name2").show
+---+-----+----+----+-----------+
| id|group|pick|name|picked_name|
+---+-----+----+----+-----------+
|  1|    1|   0|   a|          b|
|  2|    1|   1|   b|          b|
|  6|    3|   1|   f|          f|
|  7|    3|   0|   g|          f|
|  8|    4|   1|   h|          h|
|  3|    2|   0|   c|          e|
|  4|    2|   0|   d|          e|
|  5|    2|   1|   e|          e|
+---+-----+----+----+-----------+

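Given the performance concern in the question, the extra join above can likely be avoided: a single window expression along the lines of `df.withColumn("picked_name", max(when('pick === 1, 'name)).over(Window.partitionBy('group)))` should produce the same column, since `when` without `otherwise` yields null for non-picked rows and `max` ignores nulls. As a sanity check of that per-group logic outside Spark, here is a minimal pure-Scala sketch; the `Row` case class and variable names are illustrative, not part of the original post:

```scala
// Mirror the example table: (id, group, pick, name)
final case class Row(id: Int, group: Int, pick: Int, name: String)

val rows = Seq(
  Row(1, 1, 0, "a"), Row(2, 1, 1, "b"),
  Row(3, 2, 0, "c"), Row(4, 2, 0, "d"), Row(5, 2, 1, "e"),
  Row(6, 3, 1, "f"), Row(7, 3, 0, "g"),
  Row(8, 4, 1, "h")
)

// For each group, the name of its single pick == 1 row
// (this is what the window/aggregation computes per partition)
val pickedByGroup: Map[Int, String] =
  rows.filter(_.pick == 1).map(r => r.group -> r.name).toMap

// Attach picked_name to every row, as the window function does
val result = rows.map(r => (r.id, r.name, pickedByGroup(r.group)))
```

`result` reproduces the `(id, name, picked_name)` triples of the expected output, e.g. group 2 rows all receive "e".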