Scala — conditionally replacing column values in a DataFrame

Date: 2018-08-28 19:56:07

Tags: scala apache-spark dataframe

DF1 is what I currently have, and I want to transform it into DF2.

Desired output:

 DF1                                       DF2
+---------+-------------------+          +---------+------------------------------+
|   ID    | Category          |          |   ID    | Category                     |
+---------+-------------------+          +---------+------------------------------+
|  31898  |  Transfer         |          |  31898  |  Transfer (e-Transfer)       |
|  31898  |  e-Transfer       |  =====>  |  32614  |  Transfer (e-Transfer + IMT) |
|  32614  |  Transfer         |  =====>  |  33987  |  Transfer (IMT)              |
|  32614  |  e-Transfer + IMT |          +---------+------------------------------+
|  33987  |  Transfer         |
|  33987  |  IMT              |
+---------+-------------------+

Code:

val df = DF1.groupBy("ID").agg(collect_set("Category").as("CategorySet"))
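// My attempt to combine the elements of CategorySet into one string — this fails: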
val DF2 = df.withColumn("Category", $"CategorySet"(0) ($"CategorySet"(1)))

How can I fix this? Also, if there is a better way to achieve the same result, I'm open to it. Thanks in advance.

1 Answer:

Answer 0 (score: 0)

One way is to use a UDF:

import org.apache.spark.sql.functions._

// Join the collected categories into a single string
val flatten = udf((xs: Seq[String]) => xs.mkString(" + "))

df.groupBy("ID").agg(flatten(collect_set("Category")).as("CategorySet")).show(false)

+-----+---------------------------+
|ID   |CategorySet                |
+-----+---------------------------+
|33987|Transfer                   |
|32614|Transfer + e-Transfer + IMT|
|34193|e-Transfer                 |
|31898|Transfer + e-Transfer      |
+-----+---------------------------+
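One caveat (my note, not part of the original answer): `collect_set` gives no ordering guarantee, so the concatenation order can vary between runs. A minimal sketch that makes the output deterministic by sorting the set before joining, reusing the `flatten` UDF defined above:

    df.groupBy("ID")
      .agg(flatten(sort_array(collect_set("Category"))).as("CategorySet"))
      .show(false)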

Another approach is to just use `concat_ws`:

df.groupBy("ID").agg(concat_ws(" + ",collect_set("Category")).as("CategorySet")).show(false)

+-----+---------------------------+
|ID   |CategorySet                |
+-----+---------------------------+
|33987|Transfer                   |
|32614|Transfer + e-Transfer + IMT|
|34193|e-Transfer                 |
|31898|Transfer + e-Transfer      |
+-----+---------------------------+  
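Note that both outputs above use `A + B` formatting rather than the `Transfer (…)` format shown in DF2. A sketch of one way to reproduce the exact desired format — this is my addition, assuming Spark 2.4+ (for `array_remove`/`array_join`) and that every ID has a `Transfer` row, as in the sample data:

    import org.apache.spark.sql.functions._

    val DF2 = DF1
      .groupBy("ID")
      .agg(collect_set("Category").as("cats"))
      // everything except the base "Transfer" label, joined with " + "
      .withColumn("others", array_join(array_remove(col("cats"), "Transfer"), " + "))
      // wrap the remaining categories in parentheses,
      // e.g. "Transfer (e-Transfer + IMT)"
      .withColumn("Category",
        when(col("others") === "", lit("Transfer"))
          .otherwise(concat(lit("Transfer ("), col("others"), lit(")"))))
      .select("ID", "Category")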