Exploding multiple columns of the same type with different lengths

Date: 2018-09-18 22:13:30

Tags: scala apache-spark apache-spark-sql explode

I have a Spark dataframe in the format below that needs to be exploded. I checked other solutions, such as this one; however, in my case the before and after columns can be arrays of different lengths.

root
 |-- id: string (nullable = true)
 |-- before: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- start_time: string (nullable = true)
 |    |    |-- end_time: string (nullable = true)
 |    |    |-- area: string (nullable = true)
 |-- after: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- start_time: string (nullable = true)
 |    |    |-- end_time: string (nullable = true)
 |    |    |-- area: string (nullable = true)

For example, if the dataframe has just one row, with before an array of size 2 and after an array of size 3, the exploded version should have 5 rows and the following schema:

root
 |-- id: string (nullable = true)
 |-- type: string (nullable = true)
 |-- start_time: string (nullable = true)
 |-- end_time: string (nullable = true)
 |-- area: string (nullable = true)

where type is a new column whose value is either "before" or "after".

I can do this with two separate explodes, creating the type column in each, and then taking the union:

val dfSummary1 = df
  .withColumn("before_exp", explode($"before"))
  .withColumn("type", lit("before"))
  .withColumn("start_time", $"before_exp.start_time")
  .withColumn("end_time", $"before_exp.end_time")
  .withColumn("area", $"before_exp.area")
  .drop("before_exp", "before", "after")

val dfSummary2 = df
  .withColumn("after_exp", explode($"after"))
  .withColumn("type", lit("after"))
  .withColumn("start_time", $"after_exp.start_time")
  .withColumn("end_time", $"after_exp.end_time")
  .withColumn("area", $"after_exp.area")
  .drop("after_exp", "before", "after")

val dfResult = dfSummary1.unionAll(dfSummary2)

However, I am wondering if there is a more elegant way to do this. Thanks.

2 answers:

Answer 0 (score: 4)

You can also achieve this without a union. With this data:

import org.apache.spark.sql.functions._

case class Area(start_time: String, end_time: String, area: String)

val df = Seq((
  "1", Seq(Area("01:00", "01:30", "10"), Area("02:00", "02:30", "20")),
  Seq(Area("07:00", "07:30", "70"), Area("08:00", "08:30", "80"), Area("09:00", "09:30", "90"))
)).toDF("id", "before", "after")

you can do:

df
  .select($"id",
    explode(
      array(
        struct(lit("before").as("type"), $"before".as("data")),
        struct(lit("after").as("type"), $"after".as("data"))
      )
    ).as("step1")
  )
  .select($"id", $"step1.type", explode($"step1.data").as("step2"))
  .select($"id", $"type", $"step2.*")
  .show()

+---+------+----------+--------+----+
| id|  type|start_time|end_time|area|
+---+------+----------+--------+----+
|  1|before|     01:00|   01:30|  10|
|  1|before|     02:00|   02:30|  20|
|  1| after|     07:00|   07:30|  70|
|  1| after|     08:00|   08:30|  80|
|  1| after|     09:00|   09:30|  90|
+---+------+----------+--------+----+
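
As a side note, a similar shape can be produced with SQL's stack generator instead of hand-building the array of structs. A minimal sketch, assuming the same df as above (event is just a local alias I chose):

// Sketch: stack emits one (type, data) row per array column;
// a regular explode then flattens each array of structs.
df
  .selectExpr("id", "stack(2, 'before', before, 'after', after) as (type, data)")
  .select($"id", $"type", explode($"data").as("event"))
  .select($"id", $"type", $"event.*")
  .show()

This works because stack requires the value columns to share a type, which before and after already do here.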

Answer 1 (score: 2)

I think exploding the two columns separately, followed by a union, is a decent, straightforward approach. You can simplify the selection of the StructField elements a bit and extract a simple method for the repetitive explode process, like below:

import org.apache.spark.sql.functions._
import org.apache.spark.sql.DataFrame

case class Area(start_time: String, end_time: String, area: String)

val df = Seq((
  "1", Seq(Area("01:00", "01:30", "10"), Area("02:00", "02:30", "20")),
  Seq(Area("07:00", "07:30", "70"), Area("08:00", "08:30", "80"), Area("09:00", "09:30", "90"))
)).toDF("id", "before", "after")

def explodeCol(df: DataFrame, colName: String): DataFrame = {
  val expColName = colName + "_exp"
  df.
    withColumn("type", lit(colName)).
    withColumn(expColName, explode(col(colName))).
    select("id", "type", expColName + ".*")
}

val dfResult = explodeCol(df, "before") union explodeCol(df, "after")

dfResult.show
// +---+------+----------+--------+----+
// | id|  type|start_time|end_time|area|
// +---+------+----------+--------+----+
// |  1|before|     01:00|   01:30|  10|
// |  1|before|     02:00|   02:30|  20|
// |  1| after|     07:00|   07:30|  70|
// |  1| after|     08:00|   08:30|  80|
// |  1| after|     09:00|   09:30|  90|
// +---+------+----------+--------+----+
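
If more than two such array columns ever need flattening, the helper also composes over a list of column names. A small sketch, under the assumption that every listed column holds the same Area element type:

// Sketch: explode each same-typed array column with the helper above,
// then fold the per-column results into one DataFrame via union.
val arrayCols = Seq("before", "after")
val dfCombined = arrayCols.map(explodeCol(df, _)).reduce(_ union _)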