How to split a row into multiple rows based on dates using Spark Scala?

Asked: 2018-06-13 12:35:31

Tags: scala date apache-spark dataframe

My dataframe contains rows like the ones below. I need to split this data into a series of months based on pa_start_date and pa_end_date, creating new period start and end date columns.

Input dataframe df:

    p_id pa_id  p_st_date   p_end_date     pa_start_date   pa_end_date  
    p1   pa1    2-Jan-18      5-Dec-18     2-Mar-18        8-Aug-18       
    p1   pa2    3-Jan-18      8-Dec-18     6-Mar-18        10-Nov-18   
    p1   pa3    1-Jan-17      1-Dec-17     9-Feb-17        20-Apr-17  

Expected output:

    p_id pa_id  p_st_date   p_end_date pa_start_date pa_end_date period_start_date period_end_date
    p1   pa1    2-Jan-18    5-Dec-18   2-Mar-18      8-Aug-18     2-Mar-18 31-Mar-18
    p1   pa1    2-Jan-18    5-Dec-18   2-Mar-18      8-Aug-18     1-Apr-18 30-Apr-18
    p1   pa1    2-Jan-18    5-Dec-18   2-Mar-18      8-Aug-18     1-May-18 31-May-18
    p1   pa1    2-Jan-18    5-Dec-18   2-Mar-18      8-Aug-18     1-Jun-18 30-Jun-18
    p1   pa1    2-Jan-18    5-Dec-18   2-Mar-18      8-Aug-18     1-Jul-18 31-Jul-18
    p1   pa1    2-Jan-18    5-Dec-18   2-Mar-18      8-Aug-18     1-Aug-18 31-Aug-18
    p1   pa2    3-Jan-18    8-Dec-18   6-Mar-18      10-Nov-18    6-Mar-18 31-Mar-18
    p1   pa2    3-Jan-18    8-Dec-18   6-Mar-18      10-Nov-18    1-Apr-18 30-Apr-18
    p1   pa2    3-Jan-18    8-Dec-18   6-Mar-18      10-Nov-18    1-May-18 31-May-18
    p1   pa2    3-Jan-18    8-Dec-18   6-Mar-18      10-Nov-18    1-Jun-18 30-Jun-18
    p1   pa2    3-Jan-18    8-Dec-18   6-Mar-18      10-Nov-18    1-Jul-18 31-Jul-18
    p1   pa2    3-Jan-18    8-Dec-18   6-Mar-18      10-Nov-18    1-Aug-18 31-Aug-18
    p1   pa2    3-Jan-18    8-Dec-18   6-Mar-18      10-Nov-18    1-Sep-18 30-Sep-18
    p1   pa2    3-Jan-18    8-Dec-18   6-Mar-18      10-Nov-18    1-Oct-18 31-Oct-18
    p1   pa2    3-Jan-18    8-Dec-18   6-Mar-18      10-Nov-18    1-Nov-18 30-Nov-18
    p1   pa3    1-Jan-17    1-Dec-17   9-Feb-17      20-Apr-17    9-Feb-17 28-Feb-17
    p1   pa3    1-Jan-17    1-Dec-17   9-Feb-17      20-Apr-17    1-Mar-17 31-Mar-17
    p1   pa3    1-Jan-17    1-Dec-17   9-Feb-17      20-Apr-17    1-Apr-17 30-Apr-17

2 Answers:

Answer 0: (score: 1)

I solved this by creating a UDF as below.

Given pa_start_date and the number of months between pa_start_date and pa_end_date as arguments, this UDF creates an array of dates, one per month, covering every month from the start date through the end date.

    import java.sql.Date
    import org.joda.time.LocalDate
    import org.apache.spark.sql.functions.udf

    // Given a start date and a month count, returns one date string per month:
    // the start date itself, then the same day-of-month in each following month.
    def udfFunc: ((Date, Long) => Array[String]) = {
        (d, l) =>
            {
                var t = LocalDate.fromDateFields(d)
                val dates: Array[String] = new Array[String](l.toInt)
                for (i <- 0 until l.toInt) {
                    dates(i) = t.toString("yyyy-MM-dd")
                    t = t.plusMonths(1)
                }
                dates
            }
    }
    val my_udf = udf(udfFunc)

The final dataframe is built as below.

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions._

    val df = ss.read.format("csv").option("header", true).load(path)
        .select($"p_id", $"pa_id", $"p_st_date", $"p_end_date", $"pa_start_date", $"pa_end_date",
            my_udf(to_date(col("pa_start_date"), "dd-MMM-yy"), ceil(months_between(to_date(col("pa_end_date"), "dd-MMM-yy"), to_date(col("pa_start_date"), "dd-MMM-yy")))).alias("udf")) // array of monthly dates from the UDF
        .withColumn("after_divide", explode($"udf")) // explode the array of dates into individual rows
        .withColumn("period_end_date", date_format(last_day($"after_divide"), "dd-MMM-yy")) // last day of the month for each date
        .drop("udf")
        .withColumn("row_number", row_number() over (Window.partitionBy("p_id", "pa_id", "p_st_date", "p_end_date", "pa_start_date", "pa_end_date").orderBy(col("after_divide").asc))) // helper column for computing `period_start_date` below
        .withColumn("period_start_date", date_format(when(col("row_number").isin(1), $"after_divide").otherwise(trunc($"after_divide", "month")), "dd-MMM-yy")) // first row keeps the actual start date; later rows snap to the 1st of the month
        .drop("after_divide")
        .drop("row_number") // drop the helper columns not needed in the output
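The combination of the UDF with `ceil(months_between(...))` can be sketched in plain Scala with java.time, without Spark. The names `monthDates` and `monthCount` are illustrative, not part of the answer's code, and `monthCount` approximates `ceil(months_between(...))` by counting the calendar months the range touches:

```scala
import java.time.LocalDate
import java.time.temporal.ChronoUnit

// Sketch of the UDF: given a start date and a month count, emit the start
// date plus the same day-of-month in each following month.
def monthDates(start: LocalDate, months: Long): Seq[LocalDate] =
  (0L until months).map(i => start.plusMonths(i))

// Approximation of ceil(months_between(end, start)) used in the pipeline:
// the number of distinct calendar months touched by [start, end].
def monthCount(start: LocalDate, end: LocalDate): Long =
  ChronoUnit.MONTHS.between(start.withDayOfMonth(1), end.withDayOfMonth(1)) + 1
```

For pa1 (2-Mar-18 to 8-Aug-18) this yields six dates, 2-Mar-18 through 2-Aug-18, which `explode` then turns into six rows.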

Here is the output.

+----+-----+---------+----------+-------------+-----------+---------------+-----------------+
|p_id|pa_id|p_st_date|p_end_date|pa_start_date|pa_end_date|period_end_date|period_start_date|
+----+-----+---------+----------+-------------+-----------+---------------+-----------------+
|  p1|  pa3| 1-Jan-17|  1-Dec-17|     9-Feb-17|  20-Apr-17|      28-Feb-17|        09-Feb-17|
|  p1|  pa3| 1-Jan-17|  1-Dec-17|     9-Feb-17|  20-Apr-17|      31-Mar-17|        01-Mar-17|
|  p1|  pa3| 1-Jan-17|  1-Dec-17|     9-Feb-17|  20-Apr-17|      30-Apr-17|        01-Apr-17|
|  p1|  pa2| 3-Jan-18|  8-Dec-18|     6-Mar-18|  10-Nov-18|      31-Mar-18|        06-Mar-18|
|  p1|  pa2| 3-Jan-18|  8-Dec-18|     6-Mar-18|  10-Nov-18|      30-Apr-18|        01-Apr-18|
|  p1|  pa2| 3-Jan-18|  8-Dec-18|     6-Mar-18|  10-Nov-18|      31-May-18|        01-May-18|
|  p1|  pa2| 3-Jan-18|  8-Dec-18|     6-Mar-18|  10-Nov-18|      30-Jun-18|        01-Jun-18|
|  p1|  pa2| 3-Jan-18|  8-Dec-18|     6-Mar-18|  10-Nov-18|      31-Jul-18|        01-Jul-18|
|  p1|  pa2| 3-Jan-18|  8-Dec-18|     6-Mar-18|  10-Nov-18|      31-Aug-18|        01-Aug-18|
|  p1|  pa2| 3-Jan-18|  8-Dec-18|     6-Mar-18|  10-Nov-18|      30-Sep-18|        01-Sep-18|
|  p1|  pa2| 3-Jan-18|  8-Dec-18|     6-Mar-18|  10-Nov-18|      31-Oct-18|        01-Oct-18|
|  p1|  pa2| 3-Jan-18|  8-Dec-18|     6-Mar-18|  10-Nov-18|      30-Nov-18|        01-Nov-18|
|  p1|  pa1| 2-Jan-18|  5-Dec-18|     2-Mar-18|   8-Aug-18|      31-Mar-18|        02-Mar-18|
|  p1|  pa1| 2-Jan-18|  5-Dec-18|     2-Mar-18|   8-Aug-18|      30-Apr-18|        01-Apr-18|
|  p1|  pa1| 2-Jan-18|  5-Dec-18|     2-Mar-18|   8-Aug-18|      31-May-18|        01-May-18|
|  p1|  pa1| 2-Jan-18|  5-Dec-18|     2-Mar-18|   8-Aug-18|      30-Jun-18|        01-Jun-18|
|  p1|  pa1| 2-Jan-18|  5-Dec-18|     2-Mar-18|   8-Aug-18|      31-Jul-18|        01-Jul-18|
|  p1|  pa1| 2-Jan-18|  5-Dec-18|     2-Mar-18|   8-Aug-18|      31-Aug-18|        01-Aug-18|
+----+-----+---------+----------+-------------+-----------+---------------+-----------------+
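The period_start_date rule in the pipeline (the first period keeps the actual start date, every later period snaps to the 1st of its month, and each period ends on the last day of its month) can also be expressed directly in plain Scala with java.time. This is an illustrative sketch with a hypothetical helper name, not code from the answer:

```scala
import java.time.LocalDate

// Expand a [start, end] range into (period_start, period_end) month slices:
// first slice starts on `start`, later slices on the 1st of their month,
// and every slice ends on the last day of its month.
def buildPeriods(start: LocalDate, end: LocalDate): Seq[(LocalDate, LocalDate)] =
  Iterator.iterate(start)(d => d.plusMonths(1).withDayOfMonth(1))
    .takeWhile(d => !d.isAfter(end))
    .map(d => (d, d.withDayOfMonth(d.lengthOfMonth)))
    .toSeq
```

For pa3 (9-Feb-17 to 20-Apr-17) this produces the three periods shown above: (9-Feb-17, 28-Feb-17), (1-Mar-17, 31-Mar-17), (1-Apr-17, 30-Apr-17).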

Answer 1: (score: 0)

Here is how I did it using an RDD and a helper function.

Save the data in a file:

/tmp/pdata.csv
p_id,pa_id,p_st_date,p_end_date,pa_start_date,pa_end_date
p1,pa1,2-Jan-18,5-Dec-18,2-Mar-18,8-Aug-18
p1,pa2,3-Jan-18,8-Dec-18,6-Mar-18,10-Nov-18
p1,pa3,1-Jan-17,1-Dec-17,9-Feb-17,20-Apr-17

Spark Scala code:

import scala.collection.mutable.ListBuffer
import java.util.{GregorianCalendar, Date, Calendar}

val ipt = spark.read.format("com.databricks.spark.csv").option("header", "true").option("inferSchema", "true").load("/tmp/pdata.csv")

val format = new java.text.SimpleDateFormat("dd-MMM-yy")
format.format(new java.util.Date()) // sanity-check the date format

// Returns "period_start,period_end" strings, one per month from startdate
// to enddate: the first period starts on startdate itself, later periods
// on the 1st of their month; each period ends on the last day of its month.
def generateDates(startdate: Date, enddate: Date): ListBuffer[String] = {
  val dateList = new ListBuffer[String]()
  val calendar = new GregorianCalendar()
  calendar.setTime(startdate)
  val monthName = Array("Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sept", "Oct", "Nov", "Dec")
  dateList += calendar.get(Calendar.DAY_OF_MONTH) + "-" + monthName(calendar.get(Calendar.MONTH)) + "-" + calendar.get(Calendar.YEAR) + "," +
    calendar.getActualMaximum(Calendar.DAY_OF_MONTH) + "-" + monthName(calendar.get(Calendar.MONTH)) + "-" + calendar.get(Calendar.YEAR)
  calendar.add(Calendar.MONTH, 1)
  while (calendar.getTime().before(enddate)) {
    dateList += "01-" + monthName(calendar.get(Calendar.MONTH)) + "-" + calendar.get(Calendar.YEAR) + "," +
      calendar.getActualMaximum(Calendar.DAY_OF_MONTH) + "-" + monthName(calendar.get(Calendar.MONTH)) + "-" + calendar.get(Calendar.YEAR)
    calendar.add(Calendar.MONTH, 1)
  }
  dateList
}

val oo = ipt.rdd.map(x => (x(0).toString, x(1).toString, x(2).toString, x(3).toString, x(4).toString, x(5).toString))
oo.flatMap(pp => {
  val allDates = new ListBuffer[(String, String, String, String, String, String, String)]()
  for (x <- generateDates(format.parse(pp._5), format.parse(pp._6))) {
    allDates += ((pp._1, pp._2, pp._3, pp._4, pp._5, pp._6, x))
  }
  allDates
}).collect().foreach(println)
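As a side note, the GregorianCalendar bookkeeping in generateDates can be written more compactly with java.time. This is an alternative sketch, not the answer's code; its formatting differs slightly from the output below (zero-padded days and the standard "Sep" abbreviation):

```scala
import java.time.LocalDate
import java.time.format.DateTimeFormatter
import java.util.Locale

// Format matching the answer's "d-MMM-yyyy"-style strings (zero-padded day).
val fmt = DateTimeFormatter.ofPattern("dd-MMM-yyyy", Locale.ENGLISH)

// Alternative to generateDates: one "period_start,period_end" string per
// month; the first period starts on `start`, later ones on the 1st.
def generatePeriods(start: LocalDate, end: LocalDate): List[String] =
  Iterator.iterate(start)(d => d.plusMonths(1).withDayOfMonth(1))
    .takeWhile(d => !d.isAfter(end))
    .map(d => s"${d.format(fmt)},${d.withDayOfMonth(d.lengthOfMonth).format(fmt)}")
    .toList
```

Unlike the Calendar version, there is no manual month-name array, and leap years and month lengths are handled by `lengthOfMonth`.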

I used flatMap; inside it, generateDates builds the concatenated period dates and a ListBuffer collects the input fields with those concatenated values appended. I used the monthName array to get month abbreviations matching your output format. The output is below.

(p1,pa1,2-Jan-18,5-Dec-18,2-Mar-18,8-Aug-18,2-Mar-2018,31-Mar-2018)
(p1,pa1,2-Jan-18,5-Dec-18,2-Mar-18,8-Aug-18,01-Apr-2018,30-Apr-2018)
(p1,pa1,2-Jan-18,5-Dec-18,2-Mar-18,8-Aug-18,01-May-2018,31-May-2018)
(p1,pa1,2-Jan-18,5-Dec-18,2-Mar-18,8-Aug-18,01-Jun-2018,30-Jun-2018)
(p1,pa1,2-Jan-18,5-Dec-18,2-Mar-18,8-Aug-18,01-Jul-2018,31-Jul-2018)
(p1,pa1,2-Jan-18,5-Dec-18,2-Mar-18,8-Aug-18,01-Aug-2018,31-Aug-2018)
(p1,pa2,3-Jan-18,8-Dec-18,6-Mar-18,10-Nov-18,6-Mar-2018,31-Mar-2018)
(p1,pa2,3-Jan-18,8-Dec-18,6-Mar-18,10-Nov-18,01-Apr-2018,30-Apr-2018)
(p1,pa2,3-Jan-18,8-Dec-18,6-Mar-18,10-Nov-18,01-May-2018,31-May-2018)
(p1,pa2,3-Jan-18,8-Dec-18,6-Mar-18,10-Nov-18,01-Jun-2018,30-Jun-2018)
(p1,pa2,3-Jan-18,8-Dec-18,6-Mar-18,10-Nov-18,01-Jul-2018,31-Jul-2018)
(p1,pa2,3-Jan-18,8-Dec-18,6-Mar-18,10-Nov-18,01-Aug-2018,31-Aug-2018)
(p1,pa2,3-Jan-18,8-Dec-18,6-Mar-18,10-Nov-18,01-Sept-2018,30-Sept-2018)
(p1,pa2,3-Jan-18,8-Dec-18,6-Mar-18,10-Nov-18,01-Oct-2018,31-Oct-2018)
(p1,pa2,3-Jan-18,8-Dec-18,6-Mar-18,10-Nov-18,01-Nov-2018,30-Nov-2018)
(p1,pa3,1-Jan-17,1-Dec-17,9-Feb-17,20-Apr-17,9-Feb-2017,28-Feb-2017)
(p1,pa3,1-Jan-17,1-Dec-17,9-Feb-17,20-Apr-17,01-Mar-2017,31-Mar-2017)
(p1,pa3,1-Jan-17,1-Dec-17,9-Feb-17,20-Apr-17,01-Apr-2017,30-Apr-2017)

If anyone has questions I am happy to explain further. Also, I may have read the file in a clumsy way, so that part could be improved as well.
