Find the last occurrence per group based on a time window in Spark / Scala

Date: 2018-04-25 15:26:56

Tags: scala apache-spark apache-spark-sql window-functions

I want to find the last/previous login attempt for each specific (username, device) pair, based on a timestamp window.

For example, my initial dataset looks like this:

+--------+-------+-------------------+-------+
|username| device|         attempt_at|   stat|
+--------+-------+-------------------+-------+
|   user1|     pc|2018-01-02 07:44:27| failed|
|   user1|     pc|2018-01-02 07:44:10|Success|
|   user2| iphone|2017-12-23 16:58:08|Success|
|   user2| iphone|2017-12-23 16:58:30|Success|
|   user2| iphone|2017-12-23 16:58:50| failed|
|   user1|android|2018-01-02 07:44:37| failed|
|   user1|android|2018-01-05 08:33:47| failed|
+--------+-------+-------------------+-------+

// Build the sample DataFrame (assumes a spark-shell / SparkSession with spark.implicits._ in scope)
val df1 = sc.parallelize(Seq(
  ("user1", "pc", "2018-01-02 07:44:27", "failed"),
  ("user1", "pc", "2018-01-02 07:44:10", "Success"),
  ("user2", "iphone", "2017-12-23 16:58:08", "Success"),
  ("user2", "iphone", "2017-12-23 16:58:30", "Success"),
  ("user2", "iphone", "2017-12-23 16:58:50", "failed"),
  ("user1", "android", "2018-01-02 07:44:37", "failed"),
  ("user1", "android", "2018-01-05 08:33:47", "failed")
)).toDF("username", "device", "attempt_at", "stat")

What I want

A 1-hour window and a 7-day window, within which I can find the timestamp of the previous attempt for each specific user and device. Basically, group by username and device.

For example: for 'user1' on device 'pc', in the dataset above the previous attempt for both the 1-hour window and the 7-day window is '2018-01-02 07:44:27'.

But for user1 on device 'android', the previous attempt within the 7-day window would be '2018-01-02 07:44:37', while the 1-hour window has nothing, because there was no attempt from user1 on android within the last hour.
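In seconds, these two lookbacks work out to 3,600 s (1 hour) and 604,800 s (7 days), which is the unit a range-based window ordered on epoch seconds would use. A trivial sketch of that arithmetic (constant names are illustrative, not from the original post):

// Lookback sizes in seconds, as used by a range window ordered on epoch seconds
val oneHourSeconds: Long   = 60L * 60            // 3 600
val sevenDaysSeconds: Long = 7L * 24 * 60 * 60   // 604 800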

Expected output dataset

// 1 hr window for last known attempt
+--------+-------+---------------------+--------------------+
|username| device|           attempt_at| previous_attempt_at|
+--------+-------+---------------------+--------------------+
|   user1|     pc|  2018-01-02 07:44:10| 2018-01-02 07:44:27|
|   user2| iphone|  2017-12-23 16:58:50| 2017-12-23 16:58:30|
+--------+-------+---------------------+--------------------+

// 7 days window for last known attempt
+--------+--------+---------------------+--------------------+
|username| device |           attempt_at| previous_attempt_at|
+--------+--------+---------------------+--------------------+
|   user1|     pc |  2018-01-02 07:44:10| 2018-01-02 07:44:27|
|   user1| android|  2018-01-05 08:33:47| 2018-01-02 07:44:37|
|   user2|  iphone|  2017-12-23 16:58:50| 2017-12-23 16:58:30|
+--------+--------+---------------------+--------------------+

What I have tried:

I tried using 'last' over a 1-hour range window. It returns the timestamp of the current row, not the previous row within the window.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, last}

// 1-hour range window per (username, device), ordered by the attempt time in epoch seconds
val w = Window
  .partitionBy("username", "device")
  .orderBy(col("attempt_at").cast("timestamp").cast("long"))
  .rangeBetween(-3600, 0)

val df2 = df1.withColumn("previous_attempt_at", last("attempt_at").over(w))

1 Answer:

Answer 0 (score: 1)

Replace .rangeBetween(-3600, 0) with .rangeBetween(-3600, -1).

The upper bound 0 is CURRENT ROW, so last always returns the current row itself.
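Putting that fix together for both lookback sizes, a minimal sketch might look like the following (the column names previous_attempt_1h / previous_attempt_7d and the value withPrev are illustrative, not from the original post):

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, last}

// Partition by user and device, order by the attempt time as epoch seconds.
val byUserDevice = Window
  .partitionBy("username", "device")
  .orderBy(col("attempt_at").cast("timestamp").cast("long"))

// Upper bound -1 excludes the current row, so `last` returns the most recent
// earlier attempt inside the lookback range (null if there was none).
val oneHour  = byUserDevice.rangeBetween(-3600, -1)           // last 1 hour
val sevenDay = byUserDevice.rangeBetween(-7 * 24 * 3600, -1)  // last 7 days

val withPrev = df1
  .withColumn("previous_attempt_1h", last("attempt_at").over(oneHour))
  .withColumn("previous_attempt_7d", last("attempt_at").over(sevenDay))

withPrev.show(false)

Rows with no earlier attempt in the range get a null previous_attempt_* value; dropping those rows (or keeping only the latest attempt_at per group) would give tables shaped like the expected output above.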
