Counting the maximum number of observations per group

Asked: 2017-05-11 15:54:11

Tags: scala apache-spark apache-spark-1.6

I am using Spark 1.6.2.

For each group, I need to find the id with the maximum number of observations.

val myData = Seq(("aa1", "GROUP_A", "10"),("aa1","GROUP_A", "12"),("aa2","GROUP_A", "12"),("aa3", "GROUP_B", "14"),("aa3","GROUP_B", "11"),("aa3","GROUP_B","12"),("aa2", "GROUP_B", "12"))

val df = sc.parallelize(myData).toDF("id","type","activity")

Let's first count the number of observations per group:

df.groupBy("type","id").count.show

+-------+---+-----+
|   type| id|count|
+-------+---+-----+
|GROUP_A|aa1|    2|
|GROUP_A|aa2|    1|
|GROUP_B|aa2|    1|
|GROUP_B|aa3|    3|
+-------+---+-----+

This is the expected result:

+-------+---+-----+
|   type| id|count|
+-------+---+-----+
|GROUP_A|aa1|    2|
|GROUP_B|aa3|    3|
+-------+---+-----+

I tried this, but it doesn't work (the filter just compares the count column against the literal string 'max', which matches nothing):

df.groupBy("type","id").count.filter("count = 'max'").show

2 answers:

Answer 0 (score: 2)

To get the row with the maximum value of column X (and not just that maximum value), you can use a small trick: "group" the relevant columns into a struct that has the sorting column as its first field, then compute the max of that struct. Since the ordering of structs is dominated by the ordering of their first field, we get the desired result:

df.groupBy("id","type").count()                // get count per id and type
  .groupBy("type")                             // now group by type only
  .agg(max(struct("count", "id")) as "struct") // get maximum of (count, id) structs - since count is first, and id is unique - count will decide the ordering
  .select($"type", $"struct.id" as "id", $"struct.count" as "count") // "unwrap" structs
  .show()

// +-------+---+-----+
// |   type| id|count|
// +-------+---+-----+
// |GROUP_A|aa1|    2|
// |GROUP_B|aa3|    3|
// +-------+---+-----+
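
To see why the first field dominates the ordering, here is a minimal standalone sketch (the pairs DataFrame and its column names n and s are made up for illustration): structs compare field by field, left to right, so the second field is only consulted on a tie.

import org.apache.spark.sql.functions.{max, struct}

// hypothetical toy data: n decides the ordering, s only breaks ties
val pairs = sc.parallelize(Seq((1, "z"), (2, "a"))).toDF("n", "s")

pairs.agg(max(struct("n", "s"))).show()
// returns [2,a]: 2 > 1 settles the comparison before "a" vs "z" is ever looked at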

Answer 1 (score: 1)

You can use the max function after grouping.

val myData = Seq(("aa1", "GROUP_A", "10"),("aa1","GROUP_A", "12"),("aa2","GROUP_A", "12"),("aa3", "GROUP_B", "14"),("aa3","GROUP_B", "11"),("aa3","GROUP_B","12"),("aa2", "GROUP_B", "12"))

val df = sc.parallelize(myData).toDF("id","type","activity")

import org.apache.spark.sql.functions.{col, count, max}

// count per (type, id) after the groupBy, aliasing the count column as "cnt"
val newDF = df.groupBy("type", "id").agg(count("*").alias("cnt"))

// then find the maximum cnt per type
val df1 = newDF.groupBy("type").max("cnt")
df1.show

Now you can join the two DataFrames to get the output:

// match on type as well, so that a count from one group cannot pair with another group's max
df1.as("maxDF").join(newDF.as("newDF"),
    col("maxDF.type") === col("newDF.type") && col("cnt") === col("max(cnt)"))
  .select($"newDF.*").show
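
As a design note, this approach needs two aggregations plus a join; a window function can do it in a single pass over the grouped counts. The sketch below is a hypothetical alternative, not part of the original answer, and it assumes a HiveContext, which Spark 1.6 requires for window functions.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, count, max}

val counts = df.groupBy("type", "id").agg(count("*").alias("cnt"))

// attach each group's maximum count to every row, then keep the rows that reach it
val w = Window.partitionBy("type")
counts.withColumn("maxCnt", max("cnt").over(w))
  .filter(col("cnt") === col("maxCnt"))
  .select("type", "id", "cnt")
  .show()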