How to improve broadcast join speed with a between condition in Spark

Date: 2017-04-18 23:00:33

Tags: apache-spark apache-spark-sql

I have two DataFrames, A and B. A is large (100 GB) and B is relatively small (100 MB). A has 8 partitions and B has 1.

A.join(broadcast(B), $"cur" >= $"low" &&  $"cur" <= $"high", "left_outer")

It is very slow (> 10 hours).

But if I change the join condition to:

A.join(broadcast(B), $"cur" === $"low" , "left_outer")

it becomes very fast (< 30 minutes). But the condition cannot be changed.

So is there any way to further improve the join speed under the original join condition?

1 Answer:

Answer (score: 11):

The trick is to rewrite the join condition so it contains an = component that can be used to optimize the query and narrow down the possible matches. For numeric values you can bucketize your data and use buckets in the join condition: with a bucket size of 50, for example, cur = 123 falls into bucket 100, so only the B rows whose ranges touch bucket 100 need to be checked.

Assume your data looks like this:

import org.apache.spark.sql.functions.{broadcast, explode, rand, udf}
import spark.implicits._  // for the $"..." column syntax

val a = spark.range(100000)
  .withColumn("cur", (rand(1) * 1000).cast("bigint"))

val b = spark.range(100)
  .withColumn("low", (rand(42) * 1000).cast("bigint"))
  .withColumn("high", ($"low" + rand(-42) * 10).cast("bigint"))

First choose a bucket size that fits your data. In this case we can use 50:

val bucketSize = 50L
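
One possible heuristic for picking it (my assumption, not from the answer): use a bucket size close to the typical width of the ranges in b, which you can inspect directly:

import org.apache.spark.sql.functions.{avg, max}

// Typical and worst-case range widths in b; a bucket size near the typical
// width keeps the number of exploded buckets per range small.
b.select(
  avg($"high" - $"low").as("avg_width"),
  max($"high" - $"low").as("max_width")
).show()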

Assign a bucket to each row in a:

val aBucketed = a.withColumn(
  "bucket", ($"cur" / bucketSize).cast("bigint") * bucketSize  // floor cur to its bucket boundary
)

Create a UDF that will emit the buckets for a range:

def get_buckets(bucketSize: Long) =
  udf((low: Long, high: Long) => {
    // Emit every bucket boundary that the [low, high] range overlaps.
    val min = (low / bucketSize) * bucketSize
    val max = (high / bucketSize) * bucketSize
    (min to max by bucketSize).toSeq
  })
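
As a quick sanity check of the bucketing arithmetic (a standalone example using the bucketSize defined above): a range of [120, 180] should overlap buckets 100 and 150.

// Same flooring arithmetic as the UDF body.
val (lo, hi) = (120L, 180L)
val buckets = ((lo / bucketSize) * bucketSize to (hi / bucketSize) * bucketSize by bucketSize).toSeq
// buckets == Seq(100, 150)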

and bucketize b:

val bBucketed = b.withColumn(
  "bucket", explode(get_buckets(bucketSize)($"low",  $"high"))
)

Use the buckets in the join condition:

aBucketed.join(
  broadcast(bBucketed), 
  aBucketed("bucket") === bBucketed("bucket") && 
    $"cur" >= $"low" &&  
    $"cur" <= $"high",
  "leftouter"
)
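
To confirm which strategy the optimizer picks, you can print the physical plan of the join:

// explain() prints the physical plan; look for BroadcastHashJoin on the bucket keys.
aBucketed.join(
  broadcast(bBucketed),
  aBucketed("bucket") === bBucketed("bucket") &&
    $"cur" >= $"low" &&
    $"cur" <= $"high",
  "leftouter"
).explain()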

This way Spark will use a BroadcastHashJoin:

*BroadcastHashJoin [bucket#184L], [bucket#178L], LeftOuter, BuildRight, ((cur#98L >= low#105L) && (cur#98L <= high#109L))
:- *Project [id#95L, cur#98L, (cast((cast(cur#98L as double) / 50.0) as bigint) * 50) AS bucket#184L]
:  +- *Project [id#95L, cast((rand(1) * 1000.0) as bigint) AS cur#98L]
:     +- *Range (0, 100000, step=1, splits=Some(8))
+- BroadcastExchange HashedRelationBroadcastMode(List(input[3, bigint, false]))
   +- Generate explode(if ((isnull(low#105L) || isnull(high#109L))) null else UDF(low#105L, high#109L)), true, false, [bucket#178L]
      +- *Project [id#102L, low#105L, cast((cast(low#105L as double) + (rand(-42) * 10.0)) as bigint) AS high#109L]
         +- *Project [id#102L, cast((rand(42) * 1000.0) as bigint) AS low#105L]
            +- *Range (0, 100, step=1, splits=Some(8))

instead of a BroadcastNestedLoopJoin:

== Physical Plan ==
BroadcastNestedLoopJoin BuildRight, LeftOuter, ((cur#98L >= low#105L) && (cur#98L <= high#109L))
:- *Project [id#95L, cast((rand(1) * 1000.0) as bigint) AS cur#98L]
:  +- *Range (0, 100000, step=1, splits=Some(8))
+- BroadcastExchange IdentityBroadcastMode
   +- *Project [id#102L, low#105L, cast((cast(low#105L as double) + (rand(-42) * 10.0)) as bigint) AS high#109L]
      +- *Project [id#102L, cast((rand(42) * 1000.0) as bigint) AS low#105L]
         +- *Range (0, 100, step=1, splits=Some(8))

You can tune the bucket size to balance between precision and data size.
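
If you want to compare candidate sizes empirically (a rough sketch of my own, not from the answer), count how many rows b explodes into, since each extra bucket per range is an extra broadcast row:

// Smaller buckets make the equality match more selective but inflate the
// broadcast side; larger buckets do the opposite.
Seq(10L, 50L, 200L).foreach { size =>
  val rows = b.withColumn("bucket", explode(get_buckets(size)($"low", $"high"))).count()
  println(s"bucketSize=$size -> $rows exploded rows")
}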

If you don't mind a lower-level solution, you can broadcast a sorted sequence with constant-time item access (like an Array or a Vector) and use a udf that performs binary search for joining.
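
A minimal sketch of that idea, assuming the ranges in b do not overlap (all names below are my own, not from the answer):

// Collect b's ranges to the driver, sorted by lower bound, and broadcast them.
val ranges: Array[(Long, Long)] = b
  .select($"low", $"high")
  .as[(Long, Long)]
  .collect()
  .sortBy(_._1)
val bcRanges = spark.sparkContext.broadcast(ranges)

// Binary-search for the last range whose low <= cur; keep it only if cur <= high.
val matchLow = udf { (cur: Long) =>
  val rs = bcRanges.value
  var lo = 0; var hi = rs.length - 1; var found = -1
  while (lo <= hi) {
    val mid = (lo + hi) / 2
    if (rs(mid)._1 <= cur) { found = mid; lo = mid + 1 } else hi = mid - 1
  }
  if (found >= 0 && cur <= rs(found)._2) Some(rs(found)._1) else None
}

// Recover the rest of B's columns with a plain equi-join on the matched low.
val result = a
  .withColumn("low", matchLow($"cur"))
  .join(broadcast(b), Seq("low"), "left_outer")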

You should also take a look at the number of partitions. 8 partitions for 100 GB seems pretty low.
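
If that matches your setup, repartitioning A before the join may help; a common rule of thumb is a few hundred MB per partition (the count below is illustrative only):

// ~400 partitions for 100 GB gives roughly 250 MB per partition.
val aRepartitioned = A.repartition(400)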
