Difference between union and an or-clause in Spark

Time: 2019-01-02 15:27:41

Tags: python apache-spark pyspark

In Spark, what is the difference between filtering with union and filtering with an or-clause?

Let's take an example.

Here is my dataframe:

df = spark.createDataFrame(
  [
    ('96','2e63e9f4-27ba-4f50-bc65-a97032a22096' ),
    ('55','4bced1f9-63ad-4ebb-bf34-5fd7ff52d8e2' ),
    ('47','6c5c8151-7891-4567-9d6a-8dace74904bd' ),
    ('90','781eb57d-0774-46c0-9366-13cbab6322c6' ),
    ('27','7eb27670-1e4d-422f-b4f6-f65461bbeda5' ),
    ('259','91646385-3446-42af-a823-33112645024b'),
    ('33','92c77bd9-373d-4d32-9f36-5fa3fc093cd6' ),
    ('96','c6bcc234-7cd7-4134-8f89-b8bb50ae5e0f' ),
    ('55','4ade739d-5115-439c-900e-09fc4cb25293' ),
    ('47','73a2e429-cadc-4afa-ade2-4251e3745a0c' ),
    ('90','c0246074-a899-4437-a461-26c9445822ef' ),
    ('27','a7f6bbfb-fc03-4d04-ab4a-8f58eaf55dd0' ),
    ('259','13bc9ef0-35a0-4f85-8017-55bb8dae6628'),
    ('33','c77c5580-494f-45bf-bb04-6683a9dcc425' ),
  ],
  ["ClientId", "PublicId"]
)

And here is my filter data:

my_filter = [
  ('33','92c77bd9-373d-4d32-9f36-5fa3fc093cd6' ),
  ('96','c6bcc234-7cd7-4134-8f89-b8bb50ae5e0f' ),
  ('55','4ade739d-5115-439c-900e-09fc4cb25293' ),
]

If I filter using union, I do this:

from functools import reduce

out_dataframe_1 = reduce(
    lambda a, b: a.union(b),
    (
        df.where(
            "ClientId = '{ClientId}' and "
            "PublicId = '{PublicId}'".format(
                ClientId=ClientId,
                PublicId=PublicId,
            )
        )
        for ClientId, PublicId in my_filter
    )
)

out_dataframe_1.collect()

If I do it with an or-clause, I do this:

where_clause = ' or '.join(
  "(ClientId = '{ClientId}' and "
  "PublicId = '{PublicId}')".format(
    ClientId=ClientId,
    PublicId=PublicId,
  )
  for ClientId, PublicId
  in my_filter
)

out_dataframe_2 = df.where(where_clause)

out_dataframe_2.collect()
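To see exactly what the or-clause approach hands to Spark SQL, the generated `where_clause` can be inspected in plain Python (no Spark session needed); this just re-runs the string-building step from above on the same `my_filter` list:

```python
# The filter pairs from the question.
my_filter = [
    ('33', '92c77bd9-373d-4d32-9f36-5fa3fc093cd6'),
    ('96', 'c6bcc234-7cd7-4134-8f89-b8bb50ae5e0f'),
    ('55', '4ade739d-5115-439c-900e-09fc4cb25293'),
]

# One parenthesized predicate per (ClientId, PublicId) pair, joined with "or".
where_clause = ' or '.join(
    "(ClientId = '{ClientId}' and "
    "PublicId = '{PublicId}')".format(
        ClientId=ClientId,
        PublicId=PublicId,
    )
    for ClientId, PublicId in my_filter
)

print(where_clause)
# → (ClientId = '33' and PublicId = '92c77bd9-373d-4d32-9f36-5fa3fc093cd6') or (ClientId = '96' and PublicId = 'c6bcc234-7cd7-4134-8f89-b8bb50ae5e0f') or (ClientId = '55' and PublicId = '4ade739d-5115-439c-900e-09fc4cb25293')
```

Note that this produces a single SQL predicate, so Spark scans the data once, whereas the union version builds one filtered dataframe per pair and concatenates them.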

Which one is best to use? Is there another way to apply a series of filters? Would a join perhaps be the best option?

1 Answer:

Answer 0 (score: 0)

Using a single filter statement instead of applying three filters and unioning the results should be faster and easier to read. You can also combine the filter conditions with `in`:

where_clause = "(ClientId, PublicId) in ({})".format(', '.join(str(r) for r in my_filter))
df.where(where_clause).collect()
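As a pure-Python sanity check (no Spark needed), the formatted `in` clause expands to a tuple-list literal that Spark SQL accepts. It works because `str()` on a Python tuple of strings happens to match SQL tuple syntax; be aware this would break if any value contained a single quote:

```python
my_filter = [
    ('33', '92c77bd9-373d-4d32-9f36-5fa3fc093cd6'),
    ('96', 'c6bcc234-7cd7-4134-8f89-b8bb50ae5e0f'),
    ('55', '4ade739d-5115-439c-900e-09fc4cb25293'),
]

# str(('33', '...')) renders as "('33', '...')", which is valid SQL here.
where_clause = "(ClientId, PublicId) in ({})".format(
    ', '.join(str(r) for r in my_filter)
)

print(where_clause)
# → (ClientId, PublicId) in (('33', '92c77bd9-373d-4d32-9f36-5fa3fc093cd6'), ('96', 'c6bcc234-7cd7-4134-8f89-b8bb50ae5e0f'), ('55', '4ade739d-5115-439c-900e-09fc4cb25293'))
```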

If your filter statement gets too large, you may want to turn `my_filter` into a dataframe and use it in a left_semi join.
