Select the column name per row with the maximum value in PySpark

Date: 2019-05-31 06:22:16

Tags: apache-spark pyspark apache-spark-sql

I have a DataFrame like the one below. Only two columns are shown here, but the original DataFrame has many more:

data = [(("ID1", 3, 5)), (("ID2", 4, 12)), (("ID3", 8, 3))]
df = spark.createDataFrame(data, ["ID", "colA", "colB"])
df.show()

+---+----+----+
| ID|colA|colB|
+---+----+----+
|ID1|   3|   5|
|ID2|   4|  12|
|ID3|   8|   3|
+---+----+----+

I want to extract, for each row, the name of the column holding the maximum value, so the expected output looks like this:

+---+----+----+-------+
| ID|colA|colB|Max_col|
+---+----+----+-------+
|ID1|   3|   5|   colB|
|ID2|   4|  12|   colB|
|ID3|   8|   3|   colA|
+---+----+----+-------+

For ties, i.e. when colA and colB hold the same value, pick the first column.

How can I achieve this in PySpark?

5 Answers:

Answer 0 (score: 1)

Try the following:

from pyspark.sql import functions as F

data = [("ID1", 3, 5), ("ID2", 4, 12), ("ID3", 8, 3)]
df = spark.createDataFrame(data, ["ID", "colA", "colB"])
# >= rather than >, so that a tie goes to the first column as required
df.withColumn('max_col',
              F.when(F.col('colA') >= F.col('colB'), 'colA')
               .otherwise('colB')).show()

This yields:

+---+----+----+-------+
| ID|colA|colB|max_col|
+---+----+----+-------+
|ID1|   3|   5|   colB|
|ID2|   4|  12|   colB|
|ID3|   8|   3|   colA|
+---+----+----+-------+
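
The when/otherwise above only compares two columns, while the question mentions many. A minimal sketch (not part of the original answer) that chains when() over every value column, assuming the first column is the ID and the rest are numeric:

from functools import reduce
from pyspark.sql import functions as F

value_cols = df.columns[1:]  # every column except "ID"
greatest = F.greatest(*[F.col(c) for c in value_cols])

# One when() clause per column; clauses are evaluated in order, so a tie
# resolves to the first (leftmost) matching column, as the question asks
max_col = reduce(
    lambda acc, c: acc.when(F.col(c) == greatest, c),
    value_cols[1:],
    F.when(F.col(value_cols[0]) == greatest, value_cols[0]),
)

df.withColumn('max_col', max_col).show()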

Answer 1 (score: 1)

You can do the row-wise computation with a UDF, using struct to pass multiple columns in. Hope this helps.

from pyspark.sql import functions as F
from pyspark.sql.types import IntegerType
data = [(("ID1", 3, 5,78)), (("ID2", 4, 12,45)), (("ID3", 8, 3,67))]
df = spark.createDataFrame(data, ["ID", "colA", "colB","colC"])
df.show()

+---+----+----+----+
| ID|colA|colB|colC|
+---+----+----+----+
|ID1|   3|   5|  78|
|ID2|   4|  12|  45|
|ID3|   8|   3|  67|
+---+----+----+----+
maxcol = F.udf(lambda row: max(row), IntegerType())
maxDF = df.withColumn("maxval", maxcol(F.struct([df[x] for x in df.columns[1:]])))
maxDF.show()

+---+----+----+----+------+
|ID |colA|colB|colC|maxval|
+---+----+----+----+------+
|ID1|3   |5   |78  |78    |
|ID2|4   |12  |45  |45    |
|ID3|8   |3   |67  |67    |
+---+----+----+----+------+
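
Note that this returns the maximum value itself rather than the column name (Answer 4 below extends it to the name). As an aside, the same row-wise maximum can be computed without a UDF using the built-in F.greatest; a minimal sketch:

from pyspark.sql import functions as F

# F.greatest takes two or more columns and returns the row-wise maximum,
# avoiding the serialization overhead of a Python UDF
df.withColumn("maxval", F.greatest(*[df[c] for c in df.columns[1:]])).show()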

Answer 2 (score: 0)

There are several ways to achieve this. Below is one worked example; it should give you enough hints for the rest:

from pyspark.sql import functions as F
from pyspark.sql.window import Window as W
from pyspark.sql import types as T

data = [(("ID1", 3, 5)), (("ID2", 4, 12)), (("ID3", 8, 3))]
df = spark.createDataFrame(data, ["ID", "colA", "colB"])
df.show()

+---+----+----+
| ID|colA|colB|
+---+----+----+
|ID1|   3|   5|
|ID2|   4|  12|
|ID3|   8|   3|
+---+----+----+

# Below, F.array builds an array of [column name, value] pairs, e.g.
# [['colA', 3], ['colB', 5]]; F.explode then turns that array into rows,
# one row per column-name/value pair

df = df.withColumn(
    "max_val",
    F.explode(
        F.array([
            F.array([F.lit(cl), F.col(cl)]) for cl in df.columns[1:]
        ])
    )
)
df.show()
+---+----+----+----------+
| ID|colA|colB|   max_val|
+---+----+----+----------+
|ID1|   3|   5| [colA, 3]|
|ID1|   3|   5| [colB, 5]|
|ID2|   4|  12| [colA, 4]|
|ID2|   4|  12|[colB, 12]|
|ID3|   8|   3| [colA, 8]|
|ID3|   8|   3| [colB, 3]|
+---+----+----+----------+

# Then split each pair so the column name and the value land in separate
# columns; the value comes back as a string (array elements share one type),
# hence the cast to IntegerType
df = df.select(
    "ID", 
    "colA", 
    "colB", 
    F.col("max_val").getItem(0).alias("col_name"),
    F.col("max_val").getItem(1).cast(T.IntegerType()).alias("col_value"),
)
df.show()
+---+----+----+--------+---------+
| ID|colA|colB|col_name|col_value|
+---+----+----+--------+---------+
|ID1|   3|   5|    colA|        3|
|ID1|   3|   5|    colB|        5|
|ID2|   4|  12|    colA|        4|
|ID2|   4|  12|    colB|       12|
|ID3|   8|   3|    colA|        8|
|ID3|   8|   3|    colB|        3|
+---+----+----+--------+---------+

# Rank the values within each ID, in descending order
df = df.withColumn(
    "rank",
    F.rank().over(W.partitionBy("ID").orderBy(F.col("col_value").desc()))
)
df.show()
+---+----+----+--------+---------+----+
| ID|colA|colB|col_name|col_value|rank|
+---+----+----+--------+---------+----+
|ID2|   4|  12|    colB|       12|   1|
|ID2|   4|  12|    colA|        4|   2|
|ID3|   8|   3|    colA|        8|   1|
|ID3|   8|   3|    colB|        3|   2|
|ID1|   3|   5|    colB|        5|   1|
|ID1|   3|   5|    colA|        3|   2|
+---+----+----+--------+---------+----+

# Finally, keep rank = 1; the max value gets rank 1 because we ordered descending
df.where("rank=1").show()
+---+----+----+--------+---------+----+
| ID|colA|colB|col_name|col_value|rank|
+---+----+----+--------+---------+----+
|ID2|   4|  12|    colB|       12|   1|
|ID3|   8|   3|    colA|        8|   1|
|ID1|   3|   5|    colB|        5|   1|
+---+----+----+--------+---------+----+

Other options:

  • Use a UDF on the base df and return the name of the column holding the max value
  • In the same example, after building the column-name and value columns, skip the ranking: group by ID, take the max of col_value, then join back to the previous df (a rough sketch follows below)
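
A rough sketch of that second option, where exploded_df is a hypothetical name for the intermediate frame built above (ID, colA, colB, col_name, col_value, before the rank step):

from pyspark.sql import functions as F

# Per-ID maximum of col_value; the inner join then keeps only the rows
# whose value equals that maximum (ties keep both rows, just like rank)
agg = exploded_df.groupBy("ID").agg(F.max("col_value").alias("col_value"))
exploded_df.join(agg, on=["ID", "col_value"], how="inner").show()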

Answer 3 (score: 0)

You can add the new column using the RDD API:

from pyspark.sql import Row

df.rdd.map(lambda r: r.asDict())\
      .map(lambda r: Row(Max_col=max([i for i in r.items() if i[0] != 'ID'],
                                     key=lambda kv: kv[1])[0], **r))\
      .toDF()

Result (Max_col lands between ID and colA because Row sorts keyword arguments alphabetically):

+---+-------+----+----+
| ID|Max_col|colA|colB|
+---+-------+----+----+
|ID1|   colB|   3|   5|
|ID2|   colB|   4|  12|
|ID3|   colA|   8|   3|
+---+-------+----+----+

Answer 4 (score: 0)

Extending Suresh's work ... returning the appropriate column name:

from pyspark.sql import functions as f
from pyspark.sql.types import StringType

data = [(("ID1", 3, 5,78)), (("ID2", 4, 12,45)), (("ID3", 68, 3,67))]
df = spark.createDataFrame(data, ["ID", "colA", "colB","colC"])
df.show()

cols = df.columns
# row.index(max(row)) finds the position of the (first) maximum value;
# + 1 shifts past the leading "ID" column in cols
maxcol = f.udf(lambda row: cols[row.index(max(row)) + 1], StringType())

maxDF = df.withColumn("Max_col", maxcol(f.struct([df[x] for x in df.columns[1:]])))
maxDF.show(truncate=False)

+---+----+----+----+-------+
|ID |colA|colB|colC|Max_col|
+---+----+----+----+-------+
|ID1|3   |5   |78  |colC   |
|ID2|4   |12  |45  |colC   |
|ID3|68  |3   |67  |colA   |
+---+----+----+----+-------+
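
A quick sanity check of the tie rule from the question (the ID4 row is hypothetical, not part of the original answer): Python's max() returns the first maximal element and tuple.index() finds its first position, so equal values resolve to the earlier column.

# A row with a tie between colA and colB; Max_col should come out as colA
tie_df = spark.createDataFrame([("ID4", 9, 9, 1)], ["ID", "colA", "colB", "colC"])
tie_df.withColumn("Max_col",
                  maxcol(f.struct([tie_df[x] for x in tie_df.columns[1:]]))).show()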