How to calculate percentages in a DataFrame

Date: 2017-04-18 13:16:11

Tags: python apache-spark pyspark

I have a scenario with a mock DataFrame, shown below.

Area   Type    NrPeople     
1      House    200
1      Flat     100
2      House    300
2      Flat     400
3      House   1000
4      Flat     250

How can I calculate the number of people per Area and return it in descending order? Most importantly, I'm struggling to calculate each area's percentage of the overall total.

The result should look like this:

Area   SumPeople      %     
3       1000        44%
2        700        31%
1        300        13%
4        250        11%

See the code sample below:

HouseDf = spark.createDataFrame([("1", "House", "200"),
                                 ("1", "Flat", "100"),
                                 ("2", "House", "300"),
                                 ("2", "Flat", "400"),
                                 ("3", "House", "1000"),
                                 ("4", "Flat", "250")],
                                ["Area", "Type", "NrPeople"])

import pyspark.sql.functions as fn 
Total = HouseDf.agg(fn.sum('NrPeople').alias('Total')) 

Top = HouseDf\
    .groupBy('Area')\
    .agg(fn.sum('NrPeople').alias('SumPeople'))\
    .orderBy('SumPeople', ascending=False)\
    .withColumn('%', fn.lit(HouseDf.agg(fn.sum('NrPeople'))/Total.Total))
Top.show()

This fails with: unsupported operand type(s) for /: 'int' and 'DataFrame'

Any ideas on how to do this are welcome!

3 answers:

Answer 0 (score: 4):

You need a window function -

import pyspark.sql.functions as fn
from pyspark.sql.functions import sum, col
from pyspark.sql import Window

# a window spanning the entire result set (no partitioning, unbounded frame)
window = Window.rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)

(HouseDf
    .groupBy('Area')
    .agg(fn.sum('NrPeople').alias('SumPeople'))
    .orderBy('SumPeople', ascending=False)
    # grand total computed once over the unbounded window
    .withColumn('total', sum(col('SumPeople')).over(window))
    .withColumn('Percent', col('SumPeople') * 100 / col('total'))
    .drop(col('total'))
    .show())

Output:

+----+---------+------------------+
|Area|SumPeople|           Percent|
+----+---------+------------------+
|   3|   1000.0| 44.44444444444444|
|   2|    700.0| 31.11111111111111|
|   1|    300.0|13.333333333333334|
|   4|    250.0| 11.11111111111111|
+----+---------+------------------+
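
If you want whole-number percentages like the expected result in the question, a minimal follow-up sketch is to format the column afterwards (this assumes the pipeline above is assigned to a variable; the name top below is hypothetical):

top = top.withColumn('Percent', fn.format_string('%2.0f%%', col('Percent')))  # 44.44... -> '44%'
top.show()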

Answer 1 (score: 3):

Well, the error seems straightforward: Total is a DataFrame, and you can't divide an integer by a DataFrame. First, convert it to an integer using collect:

Total = HouseDf.agg(fn.sum('NrPeople').alias('Total')).collect()[0][0] 
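
Equivalently (a small variant, not in the original answer), first() returns the leading Row directly:

Total = HouseDf.agg(fn.sum('NrPeople').alias('Total')).first()[0]  # first Row, first column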

Then, with some additional formatting, the following should work fine:

from pyspark.sql.functions import col

HouseDf\
    .groupBy('Area')\
    .agg(fn.sum('NrPeople').alias('SumPeople'))\
    .orderBy('SumPeople', ascending = False)\
    .withColumn('%', fn.format_string("%2.0f%%", col('SumPeople')/Total * 100))\
    .show()

+----+---------+---+
|Area|SumPeople|  %|
+----+---------+---+
|   3|   1000.0|44%|
|   2|    700.0|31%|
|   1|    300.0|13%|
|   4|    250.0|11%|
+----+---------+---+

Although I'm not sure % is a very good column name, since it makes the column harder to reuse; consider naming it Percent or similar instead.
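
For instance, a rename afterwards is a one-liner (result is a hypothetical variable holding the DataFrame built above):

result = result.withColumnRenamed('%', 'Percent')  # '%' is awkward to reference in expr()/SQL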

Answer 2 (score: 0):

You can use this approach to avoid the collect step:

HouseDf.registerTempTable("HouseDf")

# the scalar subquery computes the grand total without collecting to the driver
df2 = HouseDf.groupby('Area')\
    .agg(fn.sum(HouseDf.NrPeople).alias("SumPeople"))\
    .withColumn("%", fn.expr('SumPeople/(select sum(NrPeople) from HouseDf)'))
df2.show()

I haven't tested it, but I suspect it is faster than the other answers in this post.

This is equivalent to the following (the physical plans are very similar):

HouseDf.registerTempTable("HouseDf")
sql = """
select Area, sum(NrPeople) as sum, sum(NrPeople)/(select sum(NrPeople) from HouseDf) as new
from HouseDf
group by Area
"""

spark.sql(sql).explain(True)
spark.sql(sql).show()

You almost certainly don't want to use the window option over the whole dataset (e.g. w = Window.partitionBy()). In fact, Spark will warn you about this:

WARN WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
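
For reference, a minimal sketch (not from the original answers) of the pattern that triggers this warning: an empty partitionBy() puts every row into a single partition so the window can span the whole dataset:

from pyspark.sql import Window
import pyspark.sql.functions as fn

w = Window.partitionBy()  # no partition key: all rows move to one partition
HouseDf.withColumn('total', fn.sum('NrPeople').over(w)).show()  # emits the WARN above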