Generate a matrix of column totals, plus row totals in a new column, in a PySpark DataFrame
colors = spark.createDataFrame([("Red","Re",20),("Blue","Bl",30),("Green","Gr",50)]).toDF("Colors","Prefix","Value")
+------+------+-----+
|Colors|Prefix|Value|
+------+------+-----+
| Red| Re| 20|
| Blue| Bl| 30|
| Green| Gr| 50|
+------+------+-----+
piv = colors.groupby("Colors").pivot("Prefix").sum("Value").fillna(0)
piv.withColumn("total",sum(piv[col] for col in piv.columns[1:])).show()
+------+---+---+---+-----+
|Colors| Bl| Gr| Re|total|
+------+---+---+---+-----+
| Green| 0| 50| 0| 50|
| Blue| 30| 0| 0| 30|
| Red| 0| 0| 20| 20|
+------+---+---+---+-----+
I expect column totals as shown below. The code should be dynamic, i.e. keep working when the frame has more columns and rows:
Re Bl Gr TOTAL
Red 20 0 0 20
Blue 0 30 0 30
Green 0 0 50 50
TOTAL 20 30 50 100
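For reference, the row/column-total logic being asked for can be sketched in plain Python, independent of Spark. This is only an illustration of what the dynamic code should compute (the dict layout and variable names here are made up for the example, not part of the Spark solution):

```python
# Plain-Python sketch of the desired margins; the real data
# lives in a pivoted Spark DataFrame, this just shows the math.
rows = {
    "Red":   {"Re": 20, "Bl": 0,  "Gr": 0},
    "Blue":  {"Re": 0,  "Bl": 30, "Gr": 0},
    "Green": {"Re": 0,  "Bl": 0,  "Gr": 50},
}
cols = ["Re", "Bl", "Gr"]

# TOTAL column: sum each row across all pivoted columns.
row_totals = {name: sum(vals[c] for c in cols) for name, vals in rows.items()}

# TOTAL row: sum each column down all rows, plus the grand total.
total_row = {c: sum(vals[c] for vals in rows.values()) for c in cols}
grand_total = sum(total_row.values())
```

Both the column list and the row set are discovered from the data, which is the "dynamic" behavior the question asks for.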
Answer (score: 1)
Here is one approach: build the Total row with agg and union it onto the pivoted frame, then add the Total column by summing over all pivoted columns.
import pyspark.sql.functions as f

df = colors.groupby("Colors").pivot("Prefix").sum("Value").fillna(0)
cols = df.columns[1:]  # all pivoted value columns

# Append a Total row (union matches columns by position, so lit('Total')
# lands in Colors), then add a Total column. Note that the second sum is
# Python's built-in sum folding Column objects, not f.sum.
df.union(df.agg(f.lit('Total').alias('Colors'), *[f.sum(f.col(c)).alias(c) for c in cols])) \
    .withColumn("Total", sum(f.col(c) for c in cols)) \
    .show()
+------+---+---+---+-----+
|Colors| Bl| Gr| Re|Total|
+------+---+---+---+-----+
| Green| 0| 50| 0| 50|
| Blue| 30| 0| 0| 30|
| Red| 0| 0| 20| 20|
| Total| 30| 50| 20| 100|
+------+---+---+---+-----+
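A side note on why `sum(f.col(c) for c in cols)` works: this is Python's built-in `sum`, which starts from the integer 0 and repeatedly applies `+`. Spark's `Column` overloads `__add__`/`__radd__` to build an expression tree instead of computing a number. The toy class below (the `Expr` name is invented for illustration, it is not a Spark API) demonstrates the mechanism:

```python
# Toy stand-in for pyspark.sql.Column: overloading + lets the
# built-in sum() fold many objects into one combined expression.
class Expr:
    def __init__(self, text):
        self.text = text

    def __add__(self, other):
        other_text = other.text if isinstance(other, Expr) else str(other)
        return Expr(f"({self.text} + {other_text})")

    def __radd__(self, other):
        # Handles 0 + Expr, the first step taken by the built-in sum().
        return Expr(f"({other} + {self.text})")

combined = sum(Expr(c) for c in ["Bl", "Gr", "Re"])
print(combined.text)  # prints "(((0 + Bl) + Gr) + Re)"
```

The same folding happens with real Columns, producing a single `Bl + Gr + Re` expression that Spark evaluates per row.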