Add a new column in a PySpark dataframe based on a where condition on other columns

Asked: 2019-01-30 17:55:09

Tags: python apache-spark pyspark apache-spark-sql pyspark-sql

I have a PySpark dataframe like this:

+------------+-------------+--------------------+
|package_id  | location    | package_scan_code  |
+------------+-------------+--------------------+
|123         | Denver      |05                  |
|123         | LosAngeles  |03                  |
|123         | Dallas      |09                  |
|123         | Vail        |02                  |
|456         | Jacksonville|05                  |
|456         | Nashville   |09                  |
|456         | Memphis     |03                  |
+------------+-------------+--------------------+

A "package_scan_code" of 03 indicates the origin of the package.

I want to add a column "origin" to this dataframe so that, for each package (identified by "package_id"), the value of the new "origin" column is the location whose "package_scan_code" is 03.

In the case above there are two unique packages, 123 and 456, whose origins are LosAngeles and Memphis respectively (the locations with package_scan_code 03).

So I want the output to look like this:

+------------+-------------+--------------------+------------+
| package_id |location     | package_scan_code  |origin      |
+------------+-------------+--------------------+------------+
|123         | Denver      |05                  | LosAngeles |
|123         | LosAngeles  |03                  | LosAngeles |
|123         | Dallas      |09                  | LosAngeles |
|123         | Vail        |02                  | LosAngeles |
|456         | Jacksonville|05                  | Memphis    |
|456         | Nashville   |09                  | Memphis    |
|456         | Memphis     |03                  | Memphis    |
+------------+-------------+--------------------+------------+

How can I achieve this in PySpark? I tried the .withColumn method, but I couldn't get the condition right.

2 answers:

Answer 0 (score: 3)

Filter the dataframe on package_scan_code == '03', then join the result back to the original dataframe:

(df.filter(df.package_scan_code == '03')           # keep only the origin rows
   .selectExpr('package_id', 'location as origin')
   .join(df, ['package_id'], how='right')          # attach origin to every original row
   .show())
+----------+----------+------------+-----------------+
|package_id|    origin|    location|package_scan_code|
+----------+----------+------------+-----------------+
|       123|LosAngeles|      Denver|               05|
|       123|LosAngeles|  LosAngeles|               03|
|       123|LosAngeles|      Dallas|               09|
|       123|LosAngeles|        Vail|               02|
|       456|   Memphis|Jacksonville|               05|
|       456|   Memphis|   Nashville|               09|
|       456|   Memphis|     Memphis|               03|
+----------+----------+------------+-----------------+

Note: this assumes you have at most one row with package_scan_code 03 per package_id; otherwise the logic will be incorrect and you will need to rethink how origin should be defined.
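If that assumption may not hold, one workaround is to deduplicate the filtered rows before joining, so the join stays one-to-one. This is a minimal sketch, not part of the original answer; dropDuplicates keeps an arbitrary 03 row per package:

# Sketch: collapse multiple 03 rows per package before the join, so each
# package_id gains exactly one origin (which 03 row survives is arbitrary).
origins = (df.filter(df.package_scan_code == '03')
             .selectExpr('package_id', 'location as origin')
             .dropDuplicates(['package_id']))
origins.join(df, ['package_id'], how='right').show()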

Answer 1 (score: 0)

This code should work no matter how many times package_scan_code=03 occurs per package_id in your dataframe. I added an extra row (123, 'LosAngeles', '03') to demonstrate -

Step 1: Create the dataframe

values = [(123,'Denver','05'),(123,'LosAngeles','03'),(123,'Dallas','09'),(123,'Vail','02'),(123,'LosAngeles','03'),
          (456,'Jacksonville','05'),(456,'Nashville','09'),(456,'Memphis','03')]
df = sqlContext.createDataFrame(values,['package_id','location','package_scan_code'])

Step 2: Create a dictionary mapping package_id to location.

from pyspark.sql.functions import col

df_count = df.where(col('package_scan_code') == '03').groupby('package_id', 'location').count()
dict_location_scan_code = dict(df_count.rdd.map(lambda x: (x['package_id'], x['location'])).collect())
print(dict_location_scan_code)
    {456: 'Memphis', 123: 'LosAngeles'}
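One caveat (my addition, not in the original answer): if a package ever has 03 scans at two different locations, dict() silently keeps whichever pair is collected last. A quick sanity check before building the dict, as a sketch:

# Sketch: fail fast if any package_id has more than one distinct 03 location.
from pyspark.sql.functions import col, countDistinct

ambiguous = (df.where(col('package_scan_code') == '03')
               .groupby('package_id')
               .agg(countDistinct('location').alias('n_locations'))
               .where(col('n_locations') > 1))
assert ambiguous.count() == 0, "some packages have multiple 03 locations"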

Step 3: Create the column by mapping the dictionary.

from pyspark.sql.functions import col, create_map, lit
from itertools import chain

# Build a literal map expression from the dict and look up each package_id in it.
mapping_expr = create_map([lit(x) for x in chain(*dict_location_scan_code.items())])
df = df.withColumn('origin', mapping_expr.getItem(col('package_id')))
df.show()
+----------+------------+-----------------+----------+
|package_id|    location|package_scan_code|    origin|
+----------+------------+-----------------+----------+
|       123|      Denver|               05|LosAngeles|
|       123|  LosAngeles|               03|LosAngeles|
|       123|      Dallas|               09|LosAngeles|
|       123|        Vail|               02|LosAngeles|
|       123|  LosAngeles|               03|LosAngeles|
|       456|Jacksonville|               05|   Memphis|
|       456|   Nashville|               09|   Memphis|
|       456|     Memphis|               03|   Memphis|
+----------+------------+-----------------+----------+
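For completeness, a window-function variant can avoid collecting the mapping to the driver entirely, which matters if the number of distinct package_ids is large. This is a minimal sketch, not part of either answer: it takes the first non-null location where package_scan_code is '03' within each package_id partition, using the same df as above.

# Sketch: compute origin per package_id with a window aggregate, keeping the
# lookup distributed instead of materializing a dict on the driver.
from pyspark.sql import Window
from pyspark.sql.functions import col, first, when

w = Window.partitionBy('package_id')
df_origin = df.withColumn(
    'origin',
    first(when(col('package_scan_code') == '03', col('location')),
          ignorenulls=True).over(w))
df_origin.show()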