How do I split a column with comma-separated values in a PySpark DataFrame?

Asked: 2018-08-03 11:37:57

Tags: dataframe pyspark

I have a PySpark DataFrame in which a column contains comma-separated values. The number of values in the column is fixed (say, 4). Example:

+----+----------------------+
|col1|                  col2|
+----+----------------------+
|   1|val1, val2, val3, val4|
|   2|val1, val2, val3, val4|
|   3|val1, val2, val3, val4|
|   4|val1, val2, val3, val4|
+----+----------------------+

Here, I want to split col2 into 4 separate columns, like this:

+----+-------+-------+-------+-------+
|col1|  col21|  col22|  col23|  col24|
+----+-------+-------+-------+-------+
|   1|   val1|   val2|   val3|   val4|
|   2|   val1|   val2|   val3|   val4|
|   3|   val1|   val2|   val3|   val4|
|   4|   val1|   val2|   val3|   val4|
+----+-------+-------+-------+-------+

How can this be done?

1 answer:

Answer 0 (score: 4)

I would split the column and turn each element of the resulting array into a new column.

from pyspark.sql import functions as F

df = spark.createDataFrame(
    [['1', 'val1, val2, val3, val4'],
     ['2', 'val1, val2, val3, val4'],
     ['3', 'val1, val2, val3, val4'],
     ['4', 'val1, val2, val3, val4']],
    ["col1", "col2"])

# Split the comma-separated string into an array column
df2 = df.select('col1', F.split('col2', ', ').alias('col2'))

# If you don't know the number of columns, use the largest array size:
df_sizes = df2.select(F.size('col2').alias('col2'))
df_max = df_sizes.agg(F.max('col2'))
nb_columns = df_max.collect()[0][0]

# Turn each array element into its own column
df_result = df2.select('col1', *[df2['col2'][i] for i in range(nb_columns)])
df_result.show()
>>>
+----+-------+-------+-------+-------+
|col1|col2[0]|col2[1]|col2[2]|col2[3]|
+----+-------+-------+-------+-------+
|   1|   val1|   val2|   val3|   val4|
|   2|   val1|   val2|   val3|   val4|
|   3|   val1|   val2|   val3|   val4|
|   4|   val1|   val2|   val3|   val4|
+----+-------+-------+-------+-------+
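
If, as in the question, the number of values per row is known in advance, you can skip the size computation and alias each array element directly to get the exact column names from the expected output. A minimal sketch building on df2 above (the names col21 through col24 come from the question; df_fixed is an illustrative name):

# Fixed, known number of values: alias each array element directly
df_fixed = df2.select(
    'col1',
    *[df2['col2'][i].alias('col2' + str(i + 1)) for i in range(4)])
df_fixed.show()

This yields exactly the table from the question. Note that if some rows contain fewer values than others, indexing past the end of the array returns null rather than raising an error, which is why the max-size approach above is safe.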