Concatenate multiple rows in PySpark

Date: 2018-07-07 12:03:00

Tags: python pyspark pyspark-sql apache-spark-ml

I need to combine the following data into a single row:

vector_no_stopw_df.select("filtered").show(3, truncate=False)



+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|filtered                                                                                                                                                                                                                          |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|[, problem, population]                                                                                                                                                                                                           |
|[tyler, notes, global, population, increase, sharply, next, century, , almost, growth, occurring, relatively, underdeveloped, africa, south, asia, , contrast, , population, actually, decline, countries]                        |
|[many, economists, uncomfortable, population, issues, , perhaps, arent, covered, depth, standard, graduate, curriculum, , touch, topics, may, culturally, controversial, even, politically, incorrect, thats, unfortunate, future]|
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

so that it looks like this:

+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|filtered                                                                                                                                                                                                                          |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|[, problem, population,tyler, notes, global, population, increase, sharply, next, century, , almost, growth, occurring, relatively, underdeveloped, africa, south, asia, , contrast, , population, actually, decline, countries,many, economists, uncomfortable, population, issues, , perhaps, arent, covered, depth, standard, graduate, curriculum, , touch, topics, may, culturally, controversial, even, politically, incorrect, thats, unfortunate, future]|
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

I know this should be simple, but I can't find a solution. I tried concat_ws, and it didn't do what I want.

The concat_ws call I ran, vector_no_stopw_df.select(concat_ws(',', vector_no_stopw_df.filtered)).collect(), produced the following:

[Row(concat_ws(,, filtered)='one,big,advantages,economist,long,time,council,economic,advisers,,years,ago,ive,gotten,know,follow,lot,people,thinking,,started,cea,august,,finished,july,,,first,academic,year,,fellow,senior,economists,paul,krugman,,lawrence,summers'),
 Row(concat_ws(,, filtered)='isnt,going,happen,anytime,soon,meantime,,tax,system,puts,place,much,higher,marginal,rates,people,acknowledge,people,keep,focusing,federal,income,taxes,alone,,marginal,rates,top,around,,percent,leaves,state'),
 Row(concat_ws(,, filtered)=',,
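That is expected behavior: concat_ws joins the elements of the array inside each row into one string, but it never merges anything across rows. A minimal sketch of that behavior (the toy DataFrame and the spark session are assumptions made up for illustration):

import pyspark.sql.functions as F

# Hypothetical two-row DataFrame, each row holding an array of words.
toy = spark.createDataFrame([(["a", "b"],), (["c", "d"],)], ["filtered"])

# Each row's array collapses to one string, but the result is still two rows.
toy.select(F.concat_ws(",", "filtered")).show()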

Here is the solution I ended up with, in case anyone else needs it.

I went with Python's itertools library.

# Pull the "filtered" column back to the driver as a list of Row objects
vector_no_stopw_df_count = vector_no_stopw_df.select("filtered").collect()
vector_no_stopw_df_count[0].filtered  # inspect the first row's array
vector_no_stopw_list = [i.filtered for i in vector_no_stopw_df_count]

Flatten the list of lists:

from itertools import chain

# chain.from_iterable concatenates the per-row lists into one flat sequence
flattenlist = list(chain.from_iterable(vector_no_stopw_list))
flattenlist[:20]

Result:

['',
 'problem',
 'population',
 'tyler',
 'notes',
 'global',
 'population',
 'increase',
 'sharply',
 'next',
 'century',
 '',
 'almost',
 'growth',
 'occurring',
 'relatively',
 'underdeveloped',
 'africa',
 'south',
 'asia']
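For what it's worth, the same flattening can be written as a plain nested list comprehension, with no itertools import at all; this is purely a stylistic alternative:

flattenlist = [word for row in vector_no_stopw_list for word in row]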

1 Answer:

Answer 0 (score: 0)

In a sense, you are looking for the opposite of explode.

You can use collect_list for this:

import pyspark.sql.functions as F

df.groupBy(<somecol>).agg(F.collect_list('filtered').alias('aggregated_filters'))
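If the goal is a single array holding every row's words, as in the question, no grouping column is even needed: a global agg with collect_list produces an array of arrays, and flatten (available since Spark 2.4) collapses it. A sketch under those assumptions; note that collect_list makes no guarantee about row order:

import pyspark.sql.functions as F

merged = (
    vector_no_stopw_df
    .agg(F.collect_list("filtered").alias("nested"))  # array<array<string>>
    .select(F.flatten("nested").alias("filtered"))    # one flat array<string>
)
merged.show(1, truncate=False)

This keeps the whole aggregation in Spark rather than collecting the rows to the driver first.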