Splitting a Spark String column into a fixed number of new columns

Asked: 2018-04-08 20:42:50

Tags: scala split spark-dataframe

I am splitting a Spark DataFrame column (a String column) into multiple columns with the following:

val dfSplitted = df
  .withColumn("splc", split(col("c_001"), "\\[|\\[.*?\\]\\|(^;\\])"))
  .select(col("*") +: (0 until 12).map(i => col("splc").getItem(i).as(s"spl_c$i")): _*)
  .drop("splc", "c_001")

Here the String column (c_001) has a structure similar to the following:

Str|[ts1:tssub2|ts1:tssub2]|BLANK|[INT1|X.X.X.X|INT2|BLANK |BLANK | |X.X.X.X|[INT3|s1]]|[INT3|INT4|INT5|INT6|INT7|INT8|INT9|INT10|INT11|INT12|INT13|INT14|INT15]|BLANK |BLANK |[s2|s3|s4|INT16|INT17];[s5|s6|s7|INT18|INT19]|[[s8|s9|s10|INT20|INT21]|ts3:tssub3| | ];[[s11|s12|s13|INT21|INT22]|INT23:INT24|BLANK |BLANK ]|BLANK |BLANK |[s14|s15] 

I want the split columns (spl_c0 to spl_c11) to look like this (shown as rows):

Str
[ts1:tssub2|ts1:tssub2]
BLANK
[INT1|X.X.X.X|INT2|BLANK |BLANK | |X.X.X.X|[INT3|s1]]
[INT3|INT4|INT5|INT6|INT7|INT8|INT9|INT10|INT11|INT12|INT13|INT14|INT15]
BLANK
BLANK
[s2|s3|s4|INT16|INT17];[s5|s6|s7|INT18|INT19]
[[s8|s9|s10|INT20|INT21]|ts3:tssub3| | ];[[s11|s12|s13|INT21|INT22]|INT23:INT24|BLANK |BLANK ]
BLANK
BLANK
[s14|s15]

Here spl_c7: [s2|s3|s4|INT16|INT17];[s5|s6|s7|INT18|INT19] may contain one or more repetitions of a group like [s2|s3|s4|INT16|INT17], each with different values; in this case there are two repetitions, separated by a semicolon.

The output columns I actually get (shown as rows):

Str|
ts1:tssub2|ts1:tssub2]|BLANK|
INT1|X.X.X.X|INT2|BLANK |BLANK | |X.X.X.X|
INT3|s1]]|
INT3|INT4|INT5|INT6|INT7|INT8|INT9|INT10|INT11|INT12|INT13|INT14|INT15]|BLANK |BLANK |
s2|s3|s4|INT16|INT17];
s5|s6|s7|INT18|INT19]|

s8|s9|s10|INT20|INT21]|ts3:tssub3| | ];

s11|s12|s13|INT21|INT22]|INT23:INT24|BLANK |BLANK ]|BLANK |BLANK |
s14|s15]

I would like to know why my split does not produce the expected result, and whether there is another approach (especially with performance in mind)?

1 Answer:

Answer 0 (score: 0):

The issue is with the regex itself. The first alternative, \[, matches every opening bracket, and the ^ anchor inside the second alternative can never match in the middle of the string, so the split effectively fires at each [, which is exactly the fragmentation shown above. Writing a single regular expression that meets all the criteria turned out to be a tough ask (at least for me). So instead of splitting the string with a regex, I found that splitting with a function is a better approach. I faced a few hiccups along the way, but with the help of the Stack Overflow community I was able to figure it out.
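The answer does not show the function itself, so here is a minimal sketch of what a function-based split could look like, assuming the goal is to split on | only at bracket depth zero (the helper name splitTopLevel and the exact depth-tracking logic are assumptions for illustration, not the original poster's code):

    import org.apache.spark.sql.functions.{col, udf}

    // Split on '|' only when outside square brackets, so nested groups like
    // [INT1|...|[INT3|s1]] and semicolon-joined repetitions such as
    // [s2|...];[s5|...] each stay together in a single field.
    def splitTopLevel(s: String): Seq[String] = {
      val fields = scala.collection.mutable.ArrayBuffer(new StringBuilder)
      var depth = 0
      for (ch <- s) ch match {
        case '['               => depth += 1; fields.last.append(ch)
        case ']'               => depth -= 1; fields.last.append(ch)
        case '|' if depth == 0 => fields += new StringBuilder // start a new field
        case other             => fields.last.append(other)
      }
      fields.map(_.toString).toSeq
    }

    val splitTopLevelUdf = udf(splitTopLevel _)

    val dfSplitted = df
      .withColumn("splc", splitTopLevelUdf(col("c_001")))
      .select(col("*") +: (0 until 12).map(i => col("splc").getItem(i).as(s"spl_c$i")): _*)
      .drop("splc", "c_001")

Performance-wise, a UDF adds serialization overhead compared to the built-in split, but it makes a single pass over each string with no regex backtracking, so it should scale linearly with the input length.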
