Splitting a string with a regular expression inside a lambda in PySpark

Date: 2017-12-20 06:35:12

Tags: python apache-spark lambda pyspark pyspark-sql

I am trying to split a string based on a regular expression inside a lambda function, but the string is not being split. I am sure the regex itself works correctly; see the regex test link https://regex101.com/r/ryRio6/1

from pyspark.sql.functions import col,split
import re

r = re.compile(r"(?=\s\w+=)")
adsample = sc.textFile("hdfs://hostname/user/hdfs/sample/Log18Dec.txt")
splitted_sample = adsample.flatMap(lambda (x): ((v) for v in r.split(x)))

for m in splitted_sample.collect():
    print(m)

Not sure where I am going wrong.

Sample line from the file:

|RECEIVE|Low| eventId=139569 msg=W4N Alert :: Critical : Interface Utilization for GigabitEthernet0/1 90.0 % in=2442 out=0 categorySignificance=/Normal categoryBehavior=/Communicate/Query categoryDeviceGroup=/Application

The regex should match the whitespace before each key.

Expected output:

|RECEIVE|Low|
eventId=139569
msg=W4N Alert :: Critical : Interface Utilization for GigabitEthernet0/1 90.0 %
in=2442
out=0
categorySignificance=/Normal
categoryBehavior=/Communicate/Query
categoryDeviceGroup=/Application
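The lookahead in the question does find the right split points; the likely culprit is `re.split` itself. On Python 2 (where the `lambda (x):` syntax in the snippet is valid), and on any Python before 3.7, `re.split` silently refuses to split on zero-width matches, so a lookahead-only pattern like `(?=\s\w+=)` returns the line unsplit. A minimal sketch without Spark, using the sample line above, shows the lookahead matching in the right places:

```python
import re

# Sample log line from the question
line = ("|RECEIVE|Low| eventId=139569 msg=W4N Alert :: Critical : "
        "Interface Utilization for GigabitEthernet0/1 90.0 % in=2442 out=0 "
        "categorySignificance=/Normal categoryBehavior=/Communicate/Query "
        "categoryDeviceGroup=/Application")

# The asker's pattern: a pure zero-width lookahead
r = re.compile(r"(?=\s\w+=)")

# finditer shows the lookahead matches exactly once in front of
# each " key=" token, i.e. at every intended split point
positions = [m.start() for m in r.finditer(line)]
print(len(positions))  # -> 7, one split point per key=value pair
```

So the pattern is correct; it is the zero-width split that pre-3.7 `re.split` will not perform (Python 3.7+ changed this behavior and does split on empty matches).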

1 answer:

Answer 0: (score: 1)

from pyspark.sql.functions import col, split
import re

#r = re.compile(r"(?=\s\w+=)")
adsample = sc.textFile("hdfs://hostname/user/hdfs/sample/Log18Dec.txt")
# Consume the delimiter whitespace (\s+) instead of splitting on a
# zero-width lookahead; also use Python-3-compatible lambda syntax
splitted_sample = adsample.flatMap(lambda x: (v for v in re.split(r'\s+(?=\w+=)', x)))

for m in splitted_sample.collect():
    print(m)
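Why this works: the pattern `\s+(?=\w+=)` consumes the whitespace delimiter, so every match is non-empty and `re.split` splits on it in every Python version, while the lookahead `(?=\w+=)` keeps the key name out of the consumed match. A quick local check on the sample line from the question, without Spark:

```python
import re

# Same sample line as in the question
line = ("|RECEIVE|Low| eventId=139569 msg=W4N Alert :: Critical : "
        "Interface Utilization for GigabitEthernet0/1 90.0 % in=2442 out=0 "
        "categorySignificance=/Normal categoryBehavior=/Communicate/Query "
        "categoryDeviceGroup=/Application")

# Split on runs of whitespace that are followed by "key=",
# without consuming the key itself
parts = re.split(r"\s+(?=\w+=)", line)
for p in parts:
    print(p)
```

This prints the eight pieces shown in the expected output, starting with `|RECEIVE|Low|` and ending with `categoryDeviceGroup=/Application`; note that the embedded spaces inside the `msg=` value survive because they are never followed by a `key=` token.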