Remove trailing whitespace from elements in a list

Asked: 2018-06-21 15:02:41

Tags: python-3.x apache-spark pyspark apache-spark-sql

I have a Spark DataFrame where a given column contains some text. I am trying to clean the text and split it on commas, which should output a new column containing a list of words.

The problem I am running into is that some of the elements in that list contain trailing whitespace that I would like to remove.

Code

# Libraries
# Standard Libraries
from typing import Dict, List, Tuple

# Third Party Libraries
import pyspark
from pyspark.ml.feature import Tokenizer
from pyspark.sql import SparkSession
import pyspark.sql.functions as s_function


def tokenize(sdf, input_col="text", output_col="tokens"):
    # Remove email 
    sdf_temp = sdf.withColumn(
        colName=input_col,
        col=s_function.regexp_replace(s_function.col(input_col), "[\w\.-]+@[\w\.-]+\.\w+", ""))
    # Remove digits
    sdf_temp = sdf_temp.withColumn(
        colName=input_col,
        col=s_function.regexp_replace(s_function.col(input_col), "\d", ""))
    # Remove one(1) character that is not a word character except for
    # commas(,), since we still want to split on commas(,)
    sdf_temp = sdf_temp.withColumn(
        colName=input_col,
        col=s_function.regexp_replace(s_function.col(input_col), "[^a-zA-Z0-9,]+", " ")) 
    # Split the affiliation string based on a comma
    sdf_temp = sdf_temp.withColumn(
        colName=output_col,
        col=s_function.split(sdf_temp[input_col], ", "))

    return sdf_temp


if __name__ == "__main__":
    # Sample data
    a_1 = "Department of Bone and Joint Surgery, Ehime University Graduate"\
        " School of Medicine, Shitsukawa, Toon 791-0295, Ehime, Japan."\
        " shinyama@m.ehime-u.ac.jp." 
    a_2 = "Stroke Pharmacogenomics and Genetics, Fundació Docència i Recerca"\
        " Mútua Terrassa, Hospital Mútua de Terrassa, 08221 Terrassa, Spain."
    a_3 = "Neurovascular Research Laboratory, Vall d'Hebron Institute of Research,"\
        " Hospital Vall d'Hebron, 08035 Barcelona, Spain;catycarrerav@gmail.com"\
        " (C.C.). catycarrerav@gmail.com."

    data = [(1, a_1), (2, a_2), (3, a_3)]

    spark = SparkSession\
        .builder\
        .master("local[*]")\
        .appName("My_test")\
        .config("spark.ui.port", "37822")\
        .getOrCreate()
    sc = spark.sparkContext
    sc.setLogLevel("WARN")

    af_data = spark.createDataFrame(data, ["index", "text"])
    sdf_tokens = tokenize(af_data)
    # sdf_tokens.select("tokens").show(truncate=False)

Output

|[Department of Bone and Joint Surgery, Ehime University Graduate School of Medicine, Shitsukawa, Toon , Ehime, Japan ]                                                |
|[Stroke Pharmacogenomics and Genetics, Fundaci Doc ncia i Recerca M tua Terrassa, Hospital M tua de Terrassa, Terrassa, Spain ]                                       |
|[Neurovascular Research Laboratory, Vall d Hebron Institute of Research, Hospital Vall d Hebron, Barcelona, Spain C C ]                                              |

Desired output

|[Department of Bone and Joint Surgery, Ehime University Graduate School of Medicine, Shitsukawa, Toon, Ehime, Japan]                                                |
|[Stroke Pharmacogenomics and Genetics, Fundaci Doc ncia i Recerca M tua Terrassa, Hospital M tua de Terrassa, Terrassa, Spain]                                       |
|[Neurovascular Research Laboratory, Vall d Hebron Institute of Research, Hospital Vall d Hebron, Barcelona, Spain C C]                                               |

So that:

  1. First row: 'Toon ' -> 'Toon' and 'Japan ' -> 'Japan'
  2. Second row: 'Spain ' -> 'Spain'
  3. Third row: 'Spain C C ' -> 'Spain C C'

Note

The trailing whitespace does not appear only on the last element of the list; it can appear on any element.

2 Answers:

Answer 0 (score: 4)

Update

The original solution does not work, because trim only operates on the beginning and end of the whole string, whereas you need it applied to each token.
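For example (a minimal sketch with made-up data, reusing the spark session and the s_function alias from the question's code), trim cleans the outer ends of the string, but the space before the internal comma survives and ends up inside a token after the split:

# Hypothetical sample row, only to illustrate the limitation described above
df = spark.createDataFrame([(" Toon , Japan ",)], ["text"])
df.select(s_function.split(s_function.trim("text"), ", ").alias("tokens"))\
    .show(truncate=False)
# +--------------+
# |tokens        |
# +--------------+
# |[Toon , Japan]|   <- 'Toon ' still carries its trailing space
# +--------------+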

@PatrickArtner's solution works, but another approach is to use RegexTokenizer.

Here is an example of how you could modify your tokenize() function:

from pyspark.ml.feature import RegexTokenizer

def tokenize(sdf, input_col="text", output_col="tokens"):

    # Remove email 
    sdf_temp = sdf.withColumn(
        colName=input_col,
        col=s_function.regexp_replace(s_function.col(input_col), "[\w\.-]+@[\w\.-]+\.\w+", ""))
    # Remove digits
    sdf_temp = sdf_temp.withColumn(
        colName=input_col,
        col=s_function.regexp_replace(s_function.col(input_col), "\d", ""))
    # Remove one(1) character that is not a word character except for
    # commas(,), since we still want to split on commas(,)
    sdf_temp = sdf_temp.withColumn(
        colName=input_col,
        col=s_function.regexp_replace(s_function.col(input_col), "[^a-zA-Z0-9,]+", " "))

    # call trim to remove any trailing (or leading spaces)
    sdf_temp = sdf_temp.withColumn(
        colName=input_col,
        col=s_function.trim(sdf_temp[input_col]))

    # use RegexTokenizer to split on commas optionally surrounded by whitespace
    myTokenizer = RegexTokenizer(
        inputCol=input_col,
        outputCol=output_col,
        pattern="( +)?, ?")

    sdf_temp = myTokenizer.transform(sdf_temp)

    return sdf_temp

Essentially, call trim on your string to take care of any leading or trailing whitespace, then use RegexTokenizer to split with the pattern "( +)?, ?":

  • "( +)?": optionally matches one or more spaces before the comma
  • ",": matches a comma exactly
  • " ?": matches an optional space after the comma
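As a quick sanity check of that pattern outside of Spark (a sketch using only Python's standard re module), the same split behaves as expected; note that re.split also returns the contents of capturing groups, which are filtered out here:

import re

# Split on a comma that may be preceded by spaces and followed by one
# optional space; drop the None/whitespace entries that re.split returns
# for the capturing group.
tokens = [t for t in re.split(r"( +)?, ?", "Shitsukawa, Toon , Ehime, Japan")
          if t and t.strip()]
print(tokens)  # ['Shitsukawa', 'Toon', 'Ehime', 'Japan']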

Here is the output of:

sdf_tokens.select('tokens', s_function.size('tokens').alias('size')).show(truncate=False)

You can see that the lengths of the arrays (the token counts) are correct, but all of the tokens are lowercase (because that is what Tokenizer and RegexTokenizer do).

+------------------------------------------------------------------------------------------------------------------------------+----+
|tokens                                                                                                                        |size|
+------------------------------------------------------------------------------------------------------------------------------+----+
|[department of bone and joint surgery, ehime university graduate school of medicine, shitsukawa, toon, ehime, japan]          |6   |
|[stroke pharmacogenomics and genetics, fundaci doc ncia i recerca m tua terrassa, hospital m tua de terrassa, terrassa, spain]|5   |
|[neurovascular research laboratory, vall d hebron institute of research, hospital vall d hebron, barcelona, spain c c]        |5   |
+------------------------------------------------------------------------------------------------------------------------------+----+
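As an aside, if keeping the original casing matters, RegexTokenizer also exposes a toLowercase parameter (True by default) that can be switched off; a minimal variant of the tokenizer above:

myTokenizer = RegexTokenizer(
    inputCol=input_col,
    outputCol=output_col,
    pattern="( +)?, ?",
    toLowercase=False)  # keep the original case instead of lowercasing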

Original answer

As long as you are using Spark version 1.5 or greater, you can use pyspark.sql.functions.trim(), which will:

  Trim the spaces from both ends for the specified string column.

So one approach is to add:

sdf_temp = sdf_temp.withColumn(
        colName=input_col,
        col=s_function.trim(sdf_temp[input_col]))

to the end of your tokenize() function.

But you may want to look at pyspark.ml.feature.Tokenizer and pyspark.ml.feature.RegexTokenizer. One idea is to use the functions to clean the string and then use a Tokenizer to create the tokens. (I see that you have already imported it, but you do not seem to be using it.)

Answer 1 (score: 2)

Why not simply replace ' ,' with ',' and a trailing ' $' with ''? Something like:

# " (,)" captures the comma, so " ," becomes ","; a trailing " $" captures
# nothing, so "$1" replaces it with the empty string
sdf_temp = sdf_temp.withColumn(
    colName=input_col,
    col=s_function.regexp_replace(s_function.col(input_col), " (,)| $", "$1"))

That should take care of your data.

Depending on your input you might need to replace multiple whitespaces; adding '+' does that:

sdf_temp = sdf_temp.withColumn(
    colName=input_col,
    col=s_function.regexp_replace(s_function.col(input_col), " +(,)| +$", "$1"))

before splitting on ', '.
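As a quick check of that pattern (a sketch with a made-up string, reusing the spark session and s_function alias from the question's code):

df = spark.createDataFrame([("Toon , Ehime, Japan ",)], ["text"])
df.select(
    s_function.split(
        s_function.regexp_replace("text", " +(,)| +$", "$1"),
        ", ").alias("tokens"))\
    .show(truncate=False)
# +--------------------+
# |tokens              |
# +--------------------+
# |[Toon, Ehime, Japan]|
# +--------------------+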


Disclaimer:

Just basic regex knowledge here; no pyspark specifics.