PySpark (DataFrame): read a file line by line (convert Row to string)

Time: 2018-08-27 23:01:25

Tags: apache-spark pyspark pyspark-sql

I need to read a file line by line, split each line into words, and then perform operations on the words.

How do I do this?

I wrote the following code:

logFile = "/home/hadoop/spark-2.3.1-bin-hadoop2.7/README.md"  # Should be 
some file on your system
spark = SparkSession.builder.appName("SimpleApp1").getOrCreate()
logData = spark.read.text(logFile).cache()
logData.printSchema()
logDataLines = logData.collect()

# The 'line' variable below seems to be of type Row. How do I perform similar
# operations on a Row, or how do I convert a Row to a string?

for line in logDataLines:
    words = line.select(explode(split(line,"\s+")))
    for word in words:
        print(word)
    print("----------------------------------")

1 Answer:

Answer 0 (score: 1)

I think you should apply a map function to the rows. You can apply anything you like inside a function you write yourself:

data = spark.read.text("/home/spark/test_it.txt").cache()

def someFunction(row):
    # row[0] is the line of text held in the "value" column of each Row
    wordlist = row[0].split(" ")
    result = list()
    for word in wordlist:
        result.append(word.upper())
    return result

# Convert the DataFrame to an RDD of Rows, apply the function to every line,
# and collect the results back to the driver
data.rdd.map(someFunction).collect()

Output:

[[u'THIS', u'IS', u'JUST', u'A', u'TEST'], [u'TO', u'UNDERSTAND'], [u'THE', u'PROCESSING']]
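
Alternatively, if you want to stay in the DataFrame API instead of dropping to the RDD and collecting, here is a minimal sketch using split(), explode(), and upper() from pyspark.sql.functions (the explode/split approach the question attempts); the file path and app name are placeholders, not part of the original answer:

from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split, upper

spark = SparkSession.builder.appName("SimpleApp1").getOrCreate()
data = spark.read.text("/home/spark/test_it.txt")

# spark.read.text() puts each line into a column named "value";
# split it on whitespace and explode so every word becomes its own row
words = data.select(explode(split(data["value"], r"\s+")).alias("word"))

# the same upper-casing as above, but done by Spark rather than on the driver
words.select(upper(words["word"]).alias("word")).show()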