How can I skip certain lines when reading from a local file with a PySpark session?

Date: 2019-01-24 08:37:39

Tags: python apache-spark pyspark

I am using PySpark to read and process some data from a local .plt file. The file looks like this:

Geolife trajectory
WGS 84
Altitude is in Feet
Reserved 3
0,2,255,My Track,0,0,2,8421376
0
39.984094,116.319236,0,492,39744.2451967593,2008-10-23,05:53:05
39.984198,116.319322,0,492,39744.2452083333,2008-10-23,05:53:06
39.984224,116.319402,0,492,39744.2452662037,2008-10-23,05:53:11
39.984211,116.319389,0,492,39744.2453240741,2008-10-23,05:53:16
......

As shown above, I am not interested in the first 6 lines; what I want are the lines starting from line 7. So I would like to use a Spark session to read this file from line 7 onwards. Here is the code I tried, which did not work:

from pyspark.sql import SparkSession
session = SparkSession.builder.appName('file reader').master('local[*]').getOrCreate()
df = session.read.\
     option('delimiter', ',').\
     option('header', 'false').\
     csv('test.plt')
df.show()

Could anyone give me some advice? Thank you for your attention.

3 Answers:

Answer 0 (score: 3)

from pyspark.sql.types import *
from pyspark.sql import SparkSession
session = SparkSession.builder.appName('file reader').master('local[*]').getOrCreate()
schema = StructType([StructField("a", FloatType()),
                     StructField("b", FloatType()),
                     StructField("c", IntegerType()),
                     StructField("d", IntegerType()),
                     StructField("e", FloatType()),
                     StructField("f", StringType()),
                     StructField("g", StringType())])
# DROPMALFORMED discards rows that cannot be parsed with the given schema,
# which removes the six metadata lines at the top of the file.
df = session.read.option('mode', 'DROPMALFORMED').csv('test.plt', schema=schema)
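Keep in mind that DROPMALFORMED silently discards any data row that fails to parse, not just the header lines. A quick sanity check (a minimal sketch, assuming the same session and file as above) is to compare row counts:

raw_count = session.read.text('test.plt').count()   # total lines in the file
parsed_count = df.count()                            # rows that matched the schema
print(raw_count - parsed_count)                      # expect 6 if only the header lines were dropped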

Answer 1 (score: 1)

In addition to the excellent method suggested by @Arnon Rotem-Gal-Oz, we can also exploit a special property of one of the columns, if such a column exists.

In YQ. Wang's data, the 6th column is a date, and the chance that the 6th column of a header line is also a valid date is practically negligible. So the idea is to check for this special property of the 6th column: to_date() converts a string to a date, and if the column is not a valid date, to_date() returns Null. We then filter out all such rows with a .where() clause:
from pyspark.sql.functions import col, to_date
from pyspark.sql.types import FloatType, IntegerType, StringType, StructType, StructField

schema = StructType([StructField("a", FloatType()),
                     StructField("b", FloatType()),
                     StructField("c", IntegerType()),
                     StructField("d", IntegerType()),
                     StructField("e", FloatType()),
                     StructField("f", StringType()),
                     StructField("g", StringType())])

df = spark.read.schema(schema)\
               .format("csv")\
               .option("header", "false")\
               .option("sep", ',')\
               .load('test.plt')\
               .where(to_date(col('f'), 'yyyy-MM-dd').isNotNull())
df.show()
+---------+----------+----+---+---------+----------+--------+
|        a|         b|   c|  d|        e|         f|       g|
+---------+----------+----+---+---------+----------+--------+
|39.984093| 116.31924|   0|492|39744.246|2008-10-23|05:53:05|
|  39.9842| 116.31932|   0|492|39744.246|2008-10-23|05:53:06|
|39.984222|116.319405|   0|492|39744.246|2008-10-23|05:53:11|
| 39.98421| 116.31939|   0|492|39744.246|2008-10-23|05:53:16|
+---------+----------+----+---+---------+----------+--------+

This method also has a drawback: for example, if the date is missing, the whole row gets filtered out.
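If filtering by content is a concern, a purely positional approach avoids that drawback. This is not from the answers above, just a minimal sketch that skips exactly the first six lines with zipWithIndex (assuming the same 'test.plt' layout and a SparkSession like the one created in the question):

from pyspark.sql import SparkSession
session = SparkSession.builder.appName('file reader').master('local[*]').getOrCreate()

# Pair every raw line with its global index and keep only lines 7 and onwards.
rows = (session.sparkContext.textFile('test.plt')
        .zipWithIndex()
        .filter(lambda pair: pair[1] >= 6)
        .map(lambda pair: tuple(pair[0].split(','))))

# All columns are strings here; cast afterwards if numeric types are needed.
df = rows.toDF(['a', 'b', 'c', 'd', 'e', 'f', 'g'])
df.show()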

Answer 2 (score: 0)

Assuming the data from line 7 onwards follows the pattern in the example shown above:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, split

session = SparkSession.builder.appName('file reader').master('local[*]').getOrCreate()
data = session.read.text('test.plt')   # one string column named "value"

header = [row.value for row in data.head(6)]  # the first six lines

filtered = (data.filter(~col("value").isin(header))
                .withColumn("a", split(col("value"), ",").getItem(0))
                .withColumn("b", split(col("value"), ",").getItem(1))
                .withColumn("c", split(col("value"), ",").getItem(2))
                .withColumn("d", split(col("value"), ",").getItem(3))
                .withColumn("e", split(col("value"), ",").getItem(4))
                .withColumn("f", split(col("value"), ",").getItem(5))
                .withColumn("g", split(col("value"), ",").getItem(6))
                .drop("value"))