Skip rows from a CSV file in PySpark if they contain a specific keyword

Asked: 2018-11-13 07:17:56

Tags: python-3.x csv pyspark

I have a CSV file with the following contents:

emp_id,emp_name,emp_city,emp_salary
1,VIKRANT SINGH RANA    ,NOIDA   ,10000
3,GOVIND NIMBHAL        ,DWARKA  ,92000
2,RAGHVENDRA KUMAR GUPTA,GURGAON ,50000
4,ABHIJAN SINHA         ,SAKET   ,65000
5,SUPER DEVELOPER       ,USA     ,50000
6,RAJAT TYAGI           ,UP      ,65000
7,AJAY SHARMA           ,NOIDA   ,70000
8,SIDDHARTH BASU        ,SAKET   ,72000
9,ROBERT                ,GURGAON ,70000
9,ABC                   ,ROBERT  ,10000
9,XYZ                   ,ROBERTGURGAON,70000

I want to skip the rows that contain the keyword "ROBERT". The expected output is:
+------+--------------------+-------------+----------+
|emp_id|            emp_name|     emp_city|emp_salary|
+------+--------------------+-------------+----------+
|     1|VIKRANT SINGH RAN...|     NOIDA   |     10000|
|     3|GOVIND NIMBHAL   ...|     DWARKA  |     92000|
|     2|RAGHVENDRA KUMAR ...|     GURGAON |     50000|
|     4|ABHIJAN SINHA    ...|     SAKET   |     65000|
|     5|SUPER DEVELOPER  ...|     USA     |     50000|
|     6|RAJAT TYAGI      ...|     UP      |     65000|
|     7|AJAY SHARMA      ...|     NOIDA   |     70000|
|     8|SIDDHARTH BASU   ...|     SAKET   |     72000|
+------+--------------------+-------------+----------+

I can load this file into a dataframe and filter on each column with an expression like:

newdf = emp_df.where(~ col("emp_city").like("ROBERT%"))

I'm looking for a way to filter the rows before they are loaded into the dataframe, rather than looping over every column to look for the specific string.
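For reference, here is a sketch of a DataFrame-only version of that per-column check, built by folding a contains() predicate over every column so no column has to be named explicitly (it assumes the dataframe is called emp_df, as above, and still filters after the load, which is what I'd like to avoid):

from functools import reduce
from pyspark.sql import functions as F

keyword = "ROBERT"
# Build one predicate that is True when ANY column contains the keyword,
# then keep only the rows where it is False.
any_column_matches = reduce(
    lambda acc, c: acc | F.col(c).contains(keyword),
    emp_df.columns,
    F.lit(False),
)
newdf = emp_df.where(~any_column_matches)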

1 Answer:

Answer 0 (score: 1)

I was able to filter it using an RDD.

textdata = sc.textFile(PATH_TO_FILE)
header = textdata.first()                                       # capture the header line
textnewdata = textdata.filter(lambda x: x != header)            # drop the header row
newRDD = textnewdata.filter(lambda row: 'ROBERT' not in row)    # drop any line containing 'ROBERT'

Collecting newRDD now returns:

[u'1,VIKRANT SINGH RANA    ,NOIDA   ,10000', 
u'3,GOVIND NIMBHAL        ,DWARKA  ,92000', 
u'2,RAGHVENDRA KUMAR GUPTA,GURGAON ,50000', 
u'4,ABHIJAN SINHA         ,SAKET   ,65000', 
u'5,SUPER DEVELOPER       ,USA     ,50000', 
u'6,RAJAT TYAGI           ,UP      ,65000', 
u'7,AJAY SHARMA           ,NOIDA   ,70000', 
u'8,SIDDHARTH BASU        ,SAKET   ,72000']

newsplitRDD = newRDD.map(lambda l: l.split(","))   # split each CSV line into its fields

newDF = newsplitRDD.toDF()                         # column names default to _1, _2, ...

>>> newDF.show()
+---+--------------------+--------+-----+
| _1|                  _2|      _3|   _4|
+---+--------------------+--------+-----+
|  1|VIKRANT SINGH RAN...|NOIDA   |10000|
|  3|GOVIND NIMBHAL   ...|DWARKA  |92000|
|  2|RAGHVENDRA KUMAR ...|GURGAON |50000|
|  4|ABHIJAN SINHA    ...|SAKET   |65000|
|  5|SUPER DEVELOPER  ...|USA     |50000|
|  6|RAJAT TYAGI      ...|UP      |65000|
|  7|AJAY SHARMA      ...|NOIDA   |70000|
|  8|SIDDHARTH BASU   ...|SAKET   |72000|
+---+--------------------+--------+-----+
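As a possible refinement (a sketch, not part of the original answer), the header line captured earlier can be reused so the dataframe keeps the real column names; newer Spark versions (2.2+, as far as I know) also let spark.read.csv consume an RDD of strings directly:

# Sketch reusing the variables defined above: take the column names from the header.
columns = [c.strip() for c in header.split(",")]
namedDF = newsplitRDD.toDF(columns)
namedDF.show()

# Alternative shortcut, assuming spark.read.csv accepts an RDD of strings (Spark 2.2+):
namedDF = spark.read.csv(newRDD, inferSchema=True).toDF(*columns)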