How to filter a Spark DataFrame by a boolean column

Asked: 2016-04-22 02:56:55

Tags: python apache-spark filter spark-dataframe

I have created a DataFrame with the following schema:

In [43]: yelp_df.printSchema()
root
 |-- business_id: string (nullable = true)
 |-- cool: integer (nullable = true)
 |-- date: string (nullable = true)
 |-- funny: integer (nullable = true)
 |-- id: string (nullable = true)
 |-- stars: integer (nullable = true)
 |-- text: string (nullable = true)
 |-- type: string (nullable = true)
 |-- useful: integer (nullable = true)
 |-- user_id: string (nullable = true)
 |-- name: string (nullable = true)
 |-- full_address: string (nullable = true)
 |-- latitude: double (nullable = true)
 |-- longitude: double (nullable = true)
 |-- neighborhoods: string (nullable = true)
 |-- open: boolean (nullable = true)
 |-- review_count: integer (nullable = true)
 |-- state: string (nullable = true)

Now I want to select only the rows where the "open" column is "true". As shown below, many of the rows are open.

business_id          cool date       funny id                   stars text                 type     useful user_id              name               full_address         latitude      longitude      neighborhoods open review_count state
9yKzy9PApeiPPOUJE... 2    2011-01-26 0     fWKvX83p0-ka4JS3d... 4     My wife took me h... business 5      rLtl8ZkDX5vH5nAx9... Morning Glory Cafe 6106 S 32nd St Ph... 33.3907928467 -112.012504578 []            true 116          AZ   
ZRJwVLyzEJq1VAihD... 0    2011-07-27 0     IjZ33sJrzXqU-0X6U... 4     I have no idea wh... business 0      0a2KyEL0d3Yb1V6ai... Spinato's Pizzeria 4848 E Chandler B... 33.305606842  -111.978759766 []            true 102          AZ   
6oRAC4uyJCsJl1X0W... 0    2012-06-14 0     IESLBzqUCLdSzSqm0... 4     love the gyro pla... business 1      0hT2KtfLiobPvh6cD... Haji-Baba          1513 E  Apache Bl... 33.4143447876 -111.913032532 []            true 265          AZ   
_1QQZuf4zZOyFCvXc... 1    2010-05-27 0     G-WvGaISbqqaMHlNn... 4     Rosie, Dakota, an... business 2      uZetl9T0NcROGOyFf... Chaparral Dog Park 5401 N Hayden Rd ... 33.5229454041 -111.90788269  []            true 88           AZ   
6ozycU1RpktNG2-1B... 0    2012-01-05 0     1uJFq2r5QfJG_6ExM... 4     General Manager S... business 0      vYmM4KTsC8ZfQBg-j... Discount Tire      1357 S Power Road... 33.3910255432 -111.68447876  []            true 5            AZ   

However, the following command, run in pyspark, returns nothing:

yelp_df.filter(yelp_df["open"] == "true").collect()

What is the correct way to do this?

3 Answers:

Answer 0 (score: 9):

You are comparing the wrong data types. open is declared as a boolean, not a string, so yelp_df["open"] == "true" is incorrect: "true" is a string.

Instead, you want to do

yelp_df.filter(yelp_df["open"] == True).collect()

This correctly compares the value of open against the boolean primitive True, rather than the non-boolean string "true".
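
For reference, a minimal sketch of the same comparison written with col(), assuming the yelp_df from the question (count() is only there to check how many rows survive the filter):

    from pyspark.sql.functions import col

    # Keep only the rows whose boolean `open` column is True
    open_df = yelp_df.filter(col("open") == True)  # noqa: E712

    # Check how many open businesses pass the filter
    print(open_df.count())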

Answer 1 (score: 1):

Since it looks like you are using PySpark, per the filter documentation:

filter(condition): condition is a Column of types.BooleanType or a string of SQL expression.

Since open: boolean (nullable = true), the following works and avoids Flake8's E712 warning:

yelp_df.filter(yelp_df["open"]).collect()
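
For completeness, the quoted signature also accepts a SQL expression string, so either form below should behave the same (a sketch assuming the yelp_df from the question):

    # Pass the boolean column itself as the condition
    open_rows = yelp_df.filter(yelp_df["open"]).collect()

    # Or pass an equivalent SQL expression string
    open_rows = yelp_df.filter("open = true").collect()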

Answer 2 (score: -1):

In Spark-Scala, I can think of two approaches.

Approach 1: Use a Spark SQL command to get all the boolean columns, by creating a temporary view and selecting only those columns from the whole DataFrame. However, this requires knowing the boolean column names up front, or selecting columns from the schema by data type.

    // Define the boolean column names to select
    val SqlBoolCols = "boolcolumn1, boolcolumn2, boolcolumn3"

    // Register the DataFrame as a temporary view and select only those columns
    dataframe.createOrReplaceTempView("Booltable")
    val dfwithboolcolumns = spark.sql(s"SELECT ${SqlBoolCols} FROM Booltable")
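
A rough PySpark counterpart of the same temp-view idea, for readers following the question's Python setup (a sketch, not the answerer's code; dataframe and the column names are placeholders):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Placeholder boolean column names; replace with the real ones from your schema
    bool_cols = "boolcolumn1, boolcolumn2, boolcolumn3"

    # `dataframe` stands in for whatever DataFrame you are working with
    dataframe.createOrReplaceTempView("Booltable")
    df_with_bool_columns = spark.sql("SELECT {} FROM Booltable".format(bool_cols))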

Approach 2: If the schema is defined, filter the DataFrame's columns by their data type:

import org.apache.spark.sql.types.StringType

// Pick the column names whose data type matches, then select just those columns
val strcolnames = rawdata.schema.fields.filter(x => x.dataType == StringType).map(strtype => strtype.name)
val strdataframe = rawdata.select(strcolnames.head, strcolnames.tail: _*)
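
A rough PySpark counterpart of this schema-based selection (a sketch, not the answerer's code; it matches on the "boolean" type string to fit the question, and rawdata stands in for your DataFrame):

    # df.dtypes yields (column_name, type_string) pairs, e.g. ("open", "boolean")
    bool_col_names = [name for name, dtype in rawdata.dtypes if dtype == "boolean"]

    # Select only the boolean columns
    bool_dataframe = rawdata.select(*bool_col_names)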