Explode an array into columns in Spark

Posted: 2019-06-14 09:24:23

Tags: apache-spark pyspark apache-spark-sql explode

Hi, I have a JSON like the one below:

{meta:{"clusters":[{"1":"Aged 35 to 49"},{"2":"Male"},{"5":"Aged 15 to 17"}]}}

and I want to get the following dataframe:

+---------------+----+---------------+
|              1|   2|              5|
+---------------+----+---------------+
|  Aged 35 to 49|Male|  Aged 15 to 17|
+---------------+----+---------------+

How can I do this in pyspark?
Thanks

1 Answer:

Answer 0 (score: 1)

You can use the get_json_object() function to parse the JSON column:

Example:

from pyspark.sql import Row

df = spark.createDataFrame([Row(jsn='{"meta":{"clusters":[{"1":"Aged 35 to 49"},{"2":"Male"},{"5":"Aged 15 to 17"}]}}')])

df.selectExpr("get_json_object(jsn,'$.meta.clusters[0].1') as `1`",
"get_json_object(jsn,'$.meta.clusters[*].2') as `2`",
"get_json_object(jsn,'$.meta.clusters[*].5') as `5`").show(10,False)

Output:

+-------------+------+---------------+
|1            |2     |5              |
+-------------+------+---------------+
|Aged 35 to 49|"Male"|"Aged 15 to 17"|
+-------------+------+---------------+
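
Note that the wildcard paths ('$.meta.clusters[*].2' and '$.meta.clusters[*].5') keep the surrounding JSON quotes in the result, which is why "Male" and "Aged 15 to 17" appear quoted above while "Aged 35 to 49" does not. A minimal variant, assuming the positions of the keys inside the clusters array are fixed as in the sample (key "1" at index 0, "2" at index 1, "5" at index 2), indexes each element directly and returns the bare strings:

# Assumption: each key sits at a known position in the clusters array.
df.selectExpr(
    "get_json_object(jsn, '$.meta.clusters[0].1') as `1`",
    "get_json_object(jsn, '$.meta.clusters[1].2') as `2`",
    "get_json_object(jsn, '$.meta.clusters[2].5') as `5`"
).show(10, False)

If the key names or their positions are not known in advance, a more general route would be to parse the column with from_json into an array of maps, explode it, and pivot on the keys, at the cost of defining a schema up front.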