Pyspark: Parsing a column of JSON strings

Date: 2016-12-12 19:10:01

Tags: python json apache-spark pyspark

I have a pyspark dataframe consisting of a single column, called json, where each row is a unicode string of JSON. I'd like to parse each row and return a new dataframe where each row is the parsed JSON.

# Sample Data Frame
jstr1 = u'{"header":{"id":12345,"foo":"bar"},"body":{"id":111000,"name":"foobar","sub_json":{"id":54321,"sub_sub_json":{"col1":20,"col2":"somethong"}}}}'
jstr2 = u'{"header":{"id":12346,"foo":"baz"},"body":{"id":111002,"name":"barfoo","sub_json":{"id":23456,"sub_sub_json":{"col1":30,"col2":"something else"}}}}'
jstr3 = u'{"header":{"id":43256,"foo":"foobaz"},"body":{"id":20192,"name":"bazbar","sub_json":{"id":39283,"sub_sub_json":{"col1":50,"col2":"another thing"}}}}'
df = sql_context.createDataFrame([Row(json=jstr1),Row(json=jstr2),Row(json=jstr3)])

I've tried mapping over each row with json.loads:
(df
  .select('json')
  .rdd
  .map(lambda x: json.loads(x))
  .toDF()
).show()

But this returns TypeError: expected string or buffer.

I suspect that part of the problem is that the schema information is lost when converting from a dataframe to an RDD, so I also tried entering the schema manually:

schema = StructType([StructField('json', StringType(), True)])
rdd = (df
  .select('json')
  .rdd
  .map(lambda x: json.loads(x))
)
new_df = sql_context.createDataFrame(rdd, schema)
new_df.show()

But I get the same TypeError.

Looking at this answer, it seemed like flattening the rows with flatMap might help, but I'm not having success with that either:

schema = StructType([StructField('json', StringType(), True)])
rdd = (df
  .select('json')
  .rdd
  .flatMap(lambda x: x)
  .flatMap(lambda x: json.loads(x))
  .map(lambda x: x.get('body'))
)
new_df = sql_context.createDataFrame(rdd, schema)
new_df.show()

This time I get the error AttributeError: 'unicode' object has no attribute 'get'.
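
For reference, both errors trace back to what the RDD actually contains: the elements of df.rdd are Row objects rather than strings, so json.loads receives a Row in the first two attempts, and in the flatMap version, flat-mapping over the dict that json.loads returns yields its keys, which are unicode strings without a .get method. A minimal sketch of pulling the string field out of each Row before parsing, assuming the sample frame above:

import json

parsed = df.rdd.map(lambda row: json.loads(row.json))  # row.json is the unicode string
parsed.map(lambda d: d['body']['name']).collect()
# ['foobar', 'barfoo', 'bazbar']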

5 Answers:

Answer 0 (score: 22)

For Spark 2.1+, you can use from_json, which keeps the other non-JSON columns of the dataframe intact, as follows:

from pyspark.sql.functions import from_json, col
json_schema = spark.read.json(df.rdd.map(lambda row: row.json)).schema
df.withColumn('json', from_json(col('json'), json_schema))

This lets Spark derive the schema of the JSON string column. The df.json column is then no longer a StringType but a correctly decoded JSON structure, i.e. a nested StructType, and all the other columns of df are preserved as they were.

You can then access the JSON content as follows:

df.select(col('json.header').alias('header'))
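
If the JSON layout is known up front, an alternative is to write the schema out by hand instead of inferring it with the extra spark.read.json pass shown above. A minimal sketch against the sample frame from the question (the field types are assumed from the sample values):

from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, LongType

json_schema = StructType([
    StructField('header', StructType([
        StructField('id', LongType()),
        StructField('foo', StringType()),
    ])),
    StructField('body', StructType([
        StructField('id', LongType()),
        StructField('name', StringType()),
        StructField('sub_json', StructType([
            StructField('id', LongType()),
            StructField('sub_sub_json', StructType([
                StructField('col1', LongType()),
                StructField('col2', StringType()),
            ])),
        ])),
    ])),
])

parsed = df.withColumn('json', from_json(col('json'), json_schema))
parsed.select('json.header.id', 'json.body.name').show()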

Answer 1 (score: 21)

Converting a dataframe with JSON strings to a structured dataframe is actually quite simple in Spark if you convert the dataframe to an RDD of strings first (see: http://spark.apache.org/docs/latest/sql-programming-guide.html#json-datasets).

For example:

>>> new_df = sql_context.read.json(df.rdd.map(lambda r: r.json))
>>> new_df.printSchema()
root
 |-- body: struct (nullable = true)
 |    |-- id: long (nullable = true)
 |    |-- name: string (nullable = true)
 |    |-- sub_json: struct (nullable = true)
 |    |    |-- id: long (nullable = true)
 |    |    |-- sub_sub_json: struct (nullable = true)
 |    |    |    |-- col1: long (nullable = true)
 |    |    |    |-- col2: string (nullable = true)
 |-- header: struct (nullable = true)
 |    |-- foo: string (nullable = true)
 |    |-- id: long (nullable = true)
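
Once the frame is structured like this, the nested fields can be pulled out into ordinary top-level columns with dotted references, as in this small usage sketch:

new_df.select('header.id', 'header.foo', 'body.name',
              'body.sub_json.sub_sub_json.col2').show()
# yields one column each for id, foo, name and col2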

Answer 2 (score: 5)

The existing answers do not work if your JSON is anything but perfectly/traditionally formatted. For example, the RDD-based schema inference expects JSON wrapped in curly braces {} and will produce an incorrect schema (resulting in null values) if, for example, your data looks like:

[
  {
    "a": 1.0,
    "b": 1
  },
  {
    "a": 0.0,
    "b": 2
  }
]

I wrote a function to work around this issue by sanitizing the JSON so that it lives inside another JSON object:

import pyspark.sql.functions as psf

def parseJSONCols(df, *cols, sanitize=True):
    """Auto infer the schema of a json column and parse into a struct.

    rdd-based schema inference works if you have well-formatted JSON,
    like ``{"key": "value", ...}``, but breaks if your 'JSON' is just a
    string (``"data"``) or is an array (``[1, 2, 3]``). In those cases you
    can fix everything by wrapping the data in another JSON object
    (``{"key": [1, 2, 3]}``). The ``sanitize`` option (default True)
    automatically performs the wrapping and unwrapping.

    The schema inference is based on this `SO Post
    <https://stackoverflow.com/a/45880574>`_.

    Parameters
    ----------
    df : pyspark dataframe
        Dataframe containing the JSON cols.
    *cols : string(s)
        Names of the columns containing JSON.
    sanitize : boolean
        Flag indicating whether you'd like to sanitize your records
        by wrapping and unwrapping them in another JSON object layer.

    Returns
    -------
    pyspark dataframe
        A dataframe with the decoded columns.
    """
    res = df
    for i in cols:
        # sanitize if requested.
        if sanitize:
            res = (
                res.withColumn(
                    i,
                    psf.concat(psf.lit('{"data": '), i, psf.lit('}'))
                )
            )
        # infer schema and apply it
        schema = spark.read.json(res.rdd.map(lambda x: x[i])).schema
        res = res.withColumn(i, psf.from_json(psf.col(i), schema))

        # unpack the wrapped object if needed
        if sanitize:
            res = res.withColumn(i, psf.col(i).data)
    return res

Note: psf = pyspark.sql.functions.
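
A hypothetical usage sketch against the sample frame from the question (the function refers to a SparkSession named spark, which is assumed to be in scope):

parsed = parseJSONCols(df, 'json')
parsed.select('json.header.foo', 'json.body.name').show()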

Answer 3 (score: 0)

Here's a concise (Spark SQL) version of @nolan-conaway's parseJSONCols function.

SELECT 
explode(
    from_json(
        concat('{"data":', 
               '[{"a": 1.0,"b": 1},{"a": 0.0,"b": 2}]', 
               '}'), 
        'data array<struct<a:DOUBLE, b:INT>>'
    ).data) as data;
  

PS. I've also added the explode function :P

You'll need to know some HIVE SQL types.
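
For reference, a sketch of running the same statement from PySpark, assuming a SparkSession named spark as in the other answers:

spark.sql("""
    SELECT explode(
        from_json(
            concat('{"data":',
                   '[{"a": 1.0,"b": 1},{"a": 0.0,"b": 2}]',
                   '}'),
            'data array<struct<a:DOUBLE, b:INT>>'
        ).data) AS data
""").show()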

Answer 4 (score: 0)

from pyspark.sql import SparkSession
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()

def map2json(d):
    """Serialize a map (dict) value into a JSON string."""
    import json
    return json.dumps(d)

# register the function so it can be called from Spark SQL
spark.udf.register("map2json", lambda d: map2json(d), StringType())

spark.sql("select map2json(map('a', '1'))").show()