AttributeError: 'StructField' object has no attribute '_get_object_id' when loading parquet files with a custom schema

Time: 2017-09-13 10:53:50

Tags: apache-spark pyspark apache-spark-sql pyspark-sql

I am trying to read a set of parquet files with a custom schema using PySpark, but it raises AttributeError: 'StructField' object has no attribute '_get_object_id'.

Here is my sample code:

import pyspark
from pyspark.sql import SQLContext, SparkSession
from pyspark.sql import Row
import pyspark.sql.functions as func
from pyspark.sql.types import *

sc = pyspark.SparkContext()
spark = SparkSession(sc)
sqlContext = SQLContext(sc)

l = [('1',31200,'Execute',140,'ABC'),('2',31201,'Execute',140,'ABC'),('3',31202,'Execute',142,'ABC'),
     ('4',31103,'Execute',149,'DEF'),('5',31204,'Execute',145,'DEF'),('6',31205,'Execute',149,'DEF')]
rdd = sc.parallelize(l)
trades = rdd.map(lambda x: Row(global_order_id=int(x[0]), nanos=int(x[1]),message_type=x[2], price=int(x[3]),symbol=x[4]))
trades_df = sqlContext.createDataFrame(trades)
trades_df.printSchema()
trades_df.write.parquet('trades_parquet')

trades_df_Parquet = sqlContext.read.parquet('trades_parquet')
trades_df_Parquet.printSchema()

# The schema is encoded in a string.
schemaString = "global_order_id message_type nanos price symbol"

fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split()]
schema = StructType(fields)

trades_df_Parquet_n = spark.read.format('parquet').load('trades_parquet',schema,inferSchema =False)
#trades_df_Parquet_n = spark.read.parquet('trades_parquet',schema)
trades_df_Parquet_n.printSchema()

Can anyone help me with suggestions?

1 answer:

Answer 0 (score: 1)

Pass the schema by its option name, so that load knows it is not the format argument. The reader's signature is:

Signature: trades_df_Parquet_n.load(path=None, format=None, schema=None, **options)

That gives you:

trades_df_Parquet_n = spark.read.format('parquet').load('trades_parquet',schema=schema, inferSchema=False)