How to compare the schemas of two DataFrames in SQL?

Asked: 2018-09-04 04:41:24

Tags: scala apache-spark apache-spark-sql

There are many ways to verify the schemas of two DataFrames in Spark, for example here. But I want to verify the schemas of the two DataFrames only in SQL, meaning SparkSQL.

Sample query 1:

SELECT DISTINCT target_person FROM INFORMATION_SCHEMA.COLUMNS WHERE COLUMN_NAME IN ('columnA','ColumnB') AND TABLE_SCHEMA='ad_facebook'

Sample query 2:

SELECT count(*) FROM information_schema.columns WHERE table_name = 'ad_facebook'

I know there is no concept of a database (schema) in Spark, but I read that the metastore contains schema information and the like.

Can we write SQL queries like the ones above in SparkSQL?

EDIT:

I was just checking why SHOW CREATE TABLE is not working in Spark SQL. Is it because df is a temporary table?

scala> val df1=spark.sql("SHOW SCHEMAS")
df1: org.apache.spark.sql.DataFrame = [databaseName: string]

scala> df1.show
+------------+
|databaseName|
+------------+
|     default|
+------------+


scala> val df2=spark.sql("SHOW TABLES in default")
df2: org.apache.spark.sql.DataFrame = [database: string, tableName: string ... 1 more field]

scala> df2.show
+--------+---------+-----------+
|database|tableName|isTemporary|
+--------+---------+-----------+
|        |       df|       true|
+--------+---------+-----------+


scala> val df3=spark.sql("SHOW CREATE TABLE default.df")
org.apache.spark.sql.catalyst.analysis.NoSuchTableException: Table or view 'df' not found in database 'default';
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.requireTableExists(SessionCatalog.scala:180)
  at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getTableMetadata(SessionCatalog.scala:398)
  at org.apache.spark.sql.execution.command.ShowCreateTableCommand.run(tables.scala:834)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
  at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:67)
  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:182)
  at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:67)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
  ... 48 elided
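
A quick sanity check I can run, assuming my guess is right and df is a session-scoped temp view (the isTemporary flag above suggests so): a temp view lives in the session catalog rather than in any database, so qualifying it as default.df looks for a metastore table that does not exist, while the unqualified name resolves fine:

scala> spark.sql("DESCRIBE df").show()    // resolves the temp view without a database qualifier
scala> spark.catalog.listTables().show()  // temp views appear with an empty database and isTemporary=true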

3 Answers:

Answer 0: (score: 1)

You can query the schema with DESCRIBE [EXTENDED] [db_name.]table_name

See https://docs.databricks.com/spark/latest/spark-sql/index.html#spark-sql-language-manual
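
Building on that, one way to compare two tables' schemas using only what Spark SQL exposes is to diff their DESCRIBE outputs. A minimal sketch, assuming two catalog tables named t1 and t2 (placeholder names):

// DESCRIBE returns a DataFrame of (col_name, data_type, comment) rows,
// so a symmetric difference of the two results surfaces any mismatch.
val d1 = spark.sql("DESCRIBE t1")
val d2 = spark.sql("DESCRIBE t2")
val diff = d1.except(d2).union(d2.except(d1))
if (diff.count() == 0) println("schemas match") else diff.show()

Note that column comments are part of the DESCRIBE output, so two tables with identical columns but different comments would be reported as different.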

Answer 1: (score: 0)

Try this code to extract each schema and compare them. It compares column names, column data types, and whether each column is nullable.

val x = df1.schema.sortBy(x => x.name) // get DataFrame 1's schema and sort it by column name
val y = df2.schema.sortBy(x => x.name) // get DataFrame 2's schema and sort it by column name

val out = x.zip(y).filter(x => x._1 != x._2) // pair up the sorted StructFields of df1 and df2, keeping any pair that differs in name, data type, or nullability

if (out.isEmpty && x.size == y.size) { // zip truncates to the shorter schema, so also require equal column counts
    println("matching")
}
else println("not matching")
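
A quick way to exercise this in the shell, assuming a SparkSession named spark with implicits imported (the tiny DataFrames below are made up for illustration):

import spark.implicits._
val df1 = Seq((1, "a")).toDF("id", "name")  // columns: id (int), name (string)
val df2 = Seq(("b", 2)).toDF("name", "id")  // same columns, declared in a different order

Because both schemas are sorted by column name before zipping, these two are reported as matching even though their column order differs.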

Answer 2: (score: 0)

There are two ways to get the schema in SparkSQL.

Method 1:

spark.sql("desc db_name table_name").show()

This shows only the first 20 rows, exactly like df.show() on a DataFrame.

(That is, for any table with more than 20 columns, the schema is displayed only for the first 20 columns, since each column of the table becomes one row of the desc output.)

For example:

+--------------------+---------+-------+
|            col_name|data_type|comment|
+--------------------+---------+-------+
|                col1|   bigint|   null|
|                col2|   string|   null|
|                col3|   string|   null|
+--------------------+---------+-------+

Method 2:

spark.sql("desc db_name table_name").collect().foreach(println)

This displays the complete schema, covering all columns.

For example:

[col1,bigint,null]
[col2,string,null]
[col3,string,null]
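
A middle ground that keeps the tabular rendering: show() also takes a row count and a truncate flag, so something like the sketch below (the row limit is chosen arbitrarily) prints every column of a reasonably wide table without falling back to raw Row output:

spark.sql("desc db_name.table_name").show(1000, false) // up to 1000 rows, values not truncated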