Loading XML data into a Hive table: org.apache.hadoop.hive.ql.metadata.HiveException

Date: 2014-07-07 09:50:09

Tags: hive, xmldataset

I am trying to load XML data into Hive, but I am getting this error:

java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"xmldata":""}

The XML file I am using is:

<?xml version="1.0" encoding="UTF-8"?>
<catalog>
<book>
  <id>11</id>
  <genre>Computer</genre>
  <price>44</price>
</book>
<book>
  <id>44</id>
  <genre>Fantasy</genre>
  <price>5</price>
</book>
</catalog>

The Hive queries I am using are:

1) CREATE TABLE xmltable(xmldata STRING) STORED AS TEXTFILE;
LOAD DATA LOCAL INPATH '/home/user/xmlfile.xml' OVERWRITE INTO TABLE xmltable;

2) CREATE VIEW xmlview (id,genre,price)
AS SELECT
xpath(xmldata, '/catalog[1]/book[1]/id'),
xpath(xmldata, '/catalog[1]/book[1]/genre'),
xpath(xmldata, '/catalog[1]/book[1]/price')
FROM xmltable;

3) CREATE TABLE xmlfinal AS SELECT * FROM xmlview;

4) SELECT * FROM xmlfinal WHERE id = '11';

Everything works fine up to the second query, but when I execute the third query it gives me an error.

The error is as follows:

java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"xmldata":"<?xml version=\"1.0\" encoding=\"UTF-8\"?>"}
    at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:159)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error    while processing row {"xmldata":"<?xml version=\"1.0\" encoding=\"UTF-8\"?>"}
    at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:675)
    at org.apache.hadoop.hive.ql.exec

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask

So what is going wrong? I am using a valid XML file as well.

Thanks, Shree

6 Answers:

Answer 0 (score: 4):

Cause of the error:

1) Case 1 (your case): the XML content is fed to Hive line by line.

Input XML:

<?xml version="1.0" encoding="UTF-8"?>
<catalog>
<book>
  <id>11</id>
  <genre>Computer</genre>
  <price>44</price>
</book>
<book>
  <id>44</id>
  <genre>Fantasy</genre>
  <price>5</price>
</book>
</catalog>  

Check in Hive:

select count(*) from xmltable;  // returns 13 rows - each line lands in its own row in column xmldata

Why it fails:

The XML is read as 13 separate, non-uniform pieces, so no single row holds a complete document - as XML it is invalid.
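A minimal reproduction of the problem in plain Python, assuming the 13-line file shown above - the whole document parses, but Hive hands xpath() one line at a time, and most lines are not well-formed documents on their own:

```python
import xml.etree.ElementTree as ET

# The same 13-line file shown above.
xml_text = """<?xml version="1.0" encoding="UTF-8"?>
<catalog>
<book>
  <id>11</id>
  <genre>Computer</genre>
  <price>44</price>
</book>
<book>
  <id>44</id>
  <genre>Fantasy</genre>
  <price>5</price>
</book>
</catalog>"""

lines = xml_text.splitlines()
print(len(lines))  # 13 -- one Hive row per line

# The whole document parses fine ...
ET.fromstring(xml_text)

# ... but lines like <catalog>, <book>, and </book> are not
# complete documents, so parsing them individually fails.
bad = 0
for line in lines:
    try:
        ET.fromstring(line)
    except ET.ParseError:
        bad += 1
print(bad)
```

Only the leaf-element lines (`<id>11</id>` etc.) survive on their own; every structural line fails, which is exactly why row-at-a-time xpath() cannot work on this layout.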

2) Case 2: the XML content should be fed to Hive as a single string - then the xpath UDFs work. Reference syntax: all the functions follow the form xpath_*(xml_string, xpath_expression_string). (source)

input.xml:

<?xml version="1.0" encoding="UTF-8"?><catalog><book><id>11</id><genre>Computer</genre><price>44</price></book><book><id>44</id><genre>Fantasy</genre><price>5</price></book></catalog>

Check in Hive:

select count(*) from xmltable; // returns 1 row - XML is properly read as complete XML.

Meaning:

xmldata   = <?xml version="1.0" encoding="UTF-8"?><catalog><book> ...... </catalog>

Then apply your xpath UDF like this:

select xpath(xmldata, 'xpath_expression_string' ) from xmltable
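A rough Python stand-in for the single-string case, using the one-line document above; `xpath_strings` is a hypothetical helper that approximates what Hive's xpath() UDF returns (an array of strings):

```python
import xml.etree.ElementTree as ET

# The whole document on one line, as case 2 requires.
xmldata = ('<?xml version="1.0" encoding="UTF-8"?>'
           '<catalog><book><id>11</id><genre>Computer</genre>'
           '<price>44</price></book><book><id>44</id>'
           '<genre>Fantasy</genre><price>5</price></book></catalog>')

root = ET.fromstring(xmldata)  # parses: exactly one well-formed document

# Approximation of Hive's xpath() UDF, which returns an array of strings.
# (ElementTree has no text() step, so we read .text from each matched node.)
def xpath_strings(elem, path):
    return [node.text for node in elem.findall(path)]

print(xpath_strings(root, './book/id'))     # ['11', '44']
print(xpath_strings(root, './book/genre'))  # ['Computer', 'Fantasy']
```

Note how each expression matches across *all* books at once - the UDF returns arrays, which is why the answers below need array_index/lateral view to turn them back into rows.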

Answer 1 (score: 4):

Find the jar here -> Brickhouse

Sample example here -> Example

A similar example on Stack Overflow - here

Solution:

--Load xml data to table
DROP table xmltable;
Create TABLE xmltable(xmldata string) STORED AS TEXTFILE;
LOAD DATA lOCAL INPATH '/home/vijay/data-input.xml' OVERWRITE INTO TABLE xmltable;

-- check contents
SELECT * from xmltable;

-- create view
Drop view  MyxmlView;
CREATE VIEW MyxmlView(id, genre, price) AS
SELECT
 xpath(xmldata, 'catalog/book/id/text()'),
 xpath(xmldata, 'catalog/book/genre/text()'),
 xpath(xmldata, 'catalog/book/price/text()')
FROM xmltable;

-- check view
SELECT id, genre,price FROM MyxmlView;


ADD jar /home/vijay/brickhouse-0.7.0-SNAPSHOT.jar;  --Add brickhouse jar 

CREATE TEMPORARY FUNCTION array_index AS 'brickhouse.udf.collect.ArrayIndexUDF';
CREATE TEMPORARY FUNCTION numeric_range AS 'brickhouse.udf.collect.NumericRange';

SELECT 
   array_index( id, n ) as my_id,
   array_index( genre, n ) as my_genre,
   array_index( price, n ) as my_price
from MyxmlView
lateral view numeric_range( size( id )) MyxmlView as n;

Output:

hive > SELECT
     >    array_index( id, n ) as my_id,
     >    array_index( genre, n ) as my_genre,
     >    array_index( price, n ) as my_price
     > from MyxmlView
     > lateral view numeric_range( size( id )) MyxmlView as n;
Automatically selecting local only mode for query
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Execution log at: /tmp/vijay/.log
Job running in-process (local Hadoop)
Hadoop job information for null: number of mappers: 0; number of reducers: 0
2014-07-09 05:36:45,220 null map = 0%,  reduce = 0%
2014-07-09 05:36:48,226 null map = 100%,  reduce = 0%
Ended Job = job_local_0001
Execution completed successfully
Mapred Local Task Succeeded . Convert the Join into MapJoin
OK
my_id      my_genre      my_price
11      Computer        44
44      Fantasy 5

Time taken: 8.541 seconds, Fetched: 2 row(s)
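The effect of `numeric_range(size(id))` plus `array_index` can be sketched in plain Python; the arrays below are copies of what the view produces for the sample data (this is an illustration of the explode logic, not the Brickhouse implementation):

```python
# The view yields, per input row, three parallel arrays:
ids    = ['11', '44']
genres = ['Computer', 'Fantasy']
prices = ['44', '5']

# numeric_range(size(id)) generates n = 0 .. size-1, and array_index(arr, n)
# picks the n-th element -- together they "explode" the parallel arrays
# into one output row per book.
rows = [(ids[n], genres[n], prices[n]) for n in range(len(ids))]
for r in rows:
    print(r)
# ('11', 'Computer', '44')
# ('44', 'Fantasy', '5')
```

This only works cleanly because all three arrays have the same length; if an optional element were missing from one book, the arrays would fall out of alignment.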

As requested by the question owner, additional information was provided as screenshots (not reproduced here).

Answer 2 (score: 0):

Try adding the full file path when loading the file first; that resolved the problem in my case.

Answer 3 (score: 0):

Oracle XML Extensions for Hive can be used to create Hive tables over XML. https://docs.oracle.com/cd/E54130_01/doc.26/e54142/oxh_hive.htm#BDCUG691

Answer 4 (score: 0):

To get the result you want, just change the source data so that each record sits on its own line:

 <catalog><book><id>11</id><genre>Computer</genre><price>44</price></book></catalog>
<catalog><book><id>44</id><genre>Fantasy</genre><price>5</price></book></catalog> 

Now try the following query:

select xpath(xmldata, '/catalog/book/id/text()')as id,
xpath(xmldata, '/catalog/book/genre/text()')as genre,
xpath(xmldata, '/catalog/book/price/text()')as price FROM xmltable;

Now you will get output like this:

["11"] ["Computer"] ["44"]

["44"] ["Fantasy"] ["5"]

If you apply the xpath_string and xpath_int UDFs instead, you will get output like:

11 Computer 44

44 Fantasy 5
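The per-record-per-line layout works because every Hive row now holds a complete, well-formed document; a rough Python sketch of the same extraction, assuming the two-line source data above:

```python
import xml.etree.ElementTree as ET

# One complete <catalog> record per line, as in the reformatted source data.
lines = [
    '<catalog><book><id>11</id><genre>Computer</genre><price>44</price></book></catalog>',
    '<catalog><book><id>44</id><genre>Fantasy</genre><price>5</price></book></catalog>',
]

records = []
for line in lines:           # each Hive row parses on its own now
    root = ET.fromstring(line)
    book = root.find('book')
    records.append((book.find('id').text,
                    book.find('genre').text,
                    book.find('price').text))
print(records)
# [('11', 'Computer', '44'), ('44', 'Fantasy', '5')]
```

With this layout no lateral-view explode step is needed, since each row already contributes exactly one book.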

Thanks

Answer 5 (score: 0):

Also make sure the XML file contains no whitespace after the final closing tag. In my case the source file had some, and whenever I loaded the file into Hive, the resulting table contained NULLs. So whenever I applied the xpath functions, the results contained entries like [] [] [] [] [] []

Although the xpath_string function worked, the xpath_double and xpath_int functions never did; they kept throwing this exception:

Diagnostic Messages for this Task:
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"line":""}
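The trailing-whitespace problem above can be checked (and fixed) before loading; a minimal Python sketch with a hypothetical one-record file - blank lines after the final closing tag become empty-string rows in Hive (the {"line":""} in the error), which xpath_int and xpath_double then reject:

```python
# Hypothetical file content: one record followed by trailing blank lines.
raw = '<catalog><book><id>11</id></book></catalog>\n\n\n'

print(raw.count('\n'))  # 3 -- Hive would see two blank rows after the record

# Strip trailing whitespace, keeping exactly one final newline.
cleaned = raw.rstrip() + '\n'
print(cleaned.count('\n'))  # 1 -- only the record row remains
```

Writing `cleaned` back out before LOAD DATA avoids the NULL rows and the empty-string xpath inputs entirely.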