FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

Date: 2017-09-27 04:03:48

Tags: hadoop hive

I am new to Hadoop and am trying to run some join queries on Hive. I created two tables (table1 and table2). When I execute a join query, I receive the following error message:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

However, when I run the same query in the Hive UI, it executes and returns the correct result. Can someone help explain what might be going wrong?

7 Answers:

Answer 0 (score: 4)

I added the following setting before running the query, and it worked:

SET hive.auto.convert.join=false;
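
If you need this setting in every session, one option (a sketch, not part of the original answer) is to add it to the Hive CLI's ~/.hiverc file so it is applied automatically at startup:

echo 'SET hive.auto.convert.join=false;' >> ~/.hiverc

Disabling hive.auto.convert.join stops Hive from converting joins into map joins, so the failing local hash-table build (MapredLocalTask) is skipped in favor of a regular shuffle join.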

Answer 1 (score: 4)

Just put this command before your query:

SET hive.auto.convert.join=false;

It definitely works!

Answer 2 (score: 1)

I also ran into this issue on the Cloudera QuickStart VM 5.12, and it was resolved by executing the following statement at the hive prompt:

SET hive.auto.convert.join=false;

I hope the following information is more useful:

Step 1: Import all tables from the retail_db database in MySQL

sqoop import-all-tables \
--connect jdbc:mysql://quickstart.cloudera:3306/retail_db \
--username retail_dba \
--password cloudera \
--num-mappers 1 \
--warehouse-dir /user/cloudera/sqoop/import-all-tables-text \
--as-textfile
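
After the import finishes, a quick optional check (not part of the original steps) is to list the warehouse directory and confirm that one sub-directory per table was created:

hadoop fs -ls /user/cloudera/sqoop/import-all-tables-text

You should see directories for categories, customers, departments, order_items, orders and products.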

Step 2: Create a database named retail_db in Hive, along with the required tables

create database retail_db;
use retail_db;

create external table categories(
  category_id int,
  category_department_id int,
  category_name string)
row format delimited 
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/categories';

create external table customers(
  customer_id int,
  customer_fname string,
  customer_lname string,
  customer_email string,
  customer_password string,
  customer_street string,
  customer_city string,
  customer_state string,
  customer_zipcode string)
row format delimited 
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/customers';

create external table departments(
  department_id int,
  department_name string)
row format delimited
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/departments';

create external table order_items(
  order_item_id int,
  order_item_order_id int,
  order_item_product_id int,
  order_item_quantity int,
  order_item_subtotal float,
  order_item_product_price float)
row format delimited
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/order_items';

create external table orders(
  order_id int,
  order_date string,
  order_customer_id int,
  order_status string)
row format delimited
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/orders';

create external table products(
  product_id int,
  product_category_id int,
  product_name string,
  product_description string,
  product_price float,
  product_image string)
row format delimited
  fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/products';
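
Before running the join, it can be worth a quick sanity check (optional, not part of the original answer) that the external tables actually see the imported files:

show tables;
select count(*) from orders;
select * from order_items limit 5;

A count of 0 usually means the LOCATION paths do not match the Sqoop warehouse directory.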

Step 3: Execute the join query

SET hive.cli.print.current.db=true;

select o.order_date, sum(oi.order_item_subtotal)
from orders o join order_items oi on (o.order_id = oi.order_item_order_id)
group by o.order_date 
limit 10;

The above query produced the following error:

Query ID = cloudera_20171029182323_6eedd682-256b-466c-b2e5-58ea100715fb Total jobs = 1 FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

Step 4: The above issue was resolved by executing the following statement at the hive prompt:

SET hive.auto.convert.join=false;
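
Running SET with only the property name prints its current value, which is a quick optional way to confirm the change took effect:

SET hive.auto.convert.join;
-- should print: hive.auto.convert.join=false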

Step 5: Query result

select o.order_date, sum(oi.order_item_subtotal)
from orders o join order_items oi on (o.order_id = oi.order_item_order_id)
group by o.order_date 
limit 10;

Query ID = cloudera_20171029182525_cfc70553-89d2-4c61-8a14-4bbeecadb3cf
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1509278183296_0005, Tracking URL = http://quickstart.cloudera:8088/proxy/application_1509278183296_0005/
Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1509278183296_0005
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
2017-10-29 18:25:19,861 Stage-1 map = 0%,  reduce = 0%
2017-10-29 18:25:26,181 Stage-1 map = 50%,  reduce = 0%, Cumulative CPU 2.72 sec
2017-10-29 18:25:27,240 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 5.42 sec
2017-10-29 18:25:32,479 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 8.01 sec
MapReduce Total cumulative CPU time: 8 seconds 10 msec
Ended Job = job_1509278183296_0005
Launching Job 2 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1509278183296_0006, Tracking URL = http://quickstart.cloudera:8088/proxy/application_1509278183296_0006/
Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_1509278183296_0006
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2017-10-29 18:25:38,676 Stage-2 map = 0%,  reduce = 0%
2017-10-29 18:25:43,925 Stage-2 map = 100%,  reduce = 0%, Cumulative CPU 0.85 sec
2017-10-29 18:25:49,142 Stage-2 map = 100%,  reduce = 100%, Cumulative CPU 2.13 sec
MapReduce Total cumulative CPU time: 2 seconds 130 msec
Ended Job = job_1509278183296_0006
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 2  Reduce: 1   Cumulative CPU: 8.01 sec   HDFS Read: 8422614 HDFS Write: 17364 SUCCESS
Stage-Stage-2: Map: 1  Reduce: 1   Cumulative CPU: 2.13 sec   HDFS Read: 22571 HDFS Write: 407 SUCCESS
Total MapReduce CPU Time Spent: 10 seconds 140 msec
OK
2013-07-25 00:00:00.0   68153.83132743835
2013-07-26 00:00:00.0   136520.17266082764
2013-07-27 00:00:00.0   101074.34193611145
2013-07-28 00:00:00.0   87123.08192253113
2013-07-29 00:00:00.0   137287.09244918823
2013-07-30 00:00:00.0   102745.62186431885
2013-07-31 00:00:00.0   131878.06256484985
2013-08-01 00:00:00.0   129001.62241744995
2013-08-02 00:00:00.0   109347.00200462341
2013-08-03 00:00:00.0   95266.89186286926
Time taken: 35.721 seconds, Fetched: 10 row(s)

Answer 3 (score: 1)

Try setting the AuthMech parameter on the connection.

I set it to 2 and defined the username.

That solved my problem with a CTAS statement.

Regards
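
For context, AuthMech is a connection property of the Cloudera Hive JDBC/ODBC drivers that selects the authentication mechanism, with 2 meaning user-name authentication. A hypothetical JDBC URL with these settings might look like the following (host, port and user are placeholders, and the exact property names should be verified against your driver version):

jdbc:hive2://quickstart.cloudera:10000;AuthMech=2;UID=cloudera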

Answer 4 (score: 0)

In my case, adding the parameter to the configuration resolved the issue. The problem was caused by a write-access violation, so make sure the user executing the query has write access.
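
The answer does not say which parameter was added, but if the root cause is a write-access violation, one common place to check (a sketch assuming the default scratch directory is in use) is the Hive scratch directory on HDFS, which defaults to /tmp/hive:

hadoop fs -ls -d /tmp/hive
hadoop fs -chmod 777 /tmp/hive    # only if your security policy allows it

Adjust the path if hive.exec.scratchdir is overridden in your installation.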

Answer 5 (score: 0)

In my case, the problem was that no queue was set, so I ran:

SET mapred.job.queue.name=<your queue name>;

This solved my problem. Hope it helps someone.
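
On YARN / MapReduce 2 clusters the same idea is usually expressed with the newer property name (a hedged alternative, not from the original answer; mapred.job.queue.name is the older MR1-style name):

SET mapreduce.job.queuename=<your queue name>;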

Answer 6 (score: 0)

I ran into the same problem while using the Hue interface. The fix was to create a /user/admin directory in HDFS and change its ownership with the following commands:

[root@ip-10-0-0-163 ~]# su - hdfs

[hdfs@ip-10-0-0-163 ~]$ hadoop fs -mkdir /user/admin

[hdfs@ip-10-0-0-163 ~]$ hadoop fs -chown admin /user/admin

[hdfs@ip-10-0-0-163 ~]$ exit
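
To confirm the new directory is owned by the admin user (an optional check):

hadoop fs -ls /user

The /user/admin entry should list admin as the owner.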