Hadoop streaming failing for a simple map & reduce job on Hadoop 1.0.0 (using NLTK code)

Posted: 2012-05-03 15:17:47

Tags: hadoop

The command I executed, and its output:
[hduser@Janardhan hadoop]$ bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.0.jar -file /home/hduser/mapper.py -mapper mapper.py -file /home/hduser/reducer.py -reducer reducer.py -input /user/hduser/input.txt -output /home/hduser/outpututttt


 Warning: $HADOOP_HOME is deprecated.

    packageJobJar: [/home/hduser/mapper.py, /home/hduser/reducer.py, /app/hadoop/tmp/hadoop-unjar2185859252991058106/] [] /tmp/streamjob2973484922110272968.jar tmpDir=null
    12/05/03 20:36:02 INFO mapred.FileInputFormat: Total input paths to process : 1
    12/05/03 20:36:03 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local]
    12/05/03 20:36:03 INFO streaming.StreamJob: Running job: job_201205032014_0003
    12/05/03 20:36:03 INFO streaming.StreamJob: To kill this job, run:
    12/05/03 20:36:03 INFO streaming.StreamJob: /usr/local/hadoop/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201205032014_0003
    12/05/03 20:36:03 INFO streaming.StreamJob: Tracking URL: http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_201205032014_0003
    12/05/03 20:36:04 INFO streaming.StreamJob:  map 0%  reduce 0%
    12/05/03 20:36:21 INFO streaming.StreamJob:  map 100%  reduce 0%
    12/05/03 20:36:24 INFO streaming.StreamJob:  map 0%  reduce 0%
    12/05/03 20:37:00 INFO streaming.StreamJob:  map 100%  reduce 100%
    12/05/03 20:37:00 INFO streaming.StreamJob: To kill this job, run:
    12/05/03 20:37:00 INFO streaming.StreamJob: /usr/local/hadoop/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:54311 -kill job_201205032014_0003
    12/05/03 20:37:00 INFO streaming.StreamJob: Tracking URL: http://localhost.localdomain:50030/jobdetails.jsp?jobid=job_201205032014_0003
    12/05/03 20:37:00 ERROR streaming.StreamJob: Job not successful. Error: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201205032014_0003_m_000000
    12/05/03 20:37:00 INFO streaming.StreamJob: killJob...
    Streaming Job Failed! 

This is the error I got from the jobtracker:

java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:311)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:545)
    at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:132)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
    at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
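"subprocess failed with code 1" means the Python script itself exited with a nonzero status; anything it writes to stderr ends up in the failed task's logs. A hypothetical defensive pattern (not from the question) that makes the real failure, such as an ImportError if NLTK is missing on a task node, visible in those logs:

```python
#!/usr/bin/python
# Hypothetical sketch: wrap the mapper/reducer body so any exception is
# printed to stderr (which lands in the failed task's log) before the
# script exits nonzero -- which streaming then reports as exit code 1.
import sys
import traceback

def run(body):
    try:
        body()
    except Exception:
        traceback.print_exc(file=sys.stderr)  # appears in the task's stderr log
        sys.exit(1)  # streaming reports: subprocess failed with code 1
```

The actual mapper logic would be passed in as `body`; this only changes where the error message goes, not the failure itself.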

It works locally with this command:

 [hduser@Janardhan ~]$ cat input.txt | ./mapper.py | sort | ./reducer.py 
('be', 'VB')    1
('ceremony', 'NN')  1
('first', 'JJ')     2
('for', 'IN')   2
('hi', 'NN')    1
('place', 'NN')     1
('the', 'DT')   2
('welcome', 'VBD')  1
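The question does not include mapper.py or reducer.py themselves. A minimal sketch consistent with the local output above might look like the following: the mapper emits repr((word, tag)) with a count of 1, and the reducer sums counts over the sorted stream. The tab separator and the tagger are assumptions; the original presumably passes the tokens to nltk.pos_tag, so the tagger is a parameter here to keep the sketch runnable without NLTK.

```python
#!/usr/bin/python
# Hypothetical sketch (the original mapper.py/reducer.py are not shown).
# Mapper: for each token, emit "repr((word, tag))<TAB>1". The tagger is a
# parameter; the original presumably passes nltk.pos_tag here.
def map_stream(lines, tag):
    for line in lines:
        for pair in tag(line.split()):
            yield "%s\t1" % (repr(pair),)

# Reducer: input is the sorted mapper output; sum the counts per key.
def reduce_stream(lines):
    current_key, current_count = None, 0
    for line in lines:
        key, sep, count = line.rstrip("\n").rpartition("\t")
        if not sep:
            continue  # skip malformed lines rather than crash the task
        count = int(count)
        if key == current_key:
            current_count += count
        else:
            if current_key is not None:
                yield "%s\t%d" % (current_key, current_count)
            current_key, current_count = key, count
    if current_key is not None:
        yield "%s\t%d" % (current_key, current_count)
```

In the real scripts each function would read sys.stdin and print its output, matching the `cat input.txt | ./mapper.py | sort | ./reducer.py` pipeline above.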

1 Answer:

Answer 0 (score: 0):

You need to debug by checking the stderr logs on the data nodes where the map and reduce tasks failed. These can often shed a lot of light when a job that runs locally fails on the cluster.

You should be able to access the logs through your Hadoop cluster's jobtracker web interface, usually at http://master.node.ip.address:50030/jobtracker.jsp. Your job should appear under "Failed Jobs". Click the job ID, then on the map or reduce tasks in the "Failed" column, and you should see the logs.

Note that if mapper.py and reducer.py are not executable (a #!/usr/bin/python first line, with the file permissions set correctly), you may need to change the arguments to -mapper 'python mapper.py' and so on.
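To illustrate that check, here is a hypothetical demonstration on a throwaway file (substitute the real /home/hduser/mapper.py and reducer.py in practice); streaming executes the script directly, so it needs both the execute bit and a valid shebang:

```shell
# Create a stand-in mapper script with a proper first line:
printf '#!/usr/bin/python\nimport sys\nfor line in sys.stdin: pass\n' > mapper_demo.py
chmod +x mapper_demo.py        # without +x, the task can fail with exit code 1
head -n 1 mapper_demo.py       # must print exactly: #!/usr/bin/python (no spaces)
```

If fixing the permissions on the cluster is not practical, passing -mapper 'python mapper.py' makes Hadoop invoke the interpreter explicitly, so the execute bit and shebang no longer matter.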