Hadoop Pipes wordcount example: NullPointerException in LocalJobRunner

Asked: 2015-02-17 23:20:10

Tags: java c++ hadoop nullpointerexception pipe

I am trying to run the Hadoop Pipes wordcount example from this tutorial:

I compiled everything successfully. However, when I run it, it fails with a NullPointerException. I have tried many things and read many similar questions, but could not find an actual solution. Note: I am running on a single machine in pseudo-distributed mode.



hadoop pipes -D hadoop.pipes.java.recordreader=true -D hadoop.pipes.java.recordwriters=true -input /input -output /output -program /bin/wordcount
DEPRECATED: Use of this script to execute mapred command is deprecated.
Instead use the mapred command for it.

15/02/18 01:09:02 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/02/18 01:09:02 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/02/18 01:09:02 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
15/02/18 01:09:03 WARN mapreduce.JobSubmitter: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
15/02/18 01:09:04 INFO mapred.FileInputFormat: Total input paths to process : 1
15/02/18 01:09:04 INFO mapreduce.JobSubmitter: number of splits:1
15/02/18 01:09:04 INFO Configuration.deprecation: hadoop.pipes.java.recordreader is deprecated. Instead, use mapreduce.pipes.isjavarecordreader
15/02/18 01:09:04 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local143452495_0001
15/02/18 01:09:06 INFO mapred.LocalDistributedCacheManager: Localized hdfs://localhost:9000/bin/wordcount as file:/tmp/hadoop-abdulrahman/mapred/local/1424214545411/wordcount
15/02/18 01:09:06 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
15/02/18 01:09:06 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/02/18 01:09:06 INFO mapreduce.Job: Running job: job_local143452495_0001
15/02/18 01:09:06 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter
15/02/18 01:09:06 INFO mapred.LocalJobRunner: Waiting for map tasks
15/02/18 01:09:06 INFO mapred.LocalJobRunner: Starting task: attempt_local143452495_0001_m_000000_0
15/02/18 01:09:06 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
15/02/18 01:09:06 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/input/data.txt:0+68
15/02/18 01:09:07 INFO mapred.MapTask: numReduceTasks: 1
15/02/18 01:09:07 INFO mapreduce.Job: Job job_local143452495_0001 running in uber mode : false
15/02/18 01:09:07 INFO mapreduce.Job:  map 0% reduce 0%
15/02/18 01:09:07 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/02/18 01:09:07 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/02/18 01:09:07 INFO mapred.MapTask: soft limit at 83886080
15/02/18 01:09:07 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/02/18 01:09:07 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/02/18 01:09:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
15/02/18 01:09:08 INFO mapred.LocalJobRunner: map task executor complete.
15/02/18 01:09:08 WARN mapred.LocalJobRunner: job_local143452495_0001
java.lang.Exception: java.lang.NullPointerException
	at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.NullPointerException
	at org.apache.hadoop.mapred.pipes.Application.<init>(Application.java:104)
	at org.apache.hadoop.mapred.pipes.PipesMapRunner.run(PipesMapRunner.java:69)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
15/02/18 01:09:08 INFO mapreduce.Job: Job job_local143452495_0001 failed with state FAILED due to: NA
15/02/18 01:09:08 INFO mapreduce.Job: Counters: 0
Exception in thread "main" java.io.IOException: Job failed!
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
	at org.apache.hadoop.mapred.pipes.Submitter.runJob(Submitter.java:264)
	at org.apache.hadoop.mapred.pipes.Submitter.run(Submitter.java:503)
	at org.apache.hadoop.mapred.pipes.Submitter.main(Submitter.java:518)

Edit: I downloaded the Hadoop source code and traced where the exception occurs. It seems to be thrown during the initialization phase, so the code in my mapper/reducer isn't really the problem.

The function in Hadoop that throws the exception is this one:

/** Run a set of tasks and waits for them to complete. */
435     private void runTasks(List<RunnableWithThrowable> runnables,
436         ExecutorService service, String taskType) throws Exception {
437       // Start populating the executor with work units.
438       // They may begin running immediately (in other threads).
439       for (Runnable r : runnables) {
440         service.submit(r);
441       }
442 
443       try {
444         service.shutdown(); // Instructs queue to drain.
445 
446         // Wait for tasks to finish; do not use a time-based timeout.
447         // (See http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6179024)
448         LOG.info("Waiting for " + taskType + " tasks");
449         service.awaitTermination(Long.MAX_VALUE, TimeUnit.NANOSECONDS);
450       } catch (InterruptedException ie) {
451         // Cancel all threads.
452         service.shutdownNow();
453         throw ie;
454       }
455 
456       LOG.info(taskType + " task executor complete.");
457 
458       // After waiting for the tasks to complete, if any of these
459       // have thrown an exception, rethrow it now in the main thread context.
460       for (RunnableWithThrowable r : runnables) {
461         if (r.storedException != null) {
462           throw new Exception(r.storedException);
463         }
464       }
465     }

But the problem is that the exception is stored on the runnable and only rethrown later, wrapped in a new Exception, which made it hard for me to tell where it actually originated.
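Incidentally, wrapping the stored throwable as a cause does preserve the original stack trace: the `Caused by:` block in the log above already points at the real origin, `Application.<init>` (Application.java:104). A minimal, self-contained sketch of that store-then-rethrow pattern:

```java
// Demonstrates the LocalJobRunner-style pattern: a worker thread stores its
// exception, and the main thread later rethrows it wrapped as a cause.
// The cause keeps the worker's original stack trace.
public class RethrowDemo {
    static Throwable stored;

    static void worker() {
        try {
            throw new NullPointerException("original failure");
        } catch (NullPointerException npe) {
            stored = npe; // like storedException on RunnableWithThrowable
        }
    }

    public static void main(String[] args) throws Exception {
        worker();
        try {
            // Main thread rethrows, wrapping the stored throwable as the cause.
            throw new Exception(stored);
        } catch (Exception e) {
            // The cause's stack trace still points at the real origin: worker().
            System.out.println(e.getCause().getStackTrace()[0].getMethodName());
        }
    }
}
```

So the `Caused by:` section of the wrapped exception is where to look for the true failure site.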

Any help? Also, let me know if you need me to post more details.

Thanks,

1 answer:

Answer 0 (score: 0):

After a lot of research, I found that the problem is actually caused by this line in pipes/Application.java (line 104):

byte[] password= jobToken.getPassword();

I changed the code and recompiled Hadoop:

byte[] password = "no password".getBytes();
if (jobToken != null)
{
    password = jobToken.getPassword();
}
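In isolation, the guard behaves as follows. This is only a sketch: `FakeToken` is a hypothetical stand-in for Hadoop's job token class, modeling just the `getPassword()` accessor that Application.java calls, to show the null path that LocalJobRunner triggers.

```java
import java.nio.charset.StandardCharsets;

public class TokenGuardDemo {
    // Hypothetical stand-in for the Hadoop job token; only the
    // getPassword() accessor used in Application.java is modeled.
    static class FakeToken {
        byte[] getPassword() { return "secret".getBytes(StandardCharsets.UTF_8); }
    }

    static byte[] passwordFor(FakeToken jobToken) {
        // Same guard as the patch: fall back to a placeholder password
        // when no job token was set (as happens under LocalJobRunner).
        byte[] password = "no password".getBytes(StandardCharsets.UTF_8);
        if (jobToken != null) {
            password = jobToken.getPassword();
        }
        return password;
    }

    public static void main(String[] args) {
        // Null token: the placeholder is used instead of an NPE.
        System.out.println(new String(passwordFor(null), StandardCharsets.UTF_8));
        // Real token: its password is used.
        System.out.println(new String(passwordFor(new FakeToken()), StandardCharsets.UTF_8));
    }
}
```

With the guard in place, the constructor no longer dereferences a null token, which is why the job gets past initialization.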

I got this fix from here.

This solved the problem and my program now runs, but I am facing another issue: the job hangs at map 0% reduce 0%. I will open another thread for that question.

Thanks,