Hadoop multi-node cluster - slave nodes do not execute MapReduce tasks

Asked: 2014-03-06 23:59:09

Tags: java hadoop configuration cluster-computing

I am new to Hadoop. I tried to set up a Hadoop (version 1.2.1) cluster (1 master and 5 slaves) following Michael Noll's post: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
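
For reference, the tutorial drives the whole cluster from two plain-text files on the master. A sketch of what mine should contain (the exact hostnames are an assumption based on the node names mentioned below):

# conf/masters on the master (host that runs the SecondaryNameNode)
master

# conf/slaves on the master (hosts that run a DataNode and a TaskTracker)
master
slave1
slave2
slave3
slave4
slave6

Note that master appears in conf/slaves too, which is why jps shows a DataNode and a TaskTracker on the master as well.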

Everything seemed fine until I ran the word count job on the cluster. When I start the cluster by running the following command on the master node:

hadoop/start-all.sh

the jps output is correct:

On the master:

li@master:~$ jps
12839 TaskTracker
11814 NameNode
12535 JobTracker
25131 Jps
12118 DataNode
12421 SecondaryNameNode

On the 5 slave nodes:

li@slave1:~/hadoop/logs$ jps
4605 TaskTracker
19407 Jps
4388 DataNode

And when I run the stop command on the master:

hadoop/stop-all.sh

jps shows nothing on either the master or the slave nodes.

But when I ran the word count job on the cluster, I don't think the cluster worked properly. The task logs on the slave nodes do not match what Michael Noll shows in his post. It looks as if the job was executed only on the master, and the other 5 slave nodes were never assigned any map/reduce tasks. Here are some of the log files:

Console output on the master:

hadoop jar hadoop-examples-1.2.1.jar wordcount /user/li/gutenberg /user/li/gutenberg-output
14/03/06 17:11:09 INFO input.FileInputFormat: Total input paths to process : 7
14/03/06 17:11:09 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/03/06 17:11:09 WARN snappy.LoadSnappy: Snappy native library not loaded
14/03/06 17:11:10 INFO mapred.JobClient: Running job: job_201402211607_0014
14/03/06 17:11:11 INFO mapred.JobClient:  map 0% reduce 0%
14/03/06 17:11:17 INFO mapred.JobClient:  map 14% reduce 0%
14/03/06 17:11:19 INFO mapred.JobClient:  map 57% reduce 0%
14/03/06 17:11:20 INFO mapred.JobClient:  map 85% reduce 0%
14/03/06 17:11:21 INFO mapred.JobClient:  map 100% reduce 0%
14/03/06 17:11:24 INFO mapred.JobClient:  map 100% reduce 33%
14/03/06 17:11:27 INFO mapred.JobClient:  map 100% reduce 100%
14/03/06 17:11:28 INFO mapred.JobClient: Job complete: job_201402211607_0014
14/03/06 17:11:28 INFO mapred.JobClient: Counters: 30
14/03/06 17:11:28 INFO mapred.JobClient:   Job Counters 
14/03/06 17:11:28 INFO mapred.JobClient:     Launched reduce tasks=1
14/03/06 17:11:28 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=38126
14/03/06 17:11:28 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/03/06 17:11:28 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/03/06 17:11:28 INFO mapred.JobClient:     Rack-local map tasks=2
14/03/06 17:11:28 INFO mapred.JobClient:     Launched map tasks=7
14/03/06 17:11:28 INFO mapred.JobClient:     Data-local map tasks=5
14/03/06 17:11:28 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=9825
14/03/06 17:11:28 INFO mapred.JobClient:   File Output Format Counters 
14/03/06 17:11:28 INFO mapred.JobClient:     Bytes Written=1412505
14/03/06 17:11:28 INFO mapred.JobClient:   FileSystemCounters
14/03/06 17:11:28 INFO mapred.JobClient:     FILE_BYTES_READ=4462568
14/03/06 17:11:28 INFO mapred.JobClient:     HDFS_BYTES_READ=6950792
14/03/06 17:11:28 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=7810309
14/03/06 17:11:28 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=1412505
14/03/06 17:11:28 INFO mapred.JobClient:   File Input Format Counters 
14/03/06 17:11:28 INFO mapred.JobClient:     Bytes Read=6950001
14/03/06 17:11:28 INFO mapred.JobClient:   Map-Reduce Framework
14/03/06 17:11:28 INFO mapred.JobClient:     Map output materialized bytes=2915072
14/03/06 17:11:28 INFO mapred.JobClient:     Map input records=137146
14/03/06 17:11:28 INFO mapred.JobClient:     Reduce shuffle bytes=2915072
14/03/06 17:11:28 INFO mapred.JobClient:     Spilled Records=507858
14/03/06 17:11:28 INFO mapred.JobClient:     Map output bytes=11435849
14/03/06 17:11:28 INFO mapred.JobClient:     Total committed heap usage (bytes)=1195069440
14/03/06 17:11:28 INFO mapred.JobClient:     CPU time spent (ms)=16520
14/03/06 17:11:28 INFO mapred.JobClient:     Combine input records=1174991
14/03/06 17:11:28 INFO mapred.JobClient:     SPLIT_RAW_BYTES=791
14/03/06 17:11:28 INFO mapred.JobClient:     Reduce input records=201010
14/03/06 17:11:28 INFO mapred.JobClient:     Reduce input groups=128513
14/03/06 17:11:28 INFO mapred.JobClient:     Combine output records=201010
14/03/06 17:11:28 INFO mapred.JobClient:     Physical memory (bytes) snapshot=1252454400
14/03/06 17:11:28 INFO mapred.JobClient:     Reduce output records=128513
14/03/06 17:11:28 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=4080599040
14/03/06 17:11:28 INFO mapred.JobClient:     Map output records=1174991

TaskTracker log on slave1:

li@slave1:~/hadoop/logs$ cat hadoop-li-tasktracker-slave1.log
2014-03-06 17:11:46,335 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201402211607_0014_m_000003_0 task's state:UNASSIGNED
2014-03-06 17:11:46,335 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201402211607_0014_m_000004_0 task's state:UNASSIGNED
2014-03-06 17:11:46,335 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201402211607_0014_m_000003_0 which needs 1 slots
2014-03-06 17:11:46,335 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_201402211607_0014_m_000003_0 which needs 1 slots
2014-03-06 17:11:46,335 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201402211607_0014_m_000004_0 which needs 1 slots
2014-03-06 17:11:46,336 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 1 and trying to launch attempt_201402211607_0014_m_000004_0 which needs 1 slots
2014-03-06 17:11:46,394 INFO org.apache.hadoop.mapred.JobLocalizer: Initializing user li on this TT.
2014-03-06 17:11:46,544 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201402211607_0014_m_-862426792
2014-03-06 17:11:46,544 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201402211607_0014_m_-862426792 spawned.
2014-03-06 17:11:46,545 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201402211607_0014_m_-696634639
2014-03-06 17:11:46,547 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201402211607_0014_m_-696634639 spawned.
2014-03-06 17:11:46,549 INFO org.apache.hadoop.mapred.TaskController: Writing commands to /home/li/hdfstmp/mapred/local/ttprivate/taskTracker/li/jobcache/job_201402211607_0014/attempt_201402211607_0014_m_000003_0/taskjvm.sh
2014-03-06 17:11:46,551 INFO org.apache.hadoop.mapred.TaskController: Writing commands to /home/li/hdfstmp/mapred/local/ttprivate/taskTracker/li/jobcache/job_201402211607_0014/attempt_201402211607_0014_m_000004_0/taskjvm.sh
2014-03-06 17:11:48,382 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201402211607_0014_m_-862426792 given task: attempt_201402211607_0014_m_000003_0
2014-03-06 17:11:48,383 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201402211607_0014_m_-696634639 given task: attempt_201402211607_0014_m_000004_0
2014-03-06 17:11:51,457 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201402211607_0014_m_000004_0 1.0% 
2014-03-06 17:11:51,459 INFO org.apache.hadoop.mapred.TaskTracker: Task attempt_201402211607_0014_m_000004_0 is done.
2014-03-06 17:11:51,460 INFO org.apache.hadoop.mapred.TaskTracker: reported output size for attempt_201402211607_0014_m_000004_0  was 217654
2014-03-06 17:11:51,460 INFO org.apache.hadoop.mapred.TaskTracker: addFreeSlot : current free slots : 1
2014-03-06 17:11:51,470 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201402211607_0014_m_000003_0 1.0% 
2014-03-06 17:11:51,472 INFO org.apache.hadoop.mapred.TaskTracker: Task attempt_201402211607_0014_m_000003_0 is done.
2014-03-06 17:11:51,472 INFO org.apache.hadoop.mapred.TaskTracker: reported output size for attempt_201402211607_0014_m_000003_0  was 267026
2014-03-06 17:11:51,473 INFO org.apache.hadoop.mapred.TaskTracker: addFreeSlot : current free slots : 2
2014-03-06 17:11:51,628 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201402211607_0014_m_-696634639 exited with exit code 0. Number of tasks it ran: 1
2014-03-06 17:11:51,631 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201402211607_0014_m_-862426792 exited with exit code 0. Number of tasks it ran: 1
2014-03-06 17:11:56,052 INFO org.apache.hadoop.mapred.TaskTracker.clienttrace: src: 192.168.1.111:50060, dest: 192.168.1.116:47652, bytes: 267026, op: MAPRED_SHUFFLE, cliID: attempt_201402211607_0014_m_000003_0, duration: 47537998
2014-03-06 17:11:56,076 INFO org.apache.hadoop.mapred.TaskTracker.clienttrace: src: 192.168.1.111:50060, dest: 192.168.1.116:47652, bytes: 217654, op: MAPRED_SHUFFLE, cliID: attempt_201402211607_0014_m_000004_0, duration: 15832312
2014-03-06 17:12:02,319 INFO org.apache.hadoop.mapred.TaskTracker: Received 'KillJobAction' for job: job_201402211607_0014
2014-03-06 17:12:02,320 INFO org.apache.hadoop.mapred.UserLogCleaner: Adding job_201402211607_0014 for user-log deletion with retainTimeStamp:1394233922320

TaskTracker log on slave2:

2014-03-06 17:12:06,293 INFO org.apache.hadoop.mapred.TaskTracker: Received 'KillJobAction' for job: job_201402211607_0014
2014-03-06 17:12:06,293 WARN org.apache.hadoop.mapred.TaskTracker: Unknown job job_201402211607_0014 being deleted.

slave4 and slave6 have the same task logs as slave1. slave3 has the same task log as slave2, with only those 2 lines.
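
One way to double-check which TaskTrackers the JobTracker actually sees (assuming the default Hadoop 1.x ports):

# ask the JobTracker for the list of registered, active TaskTrackers
hadoop job -list-active-trackers

# the JobTracker web UI at http://master:50030/ also shows the
# registered nodes and the map/reduce slot counts per node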

My questions:

1. Why did the 5 slave nodes not get any tasks assigned?
2. Why do slave2 and slave3 have different task logs from slave1, slave4, and slave6, when I applied the same configuration to all of them?
3. Is this a multi-node configuration problem? How can I solve it?

1 Answer:

Answer 0 (score: 2):

It looks like each of your task nodes has 2 map slots:

2014-03-06 17:11:46,335 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_201402211607_0014_m_000003_0 which needs 1 slots

The JobTracker is aware of this and decides to pack as many tasks as possible onto individual nodes, rather than spreading them across as many nodes as possible. This is probably done for locality reasons (to minimize network traffic).
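
The slot counts come from two per-TaskTracker settings in mapred-site.xml; a minimal sketch with the Hadoop 1.x defaults (2 map and 2 reduce slots per node):

<!-- mapred-site.xml on each TaskTracker node -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>2</value> <!-- map slots per node; the default is 2 -->
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value> <!-- reduce slots per node; the default is 2 -->
</property>

Raising these values only changes how many tasks a single node can absorb; the scheduler may still pack tasks onto a few nodes for locality.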

1. That is why you have two idle nodes: the 5 tasks can be assigned to only three of the nodes when each has two slots (ceiling(5 / 2.0) = 3).

2. Your logs differ depending on which tasks ran on each particular node. So when you run jobs on the cluster, expect the logs to diverge quickly; they will not be evenly distributed across the nodes.

3. This uneven distribution does not indicate a problem; it is normal cluster behavior. Keep in mind that Hadoop was designed primarily for batch processing, which means the normal case is a cluster heavily used by many jobs, so nodes do not sit idle even if one particular job does not run on all of them.

4. One final note: in this particular case, it seems you are getting different behavior from the tutorial you followed because you may be running on AWS (using Elastic MapReduce). Apparently, EMR has a custom scheduler that makes these mapping decisions (how many slots to allocate per node, and how to distribute tasks across them) without you being able to configure it yourself. More details in this answer: Hadoop: number of available map slots based on cluster size