Job stuck at map 0% reduce 0% in Hadoop 2

Date: 2013-12-16 06:50:07

Tags: hadoop

cstur4@cstur4:~/hadoop$ hadoop jar test.jar org.cstur4.hadoop.test.WordCount
13/12/16 21:53:26 INFO client.RMProxy: Connecting to ResourceManager at /127.0.0.1:8080
13/12/16 21:53:27 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
13/12/16 21:53:27 INFO input.FileInputFormat: Total input paths to process : 1
13/12/16 21:53:27 INFO mapreduce.JobSubmitter: number of splits:1
13/12/16 21:53:27 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
13/12/16 21:53:27 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
13/12/16 21:53:27 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
13/12/16 21:53:27 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
13/12/16 21:53:27 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
13/12/16 21:53:27 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
13/12/16 21:53:27 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
13/12/16 21:53:27 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
13/12/16 21:53:27 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
13/12/16 21:53:27 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
13/12/16 21:53:27 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
13/12/16 21:53:27 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
13/12/16 21:53:28 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1387201845473_0001
13/12/16 21:53:28 INFO impl.YarnClientImpl: Submitted application application_1387201845473_0001 to ResourceManager at /127.0.0.1:8080
13/12/16 21:53:28 INFO mapreduce.Job: The url to track the job: http://cstur4:8088/proxy/application_1387201845473_0001/
13/12/16 21:53:28 INFO mapreduce.Job: Running job: job_1387201845473_0001
13/12/16 21:53:38 INFO mapreduce.Job: Job job_1387201845473_0001 running in uber mode : false
13/12/16 21:53:38 INFO mapreduce.Job:  map 0% reduce 0%
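The JobSubmitter warning above ("Hadoop command-line option parsing not performed") is unrelated to the hang, but it can be addressed by having the driver implement `Tool` and launching it through `ToolRunner`, as the message suggests. A minimal sketch of such a driver follows; the package and class names mirror the command line above, while `WordCountMapper` and `WordCountReducer` are hypothetical names standing in for the mapper and reducer classes assumed to already exist in test.jar:

```java
package org.cstur4.hadoop.test;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() carries any generic options (-D key=value, etc.)
        // that ToolRunner parsed off the command line.
        Job job = Job.getInstance(getConf(), "word count");
        job.setJarByClass(WordCount.class);
        // Hypothetical mapper/reducer class names; substitute the
        // real ones packaged in test.jar.
        job.setMapperClass(WordCountMapper.class);
        job.setCombinerClass(WordCountReducer.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips the generic options before calling run(),
        // which is what silences the JobSubmitter warning.
        System.exit(ToolRunner.run(new Configuration(), new WordCount(), args));
    }
}
```

This would then be launched the same way (`hadoop jar test.jar org.cstur4.hadoop.test.WordCount <in> <out>`), with generic options such as `-D mapreduce.job.reduces=1` now honored.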

Output of jps:

4961 RunJar
3931 DataNode
2543 org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar
4183 SecondaryNameNode
4569 NodeManager
4450 ResourceManager
6191 Jps
5043 MRAppMaster
3791 NameNode

DataNode log:

2013-12-16 21:53:27,502 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-692584010-127.0.0.1-1386505361103:blk_1073741910_1086 src: /127.0.0.1:51842 dest: /127.0.0.1:50010
2013-12-16 21:53:27,577 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51842, dest: /127.0.0.1:50010, bytes: 5353, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-872182218_1, offset: 0, srvID: DS-1763065-127.0.0.1-50010-1386505392539, blockid: BP-692584010-127.0.0.1-1386505361103:blk_1073741910_1086, duration: 17218107
2013-12-16 21:53:27,577 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-692584010-127.0.0.1-1386505361103:blk_1073741910_1086, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2013-12-16 21:53:27,784 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-692584010-127.0.0.1-1386505361103:blk_1073741911_1087 src: /127.0.0.1:51843 dest: /127.0.0.1:50010
2013-12-16 21:53:27,787 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51843, dest: /127.0.0.1:50010, bytes: 107, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-872182218_1, offset: 0, srvID: DS-1763065-127.0.0.1-50010-1386505392539, blockid: BP-692584010-127.0.0.1-1386505361103:blk_1073741911_1087, duration: 2234455
2013-12-16 21:53:27,787 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-692584010-127.0.0.1-1386505361103:blk_1073741911_1087, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2013-12-16 21:53:27,830 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-692584010-127.0.0.1-1386505361103:blk_1073741912_1088 src: /127.0.0.1:51844 dest: /127.0.0.1:50010
2013-12-16 21:53:27,835 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51844, dest: /127.0.0.1:50010, bytes: 20, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-872182218_1, offset: 0, srvID: DS-1763065-127.0.0.1-50010-1386505392539, blockid: BP-692584010-127.0.0.1-1386505361103:blk_1073741912_1088, duration: 2791330
2013-12-16 21:53:27,835 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-692584010-127.0.0.1-1386505361103:blk_1073741912_1088, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2013-12-16 21:53:28,368 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-692584010-127.0.0.1-1386505361103:blk_1073741913_1089 src: /127.0.0.1:51845 dest: /127.0.0.1:50010
2013-12-16 21:53:28,378 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51845, dest: /127.0.0.1:50010, bytes: 65828, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-872182218_1, offset: 0, srvID: DS-1763065-127.0.0.1-50010-1386505392539, blockid: BP-692584010-127.0.0.1-1386505361103:blk_1073741913_1089, duration: 3274299
2013-12-16 21:53:28,378 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-692584010-127.0.0.1-1386505361103:blk_1073741913_1089, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2013-12-16 21:53:30,678 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:51849, bytes: 5397, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-866639004_67, offset: 0, srvID: DS-1763065-127.0.0.1-50010-1386505392539, blockid: BP-692584010-127.0.0.1-1386505361103:blk_1073741910_1086, duration: 2308537
2013-12-16 21:53:30,772 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:51849, bytes: 24, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-866639004_67, offset: 0, srvID: DS-1763065-127.0.0.1-50010-1386505392539, blockid: BP-692584010-127.0.0.1-1386505361103:blk_1073741912_1088, duration: 278126
2013-12-16 21:53:30,815 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:51849, bytes: 111, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-866639004_67, offset: 0, srvID: DS-1763065-127.0.0.1-50010-1386505392539, blockid: BP-692584010-127.0.0.1-1386505361103:blk_1073741911_1087, duration: 254303
2013-12-16 21:53:30,854 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:51849, bytes: 66344, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-866639004_67, offset: 0, srvID: DS-1763065-127.0.0.1-50010-1386505392539, blockid: BP-692584010-127.0.0.1-1386505361103:blk_1073741913_1089, duration: 523440
2013-12-16 21:53:32,315 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-692584010-127.0.0.1-1386505361103:blk_1073741912_1088
2013-12-16 21:53:32,335 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-692584010-127.0.0.1-1386505361103:blk_1073741910_1086
2013-12-16 21:53:32,353 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-692584010-127.0.0.1-1386505361103:blk_1073741913_1089
2013-12-16 21:53:32,358 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-692584010-127.0.0.1-1386505361103:blk_1073741911_1087
2013-12-16 21:53:35,120 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:51882, bytes: 24, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-481183471_1, offset: 0, srvID: DS-1763065-127.0.0.1-50010-1386505392539, blockid: BP-692584010-127.0.0.1-1386505361103:blk_1073741912_1088, duration: 249581
2013-12-16 21:53:37,350 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-692584010-127.0.0.1-1386505361103:blk_1073741914_1090 src: /127.0.0.1:51890 dest: /127.0.0.1:50010
2013-12-16 21:53:37,373 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51890, dest: /127.0.0.1:50010, bytes: 76571, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-481183471_1, offset: 0, srvID: DS-1763065-127.0.0.1-50010-1386505392539, blockid: BP-692584010-127.0.0.1-1386505361103:blk_1073741914_1090, duration: 17932386
2013-12-16 21:53:37,373 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-692584010-127.0.0.1-1386505361103:blk_1073741914_1090, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2013-12-16 21:53:42,396 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-692584010-127.0.0.1-1386505361103:blk_1073741914_1090

I cannot find a TaskTracker or JobTracker process in the jps output. This has been puzzling me for a long time; I am a Hadoop beginner.

0 Answers:

No answers yet.