No output in part-00000 when running the WordCount v1.0 example

Asked: 2013-11-12 16:13:21

Tags: hadoop mapreduce cloudera

I'm new to Cloudera and Hadoop, and the output of the Cloudera WordCount 1.0 example (part-00000) is empty. The steps and files I'm using are here. I'd be happy to provide any job log information that would help, and likewise version details; I just need some pointers on where to find them. Below are the job output and the source. Of the other part files written (part-00001 through part-00011), the non-empty ones are part-00001 (Bye 1), part-00002 (Hadoop 2), part-00004 (Goodbye 1), part-00005 (World 2), and part-00009 (Hello 2). Any help would be great.

Here are the commands and output:

[me@server ~]$ hadoop fs -cat /user/me/wordcount/input/file0
Hello World Bye World

[me@server ~]$ hadoop fs -cat /user/me/wordcount/input/file1
Hello Hadoop Goodbye Hadoop

[me@server ~]$ hadoop jar wordcount.jar org.myorg.WordCount /user/me/wordcount/input /user/me/wordcount/output
13/11/12 10:39:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/11/12 10:39:41 INFO mapred.FileInputFormat: Total input paths to process : 2
13/11/12 10:39:42 INFO mapred.JobClient: Running job: job_201311051201_0014
13/11/12 10:39:43 INFO mapred.JobClient:  map 0% reduce 0%
13/11/12 10:39:49 INFO mapred.JobClient:  map 33% reduce 0%
13/11/12 10:39:52 INFO mapred.JobClient:  map 67% reduce 0%
13/11/12 10:39:53 INFO mapred.JobClient:  map 100% reduce 0%
13/11/12 10:39:58 INFO mapred.JobClient:  map 100% reduce 25%
13/11/12 10:40:01 INFO mapred.JobClient:  map 100% reduce 100%
13/11/12 10:40:04 INFO mapred.JobClient: Job complete: job_201311051201_0014
13/11/12 10:40:04 INFO mapred.JobClient: Counters: 33
13/11/12 10:40:04 INFO mapred.JobClient:   File System Counters
13/11/12 10:40:04 INFO mapred.JobClient:     FILE: Number of bytes read=313
13/11/12 10:40:04 INFO mapred.JobClient:     FILE: Number of bytes written=2695420
13/11/12 10:40:04 INFO mapred.JobClient:     FILE: Number of read operations=0
13/11/12 10:40:04 INFO mapred.JobClient:     FILE: Number of large read operations=0
13/11/12 10:40:04 INFO mapred.JobClient:     FILE: Number of write operations=0
13/11/12 10:40:04 INFO mapred.JobClient:     HDFS: Number of bytes read=410
13/11/12 10:40:04 INFO mapred.JobClient:     HDFS: Number of bytes written=41
13/11/12 10:40:04 INFO mapred.JobClient:     HDFS: Number of read operations=18
13/11/12 10:40:04 INFO mapred.JobClient:     HDFS: Number of large read operations=0
13/11/12 10:40:04 INFO mapred.JobClient:     HDFS: Number of write operations=24
13/11/12 10:40:04 INFO mapred.JobClient:   Job Counters
13/11/12 10:40:04 INFO mapred.JobClient:     Launched map tasks=3
13/11/12 10:40:04 INFO mapred.JobClient:     Launched reduce tasks=12
13/11/12 10:40:04 INFO mapred.JobClient:     Data-local map tasks=3
13/11/12 10:40:04 INFO mapred.JobClient:     Total time spent by all maps in occupied slots (ms)=16392
13/11/12 10:40:04 INFO mapred.JobClient:     Total time spent by all reduces in occupied slots (ms)=61486
13/11/12 10:40:04 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/11/12 10:40:04 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/11/12 10:40:04 INFO mapred.JobClient:   Map-Reduce Framework
13/11/12 10:40:04 INFO mapred.JobClient:     Map input records=2
13/11/12 10:40:04 INFO mapred.JobClient:     Map output records=8
13/11/12 10:40:04 INFO mapred.JobClient:     Map output bytes=82
13/11/12 10:40:04 INFO mapred.JobClient:     Input split bytes=357
13/11/12 10:40:04 INFO mapred.JobClient:     Combine input records=8
13/11/12 10:40:04 INFO mapred.JobClient:     Combine output records=6
13/11/12 10:40:04 INFO mapred.JobClient:     Reduce input groups=5
13/11/12 10:40:04 INFO mapred.JobClient:     Reduce shuffle bytes=649
13/11/12 10:40:04 INFO mapred.JobClient:     Reduce input records=6
13/11/12 10:40:04 INFO mapred.JobClient:     Reduce output records=5
13/11/12 10:40:04 INFO mapred.JobClient:     Spilled Records=12
13/11/12 10:40:04 INFO mapred.JobClient:     CPU time spent (ms)=15650
13/11/12 10:40:04 INFO mapred.JobClient:     Physical memory (bytes) snapshot=3594293248
13/11/12 10:40:04 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=18375352320
13/11/12 10:40:04 INFO mapred.JobClient:     Total committed heap usage (bytes)=6497697792
13/11/12 10:40:04 INFO mapred.JobClient:   org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
13/11/12 10:40:04 INFO mapred.JobClient:     BYTES_READ=50

[me@server ~]$ hadoop fs -cat /user/me/wordcount/output/part-00000

[me@server ~]$ hdfs dfs -ls -R /user/me/wordcount/output
-rw-r--r--   3 me me          0 2013-11-12 10:40 /user/me/wordcount/output/_SUCCESS
drwxr-xr-x   - me me          0 2013-11-12 10:39 /user/me/wordcount/output/_logs
drwxr-xr-x   - me me          0 2013-11-12 10:39 /user/me/wordcount/output/_logs/history
-rw-r--r--   3 me me      67134 2013-11-12 10:40 /user/me/wordcount/output/_logs/history/job_201311051201_0014_1384270782432_me_wordcount
-rw-r--r--   3 me me      81866 2013-11-12 10:39 /user/me/wordcount/output/_logs/history/job_201311051201_0014_conf.xml
-rw-r--r--   3 me me          0 2013-11-12 10:39 /user/me/wordcount/output/part-00000
-rw-r--r--   3 me me          6 2013-11-12 10:39 /user/me/wordcount/output/part-00001
-rw-r--r--   3 me me          9 2013-11-12 10:39 /user/me/wordcount/output/part-00002
-rw-r--r--   3 me me          0 2013-11-12 10:39 /user/me/wordcount/output/part-00003
-rw-r--r--   3 me me         10 2013-11-12 10:39 /user/me/wordcount/output/part-00004
-rw-r--r--   3 me me          8 2013-11-12 10:39 /user/me/wordcount/output/part-00005
-rw-r--r--   3 me me          0 2013-11-12 10:39 /user/me/wordcount/output/part-00006
-rw-r--r--   3 me me          0 2013-11-12 10:39 /user/me/wordcount/output/part-00007
-rw-r--r--   3 me me          0 2013-11-12 10:39 /user/me/wordcount/output/part-00008
-rw-r--r--   3 me me          8 2013-11-12 10:39 /user/me/wordcount/output/part-00009
-rw-r--r--   3 me me          0 2013-11-12 10:39 /user/me/wordcount/output/part-00010
-rw-r--r--   3 me me          0 2013-11-12 10:39 /user/me/wordcount/output/part-00011
[me@server ~]$

Here is the source:

package org.myorg;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class WordCount {

  public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

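    // Tokenize each line on whitespace and emit (word, 1) for every token.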
    public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
      String line = value.toString();
      StringTokenizer tokenizer = new StringTokenizer(line);
      while (tokenizer.hasMoreTokens()) {
        word.set(tokenizer.nextToken());
        output.collect(word, one);
      }
    }
  }

  public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
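    // Sum the per-word counts; this class is reused as the combiner.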
    public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class);
    conf.setReducerClass(Reduce.class);
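    // Note: the number of reduce tasks is never set here, so the cluster default applies.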

    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
  }
}

4 Answers:

Answer 0 (score: 1)

You are launching 12 reduce tasks (Launched reduce tasks=12) even though the mappers only produce five outputs: per the tutorial, five output records are expected. In CDH3 the number of reducers was set to the number of mapper outputs; that behavior may have changed in CDH4. Look through your configuration files and see whether you have mapred.reduce.tasks or something similar set.
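
The reducer count can also be pinned in code in this old-API driver. As a minimal sketch (assuming a single reducer is what you want, so that every key lands in part-00000), add one line to main():

// Force one reducer; with only five distinct words, one is plenty.
conf.setNumReduceTasks(1);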

Answer 1 (score: 1)

This is because the number of reducers used in your job is greater than the number of keys, i.e. distinct words, that you actually have, so some of the reducers' output files are empty. Check how the default partitioner, HashPartitioner, assigns keys to reducers based on the number of reducers: Link
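
To see this concretely, below is a small standalone sketch (a hypothetical demo, not part of the job; it only needs hadoop-common on the classpath for Text) that applies the formula HashPartitioner uses, (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks, to the five words and the 12 reducers from this job:

import org.apache.hadoop.io.Text;

// Hypothetical helper class, not part of the WordCount job itself.
public class PartitionDemo {
  public static void main(String[] args) {
    int numReduceTasks = 12;  // as launched in the job above
    String[] words = {"Bye", "Goodbye", "Hadoop", "Hello", "World"};
    for (String w : words) {
      // The same arithmetic HashPartitioner.getPartition() performs.
      int partition = (new Text(w).hashCode() & Integer.MAX_VALUE) % numReduceTasks;
      System.out.println(w + " -> part-" + String.format("%05d", partition));
    }
  }
}

Run as-is, this reproduces the layout in the listing above: Bye lands in part-00001, Hadoop in part-00002, Goodbye in part-00004, World in part-00005, and Hello in part-00009, while the other seven reducers receive no keys and write empty files.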

Answer 2 (score: 0)

OK, many thanks to Binary01 and davek3 for the guidance. I will have to do some reading to understand what is going on, but for posterity's sake I'll share the details in an answer: I got it working by compiling the v2.0 code, which needs "-D mapred.reduce.tasks=1", and that produced the correct output. Just for kicks, I also ran it on Hamlet without the -D and it worked as well.
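
For context, -D options are only picked up when the driver routes its arguments through GenericOptionsParser, which is what the JobClient warning at the top of the job output is hinting at. A sketch of the v1.0 driver reworked to implement Tool, so that -D mapred.reduce.tasks=1 is honored (the class name WordCountDriver is made up here; Map and Reduce are the classes from the source above), might look like this:

package org.myorg;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {

  public int run(String[] args) throws Exception {
    // getConf() already carries any -D overrides parsed by ToolRunner.
    JobConf conf = new JobConf(getConf(), WordCount.class);
    conf.setJobName("wordcount");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    conf.setMapperClass(WordCount.Map.class);
    conf.setCombinerClass(WordCount.Reduce.class);
    conf.setReducerClass(WordCount.Reduce.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
    return 0;
  }

  public static void main(String[] args) throws Exception {
    // ToolRunner strips generic options (-D, -conf, -fs, ...) before calling run().
    System.exit(ToolRunner.run(new WordCountDriver(), args));
  }
}

It would then be launched as, for example:

hadoop jar wordcount.jar org.myorg.WordCountDriver -D mapred.reduce.tasks=1 /user/me/wordcount/input /user/me/wordcount/output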

Answer 3 (score: 0)

Alternatively, you can run a simple command to combine the output of all the part files:

cat part-* > output.txt
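
Note that this job's part files live in HDFS, so the equivalent there would be hadoop fs -cat /user/me/wordcount/output/part-* (or hadoop fs -getmerge /user/me/wordcount/output output.txt to pull a merged copy to the local filesystem); plain cat works once the files have been copied locally.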