MapReduce: two splits for a single-line input file (map method executed twice)

Date: 2016-07-17 15:14:48

Tags: java hadoop mapreduce

I developed a MapReduce program to compute, from a log of search requests, the number of requests in each 30-minute window and the most searched word in that window.

My input file is:

01_11_2012 12_02_10 132.227.045.028 life
02_11_2012 02_52_10 132.227.045.028 restaurent+kitchen
03_11_2012 12_32_10 132.227.045.028 guitar+music
04_11_2012 13_52_10 132.227.045.028 book+music
05_11_2012 12_22_10 132.227.045.028 animal+life
05_11_2012 12_22_10 132.227.045.028 history

DD_MM_YYYY | HH_MM_SS | ip | searched word

My output file should look like this:

between 02h30 and 2h59 restaurent 1  
between 13h30 and 13h59 book 1
between 12h00 and 12h29 life 3  
between 12h30 and 12h59 guitar 1 

First line: restaurent is the most searched word between 02h30 and 02h59, and 1 is the number of requests in that window.

My problem is that I get a redundant map execution for the same line. So I tested the program with the following input (my file contains a single line):

01_11_2012 12_02_10 132.227.045.028 life

When I debug step by step in Eclipse, with a breakpoint on the following line of the map method:

context.write(key, result);

my program passes over this line twice and writes the same information twice for the single input line.

I'm stuck at this point and I don't understand why I get two map tasks, since my input should produce only one split.

The program is below. (Sorry for my English.)

package fitec.lab.booble;

import java.io.IOException;
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BoobleByMinutes {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, Text> {

        private final int TIME_INDEX = 1;
        private final int WORDS_INDEX = 3;

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {

            String[] attributesTab = value.toString().split(" ");

            Text reduceKey = new Text();
            Text words = new Text();

            String time = attributesTab[TIME_INDEX];
            String[] timeSplitted = time.split("_");

            String heures = timeSplitted[0];
            String minutes = timeSplitted[1];

            if (29 < Integer.parseInt(minutes)) {
                reduceKey.set("entre " + heures + "h30 et " + heures + "h59");
            } else {
                reduceKey.set("entre " + heures + "h00 et " + heures + "h29");
            }
            words.set(attributesTab[WORDS_INDEX]);
            context.write(reduceKey, words);
        }
    }

    public static class PriceSumReducer extends Reducer<Text, Text, Text, Text> {

        public void reduce(Text key, Iterable<Text> groupedWords, Context context)
                throws IOException, InterruptedException {
            Text result = new Text();
            int requestCount = 0;
            Map<String, Integer> firstWordAndRequestCount = new HashMap<String, Integer>();
            for (Text words : groupedWords) {
                ++requestCount;
                String wordsString = words.toString().replace("+", "--");
                System.out.println(wordsString.toString());
                String[] wordTab = wordsString.split("--");
                for (String word : wordTab) {

                    if (firstWordAndRequestCount.containsKey(word)) {
                        Integer integer = firstWordAndRequestCount.get(word) + 1;
                        firstWordAndRequestCount.put(word, integer);
                    } else {
                        firstWordAndRequestCount.put(word, new Integer(1));
                    }
                }
            }

            ValueComparator valueComparator = new ValueComparator(firstWordAndRequestCount);
            TreeMap<String, Integer> sortedProductsSale = new TreeMap<String, Integer>(valueComparator);
            sortedProductsSale.putAll(firstWordAndRequestCount);
            result.set(sortedProductsSale.firstKey() + "__" + requestCount);
            context.write(key, result);
        }

        class ValueComparator implements Comparator<String> {
            Map<String, Integer> base;

            public ValueComparator(Map<String, Integer> base) {
                this.base = base;
            }

            public int compare(String a, String b) {
                if (base.get(a) >= base.get(b)) {
                    return -1;
                } else {
                    return 1;
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {

        Job job = new org.apache.hadoop.mapreduce.Job();
        job.setJarByClass(BoobleByMinutes.class);
        job.setJobName("Booble mot le plus recherché et somme de requete par tranche de 30 minutes");

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setJarByClass(BoobleByMinutes.class);
        job.setMapperClass(TokenizerMapper.class);
//      job.setCombinerClass(PriceSumReducer.class);
        job.setReducerClass(PriceSumReducer.class);

        job.setNumReduceTasks(1);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

@Radim When I run the jar on a real Hadoop installation with YARN, I get number of splits = 2.

The log is below:
16/07/18 02:56:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/18 02:56:40 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/07/18 02:56:42 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
16/07/18 02:56:42 INFO input.FileInputFormat: Total input paths to process : 2
16/07/18 02:56:43 INFO mapreduce.JobSubmitter: number of splits:2
16/07/18 02:56:43 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1468802929497_0002
16/07/18 02:56:44 INFO impl.YarnClientImpl: Submitted application application_1468802929497_0002
16/07/18 02:56:44 INFO mapreduce.Job: The url to track the job: http://moussa:8088/proxy/application_1468802929497_0002/
16/07/18 02:56:44 INFO mapreduce.Job: Running job: job_1468802929497_0002
16/07/18 02:56:56 INFO mapreduce.Job: Job job_1468802929497_0002 running in uber mode : false
16/07/18 02:56:56 INFO mapreduce.Job:  map 0% reduce 0%
16/07/18 02:57:14 INFO mapreduce.Job:  map 100% reduce 0%
16/07/18 02:57:23 INFO mapreduce.Job:  map 100% reduce 100%
16/07/18 02:57:25 INFO mapreduce.Job: Job job_1468802929497_0002 completed successfully
16/07/18 02:57:25 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=66
        FILE: Number of bytes written=352628
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=278
        HDFS: Number of bytes written=31
        HDFS: Number of read operations=9
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters 
        Launched map tasks=2
        Launched reduce tasks=1
        Data-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=29431
        Total time spent by all reduces in occupied slots (ms)=6783
        Total time spent by all map tasks (ms)=29431
        Total time spent by all reduce tasks (ms)=6783
        Total vcore-milliseconds taken by all map tasks=29431
        Total vcore-milliseconds taken by all reduce tasks=6783
        Total megabyte-milliseconds taken by all map tasks=30137344
        Total megabyte-milliseconds taken by all reduce tasks=6945792
    Map-Reduce Framework
        Map input records=2
        Map output records=2
        Map output bytes=56
        Map output materialized bytes=72
        Input split bytes=194
        Combine input records=0
        Combine output records=0
        Reduce input groups=1
        Reduce shuffle bytes=72
        Reduce input records=2
        Reduce output records=1
        Spilled Records=4
        Shuffled Maps =2
        Failed Shuffles=0
        Merged Map outputs=2
        GC time elapsed (ms)=460
        CPU time spent (ms)=2240
        Physical memory (bytes) snapshot=675127296
        Virtual memory (bytes) snapshot=5682606080
        Total committed heap usage (bytes)=529465344
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=84
    File Output Format Counters 
        Bytes Written=31

2 Answers:

Answer 0 (score: 1)

In your main (job) method, these lines are duplicated:

FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));

Also duplicated: job.setJarByClass(BoobleByMinutes.class);

But it is this line that causes the duplicated input: FileInputFormat.addInputPath(job, new Path(args[0]));
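
As an illustration (a minimal, standalone sketch with a made-up path, not your actual job): addInputPath appends to the job's list of input paths instead of replacing it, so calling it twice with the same file registers that file twice, and each registered entry is listed on its own when splits are computed. Printing the registered paths makes this visible:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class InputPathCheck {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        // Same duplicated call pattern as in the question's main():
        FileInputFormat.addInputPath(job, new Path("/tmp/requests.log"));
        FileInputFormat.addInputPath(job, new Path("/tmp/requests.log"));
        // Prints the same path twice: each entry produces its own split,
        // so a single one-line file ends up with two map tasks.
        for (Path p : FileInputFormat.getInputPaths(job)) {
            System.out.println("registered input path: " + p);
        }
    }
}

That is consistent with the "Total input paths to process : 2" and "number of splits:2" lines in the log above.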

So your main method should be:

 public static void main(String[] args) throws Exception {

        Job job = new org.apache.hadoop.mapreduce.Job();
        job.setJarByClass(BoobleByMinutes.class);
        job.setJobName("Booble mot le plus recherché et somme de requete par tranche de 30 minutes");

        job.setMapperClass(TokenizerMapper.class);
//      job.setCombinerClass(PriceSumReducer.class);
        job.setReducerClass(PriceSumReducer.class);

        job.setNumReduceTasks(1);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Text.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }

Answer 1 (score: 0)

I got the solution from this link: why is my sequence file being read twice in my hadoop mapper class?

I hadn't noticed that I was getting "Total input paths to process : 2" in my log. As they say in the link, I just had to comment out the line

FileInputFormat.addInputPath(job, new Path(args[0]));

I don't understand the comment "this line just adds the input back to the configuration". Could someone please explain? Any ideas are appreciated.
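
My tentative understanding (just a guess, sketched below with a made-up path and the Hadoop 2.x property name, so please correct me): addInputPath stores the input path in the job configuration as a comma-separated property, and a second call appends another entry to that property instead of overwriting it, which would explain why the same file ends up listed twice.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class InputDirProperty {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        FileInputFormat.addInputPath(job, new Path("/tmp/requests.log"));
        // "Adding the input to the configuration": the path is stored in a
        // comma-separated configuration property; a second addInputPath call
        // appends another entry rather than replacing the first one.
        System.out.println(
                job.getConfiguration().get("mapreduce.input.fileinputformat.inputdir"));
    }
}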
