How do I read a CSV file in Map/Reduce?

Asked: 2013-10-18 23:31:18

Tags: csv hadoop mapreduce

I have a large comma-delimited CSV file, 6 GB in size. Below is the mapper function:

@Override
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
    String[] tokens = value.toString().split(",");

    String crimeType = tokens[5].trim();  // the column at index 5 in the CSV file is the crime type; it serves as the key
//  int year = Integer.parseInt(tokens[17].trim()); // the year the crime happened

    int year = 2010;

    CrimeTypeKey crimeTypeYearKey = new CrimeTypeKey(crimeType, year);

    context.write(crimeTypeYearKey, ONE);
}

As you can see, I use ".split" to break up each line into columns. I would like to know how to use OpenCSV in this situation. Please give me an example, thanks a lot.
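For reference, here is a minimal sketch of what the mapper might look like with OpenCSV's CSVParser in place of String.split. It assumes OpenCSV is on the job classpath (the com.opencsv package in current releases; releases from around 2013 use au.com.bytecode.opencsv instead) and reuses the CrimeTypeKey writable and the ONE counter from the code above. The advantage over split(",") is that parseLine respects quoting, so a comma inside a quoted field such as "ROBBERY, ARMED" does not split the record into extra tokens.

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import com.opencsv.CSVParser; // older OpenCSV releases: au.com.bytecode.opencsv.CSVParser

public class CrimeMapper extends Mapper<LongWritable, Text, CrimeTypeKey, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);

    // One parser per mapper instance; parseLine is called once per input record.
    private final CSVParser parser = new CSVParser();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Unlike split(","), parseLine honors quotes, so a quoted field
        // that contains commas comes back as a single token.
        String[] tokens = parser.parseLine(value.toString());

        String crimeType = tokens[5].trim(); // the column at index 5 is the crime type
        int year = 2010;                     // placeholder, as in the original mapper

        context.write(new CrimeTypeKey(crimeType, year), ONE);
    }
}

One caveat: with the default TextInputFormat each call to map receives a single physical line, so this handles embedded commas but not records that span multiple lines (quoted fields containing newlines); those would need a CSV-aware InputFormat.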

0 Answers:

No answers