Reading a HAR file from the DistributedCache in MapReduce

Asked: 2013-03-04 12:51:10

Tags: mapreduce hdfs cloudera distributed-cache

I wrote an Oozie workflow that creates a HAR archive and then runs an MR job that needs to read data from that archive.

1. The archive gets created.
2. When the job runs, the mapper does see the archive in the distributed cache.
3. ??? How do I read this archive? What is the API for reading data from it line by line (mine is a bunch of newline-delimited text files)?

Note: everything works fine when I use an ordinary file (not a HAR archive) stored in the DistributedCache. The problem only appears when I try to read data from the HAR.

Here is the code snippet:

    InputStream inputStream;
    String cachedDatafileName = System.getProperty(DIST_CACHE_FILE_NAME);
    LOG.info(String.format("Looking for [%s]=[%s] in DistributedCache", DIST_CACHE_FILE_NAME, cachedDatafileName));

    URI[] uris = DistributedCache.getCacheArchives(getContext().getConfiguration());
    URI uriToCachedDatafile = null;
    for (URI uri : uris) {
        if (uri.toString().endsWith(cachedDatafileName)) {
            uriToCachedDatafile = uri;
            break;
        }
    }
    if (uriToCachedDatafile == null) {
        throw new RuntimeConfigurationException(String.format(
                "Looking for [%s]=[%s] in DistributedCache failed. There is no such file",
                DIST_CACHE_FILE_NAME, cachedDatafileName));
    }

    Path pathToFile = new Path(uriToCachedDatafile);
    LOG.info(String.format("[%s] has been found. Uri is: [%s]. The path is: [%s]", cachedDatafileName, uriToCachedDatafile, pathToFile));

    FileSystem fileSystem = pathToFile.getFileSystem(getContext().getConfiguration());
    HarFileSystem harFileSystem = new HarFileSystem(fileSystem);
    inputStream = harFileSystem.open(pathToFile); // NULL POINTER EXCEPTION IS HERE!
    return inputStream;

1 answer:

Answer 0 (score: 0):

    protected InputStream getInputStreamToDistCacheFile() throws IOException {
        String cachedDatafileName = System.getProperty(DIST_CACHE_FILE_NAME);
        LOG.info(String.format("Looking for [%s]=[%s] in DistributedCache", DIST_CACHE_FILE_NAME, cachedDatafileName));

        URI[] uris = DistributedCache.getCacheArchives(getContext().getConfiguration());
        URI uriToCachedDatafile = null;
        for (URI uri : uris) {
            if (uri.toString().endsWith(cachedDatafileName)) {
                uriToCachedDatafile = uri;
                break;
            }
        }
        if (uriToCachedDatafile == null) {
            throw new RuntimeConfigurationException(String.format(
                    "Looking for [%s]=[%s] in DistributedCache failed. There is no such file",
                    DIST_CACHE_FILE_NAME, cachedDatafileName));
        }

        // The path to the entry has to be built by hand: the har: scheme, the
        // full path to the .har archive, then the path of the file inside it.
        Path pathToFile = new Path("har:///home/ssa/devel/megalabs/kyc-solution/kyc-mrjob/target/test-classes/GSMCellSubscriberHomeIntersectionJobDescriptionClusterMRTest/in/gsm_cell_location_stf.har/stf/db_bts_stf.txt");

        LOG.info(String.format("[%s] has been found. Uri is: [%s]. The path is: [%s]", cachedDatafileName, uriToCachedDatafile, pathToFile));
        // getFileSystem() on a har:// path returns an already-initialized
        // HarFileSystem, unlike wrapping the underlying FileSystem by hand.
        FileSystem harFileSystem = pathToFile.getFileSystem(getContext().getConfiguration());
        FSDataInputStream fin = harFileSystem.open(pathToFile);
        LOG.info("fin: " + fin);

        FileStatus[] statuses = harFileSystem.listStatus(new Path("har:///home/ssa/devel/mycompany/my-solution/my-mrjob/target/test-classes/HomeJobDescriptionClusterMRTest/in/locations.har"));
        for (FileStatus fileStatus : statuses) {
            LOG.info("fileStatus isDir: " + fileStatus.isDirectory() + " len: " + fileStatus.getLen());
        }

        return fin;
    }
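
For reference, the NullPointerException in the question is consistent with the manually wrapped HarFileSystem never being initialize()d, so its archive index is still unread when open() is called; resolving the entry through the har: scheme, as the method above does, lets Hadoop perform that initialization itself. A condensed sketch of the same pattern with the hard-coded path replaced by the cache URI found earlier (an assumption: the archive lives on the default filesystem, so its bare path is meaningful to the har: scheme; the entry name /stf/db_bts_stf.txt is the one from the method above):

    // Sketch: build a har:// path to an entry inside the cached archive and
    // let getFileSystem() return an initialized HarFileSystem for it.
    // uriToCachedDatafile.getPath() already starts with a slash, so the result
    // has the har:///path/to/archive.har/entry form.
    Path entry = new Path("har://" + uriToCachedDatafile.getPath() + "/stf/db_bts_stf.txt");
    FileSystem harFs = entry.getFileSystem(getContext().getConfiguration());
    InputStream in = harFs.open(entry);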

As you can see, this is terrible. You effectively have to read the index file stored inside the archive by hand and rebuild the paths from its metadata; if you know the exact name of a file stored in the archive (as in my example), you can build the path yourself.
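
With the path built and the entry open, the line-by-line reading the question asks about is plain java.io over the returned stream; a minimal sketch using the fin stream from the method above, assuming the entries are UTF-8 text (the encoding is not stated in the question):

    // Sketch: read the newline-delimited text entry line by line.
    BufferedReader reader = new BufferedReader(new InputStreamReader(fin, "UTF-8"));
    try {
        String line;
        while ((line = reader.readLine()) != null) {
            LOG.info("line: " + line);   // process one record per line
        }
    } finally {
        reader.close();                  // also closes the underlying har stream
    }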

This is inconvenient. I expected something like Zip -> ZipEntry, where you can iterate over the entries of an archive without knowing its structure in advance.
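
That Zip -> ZipEntry style iteration can at least be approximated with listStatus() on the har filesystem; a sketch that walks every entry without knowing the archive layout in advance (same default-filesystem assumption as the sketch above):

    // Sketch: enumerate all entries of the archive by walking the har
    // filesystem with an explicit stack of directories still to visit.
    Path harRoot = new Path("har://" + uriToCachedDatafile.getPath());
    FileSystem harFs = harRoot.getFileSystem(getContext().getConfiguration());
    java.util.Deque<Path> pending = new java.util.ArrayDeque<Path>();
    pending.push(harRoot);
    while (!pending.isEmpty()) {
        for (FileStatus status : harFs.listStatus(pending.pop())) {
            if (status.isDirectory()) {
                pending.push(status.getPath());  // descend into sub-directories
            } else {
                LOG.info("entry: " + status.getPath() + " len: " + status.getLen());
            }
        }
    }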