MapReduce to HBase output stuck at map 100% reduce 100%

Date: 2016-11-22 13:30:46

Tags: hadoop mapreduce hbase cloudera

I'm running a MapReduce job that reads files from HDFS and writes to HBase.

I've simplified the process as much as possible. This is the source code:

import java.io.IOException;
import java.util.Date;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WriteHBaseDriver extends Configured implements Tool {

    private static Configuration conf = null;

    public static void main(String[] args) {
        try {
            int exitCode = ToolRunner.run(new Configuration(), new WriteHBaseDriver(), args);
            System.exit(exitCode);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        // HBaseConfig is my own helper class that builds the HBase Configuration
        conf = HBaseConfig.getConfiguration();

        // Read text files from HDFS and write into the HBase table via TableOutputFormat
        Job job = Job.getInstance(conf, WriteHBaseDriver.class.getSimpleName());
        job.setJarByClass(WriteHBaseDriver.class);
        job.setMapperClass(WriteHBaseMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputFormatClass(TableOutputFormat.class);
        job.setReducerClass(WriteHBaseReducer.class);
        job.setNumReduceTasks(1);
        job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, "NAMESPACE_NAME:TABLE_NAME");
        FileInputFormat.addInputPath(job, new Path("/user/myuser/data/input/"));

        return job.waitForCompletion(true) ? 0 : 1;
    }
}

// Mapper: emits a constant ("key", 1) pair for every input line
public class WriteHBaseMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    @Override
    public void map(LongWritable offset, Text record, Context context)
            throws IOException, InterruptedException {
        context.write(new Text("key"), new IntWritable(1));
    }
}

// Reducer: writes one Put per key into the HBase table
public class WriteHBaseReducer extends TableReducer<Text, IntWritable, ImmutableBytesWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {

        Put put = new Put(Bytes.toBytes(new Date().getTime()));
        String family = "M";
        String qualifier = "D";
        put.addColumn(Bytes.toBytes(family), Bytes.toBytes(qualifier), Bytes.toBytes("value"));

        context.write(new ImmutableBytesWritable(Bytes.toBytes("NAMESPACE_NAME:TABLE_NAME")), put);
    }
}
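
HBaseConfig is a small helper class of mine that builds the HBase Configuration; its code is not shown here. Just so the snippet is self-contained, a hypothetical minimal version (an assumption for illustration, not the actual helper) would be:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Hypothetical stand-in for the HBaseConfig helper used above (the real implementation is not shown):
// it simply returns a Configuration seeded from whatever hbase-*.xml files are on the classpath.
public class HBaseConfig {
    public static Configuration getConfiguration() {
        return HBaseConfiguration.create();
    }
}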

The cluster is a recently installed Cloudera cluster (CDH 5.9.0) with one master node and 4 region servers. A single ZooKeeper server is installed, on the master node.

When the job is run with hadoop jar, everything seems to run fine. But when it reaches map 100% reduce 100%, it gets stuck and nothing is written to HBase.

No failure message is shown; the only error message I could find is:

2016-11-21 12:52:23,584 INFO org.apache.hadoop.hbase.zookeeper.MetaTableLocator: Failed verification of hbase:meta,,1 at address=mdmtsthfs1.corp.ute.com.uy,60020,1479743178098, exception=org.apache.hadoop.hbase.NotServingRegionException: Region hbase:meta,,1 is not online on mdmtsthfs1.corp.ute.com.uy,60020,1479743524683
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2921)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1053)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionInfo(RSRpcServices.java:1333)
    at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22233)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)

I don't even know whether it's related.

What am I missing here?

Found this error trace from ZooKeeper:

    2016-11-22 15:21:02,882 INFO [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2016-11-22 15:21:02,883 WARN [main-SendThread(localhost:2181)] org.apache.zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2016-11-22 15:21:02,983 ERROR [main] org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper: ZooKeeper getData failed after 4 attempts
2016-11-22 15:21:02,983 WARN [main] org.apache.hadoop.hbase.zookeeper.ZKUtil: hconnection-0x4e90b4f40x0, quorum=localhost:2181, baseZNode=/hbase Unable to get data of znode /hbase/meta-region-server
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1151)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:623)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionState(MetaTableLocator.java:479)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionLocation(MetaTableLocator.java:165)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:597)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:577)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:556)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1195)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1179)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1365)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1199)
    at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:395)
    at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:344)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:238)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.close(BufferedMutatorImpl.java:163)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.close(TableOutputFormat.java:120)
    at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:550)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:629)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
2016-11-22 15:21:02,984 ERROR [main] org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher: hconnection-0x4e90b4f40x0, quorum=localhost:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1151)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:359)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:623)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionState(MetaTableLocator.java:479)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionLocation(MetaTableLocator.java:165)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:597)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:577)
    at org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:556)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1195)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1179)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1365)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1199)
    at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:395)
    at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:344)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:238)
    at org.apache.hadoop.hbase.client.BufferedMutatorImpl.close(BufferedMutatorImpl.java:163)
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.close(TableOutputFormat.java:120)
    at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:550)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:629)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

1 Answer:

Answer 0 (score: 0)

It's solved.

The ZooKeeper quorum and the ZooKeeper client port had to be set:

    conf.set("hbase.zookeeper.quorum",<ips list>);
    conf.set("hbase.zookeeper.property.clientPort",<port>);

I'm not sure why this configuration isn't picked up directly from hbase-site.xml. Note that the process ran fine on the QuickStart environment, but showed this behaviour on the recently installed cluster. With these two properties set, everything works now. Thanks a lot.
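
For reference, an alternative to hard-coding these properties is to let the HBase MapReduce helpers merge hbase-site.xml into the job configuration. The sketch below is an assumption based on the standard HBase API, not what I actually ran: HBaseConfiguration.create() layers hbase-default.xml/hbase-site.xml from the classpath on top of the Hadoop configuration, and TableMapReduceUtil.initTableReducerJob copies the resulting connection properties (ZooKeeper quorum and client port included) into the job before submission. It reuses the mapper and reducer from the question.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class WriteHBaseDriverAlt {
        public static void main(String[] args) throws Exception {
            // Picks up hbase-site.xml (hbase.zookeeper.quorum, clientPort, ...) from the classpath
            Configuration conf = HBaseConfiguration.create();

            Job job = Job.getInstance(conf, "WriteHBaseDriverAlt");
            job.setJarByClass(WriteHBaseDriverAlt.class);
            job.setMapperClass(WriteHBaseMapper.class);
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(IntWritable.class);
            job.setNumReduceTasks(1);
            FileInputFormat.addInputPath(job, new Path("/user/myuser/data/input/"));

            // Sets TableOutputFormat and the output table, merges the HBase connection
            // properties into the job configuration, and ships the HBase jars with the job
            TableMapReduceUtil.initTableReducerJob("NAMESPACE_NAME:TABLE_NAME",
                    WriteHBaseReducer.class, job);

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

This only helps if hbase-site.xml is actually on the submitting client's classpath (for example by launching with HADOOP_CLASSPATH=$(hbase classpath) hadoop jar ...).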
