Hadoop 2.7.3: DataNode not running

Asked: 2017-01-27 19:38:16

Tags: hadoop hadoop2

I am running a 3-node cluster on VirtualBox: master, slave1 and slave2. I am getting the following exception:

java.io.EOFException: End of File Exception between local host is: "master/10.0.0.1"; destination host is: "master":9000; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException

My configuration is as follows:

core-site.xml

<configuration>
    <property>        
        <name>fs.default.name</name>         
        <value>hdfs://master:9000/</value>         
        <description>NameNode URI</description>     
    </property>
</configuration>
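(As a side note, I am aware that `fs.default.name` is deprecated in Hadoop 2.x in favor of `fs.defaultFS`; the old key still works but logs a deprecation warning. If it matters, I assume the equivalent setting would be:)

```xml
<!-- fs.defaultFS is the non-deprecated key for the NameNode URI in Hadoop 2.x;
     the value is the same one used above -->
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000/</value>
</property>
```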

hdfs-site.xml

<configuration>
    <property>         
        <name>dfs.namenode.name.dir</name>         
        <value>file:/usr/local/tmp_hadoop/hdfs/namenode</value>        
        <description>NameNode directory for namespace and transaction logs storage.</description>     
    </property>
    <property>         
        <name>dfs.datanode.data.dir</name>         
        <value>file:/usr/local/tmp_hadoop/hdfs/datanode</value>        
        <description>DataNode directory</description>
    </property>           
    <property>         
        <name>dfs.replication</name>         
        <value>2</value>     
    </property>
</configuration>

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
            <name>mapreduce.jobhistory.address</name>
            <value>master:10020</value>
    </property>
</configuration>

yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8025</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8050</value>
    </property>
</configuration>

Here is the relevant portion of the DataNode log file:

************************************************************/
2017-01-26 23:08:31,358 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-01-26 23:08:32,854 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-01-26 23:08:33,022 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-01-26 23:08:33,022 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2017-01-26 23:08:33,030 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2017-01-26 23:08:33,035 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is master
2017-01-26 23:08:33,049 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2017-01-26 23:08:33,094 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2017-01-26 23:08:33,099 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2017-01-26 23:08:33,099 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 5
2017-01-26 23:08:33,258 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-01-26 23:08:33,270 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-01-26 23:08:33,278 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2017-01-26 23:08:33,285 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-01-26 23:08:33,288 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2017-01-26 23:08:33,288 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-01-26 23:08:33,289 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-01-26 23:08:33,317 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 33116
2017-01-26 23:08:33,319 INFO org.mortbay.log: jetty-6.1.26
2017-01-26 23:08:33,653 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33116
2017-01-26 23:08:33,888 INFO org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.0:50075
2017-01-26 23:08:34,840 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = hduser
2017-01-26 23:08:34,840 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2017-01-26 23:08:35,048 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2017-01-26 23:08:35,207 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2017-01-26 23:08:35,287 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2017-01-26 23:08:35,339 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2017-01-26 23:08:35,425 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2017-01-26 23:08:35,511 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to master/10.0.0.1:9000 starting to offer service
2017-01-26 23:08:35,554 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2017-01-26 23:08:35,564 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2017-01-26 23:08:36,592 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2017-01-26 23:08:36,613 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/tmp_hadoop/hdfs/datanode/in_use.lock acquired by nodename 2758@master
2017-01-26 23:08:36,614 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /usr/local/tmp_hadoop/hdfs/datanode is not formatted for namespace 289717595. Formatting...
2017-01-26 23:08:36,615 INFO org.apache.hadoop.hdfs.server.common.Storage: Generated new storageID DS-f1e0d3a2-ef9e-4d34-8838-fcd8aea22636 for directory /usr/local/tmp_hadoop/hdfs/datanode
2017-01-26 23:08:36,870 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-593192787-10.0.0.1-1485497283439
2017-01-26 23:08:36,870 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled for /usr/local/tmp_hadoop/hdfs/datanode/current/BP-593192787-10.0.0.1-1485497283439
2017-01-26 23:08:36,880 INFO org.apache.hadoop.hdfs.server.common.Storage: Block pool storage directory /usr/local/tmp_hadoop/hdfs/datanode/current/BP-593192787-10.0.0.1-1485497283439 is not formatted for BP-593192787-10.0.0.1-1485497283439. Formatting ...
2017-01-26 23:08:36,880 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-593192787-10.0.0.1-1485497283439 directory /usr/local/tmp_hadoop/hdfs/datanode/current/BP-593192787-10.0.0.1-1485497283439/current
2017-01-26 23:08:36,891 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=289717595;bpid=BP-593192787-10.0.0.1-1485497283439;lv=-56;nsInfo=lv=-63;cid=CID-ebf46119-3786-4df3-90a5-2e4e964dc362;nsid=289717595;c=0;bpid=BP-593192787-10.0.0.1-1485497283439;dnuuid=null
2017-01-26 23:08:36,893 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and persisted new Datanode UUID 53960dd8-53a0-4878-9211-76ae6a08310c
2017-01-26 23:08:37,045 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added new volume: DS-f1e0d3a2-ef9e-4d34-8838-fcd8aea22636
2017-01-26 23:08:37,045 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /usr/local/tmp_hadoop/hdfs/datanode/current, StorageType: DISK
2017-01-26 23:08:37,060 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2017-01-26 23:08:37,063 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-593192787-10.0.0.1-1485497283439
2017-01-26 23:08:37,064 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-593192787-10.0.0.1-1485497283439 on volume /usr/local/tmp_hadoop/hdfs/datanode/current...
2017-01-26 23:08:37,097 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-593192787-10.0.0.1-1485497283439 on /usr/local/tmp_hadoop/hdfs/datanode/current: 33ms
2017-01-26 23:08:37,097 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-593192787-10.0.0.1-1485497283439: 34ms
2017-01-26 23:08:37,103 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-593192787-10.0.0.1-1485497283439 on volume /usr/local/tmp_hadoop/hdfs/datanode/current...
2017-01-26 23:08:37,103 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-593192787-10.0.0.1-1485497283439 on volume /usr/local/tmp_hadoop/hdfs/datanode/current: 0ms
2017-01-26 23:08:37,103 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 6ms
2017-01-26 23:08:37,109 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: Now scanning bpid BP-593192787-10.0.0.1-1485497283439 on volume /usr/local/tmp_hadoop/hdfs/datanode
2017-01-26 23:08:37,111 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/usr/local/tmp_hadoop/hdfs/datanode, DS-f1e0d3a2-ef9e-4d34-8838-fcd8aea22636): finished scanning block pool BP-593192787-10.0.0.1-1485497283439
2017-01-26 23:08:37,154 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1485512456154 with interval 21600000
2017-01-26 23:08:37,186 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-593192787-10.0.0.1-1485497283439 (Datanode Uuid null) service to master/10.0.0.1:9000 beginning handshake with NN
2017-01-26 23:08:37,311 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-593192787-10.0.0.1-1485497283439 (Datanode Uuid null) service to master/10.0.0.1:9000 successfully registered with NN
2017-01-26 23:08:37,312 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode master/10.0.0.1:9000 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2017-01-26 23:08:37,465 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/usr/local/tmp_hadoop/hdfs/datanode, DS-f1e0d3a2-ef9e-4d34-8838-fcd8aea22636): no suitable block pools found to scan.  Waiting 1814399644 ms.
2017-01-26 23:08:37,500 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-593192787-10.0.0.1-1485497283439 (Datanode Uuid 53960dd8-53a0-4878-9211-76ae6a08310c) service to master/10.0.0.1:9000 trying to claim ACTIVE state with txid=1
2017-01-26 23:08:37,500 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-593192787-10.0.0.1-1485497283439 (Datanode Uuid 53960dd8-53a0-4878-9211-76ae6a08310c) service to master/10.0.0.1:9000
2017-01-26 23:08:37,599 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0x1deab379c7,  containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 2 msec to generate and 96 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2017-01-26 23:08:37,601 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-593192787-10.0.0.1-1485497283439
2017-01-27 00:15:16,954 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0x3c11c8860e8,  containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 0 msec to generate and 1 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2017-01-27 00:15:16,955 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-593192787-10.0.0.1-1485497283439
2017-01-27 00:25:05,061 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
java.io.EOFException: End of File Exception between local host is: "master/10.0.0.1"; destination host is: "master":9000; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:765)
    at org.apache.hadoop.ipc.Client.call(Client.java:1479)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy15.sendHeartbeat(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:152)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:554)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:653)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:824)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1084)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:979)
2017-01-27 00:25:09,062 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/10.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2017-01-27 00:25:09,880 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM
2017-01-27 00:25:09,882 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at master/10.0.0.1
************************************************************/
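One thing I suspect is hostname resolution: on VirtualBox setups, `master` often resolves to a loopback address such as `127.0.1.1` in `/etc/hosts`, which is a common cause of heartbeat EOFExceptions like the one above. Here is a small helper I could use to check the mapping (the function name and sample file are my own, just for illustration):

```shell
# check_host_mapping: print the address(es) an /etc/hosts-style file assigns
# to a hostname, warning if any of them is a loopback address.
check_host_mapping() {
  local file="$1" host="$2"
  # skip comment lines; a hosts line is "address name [aliases...]"
  awk -v h="$host" '$0 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == h) print $1 }' "$file" |
  while read -r addr; do
    case "$addr" in
      127.*) echo "WARN: $host resolves to loopback $addr" ;;
      *)     echo "OK: $host -> $addr" ;;
    esac
  done
}

# Example: a hosts file where "master" is wrongly mapped to 127.0.1.1
# in addition to its real address.
printf '127.0.0.1 localhost\n127.0.1.1 master\n10.0.0.1 master\n' > /tmp/hosts.sample
check_host_mapping /tmp/hosts.sample master
# → WARN: master resolves to loopback 127.0.1.1
# → OK: master -> 10.0.0.1
```

On the real cluster I would run it against `/etc/hosts` on each node; if the loopback line shows up, removing it (so `master` maps only to 10.0.0.1) is the usual fix suggested for this class of problem.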

Any help would be highly appreciated.
