Hadoop-2.2.0 NameNode startup problem

Time: 2014-01-04 12:18:33

Tags: hadoop

I am new to Hadoop and ran into the following problem when starting the NameNode with ./hadoop-daemon.sh start namenode.

The steps I followed:

1. Downloaded an Ubuntu 13 VM and installed Java 1.6 and hadoop-2.2.0
2. Updated the configuration files
3. Ran hadoop namenode -format
4. Ran ./hadoop-daemon.sh start namenode from the sbin directory (the full command sequence is sketched after this list)
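Written out as shell commands, the sequence I intended to run is roughly the following sketch (HADOOP_HOME here stands for my hadoop-2.2.0 install directory, and the name directory path matches the one shown in the log below):

cd $HADOOP_HOME
# Format the NameNode; note the plain ASCII hyphen in -format.
bin/hadoop namenode -format
# A successful format should leave a current/VERSION file in the name directory.
ls /home/user/hadoop2_data/hdfs/namenode/current
# Only then start the daemon and check that the process stays up.
sbin/hadoop-daemon.sh start namenode
jps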

The error is:

2014-01-04 06:55:48,561 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2014-01-04 06:55:48,565 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2014-01-04 06:55:48,565 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2014-01-04 06:55:48,571 INFO org.apache.hadoop.util.GSet: 2.0% max memory = 888.9 MB
2014-01-04 06:55:48,571 INFO org.apache.hadoop.util.GSet: capacity      = 2^22 = 4194304 entries
2014-01-04 06:55:48,603 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 1
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2014-01-04 06:55:48,604 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
2014-01-04 06:55:48,605 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2014-01-04 06:55:48,605 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2014-01-04 06:55:48,616 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = user (auth:SIMPLE)
2014-01-04 06:55:48,617 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2014-01-04 06:55:48,617 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2014-01-04 06:55:48,617 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2014-01-04 06:55:48,621 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2014-01-04 06:55:48,717 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2014-01-04 06:55:48,717 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2014-01-04 06:55:48,717 INFO org.apache.hadoop.util.GSet: 1.0% max memory = 888.9 MB
2014-01-04 06:55:48,717 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2014-01-04 06:55:48,732 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2014-01-04 06:55:48,738 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2014-01-04 06:55:48,738 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2014-01-04 06:55:48,738 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2014-01-04 06:55:48,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2014-01-04 06:55:48,740 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2014-01-04 06:55:48,744 INFO org.apache.hadoop.util.GSet: Computing capacity for map Namenode Retry Cache
2014-01-04 06:55:48,744 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2014-01-04 06:55:48,744 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory = 888.9 MB
2014-01-04 06:55:48,744 INFO org.apache.hadoop.util.GSet: capacity      = 2^16 = 65536 entries
2014-01-04 06:55:48,768 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/user/hadoop2_data/hdfs/namenode/in_use.lock acquired by nodename 12574@ubuntuvm
2014-01-04 06:55:48,785 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:50070
2014-01-04 06:55:48,789 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2014-01-04 06:55:48,791 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2014-01-04 06:55:48,791 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2014-01-04 06:55:48,793 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: NameNode is not formatted.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:210)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:787)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:568)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:443)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:491)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:684)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
2014-01-04 06:55:48,798 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2014-01-04 06:55:48,803 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntuvm/127.0.1.1
************************************************************/

Can anyone help me fix this? I have tried googling but still have not found a solution.

1 Answer:

Answer 0 (score: 0)

It looks like your "hadoop namenode -format" did not take effect (I assume you have already tried re-running that command and it still fails). When you run hadoop namenode -format, the user you run it as must have write access to the directories configured in dfs.data.dir and dfs.name.dir.

By default, these are set to:

${hadoop.tmp.dir}/dfs/data

${hadoop.tmp.dir}/dfs/name

where hadoop.tmp.dir is another configuration property that defaults to /tmp/hadoop-${username}.

So by default the Hadoop data files end up under /tmp, which is not ideal, especially if you have scripts that clean out that directory.
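If you are unsure which directory the NameNode is actually resolving, a quick check is sketched below (hdfs getconf should be available in Hadoop 2.2.0; the /tmp path is only the default when nothing else is configured):

bin/hdfs getconf -confKey dfs.namenode.name.dir
# Default location when no name directory is configured:
ls -ld /tmp/hadoop-$(whoami)/dfs/name
# Simple write-access test on the directory the NameNode will use:
touch /tmp/hadoop-$(whoami)/dfs/name/.write_test && rm /tmp/hadoop-$(whoami)/dfs/name/.write_test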

Make sure you have set dfs.data.dir and dfs.name.dir (in Hadoop 2.x the equivalent properties are dfs.datanode.data.dir and dfs.namenode.name.dir, conventionally placed in hdfs-site.xml rather than core-site.xml) to directories that are writable by the user who runs the Hadoop admin commands and starts the Hadoop daemons. Then reformat HDFS and try again.
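A minimal sketch of what that could look like, assuming the daemons run as the "user" account seen in the log and that /home/user/hadoop2_data is the chosen data root (both are assumptions; adjust to your setup):

mkdir -p /home/user/hadoop2_data/hdfs/namenode /home/user/hadoop2_data/hdfs/datanode
chown -R user:user /home/user/hadoop2_data
# In etc/hadoop/hdfs-site.xml, point the Hadoop 2.x properties at those directories:
#   dfs.namenode.name.dir  = file:///home/user/hadoop2_data/hdfs/namenode
#   dfs.datanode.data.dir  = file:///home/user/hadoop2_data/hdfs/datanode
# Reformatting wipes any existing HDFS metadata, so only do this on a fresh install.
bin/hadoop namenode -format
sbin/hadoop-daemon.sh start namenode

If the format succeeds, the FATAL "NameNode is not formatted" error should no longer appear on the next start.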
