SecondaryNameNode on master and DataNode on slave fail to start — Hadoop 2.6.0

Date: 2015-06-29 06:06:43

Tags: hadoop

After I start Hadoop with start-all.sh, the DataNode and SecondaryNameNode do not come up on the master or on the slave DataNode hosts. When I troubleshoot by running hdfs datanode directly, I get this error:

15/06/29 11:06:34 INFO datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/06/29 11:06:35 WARN common.Util: Path /var/lib/hadoop/hdfs/datanode should be specified as a URI in configuration files. Please update hdfs configuration.
15/06/29 11:06:35 FATAL datanode.DataNode: Exception in secureMain
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
        at org.apache.hadoop.security.Groups.<init>(Groups.java:70)
        at org.apache.hadoop.security.Groups.<init>(Groups.java:66)
        at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:280)
        at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:271)
        at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:299)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2152)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2202)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2378)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2402)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
        ... 9 more
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative()V
        at org.apache.hadoop.security.JniBasedUnixGroupsMapping.anchorNative(Native Method)
        at org.apache.hadoop.security.JniBasedUnixGroupsMapping.<clinit>(JniBasedUnixGroupsMapping.java:49)
        at org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback.<init>(JniBasedUnixGroupsMappingWithFallback.java:39)
        ... 14 more
15/06/29 11:06:35 INFO util.ExitUtil: Exiting with status 1
15/06/29 11:06:35 INFO datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at localserver39/10.200.208.28

What is wrong with my DataNode on the slave and the SecondaryNameNode on the master?

Running start-dfs.sh on the master gives this output:

hadoop@10.200.208.29's password: 10.200.208.28: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-localserver39.out
10.200.208.28: nice: /usr/libexec/../bin/hdfs: No such file or directory
Starting secondary namenodes [0.0.0.0]
hadoop@0.0.0.0's password:
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-MC-RND-1.out

Afterwards, jps gives this:

bash-3.2$ jps
8103 Jps
7437 DataNode
7309 NameNode

core-site.xml

<configuration>
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.200.208.29:9000/</value>
</property>

</configuration>

hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>

<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>

<property>
   <name>dfs.datanode.data.dir</name>
   <value>/Backup-HDD/hadoop/datanode</value>
</property>

<property>
        <name>dfs.namenode.data.dir</name>
        <value>/Backup-HDD/hadoop/namenode</value>
</property>


<property>
  <name>dfs.name.dir</name>
    <value>/Backup-HDD/hadoop/namenode</value>
</property>

<property>
  <name>dfs.data.dir</name>
    <value>/Backup-HDD/hadoop/datanode</value>
</property>

2 Answers:

Answer 0 (score: 0)

Remove the following properties from hdfs-site.xml:
<property>
   <name>dfs.datanode.data.dir</name>
   <value>/Backup-HDD/hadoop/datanode</value>
</property>

<property>
    <name>dfs.namenode.data.dir</name>
    <value>/Backup-HDD/hadoop/namenode</value>
</property>

<property>
    <name>dfs.name.dir</name>
    <value>/Backup-HDD/hadoop/namenode</value>
</property>

<property>
    <name>dfs.data.dir</name>
    <value>/Backup-HDD/hadoop/datanode</value>
</property>

Add the following two properties to hdfs-site.xml:
<property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/home/user/Backup-HDD/hadoop/datanode</value>
</property>

<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/user/Backup-HDD/hadoop/namenode</value>
</property>

Make sure the paths specified for the name and data directories actually exist on your system.
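The directories referenced by dfs.datanode.data.dir and dfs.namenode.name.dir must exist and be writable by the hadoop user before the daemons start. A minimal sketch of that check, assuming the Backup-HDD layout from this answer sits under the current user's home directory:

```shell
# Create the datanode/namenode directories named in hdfs-site.xml.
# The Backup-HDD layout is taken from the answer above; adjust to your disk.
BASE="$HOME/Backup-HDD/hadoop"
mkdir -p "$BASE/datanode" "$BASE/namenode"

# Confirm both paths exist and are writable before starting the daemons.
for d in "$BASE/datanode" "$BASE/namenode"; do
  [ -d "$d" ] && [ -w "$d" ] && echo "ok: $d"
done
```

If either path is missing or unwritable, the DataNode or NameNode will fail at startup rather than create it for you.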

Answer 1 (score: 0)

Solved the problem after searching on Google:

Update .bashrc and .bash_profile

cat .bashrc
#!/bin/bash
# unset all HADOOP environment variables
env | grep HADOOP | sed 's/.*\(HADOOP[^=]*\)=.*/\1/' > un_var
while read line; do unset "$line"; done < un_var
rm un_var
export JAVA_HOME="/usr/java/latest/"
export HADOOP_PREFIX="/home/hadoop/hadoop"
export HADOOP_YARN_USER="hadoop"
export HADOOP_HOME="$HADOOP_PREFIX"
export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"
export HADOOP_PID_DIR="$HADOOP_PREFIX"
export HADOOP_LOG_DIR="$HADOOP_PREFIX/logs"
export HADOOP_OPTS="$HADOOP_OPTS -Djava.io.tmpdir=$HADOOP_PREFIX/tmp"
export YARN_HOME="$HADOOP_PREFIX"
export YARN_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"
export YARN_PID_DIR="$HADOOP_PREFIX"
export YARN_LOG_DIR="$HADOOP_PREFIX/logs"
export YARN_OPTS="$YARN_OPTS -Djava.io.tmpdir=$HADOOP_PREFIX/tmp"
cat .bash_profile
#!/bin/bash
if [ -f ~/.bashrc ]; then
source ~/.bashrc
fi
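The unset loop from the .bashrc above can be exercised on its own. A minimal sketch — HADOOP_STALE_EXAMPLE is a made-up variable for illustration, not part of the original answer:

```shell
# Set a throwaway HADOOP variable, then run the same cleanup the answer uses.
export HADOOP_STALE_EXAMPLE="/old/hadoop/path"

# Extract every HADOOP* variable name from the environment...
env | grep HADOOP | sed 's/.*\(HADOOP[^=]*\)=.*/\1/' > un_var
# ...and unset each one in the current shell (input redirection, not a pipe,
# so the while loop does not run in a subshell).
while read line; do unset "$line"; done < un_var
rm un_var
```

After the loop, the stale variable is gone from the environment, so the fresh exports that follow in .bashrc are the only HADOOP settings in effect.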

It was a bash profile issue.