HDFS pseudo-distributed mode: error starting namenodes

Date: 2014-03-12 01:43:03

Tags: hadoop hdfs

I am trying to start HDFS in pseudo-distributed mode on Mac OS X (Java 7). I have followed the instructions from various places (e.g. https://hadoop.apache.org/docs/r1.2.1/single_node_setup.html) and created a directory containing the configuration files. I can ssh to localhost without a password. But when I try to start HDFS, I get the following:

$ start-dfs.sh --config ~/hadoop-pseudodistributed
2014-03-12 01:15:14.125 java[84567:1903] Unable to load realm info from SCDynamicStore
Starting namenodes on [2014-03-12 01:15:14,380 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
localhost]
2014-03-12: ssh: Could not resolve hostname 2014-03-12: nodename nor servname provided, or not known
Unable: ssh: Could not resolve hostname Unable: nodename nor servname provided, or not known
[main]: ssh: Could not resolve hostname [main]: nodename nor servname provided, or not known
WARN: ssh: Could not resolve hostname WARN: nodename nor servname provided, or not known
load: ssh: Could not resolve hostname load: nodename nor servname provided, or not known
-: ssh: Could not resolve hostname -: nodename nor servname provided, or not known
for: ssh: Could not resolve hostname for: nodename nor servname provided, or not known
native-hadoop: ssh: Could not resolve hostname native-hadoop: nodename nor servname provided, or not known
where: ssh: Could not resolve hostname where: nodename nor servname provided, or not known
builtin-java: ssh: Could not resolve hostname builtin-java: nodename nor servname provided, or not known
your: ssh: Could not resolve hostname your: nodename nor servname provided, or not known
applicable: ssh: Could not resolve hostname applicable: nodename nor servname provided, or not known
(NativeCodeLoader.java:<clinit>(62)): ssh: Could not resolve hostname (NativeCodeLoader.java:<clinit>(62)): nodename nor servname provided, or not known
using: ssh: Could not resolve hostname using: nodename nor servname provided, or not known
classes: ssh: Could not resolve hostname classes: nodename nor servname provided, or not known
platform...: ssh: Could not resolve hostname platform...: nodename nor servname provided, or not known
library: ssh: Could not resolve hostname library: nodename nor servname provided, or not known
localhost: starting namenode, logging to /usr/local/Cellar/hadoop/2.2.0/libexec/logs/hadoop-terry-namenode-Terrys-MacBook-Pro.local.out
01:15:14,380: ssh: Could not resolve hostname 01:15:14,380: nodename nor servname provided, or not known
to: ssh: connect to host to port 22: Connection refused
localhost: 2014-03-12 01:15:15,150 INFO  [main] namenode.NameNode (StringUtils.java:startupShutdownMessage(601)) - STARTUP_MSG:

There is more output (and I get similar complaints when it tries to start the secondary namenode), but the above is clearly unwanted and I obviously want to fix it.

It looks like the script runs something to obtain the list of namenodes, and that something is dumping errors (to stdout or stderr) that are then captured and used as the list of nodes.
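That failure mode can be reproduced in plain shell. In the sketch below, `fake_getconf` is a hypothetical stand-in for the namenode-discovery command the script runs; when a warning line leaks into its output, unquoted expansion word-splits it and every word is treated as a hostname, which matches the errors above:

```shell
# fake_getconf stands in (hypothetically) for the namenode-discovery command;
# here its output is polluted by a warning line, as on the asker's setup.
fake_getconf() {
  echo "2014-03-12 01:15:14,380 WARN Unable to load native-hadoop library"
  echo "localhost"
}

# Unquoted $(...) is word-split by the shell, so every word in the warning
# becomes a "hostname" to ssh to -- hence "Could not resolve hostname WARN".
for host in $(fake_getconf); do
  echo "would ssh to: $host"
done
```

Keeping warnings out of the command's stdout (or quoting the expansion) avoids the splitting.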

I tried to get rid of the "Unable to load realm info from SCDynamicStore" error by adding to hadoop-env.sh (as suggested elsewhere on Stack Overflow). But that seems to go beyond the documented setup steps, which do not include putting a copy of hadoop-env.sh into my configuration directory.
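For reference, the hadoop-env.sh addition usually suggested for the SCDynamicStore warning is the JVM flag pair below (a sketch of the commonly cited fix, not an official setup step; whether hadoop-env.sh belongs in a custom --config directory is exactly the open question here):

```shell
# Commonly cited hadoop-env.sh lines for OS X: the empty krb5 system
# properties stop the JVM from querying SCDynamicStore for Kerberos realm info.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
```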

I imagine this should be easy, but it's late and I'm tired :-( Any help would be appreciated.

Thanks!

3 Answers:

Answer 0 (score: 0)

2014-03-12 01:15:14.125 java[84567:1903] Unable to load realm info from SCDynamicStore
Starting namenodes on [2014-03-12 01:15:14,380 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
localhost]

The line above in the error says "Unable to load native-hadoop library". This happens because the native library was compiled (built) for 32-bit, and you are probably running on 64-bit. I believe those lines should only be a WARN. If it says ERROR, you should follow this link.
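One way to check for such a mismatch (a sketch; the library path is an assumption and varies by install) is to compare the machine architecture against what the native library was built for:

```shell
# x86_64 here indicates a 64-bit machine.
uname -m

# Inspect the bundled native library, if any; on a mismatch `file` reports
# an architecture different from `uname -m`. The path is illustrative.
NATIVE_LIB="$HADOOP_INSTALL/lib/native/libhadoop.so.1.0.0"
if [ -f "$NATIVE_LIB" ]; then
  file "$NATIVE_LIB"
else
  echo "no native library at $NATIVE_LIB"
fi
```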

Also:

2014-03-12: ssh: Could not resolve hostname 2014-03-12: nodename nor servname provided, or not known

I don't know Mac OS, but I can describe what I tried on Ubuntu. The lines below are what I added to Ubuntu's .bashrc as an analogy; they provide the path to the native libraries so that the OS knows about them:

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native

export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"

In a similar way, you have to set the path to the native libraries for your OS. Hopefully this will solve your "could not resolve hostname" problem (I ran into a similar issue, but on Ubuntu, and it worked).

Answer 1 (score: 0)

I have a perfectly serviceable workaround for running Hadoop in pseudo-distributed mode on MacOS.

# use hadoop-daemon.sh instead of start-dfs.sh
# because start-dfs.sh relies on native libs not present in MacOS
/platform/hadoop/sbin/hadoop-daemon.sh start namenode
/platform/hadoop/sbin/hadoop-daemon.sh start secondarynamenode
/platform/hadoop/sbin/hadoop-daemon.sh start datanode

# use hadoop-daemon.sh instead of stop-dfs.sh
# because stop-dfs.sh relies on native libs not present in MacOS
/platform/hadoop/sbin/hadoop-daemon.sh stop datanode
/platform/hadoop/sbin/hadoop-daemon.sh stop secondarynamenode
/platform/hadoop/sbin/hadoop-daemon.sh stop namenode

I know this is 3 years old, but hopefully this saves someone else the trouble and wasted effort I went through. Before seeing this question, I spent far too long trying to build from source to get the native libraries you need to point hadoop-env.sh at in order to make start-dfs.sh and stop-dfs.sh work. Then I read the scripts to see what they were calling ($HADOOP_PREFIX/bin/hdfs getconf -namenodes) and realized that, since I was only interested in pseudo-distributed mode with one node of each type, I could just skip the "convenience" scripts and start and stop the daemons myself with hadoop-daemon.sh.

I didn't personally need this, but if those scripts are being called by something else, you could even overwrite the contents of start-dfs.sh and stop-dfs.sh with this workaround.
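If you do overwrite them, the replacement can be a few lines. The sketch below is a dry run (it prints the hadoop-daemon.sh commands rather than running them; drop the `echo` to invoke them for real), assumes $HADOOP_HOME points at your install, and keeps the ordering from the commands above: namenode first on start, datanode first on stop.

```shell
# Print the per-daemon commands that start-dfs.sh/stop-dfs.sh boil down to
# in pseudo-distributed mode. Dry run: remove `echo` to actually run them.
dfs_daemons() {
  action=$1
  if [ "$action" = "start" ]; then
    list="namenode secondarynamenode datanode"
  else
    list="datanode secondarynamenode namenode"
  fi
  for d in $list; do
    echo "$HADOOP_HOME/sbin/hadoop-daemon.sh" "$action" "$d"
  done
}

dfs_daemons start
```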

Answer 2 (score: -1)

If you previously ran an application using Hadoop's default configuration folder, the terminal session on your Mac may be unable to establish a connection to localhost.

Close the current terminal and run the application in a new one.