Errors when running start-dfs.sh

  • Error message:

ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes

Solution

Add the following near the top of each of the files below:

In start-dfs.sh and stop-dfs.sh:

HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root 

In start-yarn.sh and stop-yarn.sh:

YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

These changes must be made on every machine in the cluster; this will not be repeated below.
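The edit above can also be scripted instead of done by hand. A minimal sketch, assuming GNU sed and a helper function (the install path in the example calls is an assumption; adjust it to your environment):

```shell
# Sketch: insert the HDFS user variables right after the first line
# (the shebang) of an sbin script. Assumes GNU sed; back up the
# scripts before editing them in place.
add_hdfs_users() {
  sed -i '1a\
HDFS_DATANODE_USER=root\
HADOOP_SECURE_DN_USER=hdfs\
HDFS_NAMENODE_USER=root\
HDFS_SECONDARYNAMENODE_USER=root' "$1"
}

# Example calls (install path is an assumption):
# add_hdfs_users /opt/module/hadoop-3.1.3/sbin/start-dfs.sh
# add_hdfs_users /opt/module/hadoop-3.1.3/sbin/stop-dfs.sh
```

The same pattern works for the YARN scripts with the YARN_* variables.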

  • Error message:

Starting namenodes on [hadoop100]
hadoop100: Warning: Permanently added 'hadoop100,192.168.79.5' (ECDSA) to the list of known hosts.
hadoop100: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
localhost: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

This happens because ssh has no password-less access to localhost, i.e. the local key cannot be used to log in to the local machine. The local public key needs to be appended to authorized_keys:

cd ~/.ssh/
cat id_rsa.pub >> authorized_keys
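If no key pair exists yet, it has to be generated first. The whole local setup can be sketched as a function (the directory argument exists only to make it testable; it defaults to ~/.ssh):

```shell
# Sketch: ensure a local RSA key pair exists and authorize it for
# password-less ssh to the same machine.
setup_local_ssh() {
  local dir="${1:-$HOME/.ssh}"   # key directory; defaults to ~/.ssh
  mkdir -p "$dir" && chmod 700 "$dir"
  # Generate a key pair with an empty passphrase if none exists yet
  [ -f "$dir/id_rsa" ] || ssh-keygen -t rsa -N '' -f "$dir/id_rsa" -q
  # Authorize the public key locally; sshd insists on 600 permissions
  cat "$dir/id_rsa.pub" >> "$dir/authorized_keys"
  chmod 600 "$dir/authorized_keys"
}
```

On a multi-node cluster the same key must also be authorized on every other node, e.g. with `ssh-copy-id hadoop101` (hostname is an example).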

Differences between Hadoop 2.7 and 3.0

In Hadoop 3.0 and later there is no slaves file; it has been replaced by the workers file. If you run start-dfs.sh on the master but no DataNode process appears on the worker machines, the workers file has not been configured correctly.
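For example, the workers file can be written like this on the master node. A minimal sketch; the hostnames and the install path in the example call are assumptions for illustration:

```shell
# Sketch: write the Hadoop 3.x workers file, one DataNode host per line.
# The hostnames below are examples; list every DataNode in your cluster.
write_workers() {
  cat > "$1/etc/hadoop/workers" <<'EOF'
hadoop100
hadoop101
hadoop102
EOF
}

# Example call (install path is an assumption):
# write_workers /opt/module/hadoop-3.1.3
```

After editing, distribute the file to every node in the cluster, then rerun start-dfs.sh.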

Hadoop 3.0 default port changes

  • The default port of the HDFS web UI is 9870 (it was 50070 in Hadoop 2.x)

  • The YARN web UI port is 8088 (unchanged from 2.x)
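A quick way to confirm the web UIs are actually up after startup is to check their HTTP status codes. A small sketch (the hostname in the example calls is an assumption; substitute your NameNode / ResourceManager host):

```shell
# Sketch: print the HTTP status code returned by a daemon's web UI.
ui_status() {
  curl -s -o /dev/null -w '%{http_code}' "$1"
}

# Example calls (hostname is an assumption):
# ui_status http://hadoop100:9870/   # HDFS NameNode web UI
# ui_status http://hadoop100:8088/   # YARN ResourceManager web UI
```

A `200` means the daemon is serving its UI; a connection error or `000` suggests the daemon is down or the port is blocked.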



Big Data Hadoop

Unless otherwise stated, all articles on this blog are licensed under CC BY-SA 3.0. Please credit the source when reposting!