Incorrect configuration: namenode address dfs.namenode.rpc-address is not configured (cloudera-cdh)
I too was facing the same issue and finally found that there was a space in the fs.default.name value. Truncating the space fixed the issue. The above core-site.xml doesn't seem to have a space, so the issue may be different from what I had. My 2 cents.
These steps solved the problem for me:
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
echo $HADOOP_CONF_DIR
hdfs namenode -format
hdfs getconf -namenodes
./start-dfs.sh
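If the configuration is picked up correctly, the getconf step should print your NameNode host instead of the rpc-address error. A sketch of the expected output, assuming the NameNode host is simply named namenode:
$ hdfs getconf -namenodes
namenode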
Check the core-site.xml under the $HADOOP_INSTALL/etc/hadoop directory. Verify that the property fs.default.name is configured correctly.
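For reference, a minimal sketch of that property (the host and port here are assumptions; fs.default.name is the deprecated alias of fs.defaultFS, and either name resolves to the same setting):
<property>
  <name>fs.default.name</name>
  <value>hdfs://namenode:8020</value>
</property>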
Obviously, your core-site.xml has a configuration error.
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode:8020</value>
</property>
Your fs.defaultFS is set to hdfs://namenode:8020, but your machine's hostname is datanode1. So you just need to change namenode to datanode1 and it will be OK.
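To confirm a mismatch like this, you can compare the configured address against the machine's actual hostname; the output below assumes the setup described in this answer:
$ hostname
datanode1
$ hdfs getconf -confKey fs.defaultFS
hdfs://namenode:8020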
I had the exact same issue. I found a resolution by checking the environment on the Data Node:
$ sudo update-alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.my_cluster 50
$ sudo update-alternatives --set hadoop-conf /etc/hadoop/conf.my_cluster
Make sure that the alternatives are set correctly on the Data Nodes.
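A quick way to verify is standard update-alternatives usage; the current link should point at /etc/hadoop/conf.my_cluster:
$ update-alternatives --display hadoop-conf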
In my case, I had wrongly set HADOOP_CONF_DIR to a different Hadoop installation.
Add to hadoop-env.sh:
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop/
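To sanity-check that the variable points at a directory that actually contains the config files (path as set above):
$ ls $HADOOP_CONF_DIR/core-site.xml $HADOOP_CONF_DIR/hdfs-site.xml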
Configuring the full host name in core-site.xml, masters and slaves solved the issue for me.
Old: node1 (failed)
New: node1.krish.com (Succeeded)
Creating the dfs.name.dir and dfs.data.dir directories and configuring the full hostname in core-site.xml, masters and slaves solved my issue.
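A sketch of the directory-creation step; the paths are assumptions borrowed from the snippet further down, and the hadoop:hadoop owner is also an assumption, so substitute your own dfs.name.dir and dfs.data.dir values and service user:
$ mkdir -p /home/hadoop/hadoop_tmp/hdfs/namenode
$ mkdir -p /home/hadoop/hadoop_tmp/hdfs/datanode
$ sudo chown -R hadoop:hadoop /home/hadoop/hadoop_tmp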
In my situation, I fixed it by changing the /etc/hosts config to lower case.
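For example, an entry such as 192.168.1.10 MyNameNode would become 192.168.1.10 mynamenode (the IP and hostname here are placeholders); the hostname used in core-site.xml must then match that casing.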
This type of problem mainly arises if there is a space in the value or name of a property in any one of the following files: core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml.
Just make sure you did not put any spaces or line breaks between the opening and closing name and value tags.
Code:
<property>
  <name>dfs.name.dir</name>
  <value>file:///home/hadoop/hadoop_tmp/hdfs/namenode</value>
  <final>true</final>
</property>
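A quick heuristic for hunting down stray same-line whitespace inside the tags across all four files (assumes GNU grep and that the files live under $HADOOP_CONF_DIR; it will not catch a tag split across lines):
$ grep -nE '<(name|value)> | </(name|value)>' $HADOOP_CONF_DIR/*-site.xml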
I was facing the same issue; formatting HDFS solved it. Don't format HDFS if you have important metadata.
Command for formatting HDFS: hdfs namenode -format
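If any of the existing metadata might matter, copy the NameNode directory aside before formatting; the path here is an assumption borrowed from the snippet above:
$ cp -r /home/hadoop/hadoop_tmp/hdfs/namenode /home/hadoop/hadoop_tmp/hdfs/namenode.bak
$ hdfs namenode -format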
Check your /etc/hosts file:
There must be a line like the one below (if not, add it):
127.0.0.1 namenode
Replace 127.0.0.1 with your namenode's IP.
Add the line below to hadoop-env.cmd:
set HADOOP_HOME_WARN_SUPPRESS=1