1) To see where HDFS is running:
   hdfs getconf -confKey fs.default.name
   (fs.default.name is deprecated; on current Hadoop the key is fs.defaultFS.)

2) Exception encountered while connecting to the server:
   org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
   Resolution: in krb5.conf, comment out the line below:
   # default_ccache_name = KEYRING:persistent:%{uid}

3) Failed to add storage directory [DISK]file:/storage/data
   Resolution:
   Solution 1 => If the cluster holds valid data you do not want to delete, copy the clusterID from the namenode's VERSION file and paste it into the datanode's VERSION file.
   Solution 2 => Delete all files from <dfs.datanode.data.dir> on the datanode and <dfs.namenode.name.dir> on the namenode, then format the namenode with: hdfs namenode -format

4) Restart the namenode:
   sbin/hadoop-daemon.sh start namenode

5) mkdir: Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
   Resolution: <<<<Need to retry because when kinit root was run, this started
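For item 2, the relevant krb5.conf fragment looks like this (the [libdefaults] section is where this setting normally lives; commenting it out makes credentials land in the default FILE: cache, which Hadoop's Java Kerberos libraries can read, unlike the kernel KEYRING):

```ini
[libdefaults]
    # Commented out: Java GSS/Kerberos cannot read KEYRING credential caches,
    # which triggers "Client cannot authenticate via:[TOKEN, KERBEROS]".
    # default_ccache_name = KEYRING:persistent:%{uid}
```

After editing, re-run kinit so a fresh ticket is written to the file-based cache.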
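Item 5's error means the user has no home directory in HDFS and /user is writable only by the hdfs superuser. Independent of the retry note above, a common fix (a sketch only; the user name root and the hdfs superuser account are assumptions about your environment, and this needs a live cluster) is:

```shell
# Create root's HDFS home directory as the HDFS superuser (assumed account: hdfs),
# then hand ownership to root so subsequent mkdir/put commands succeed.
sudo -u hdfs hdfs dfs -mkdir -p /user/root
sudo -u hdfs hdfs dfs -chown root:root /user/root
```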
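Solution 1 of item 3 can be sketched as a small shell function. The `sync_cluster_id` name is a hypothetical helper; the `current/VERSION` layout is the stock HDFS storage-directory layout, but substitute your actual dfs.namenode.name.dir and dfs.datanode.data.dir paths:

```shell
# sync_cluster_id NN_VERSION_FILE DN_VERSION_FILE
# Copies the namenode's clusterID into the datanode's VERSION file so the
# datanode's storage directory is accepted again. Typical call (assumed paths):
#   sync_cluster_id /storage/name/current/VERSION /storage/data/current/VERSION
sync_cluster_id() {
  # Extract the value after "clusterID=" from the namenode's VERSION file.
  cid=$(grep '^clusterID=' "$1" | cut -d= -f2)
  # Rewrite the datanode's clusterID line in place, leaving other lines alone.
  sed -i "s/^clusterID=.*/clusterID=${cid}/" "$2"
}
```

Stop the datanode before editing its VERSION file, then start it again afterwards.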