Pivotal Knowledge Base


HDFS goes into readonly mode and errors out with "Name node is in safe mode"


Product Version
Pivotal HD (PHD)  2.x - 3.x


Namenode Reports

2012-12-05 04:07:52,870 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:mapred cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-root/mapred/system. Name node is in safe mode.

Error when reading/writing HDFS data

[root@centos-1 ~]# hadoop fs -copyFromLocal .bash_history /tmp/
copyFromLocal: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create file/tmp/.bash_history. Name node is in safe mode.


Cause

During startup, the Namenode loads the filesystem state from the fsimage and edits log files. It then waits for the data nodes to report their blocks, so that it does not prematurely start replicating blocks that may already have enough replicas elsewhere in the cluster. During this period, the Namenode stays in safe mode. Safe mode is essentially a read-only mode for the HDFS cluster: the Namenode does not allow any modifications to the file system or its blocks. Normally, the Namenode leaves safe mode automatically once enough data nodes have reported their blocks. If required, HDFS can also be placed in safe mode explicitly using the bin/hadoop dfsadmin -safemode command. The Namenode front page shows whether safe mode is on or off.
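As a quick sketch, the safe mode state can be inspected and controlled with the dfsadmin subcommands mentioned above (this assumes the hadoop CLI is on the PATH and the cluster configuration is in place):

```shell
# Report whether the Namenode is currently in safe mode
hadoop dfsadmin -safemode get

# Place HDFS in safe mode explicitly, then bring it back out
hadoop dfsadmin -safemode enter
hadoop dfsadmin -safemode leave

# Block until the Namenode leaves safe mode on its own
hadoop dfsadmin -safemode wait
```

The wait option is useful in startup scripts that should not issue writes until the Namenode has finished its block-report phase.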


Resolution

One way to work around this is to manually move the Namenode out of safe mode. Before deciding to do that, make sure you know and understand why the Namenode is stuck in safe mode by reviewing the status of all data nodes and the Namenode logs. In some cases, manually disabling safe mode can lead to data loss.
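Before forcing the Namenode out of safe mode, that review can be done from the command line. A minimal sketch (the log path below is an example and will vary by installation):

```shell
# Summarize cluster state: look for dead data nodes and
# missing or under-replicated blocks
sudo -u hdfs hadoop dfsadmin -report

# Check the Namenode log for safe-mode threshold messages
# (adjust the path to match your installation)
grep -i "safe mode" /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | tail -n 20
```

If the report shows data nodes that have not checked in, bringing those nodes back online is usually safer than forcing the Namenode out of safe mode.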

Please note that you must run the command as the hdfs OS user, which is the default superuser for HDFS. Otherwise, you will encounter the following error: "Access denied for user Hadoop. Superuser privilege is required".

  1. Run the command below using the HDFS OS user to disable safe mode:
    sudo -u hdfs hadoop dfsadmin -safemode leave
  2. After attempting to disable safe mode, try to write to the HDFS using the below command:
    [root@centos-1 ~]# hadoop fs -copyFromLocal .bash_history /tmp/
    [root@centos-1 ~]# hadoop fs -ls /tmp
    Found 6 items
    -rw-r--r-- 3 root supergroup 14904 2012-12-05 17:06 /tmp/.bash_history
    drwxrwxrwx   - root supergroup 0 2012-11-28 23:56 /tmp/hadoop-mapred
    drwxr-xr-x   - hdfs supergroup 0 2012-11-28 23:56 /tmp/hadoop-root
    drwxr-xr-x   - root supergroup 0 2012-11-28 23:27 /tmp/test_input
    drwxrwxrwx   - root supergroup 0 2012-11-28 23:56 /tmp/test_output
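After the write succeeds, you can optionally confirm overall filesystem health. A hedged sketch, assuming the hdfs superuser account is available:

```shell
# Run an HDFS filesystem check and show the summary at the end;
# the status line should read HEALTHY if no blocks are missing
sudo -u hdfs hadoop fsck / | tail -n 20
```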


