Pivotal Knowledge Base


Hadoop NameNode Stuck in Safe Mode because of Error "Requested Data Length is Longer than Maximum Configured RPC Length"

Environment

Product      Version
Pivotal HD   2.1, 3.x
HDP          2.3, 2.4

Symptom

Cannot write to HDFS because the NameNode is in safe mode:

$ hdfs dfs -put test /pxf_data/
put: Cannot create file/pxf_data/test._COPYING_. Name node is in safe mode.

The following symptoms may also be seen:

  • hdfs dfsadmin -report shows safe mode is ON:
$ hdfs dfsadmin -report 
Safe mode is ON
Configured Capacity: 3189940122255360 (2.83 PB)
Present Capacity: 1613987045597184 (1.43 PB)
DFS Remaining: 805525797548032 (732.62 TB)
DFS Used: 808461248049152 (735.29 TB)
DFS Used%: 50.09%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
  • hdfs dfsadmin -safemode get reports safe mode is ON:
$ hdfs dfsadmin -safemode get 
Safe mode is ON
  • NameNode shows these messages: 
2016-10-10 07:23:49,566 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8020: readAndProcess from client 3.48.32.35 threw exception [java.io.IOException: Requested data length 70313631 is longer than maximum configured RPC length 67108864.  RPC came from 3.48.32.35]
java.io.IOException: Requested data length 70313631 is longer than maximum configured RPC length 67108864.  RPC came from 3.48.32.35
        at org.apache.hadoop.ipc.Server$Connection.checkDataLength(Server.java:1488)
        at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1550)
  • Datanode logs show these messages:
2016-10-10 07:25:43,828 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
java.io.IOException: Failed on local exception: java.io.EOFException; Host Details : local host is: "hdw1.gphd.local/3.48.32.16"; destination host is: "hdm1.gphd.local":8020;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
        at org.apache.hadoop.ipc.Client.call(Client.java:1351)

Cause

The RPC requests carrying block reports from the DataNodes to the NameNode are rejected because they exceed the default maximum RPC length of 64 MB (67108864 bytes), set by ipc.maximum.data.length. Because the NameNode never receives these block reports, it cannot reach the block threshold required to exit safe mode, so it stays in safe mode indefinitely.
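The numbers in the NameNode log bear this out. A minimal shell check (using the request size and limit quoted in the log above) shows the block report exceeding the default limit:

```shell
#!/bin/sh
# Default RPC limit (ipc.maximum.data.length): 64 MB
default_limit=$((64 * 1024 * 1024))   # 67108864 bytes
# Size of the rejected block report, taken from the NameNode log above
request_len=70313631

echo "default limit: $default_limit"
if [ "$request_len" -gt "$default_limit" ]; then
  echo "block report exceeds the RPC limit"
fi
```

Block reports grow with the number of blocks a DataNode holds, so very dense DataNodes (as on this 2.83 PB cluster) are the typical trigger.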

Resolution

Raise the RPC limit by setting ipc.maximum.data.length to 134217728 (128 MB) in core-site.xml.
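As a sanity check on the value (128 MB expressed in bytes), and one hedged way to confirm the setting after it has been applied and the services restarted:

```shell
#!/bin/sh
# 134217728 is exactly 128 MB:
echo $((128 * 1024 * 1024))
# After reconfiguring and restarting, the effective value can be read back
# on a cluster node with:
#   hdfs getconf -confKey ipc.maximum.data.length
```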

In Pivotal HD 2.1: 

Use ICM client to reconfigure the cluster by adding the following lines to core-site.xml. 

<property>
  <name>ipc.maximum.data.length</name>
  <value>134217728</value>
</property>


In Pivotal HD 3.0 / HDP 2.x:

  • Log into the Ambari GUI
  • Go to HDFS / Configs / Advanced / Custom core-site / Add Property and add ipc.maximum.data.length with a value of 134217728
  • Restart any services as requested by Ambari
