Pivotal Knowledge Base

Hadoop daemons report the ERROR "No common protection layer between client and server"

Environment:

PHD 1.x

Symptom: 

Hadoop daemons (datanode, namenode, journalnode, etc.) may report the error "No common protection layer between client and server" during startup, rendering the HDFS filesystem unavailable.

Error snippet from the logs:

2014-03-01 13:03:03,150 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hdfs/phd11-snn.saturn.local@SATURN.LOCAL (auth:KERBEROS) cause:javax.security.sasl.SaslException: No common protection layer between client and server

Background:

The hadoop.rpc.protection parameter in core-site.xml sets the quality of protection (QOP) for secured SASL connections. Possible values are authentication, integrity, and privacy:

  • authentication: authentication only; no integrity or privacy
  • integrity: authentication and integrity are both enabled
  • privacy: authentication, integrity, and privacy are all enabled
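For reference, these settings correspond to the standard SASL QOP values negotiated when a connection is established, which is the "protection layer" named in the error message:

authentication → auth
integrity → auth-int
privacy → auth-conf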

The error message normally indicates a mismatch in protection quality (the value of hadoop.rpc.protection) between cluster nodes. In PHD 1.x, however, it appears to be caused by a bug: the error is observed whenever hadoop.rpc.protection is set to integrity or privacy. A quick way to rule out a genuine mismatch is to compare the configured value across nodes, as sketched below.
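A minimal check, assuming ssh access from one node (the host names below are placeholders; substitute your own cluster nodes):

for h in phd11-nn phd11-snn phd11-dn1; do
  ssh "$h" "grep -A1 hadoop.rpc.protection /etc/gphd/hadoop/conf/core-site.xml"
done

grep -A1 prints the <name> line and the <value> line that follows it, so any node with a different value stands out.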

Workaround:

Change the value of hadoop.rpc.protection in /etc/gphd/hadoop/conf/core-site.xml to authentication on all the cluster nodes and restart the cluster services (a restart-and-verify sketch follows the snippet below).

<property>
  <name>hadoop.rpc.protection</name>
  <value>authentication</value>
</property>
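A restart-and-verify sketch, assuming Bigtop-style init scripts as shipped with other Hadoop distributions; the actual service names and restart method (for example, a cluster manager) may differ on your PHD installation:

service hadoop-hdfs-namenode restart            # on the namenode host
service hadoop-hdfs-secondarynamenode restart   # on the secondary namenode host, if present
service hadoop-hdfs-datanode restart            # on each datanode host
hdfs dfs -ls /                                  # verify HDFS is reachable again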

