Pivotal Knowledge Base

Error "User: oozie/FQDN@GPHD.LOCAL is not allowed to impersonate oozie"

Environment

PHD 2.0.1 cluster.

HDFS NameNode HA is enabled.

HDFS and Oozie are secured by Kerberos.

Problem

The following error occurs when setting up the Oozie share lib service from the console:

$ sudo -u oozie oozie-setup sharelib create -fs hdfs://<namenode-host>:<namenode-port> -locallib /usr/lib/gphd/oozie/oozie-sharelib.tar.gz
Error: User: oozie/FQDN@GPHD.LOCAL is not allowed to impersonate oozie
Stack trace for the error was (for debug purposes):
--------------------------------------
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: oozie/secured-nn.gphd.local@GPHD.LOCAL is not allowed to impersonate oozie
    at org.apache.hadoop.ipc.Client.call(Client.java:1347)
    at org.apache.hadoop.ipc.Client.call(Client.java:1300)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB
...

Root Cause

When running Oozie jobs, the superuser oozie/FQDN@GPHD.LOCAL needs to be able to run jobs on behalf of other users. In the error above, the superuser oozie/FQDN@GPHD.LOCAL is not allowed to impersonate the user oozie.
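For illustration only (not from the original article), the two identities involved can be inspected from the command line: the Kerberos principal held in the oozie account's ticket cache is the superuser that the NameNode authenticates, while the share lib upload is requested on behalf of the plain user oozie, which the NameNode refuses unless the proxyuser rules below allow it.

# Show the Kerberos principal the oozie account currently holds; this is the
# "real" superuser identity (e.g. oozie/secured-nn.gphd.local@GPHD.LOCAL) that
# the NameNode checks against the hadoop.proxyuser.* rules.
sudo -u oozie klist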
Solution
There are three steps that need to be performed before running this command on an HDFS HA-enabled cluster:
1: Add the following properties to core-site.xml on all HDFS and MapReduce/YARN nodes. The second property is the key to fixing this issue:
<property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>*</value>
</property>
2: Restart the entire PHD cluster.
3: Restart the hadoop-hdfs-zkfc service manually on both the active and standby NameNodes; ICM does not manage this service in PHD 2.0.1 (a command-line sketch for this step is shown below).
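The following command-line sketch covers a quick verification of step 1 and the manual restart in step 3. It assumes PHD's standard hadoop-hdfs-zkfc init script is installed on the NameNode hosts; adapt the commands to your environment.

# On each HDFS and MapReduce/YARN node, confirm the new proxyuser values are
# picked up from the local core-site.xml (hdfs getconf reads client-side config):
hdfs getconf -confKey hadoop.proxyuser.oozie.hosts
hdfs getconf -confKey hadoop.proxyuser.oozie.groups

# On the active and the standby NameNode, restart the ZKFC daemon manually
# (assumes the service script is named hadoop-hdfs-zkfc, as in PHD packages):
sudo service hadoop-hdfs-zkfc restart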
Note:
1: A JIRA has been filed to allow ICM to restart hadoop-hdfs-zkfc.
2: This error may also occur when submitting Oozie jobs if the proxyuser settings are incorrect or the corresponding Hadoop services have not been restarted.
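Once the services are back up, the share lib setup can be re-run and verified. This is a hedged sketch: substitute your own NameNode host and port (or the HA nameservice URI from fs.defaultFS), and note that /user/oozie/share/lib is only Oozie's default share lib location; it may differ if oozie.service.WorkflowAppService.system.libpath has been customized.

# Re-run the share lib setup:
sudo -u oozie oozie-setup sharelib create -fs hdfs://<namenode-host>:<namenode-port> -locallib /usr/lib/gphd/oozie/oozie-sharelib.tar.gz

# Confirm the share lib files were uploaded to HDFS:
sudo -u oozie hdfs dfs -ls /user/oozie/share/lib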