Pivotal Knowledge Base

HBase fails to start when HDFS HA is deployed and HBase is added afterwards

Environment

Product: Pivotal Hadoop 3.x
OS: RHEL 6.x

Symptom

A cluster is deployed and running without HBase, and HDFS HA is enabled and working. If the HBase service is then added, it fails to start with the following message in Ambari:

Error Message:

Note: These log entries are found in /var/lib/ambari-agent/data/errors-403.txt on the node where the service failed to start. They are also available in the Ambari interface:

2016-01-12 02:47:29,600 - Error while executing command 'start':
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 123, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/PHD/2.0.6/services/HBASE/package/scripts/hbase_master.py", line 42, in start
    self.configure(env) # for security
  File "/var/lib/ambari-agent/cache/stacks/PHD/2.0.6/services/HBASE/package/scripts/hbase_master.py", line 37, in configure
    hbase(name='master')
  File "/var/lib/ambari-agent/cache/stacks/PHD/2.0.6/services/HBASE/package/scripts/hbase.py", line 135, in hbase
    params.HdfsDirectory(None, action="create")
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_directory.py", line 107, in action_create
    not_if=format("su - {hdp_hdfs_user} -c 'export PATH=$PATH:{bin_dir} ; "
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 240, in action_run
    raise ex
Fail: Execution of 'hadoop --config /etc/hadoop/conf fs -mkdir `rpm -q hadoop | grep -q "hadoop-1" || echo "-p"` hdfs://ai3hdw1.dca,ai3hdm1.dca:8020/apps/hbase/data /apps/hbase/staging && hadoop --config /etc/hadoop/conf fs -chmod  711 /apps/hbase/staging && hadoop --config /etc/hadoop/conf fs -chown  hbase hdfs://ai3hdw1.dca,ai3hdm1.dca:8020/apps/hbase/data /apps/hbase/staging' returned 1. mkdir: Incomplete HDFS URI, no host: hdfs://ai3hdw1.dca,ai3hdm1.dca:8020/apps/hbase/data

Cause

The cause is a known bug in Ambari versions earlier than 2.0.0: when HDFS HA is enabled, the NameNode hostnames and port are used to build hbase.rootdir instead of the HA nameservice ID. The bug is tracked as AMBARI-9319.
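For illustration, the broken value from the error above embeds both NameNode hosts joined by a comma, which is not a valid HDFS URI, while a correct HA value references only the nameservice ID. The nameservice name below ("mycluster") is a placeholder; use the one defined in your core-site.xml:

```xml
<!-- Incorrect: both NameNode hosts joined with a comma; not a valid HDFS URI -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://ai3hdw1.dca,ai3hdm1.dca:8020/apps/hbase/data</value>
</property>

<!-- Correct (sketch): use the HA nameservice ID from fs.defaultFS;
     "mycluster" is a hypothetical placeholder -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://mycluster/apps/hbase/data</value>
</property>
```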

Resolution

Follow these steps to resolve the issue:

  1. Go to the HBase configuration in Ambari.
  2. Search for the hbase.rootdir property.
  3. Fix the value so that it uses the current HDFS HA nameservice ID, which is the value of fs.defaultFS found in core-site.xml.
  4. Start (do not restart) HBase.
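As a sanity check for step 3, the nameservice URI can be read from core-site.xml and used to build the corrected hbase.rootdir value. This is a minimal sketch, not part of the official fix; the file path and the helper names are illustrative:

```python
import xml.etree.ElementTree as ET

def default_fs(core_site_path):
    """Return the fs.defaultFS value from a Hadoop core-site.xml file."""
    root = ET.parse(core_site_path).getroot()
    for prop in root.findall("property"):
        if prop.findtext("name") == "fs.defaultFS":
            return prop.findtext("value")
    raise KeyError("fs.defaultFS not found in %s" % core_site_path)

def hbase_rootdir(core_site_path, suffix="/apps/hbase/data"):
    """Build the hbase.rootdir value on top of the HA nameservice URI."""
    return default_fs(core_site_path).rstrip("/") + suffix
```

For example, with fs.defaultFS set to hdfs://mycluster, hbase_rootdir() returns hdfs://mycluster/apps/hbase/data, which is the form hbase.rootdir should take under HDFS HA.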
