Pivotal Knowledge Base


How to deploy Hue on HDFS with Namenode HA

Environment

  • PHD 3.x

Introduction

For Hue's File Browser to access HDFS with Namenode HA, the Hadoop HttpFS component must be installed on the Hue server.

As of PHD-3.0.1.0, the Hadoop-HttpFS package still refers in several places to "hdp", the default installation directory of the Hortonworks Hadoop release. Therefore, several files used by Hadoop HttpFS require changes after installation.

Deployment steps

1. Install Hadoop HttpFS on the Hue server

[root@admin ~]# yum install hadoop-httpfs

2. Create a link for the hadoop-httpfs service

[root@admin ~]# ln -s /usr/phd/3.0.1.0-1/etc/rc.d/init.d/hadoop-httpfs /etc/init.d/hadoop-httpfs
[root@admin ~]# ls -l /etc/init.d/hadoop-httpfs
lrwxrwxrwx 1 root root 48 Oct 9 01:45 /etc/init.d/hadoop-httpfs -> /usr/phd/3.0.1.0-1/etc/rc.d/init.d/hadoop-httpfs

3. Make changes to file /usr/phd/3.0.1.0-1/etc/rc.d/init.d/hadoop-httpfs

- search for the word "hdp" and replace it with "phd". After the change, the affected lines read:

export HADOOP_HOME="/usr/phd/current/hadoop-$SERVICE_NAME/../hadoop-httpfs"
ln -s /usr/phd/current/hadoop-httpfs/webapps ${DEPLOYMENT_TARGET}/
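The search-and-replace can be scripted. Below is a hedged sketch, demonstrated on a scratch file containing the two lines above; on a real Hue server, INIT_SCRIPT would point at /usr/phd/3.0.1.0-1/etc/rc.d/init.d/hadoop-httpfs instead.

```shell
# Sketch of step 3's substitution, run against a scratch copy of the two lines.
INIT_SCRIPT=$(mktemp)
cat > "$INIT_SCRIPT" <<'EOF'
export HADOOP_HOME="/usr/hdp/current/hadoop-$SERVICE_NAME/../hadoop-httpfs"
ln -s /usr/hdp/current/hadoop-httpfs/webapps ${DEPLOYMENT_TARGET}/
EOF
cp "$INIT_SCRIPT" "$INIT_SCRIPT.bak"   # keep a backup before editing
sed -i 's/hdp/phd/g' "$INIT_SCRIPT"    # replace every "hdp" with "phd"
grep '/usr/phd/' "$INIT_SCRIPT"        # both lines now use the phd prefix
```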

- add two "export CATALINA_" lines to the stop() function:

stop() {
  log_success_msg "Stopping ${DESC}: "
  export CATALINA_BASE=${CATALINA_BASE:-"/var/lib/hadoop-httpfs/tomcat-deployment"}
  export CATALINA_PID="$PIDFILE" # FIXME: workaround for BIGTOP-537
  ......

4. Comment out the following line in file /usr/phd/3.0.1.0-1/hadoop-httpfs/sbin/httpfs.sh

export CATALINA_BASE=/etc/hadoop-httpfs/tomcat-deployment 
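Commenting out that line can also be scripted. A hedged sketch, demonstrated on a scratch file; on a Hue server the target would be /usr/phd/3.0.1.0-1/hadoop-httpfs/sbin/httpfs.sh (path from this article). In sed's replacement, "&" stands for the whole matched line.

```shell
# Sketch of step 4: prefix the CATALINA_BASE export with "#" to disable it.
HTTPFS_SH=$(mktemp)
echo 'export CATALINA_BASE=/etc/hadoop-httpfs/tomcat-deployment' > "$HTTPFS_SH"
sed -i 's|^export CATALINA_BASE=/etc/hadoop-httpfs/tomcat-deployment|#&|' "$HTTPFS_SH"
cat "$HTTPFS_SH"   # the line is now commented out
```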

5. Create a link for httpfs-config.sh

[root@admin 3.0.1.0-1]# mkdir /usr/phd/current/hadoop-httpfs/libexec
[root@admin 3.0.1.0-1]# ln -s /usr/phd/current/hadoop-client/libexec/httpfs-config.sh /usr/phd/current/hadoop-httpfs/libexec/httpfs-config.sh

6. Modify /etc/hadoop-httpfs/conf/httpfs-site.xml on the Hue server to configure HttpFS to talk to the cluster

<property>
  <name>httpfs.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>httpfs.proxyuser.hue.groups</name>
  <value>*</value>
</property>

7. Modify core-site.xml in the Ambari web UI by adding the following properties. Note that a restart of HDFS is needed for the changes to take effect

<property>
  <name>hadoop.proxyuser.httpfs.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.httpfs.hosts</name>
  <value>*</value>
</property>

8. On the Hue server, modify the subsection [hadoop][[hdfs_clusters]][[[default]]] in /etc/hue/conf/hue.ini

fs_defaultfs: the value of the fs.defaultFS property in core-site.xml

webhdfs_url: the URL of the HttpFS server

Example:

fs_defaultfs=hdfs://phd301a
webhdfs_url=http://admin.hadoop.local:14000/webhdfs/v1
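Putting the two keys in context, the subsection would look roughly as follows (a sketch based on the bracket nesting named above; values taken from the example):

```ini
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # Point Hue at the HA nameservice and at the HttpFS endpoint
      fs_defaultfs=hdfs://phd301a
      webhdfs_url=http://admin.hadoop.local:14000/webhdfs/v1
```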

9. Start the hadoop-httpfs service

[root@admin conf]#  service hadoop-httpfs start

10. Restart the hue service

[root@admin conf]#  service hue restart
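Once both services are up, HttpFS can be spot-checked with a WebHDFS REST call. A hedged sketch: the hostname and port come from the example in step 8, and using "hue" as user.name is an assumption; adjust both for your environment.

```shell
# Ask HttpFS for the user's home directory over the WebHDFS v1 API.
# A JSON response such as {"Path":"/user/hue"} indicates HttpFS is serving requests.
curl -s 'http://admin.hadoop.local:14000/webhdfs/v1/?op=GETHOMEDIRECTORY&user.name=hue'
```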

Reference

Internal JIRA HD-12084
