Pivotal Knowledge Base


How to create a heap dump for HiveServer2 to analyse heap out-of-memory issues


Product Version
Pivotal HD (PHD) 3.x
Ambari 1.7.1 / 2.x


When HiveServer2 stops responding because of out-of-memory errors, such as the one below, it may be necessary to collect a heap dump. This knowledge base article explains how to collect the dump.

Error message:

2016-05-27 13:46:55,629 ERROR [HiveServer2-Handler-Pool: Thread-35]: thrift.ProcessFunction (ProcessFunction.java:process(41)) - Internal error processing OpenSession 
2016-05-27 13:46:55,629 ERROR [HiveServer2-Handler-Pool: Thread-33]: thrift.ProcessFunction (ProcessFunction.java:process(41)) - Internal error processing OpenSession
java.lang.OutOfMemoryError: Java heap space
at java.util.Hashtable$Entry.clone(Hashtable.java:1052)
at java.util.Hashtable.clone(Hashtable.java:613)
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:696)


To enable a heap dump on an OutOfMemoryError, apply the following steps:

1. Open Ambari and go to: HIVE / Configs / Advanced / advanced hive-env / hive-env template and add the following lines to the end:

 if [ "$SERVICE" = "hiveserver2" ]; then
  export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS -XX:HeapDumpPath=/var/log/hive -XX:+HeapDumpOnOutOfMemoryError"
 fi

2. Save the configuration change. 

3. Restart all services requested to be restarted by Ambari.

4. Confirm the changes have taken effect by logging into the HiveServer2 host and running 'ps' to check that the new JVM flags appear in the process arguments.
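A minimal sketch of that verification (the flag values come from the hive-env snippet in step 1; the sample command line below stands in for the live process list, which on a real host you would obtain with `ps -ef | grep hiveserver2`):

```shell
# On a real host: ps -ef | grep hiveserver2
# Here a sample HiveServer2 command line stands in for the live process.
sample_cmdline='java -Xmx1024m -XX:HeapDumpPath=/var/log/hive -XX:+HeapDumpOnOutOfMemoryError org.apache.hive.service.server.HiveServer2'

# If the configuration took effect, the heap-dump flag appears in the arguments.
echo "$sample_cmdline" | grep -o 'HeapDumpOnOutOfMemoryError'
```

Once the flags are in place, an OutOfMemoryError writes a dump file named java_pid&lt;pid&gt;.hprof under /var/log/hive. Make sure the hive user can write to that directory and that enough disk space is free, since the dump is roughly the size of the configured heap.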