Pivotal Knowledge Base


Unable to insert data into writable external table on Isilon storage through PXF


Product Version
Pivotal HDB: 1.3.x, 2.0.x
OS: RHEL 6.x
Hadoop: Isilon OneFS


Symptom

Inserting data into a writable external table on Isilon storage through PXF fails with the errors below.

Error Message:

The following is the error message from the psql command line:

template1=# INSERT INTO ambari_hawq_pxf_hdfs_writable_test SELECT * FROM generate_series(1,10); 
ERROR: failed sending to remote component '' (libchurl.c:574) (seg0 hdc3.prod.smartods.local:40000 pid=333525) (dispatcher.c:1801)

HAWQ Master's pg_log:

2016-08-15 10:08:34.592381 PHT,"gpadmin","template1",p578617,th1990617216,"","60986",2016-08-15 10:04:49 PHT,36279,con7458,cmd17,seg-1,,,x36279,sx1,"ERROR","XX000","failed sending to remote component '' (libchurl.c:574) (seg0 hdc3.prod.smartods.local:40000 pid=333525) (dispatcher.c:1801)",,,,,,"INSERT INTO ambari_hawq_pxf_hdfs_writable_test SELECT * FROM generate_series(1,10);",0,,"dispatcher.c",1801,"Stack trace:
1 0x87463a postgres errstart (elog.c:497)
2 0x9aa3c9 postgres <symbol not found> (dispatcher.c:1799)
3 0x66ae5c postgres mppExecutorFinishup (execUtils.c:2206)
4 0x659107 postgres ExecutorEnd (execMain.c:1394)
: :
13 0x6c072a postgres main (main.c:226)
14 0x3fef21ed1d libc.so.6 __libc_start_main (??:0)
15 0x4a14e9 postgres <symbol not found> (??:0)


Cause

When enabling the GUC pxf_isilon, only the master node's postgresql.conf was edited, following the old PHD/Isilon integration document. As a result, the parameter remained disabled on all segment nodes.

Resolution

Enable the pxf_isilon parameter with the "hawq config" command so that it is set on all nodes (master and segment nodes) in the cluster, then restart the HAWQ cluster:

$ hawq config -c pxf_isilon -v true --skipvalidation
$ hawq stop cluster -M fast
$ hawq start cluster
$ psql -c "show pxf_isilon"

For HAWQ 1.3.x, use the following procedure instead:

$ gpconfig -c pxf_isilon -v true --skipvalidation
$ gpstop -r
$ psql -c "show pxf_isilon"
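Because the root cause was the parameter being set only on the master, it can be worth confirming that every node's postgresql.conf actually carries the setting. Below is a minimal, self-contained sketch of such a check; the /tmp paths are stand-ins for each node's data directory (on a real cluster you would grep $MASTER_DATA_DIRECTORY/postgresql.conf and each segment's data directory, for example via gpssh):

```shell
# Demo only: simulate a master conf with pxf_isilon set and a segment conf
# without it, then flag any conf that is missing the parameter.
mkdir -p /tmp/pxf_demo/master /tmp/pxf_demo/seg0
printf 'pxf_isilon=true\n' > /tmp/pxf_demo/master/postgresql.conf
printf '# pxf_isilon not set here\n' > /tmp/pxf_demo/seg0/postgresql.conf

for conf in /tmp/pxf_demo/*/postgresql.conf; do
  if grep -Eq '^[[:space:]]*pxf_isilon[[:space:]]*=[[:space:]]*true' "$conf"; then
    echo "OK:      $conf"
  else
    echo "MISSING: $conf"
  fi
done
```

A "MISSING" result on any segment's conf reproduces the failure described in this article.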

Additional Information 

1. The following links can be used for PHD (HDP)/Isilon integration; they are all written against HDB 1.3.x. The only difference is the GUC configuration command ('hawq config' vs. 'gpconfig'):


2. Ensure that pxf-service is running on master nodes and ALL segment nodes in the HAWQ cluster.

3. When composing a PXF LOCATION string for a CREATE statement, you can use the hostname of any node that has pxf-service running. It does not need to be the master node, but you MUST NOT use an Isilon node name (or FQDN).

4. By default, the port number in the pxf LOCATION string is 51200. For a read-only external table, however, the port does not matter: whatever port is specified, PXF uses the port from the GUC pxf_service_address, which defaults to "localhost:51200".

5. For a writable external table, the port number specified in the LOCATION string is used. The default is 51200.

6. Below are examples of DDLs & LOCATION strings.

CREATE WRITABLE EXTERNAL TABLE writable_test (col1 int)
LOCATION ('pxf://hdm:51200/user/gpadmin/my_writable_test?PROFILE=HdfsTextSimple')
FORMAT 'TEXT';

CREATE EXTERNAL TABLE mytable (col1 int)
LOCATION ('pxf://hdm:51200/tmp/mytest/mytable?PROFILE=HdfsTextSimple')
FORMAT 'TEXT';
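As a purely illustrative aid, the LOCATION string in the example above can be split into the pieces discussed in points 3-5 (pxf-service host, port, HDFS path, profile) with plain shell parameter expansion:

```shell
# Illustrative only: split the example pxf LOCATION URI into its parts.
uri='pxf://hdm:51200/user/gpadmin/my_writable_test?PROFILE=HdfsTextSimple'

rest=${uri#pxf://}                # strip the pxf:// scheme
hostport=${rest%%/*}              # hdm:51200
echo "host:    ${hostport%%:*}"   # must be a node running pxf-service, never an Isilon node
echo "port:    ${hostport##*:}"   # honored as-is for writable tables
path_q=${rest#*/}                 # path plus query string
echo "path:    /${path_q%%\?*}"   # the HDFS path
echo "profile: ${uri##*PROFILE=}"
```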

7. You may use the following settings in psql to get verbose debug messages:

set log_min_messages=DEBUG2;
set client_min_messages=DEBUG3;
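To spot-check point 2 above (pxf-service running on the master and on every segment node), a simple TCP probe of the PXF port can help. This is a sketch, not an official tool: 'localhost' below is a placeholder for each HAWQ host, and the default port 51200 is assumed.

```shell
# Sketch: report whether anything is listening on the PXF port of a host.
check_pxf() {
  local host=$1 port=${2:-51200}
  # bash's /dev/tcp pseudo-device attempts a TCP connect
  if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "pxf reachable on $host:$port"
  else
    echo "pxf NOT reachable on $host:$port"
  fi
}

# Run this against every HAWQ host; 'localhost' is a placeholder here.
check_pxf localhost | tee /tmp/pxf_probe.log
```

A "NOT reachable" result on any host means pxf-service must be started there before writable external tables will work.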



