When HAWQ queries an external table that points to GemFire XD, it accesses only the data that has been persisted to HDFS (that data may also still reside in memory). You can ensure that all queued data has been flushed to HDFS by invoking SYS.HDFS_FLUSH_QUEUE. Also, the CHECKPOINT parameter on the external table definition determines whether HAWQ reads only compacted rows (the default) or the raw operation log, in which case you may see duplicate or superseded rows.
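As a rough sketch of the two steps described above, under the assumption that SYS.HDFS_FLUSH_QUEUE takes a fully qualified table name and a maximum wait time, and that the HAWQ side reads through a PXF-style location URI (the table, host, path, and column names below are illustrative, not from this thread):

```sql
-- On the GemFire XD side: flush any queued operations for the table to HDFS.
-- (Assumed signature: table name, max wait time in milliseconds; 0 = wait until done.)
CALL SYS.HDFS_FLUSH_QUEUE('APP.TRADES', 0);

-- On the HAWQ side: a read-only external table over the persisted data.
-- The LOCATION URI, PROFILE name, and CHECKPOINT parameter are assumptions
-- shown to illustrate where the compacted-vs-raw choice is made.
CREATE EXTERNAL TABLE trades_ext (
  id       INT,
  symbol   VARCHAR(8),
  price    FLOAT
)
LOCATION ('pxf://namenode:50070/gemfirexd/trades?PROFILE=GemFireXD&CHECKPOINT=true')
FORMAT 'CUSTOM' (formatter='pxfwritable_import');
```

With CHECKPOINT=true the query would see only compacted rows; switching it off would expose the raw update stream, including duplicates. Check the product documentation for the exact parameter names before relying on this.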
Does HAWQ query both the HDFS and in-memory data, or only the HDFS data?
If we query a HAWQ external table from the psql command prompt, does it query only the data that met the eviction criteria and was persisted to HDFS, or both the in-memory data and the HDFS data?
Can you explain exactly when data is persisted to HDFS, and what that depends on?
Every insert/update/delete is written to in-memory storage (and replicated to preserve data redundancy) and also queued for an asynchronous HDFS write. The queue is flushed automatically every 60 seconds, or when the queue size reaches the BatchSize attribute on the HDFSSTORE (32 MB by default). The queue can also be flushed manually using SYS.HDFS_FLUSH_QUEUE.
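The flush behavior described above is configured when the HDFS store is created. A hedged sketch, assuming the store name, NameNode address, home directory, and attribute spellings below (verify them against the CREATE HDFSSTORE reference):

```sql
-- Illustrative HDFS store with the queue attributes mentioned above.
CREATE HDFSSTORE tradesStore
  NAMENODE 'hdfs://namenode:8020'       -- where persisted data lands
  HOMEDIR  '/gemfirexd/trades'          -- directory within HDFS
  BATCHSIZE 32                          -- flush when the queue reaches 32 MB (the default)
  BATCHTIMEINTERVAL 60000 MILLISECONDS; -- or at least every 60 seconds
```

Tables are then associated with the store in their CREATE TABLE statement, so all of their queued operations drain to that location on the schedule above.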
The EVICTION BY CRITERIA clause affects which rows are kept in memory but does not change what data is written to HDFS.
Please check the HDFS store creation page for reference on the properties you can define, and the HDFS eviction settings page for how eviction can be configured.
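To make the in-memory/HDFS distinction concrete, here is a sketch of a table that evicts rows from memory by a predicate while still writing every operation to its HDFS store. The table name, columns, predicate, and the exact eviction-clause spelling are assumptions to be checked against the documentation pages mentioned above:

```sql
-- Rows matching the predicate are periodically dropped from memory,
-- but they have already been queued to the HDFS store, so HAWQ can
-- still read them from HDFS after eviction.
CREATE TABLE app.trades (
  id       INT PRIMARY KEY,
  symbol   VARCHAR(8),
  trade_ts TIMESTAMP
)
PARTITION BY PRIMARY KEY
EVICTION BY CRITERIA (trade_ts < '2014-01-01 00:00:00')
EVICTION FREQUENCY 180 SECONDS
HDFSSTORE (tradesStore);
```

An in-memory query against this table would stop seeing evicted rows, while a HAWQ query over the external table would still return them from HDFS.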
Thanks William...I got it.