Pivotal Knowledge Base


Query Fails with "Interconnect Error- Could not connect to seqserver, Connection timed out" when the Sequence Cache is Very Big


  • Pivotal Greenplum Database 4.3.x
  • Operating System- Red Hat Enterprise Linux 6.x


While running any query that fetches the next or current sequence value, you might encounter the following error:

Interconnect Error: Could not connect to seqserver (connection: XX, host: X.X.X.X, port: XXXXX). (segX sliceX sdwX:X pid=X): Connection timed out (connect errno 110)


When running queries with sequences, the nextval and currval values are fetched by connecting to the seqserver daemon, which runs on the master and is, in turn, controlled by the main postmaster process:

gpadmin  23673 23649  0 03:46 ?        00:00:00 postgres: port 48000, seqserver process  

The error above suggests that a route to the seqserver exists but the connections are timing out. This can happen when the sequence has been defined with a very large cache (e.g., 10000000). The default cache value for a sequence is 1, but it can be overridden at sequence creation time; see the CREATE SEQUENCE documentation for details.
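As a sketch (the sequence name `big_seq` is hypothetical), a sequence created with an oversized cache, and a query to inspect its current cache setting, might look like this:

```sql
-- Hypothetical example: a sequence defined with a very large cache
CREATE SEQUENCE big_seq CACHE 10000000;

-- In Greenplum 4.3 (PostgreSQL 8.2 based), a sequence's attributes
-- can be read by selecting from the sequence relation itself
SELECT cache_value FROM big_seq;
```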

The problem with setting a high cache value is that the seqserver can become very busy and stop responding to segment communications, causing any query that depends on the seqserver to fail with a timeout.


In this scenario, lowering the sequence's cache value can eliminate the problem.
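For example (again using the hypothetical sequence name `big_seq`), the cache can be lowered back toward the default with ALTER SEQUENCE:

```sql
-- Lower the cache to a small value; 1 is the default and the most
-- conservative setting (no values are preallocated per session)
ALTER SEQUENCE big_seq CACHE 1;
```

Note that values already handed out to sessions under the old cache setting are not reclaimed, so gaps in the sequence may remain after the change.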

The best approach to finding a suitable cache value is to run benchmarks and study the behavior of the affected queries under high throughput, making sure the seqserver remains responsive.

