Pivotal Knowledge Base


Performance Tuning Trick for Event Propagation by Configuring Message-Time-to-Live Parameter


 Product  Version
 Pivotal GemFire  Any version


If your GemFire client subscribes to server-side events by registering interest, events generated by region operations on the server are placed in the client's subscription queue and then propagated to the client. This subscription capability is certainly useful, but it adds overhead to the client/server distributed system and can make it harder to meet your performance requirements for event propagation. This article describes a tuning trick to speed up event propagation.


One cause of additional overhead for event propagation is the expiration task created for each event. These expirations prevent events from remaining in the subscription queue for too long before being consumed by the client. By default, events that are not consumed within three minutes are removed from the subscription queue by expiration.
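The three-minute default corresponds to message-time-to-live="180" (the attribute is expressed in seconds). For illustration, the default behavior can be made explicit in the server-side cache.xml as below; the port value here is only an example:

 <cache ...>
   <cache-server port="0" notify-by-subscription="true" message-time-to-live="180" />
 </cache>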


Set message-time-to-live="0" on the cache-server element in the server-side cache.xml, as shown below:

 <cache ...>
   <cache-server port="0" notify-by-subscription="true" message-time-to-live="0" />
 </cache>

If you set message-time-to-live=0, no expiration task is created for each event, eliminating the overhead of expirations on subscription queues. This may help you meet your performance requirements.

Additional Information

Setting message-time-to-live=0 has no side effects as long as server-side events are consumed by clients in a timely manner and there is no buildup in the subscription queues. However, if events do start to build up in a subscription queue, with no expiration of those events, they will consume increasing amounts of memory. You may therefore need to tune other parameters to limit the memory used by subscription queues. For more information on these parameters, please refer to the following document:
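As one example of limiting subscription-queue memory, the client-subscription sub-element of cache-server can overflow queued events to disk once a capacity threshold is reached. The sketch below assumes an entry-based eviction policy; the capacity value and overflow directory name are illustrative only:

 <cache ...>
   <cache-server port="0" notify-by-subscription="true" message-time-to-live="0">
     <client-subscription eviction-policy="entry" capacity="50000" overflow-directory="subscription_overflow" />
   </cache-server>
 </cache>

With eviction-policy="entry", capacity is a number of queued entries; with eviction-policy="mem", it is a memory limit in megabytes.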


