
Region /regionA has potentially stale data. It is waiting for another member to recover the latest data.

I'm new to GemFire and in the process of learning how to use it. It looks like I made a mistake, either by starting the servers from different shell windows or from different directories. After shutting everything down and attempting to start again, I'm getting the following error that I can't get around:

gfsh>start server --name=server1 --server-port=40411 --force=true
Starting a GemFire Server in /opt/pivotal/gemfire/Pivotal_GemFire_820/server1...
................
Region /regionA has potentially stale data. It is waiting for another member to recover the latest data.
My persistent id:

DiskStore ID: 4419f03d-d8e2-4973-862f-de3c5b99c382
Name: server1
Location: /<my IP address>:/opt/pivotal/gemfire/Pivotal_GemFire_820/server1/.

Members with potentially new data:
[
DiskStore ID: 9b8b4ebd-7ee5-4975-8c2c-1d82f39fb109
Name: server2
Location: /<my IP address>:/opt/pivotal/gemfire/Pivotal_GemFire_820/server2/.
,
DiskStore ID: a5eca851-8801-472d-83bb-e42ecb55de6b
Name: server2
Location: /<my IP address>:/root/server2/.
,
DiskStore ID: ebd7faf5-1fb6-4175-930d-2587af411970
Name: server3
Location: /<my IP address>:/opt/pivotal/gemfire/Pivotal_GemFire_820/server3/.
]
Use the "gemfire list-missing-disk-stores" command to see all disk stores that are being waited on by other members.

Can somebody please advise how to fix this? My goal is to have three servers running from the same directory, supporting the same region.
Thank you in advance,

Michael

1 comment

The 'potentially stale data' message is basically saying that other members have potentially newer versions of the data in regionA.

In this case, these disk stores are involved (server1 is the member that is waiting; the others are the ones with potentially newer data):

server1 -> /opt/pivotal/gemfire/Pivotal_GemFire_820/server1/.
server2 -> /opt/pivotal/gemfire/Pivotal_GemFire_820/server2/.
server2 -> /root/server2/.
server3 -> /opt/pivotal/gemfire/Pivotal_GemFire_820/server3/.

Note that server2 appears twice with two different locations, which matches your suspicion that the servers were started from different shells or directories at some point.
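
If you want to list the disk stores being waited on yourself, the gfsh equivalent of the "gemfire list-missing-disk-stores" hint in the message is something like this, once you're connected to the cluster:

gfsh>show missing-disk-stores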

I think I would delete those directories and start from scratch.
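
For example, from a shell (these are the locations from the message above; double-check the paths before deleting anything):

rm -rf /opt/pivotal/gemfire/Pivotal_GemFire_820/server1
rm -rf /opt/pivotal/gemfire/Pivotal_GemFire_820/server2
rm -rf /root/server2
rm -rf /opt/pivotal/gemfire/Pivotal_GemFire_820/server3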

I wish I could attach an entire example, but this forum doesn't support that.

Here are (hopefully) the relevant parts of one.

You can start a locator like:

gfsh start locator --name=locator --port=23456
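
Once the locator is up, you can verify it's reachable with something like the following (you should see the locator itself listed):

gfsh -e "connect --locator=localhost[23456]" -e "list members"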

You can start the servers something like below, where the config directory contains the gemfire-server.properties and gemfire-server.xml files. The first time you start them, you can start them sequentially, since there is no persistent data yet.

gfsh start server --name=server1 --classpath=$PWD/config --properties-file=$PWD/config/gemfire-server.properties

gfsh start server --name=server2 --classpath=$PWD/config --properties-file=$PWD/config/gemfire-server.properties

gfsh start server --name=server3 --classpath=$PWD/config --properties-file=$PWD/config/gemfire-server.properties
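
For reference, the layout this assumes looks like the following (gfsh creates each serverN working directory for you, and that is where the persistence files will go):

config/gemfire-server.properties
config/gemfire-server.xml
server1/
server2/
server3/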

The gemfire-server.properties file would contain something like:

log-level=config
log-file=cacheserver.log
# use the locator (not multicast) for member discovery
mcast-port=0
locators=localhost[23456]
statistic-archive-file=cacheserver.gfs
statistic-sampling-enabled=true
# the declarative cache configuration shown below
cache-xml-file=gemfire-server.xml
conserve-sockets=false

The gemfire-server.xml file would contain something like below. This will put the persistence files in the server1, server2 and server3 subdirectories of the directory where you start the servers.

<?xml version="1.0" encoding="UTF-8"?>
<!-- schema declaration for GemFire 8.1+; adjust to match your version -->
<cache xmlns="http://schema.pivotal.io/gemfire/cache"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://schema.pivotal.io/gemfire/cache http://schema.pivotal.io/gemfire/cache/cache-8.1.xsd"
       version="8.1">

    <!-- port="0" lets each cache server pick an ephemeral port, so all three can run on one host -->
    <cache-server port="0"/>

    <!-- persist to the server's working directory, with small (10MB) oplogs for this example -->
    <disk-store name="data_store" max-oplog-size="10">
        <disk-dirs>
            <disk-dir>.</disk-dir>
        </disk-dirs>
    </disk-store>

    <!-- redundant, persistent partitioned region backed by the disk store above -->
    <region name="regionA" refid="PARTITION_REDUNDANT_PERSISTENT">
        <region-attributes disk-store-name="data_store"/>
    </region>

</cache>
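
Once the servers are up, you can sanity-check the region with something like:

gfsh -e "connect --locator=localhost[23456]" -e "describe region --name=/regionA"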

When you shut down the members, shut them all down at once like below. This shuts down the locator too. You can remove the '--include-locators=true' option if you don't want to shut down the locator.

gfsh -e "connect --locator=localhost[23456]" -e "shutdown --include-locators=true"

When you restart the servers, restart all three simultaneously. Each persistent member waits for peers that might hold newer data before it finishes recovering, so bringing them up together avoids the 'potentially stale data' message.
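
One way to do that from a shell (assumption: bash; these are the same start commands as above, just backgrounded so all three come up together):

gfsh start server --name=server1 --classpath=$PWD/config --properties-file=$PWD/config/gemfire-server.properties &
gfsh start server --name=server2 --classpath=$PWD/config --properties-file=$PWD/config/gemfire-server.properties &
gfsh start server --name=server3 --classpath=$PWD/config --properties-file=$PWD/config/gemfire-server.properties &
wait

Each gfsh start server call blocks until its server is up, and a recovering persistent member waits for its peers, so a sequential restart can appear to hang; backgrounding the commands lets all three members find each other and recover together.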

Barry Oglesby