|Product|Version|
|Pivotal Greenplum (GPDB)|All versions|
Pivotal Support generally requires Pivotal Greenplum database log files in order to diagnose issues. It is recommended to provide at least the master logs when opening a ticket, which allows Pivotal Support to understand the issue and begin work more quickly.
gpsupport is the recommended tool for collecting logs: it gathers the most common logs from the master and segment servers and generates a single archive file, which can then be uploaded to Pivotal Support.
This article provides instructions for downloading and installing gpsupport, along with some common usage scenarios.
Additional information can be found in the gpsupport documentation.
Follow these steps to download gpsupport:
- Go to Pivotal Greenplum Database on Pivotal Network.
- Go to Greenplum Support.
- Download the latest release of Greenplum Database Support Utility.
- Copy the file to the GPDB master node (mdw).
Follow these steps to install and run gpsupport:
- From the GPDB master node (mdw), extract the file:
- Make the file executable:
chmod +x gpsupport-X.X.X.X
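The extract-and-chmod steps above can be sketched end to end as follows. The release file name and version here are placeholders (substitute the actual `gpsupport-X.X.X.X` file you downloaded), and the archive is assumed to be a gzipped tarball; the first line only creates a stand-in file so the steps can be tried anywhere.

```shell
# Stand-in for the downloaded release so these steps can be tried anywhere;
# in practice this file comes from Pivotal Network.
RELEASE=gpsupport-1.0.0.0   # hypothetical version; use your downloaded file
touch "${RELEASE}" && tar -czf "${RELEASE}.tar.gz" "${RELEASE}" && rm "${RELEASE}"

# Extract the downloaded archive (assuming a gzipped tarball).
tar -xzf "${RELEASE}.tar.gz"

# Make the utility executable and confirm the permission bit.
chmod +x "${RELEASE}"
ls -l "${RELEASE}"
```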
- Collect logs from the master and segments for the current date:
$ ./gpsupport collect logs
Checking connectivity and authentication...
Validating segment file on remote hosts...
Segment file not found or invalid on some hosts. Installing...
Starting node collection
Checking for errors in node collection
Generating final tarfile /tmp/log_collector_2015-08-13_06-11-21-000.tar
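Before uploading, it can be worth sanity-checking the generated tarfile. This is a generic sketch, not part of gpsupport itself: the tarfile path matches the sample run above (adjust it to your own run), and the `mkdir`/`echo`/`tar -cf` lines only build a small stand-in archive with a hypothetical log file name so the listing can be tried anywhere.

```shell
# Archive name from the sample run above; adjust to your own run.
TARFILE=/tmp/log_collector_2015-08-13_06-11-21-000.tar

# Stand-in: build a small archive (hypothetical log file name) so the
# listing below can be tried without a real gpsupport run.
mkdir -p /tmp/logdemo && echo demo > /tmp/logdemo/gpdb-2015-08-13_000000.csv
tar -cf "$TARFILE" -C /tmp logdemo

# List the archive contents and check its size before uploading.
tar -tf "$TARFILE"
du -h "$TARFILE"
```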
- Collect logs from the last 3 days:
$ ./gpsupport startDate $(date '+%Y-%m-%d' --date='3 days ago') endDate $(date '+%Y-%m-%d') collect logs
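The `startDate`/`endDate` values in the command above are plain `YYYY-MM-DD` strings computed with GNU `date` (the `--date` option is GNU-specific, so this works on Linux but not with BSD `date`). Shown on its own:

```shell
# Compute the date range passed to gpsupport (GNU date, Linux).
START=$(date '+%Y-%m-%d' --date='3 days ago')
END=$(date '+%Y-%m-%d')
echo "startDate=${START} endDate=${END}"
```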
- Diagnose UDP connectivity between segment hosts.
- Note that the mandatory hostfile used with the tool must contain a single hostname per line, with no extraneous whitespace.
$ ./gpsupport-X.X.X.X hostfile=/home/gpadmin/hostfile diagnose connectivity
Checking connectivity and authentication...
Validating segment file on remote hosts...
~# Starting remote senders
smdw --> ALL NODES
sdw3 --> ALL NODES
mdw --> ALL NODES
sdw5 --> ALL NODES
~# Waiting for remote senders to start
~# Starting workload
~# Waiting for workload to complete
~# Completed workload
~# Stopping remote senders
~# IO Stream report
smdw --> ALL NODES | 11.41mb/s | 0.00000% Loss | 342mb sent | 342mb received
 --> sdw3:7114 | 3.80mb/s | 0.00000% Loss | 114mb sent | 114mb received
 --> mdw:7114 | 3.80mb/s | 0.00000% Loss | 114mb sent | 114mb received
 --> sdw5:7114 | 3.80mb/s | 0.00000% Loss | 114mb sent | 114mb received
sdw3 --> ALL NODES | 20.73mb/s | 4.44175% Loss | 650mb sent | 622mb received
 --> mdw:7115 | 7.23mb/s | 0.00000% Loss | 216mb sent | 216mb received
 --> sdw5:7115 | 7.23mb/s | 0.00000% Loss | 216mb sent | 216mb received
 --> smdw:7115 | 6.27mb/s | 13.32493% Loss | 216mb sent | 188mb received
mdw --> ALL NODES | 20.85mb/s | 5.28495% Loss | 660mb sent | 625mb received
 --> smdw:7116 | 6.18mb/s | 15.85465% Loss | 220mb sent | 185mb received
 --> sdw3:7116 | 7.34mb/s | 0.00000% Loss | 220mb sent | 220mb received
 --> sdw5:7116 | 7.34mb/s | 0.00000% Loss | 220mb sent | 220mb received
sdw5 --> ALL NODES | 20.71mb/s | 4.53547% Loss | 650mb sent | 621mb received
 --> smdw:7117 | 6.25mb/s | 13.60608% Loss | 216mb sent | 187mb received
 --> sdw3:7117 | 7.23mb/s | 0.00000% Loss | 216mb sent | 216mb received
 --> mdw:7117 | 7.23mb/s | 0.00000% Loss | 216mb sent | 216mb received
~# Finished
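Since the hostfile must list exactly one hostname per line with no extra whitespace, a quick cleanup pass can prevent spurious failures. This is a generic sketch, not a gpsupport feature: the first line deliberately writes a messy file (using the host names from the sample output above) so the cleanup can be tried anywhere.

```shell
# Stand-in: a deliberately messy hostfile with stray spaces and blank lines.
printf '  mdw \nsmdw\n\n sdw3\nsdw5  \n' > /tmp/hostfile.raw

# Trim leading/trailing whitespace and drop empty lines to produce a
# clean hostfile suitable for gpsupport.
sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//' /tmp/hostfile.raw \
  | grep -v '^$' > /tmp/hostfile

cat /tmp/hostfile
```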