DD Boost error 5005 "Not enough I/O streams available for file transfer"


  • Pivotal Greenplum Database 4.3.x
  • Operating System: Red Hat Enterprise Linux (RHEL) 6.x


You might not be able to perform a DD Boost backup replication with gpmfr, and instead get the following error:

DD Boost error 5005: Not enough I/O streams available for file transfer.

Reducing the gpmfr --max-streams value does not resolve the issue either.

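For reference, a replication run that hits this error is typically started along these lines (a sketch; the timestamp and stream count are illustrative, with the timestamp taken from the backup listing further below rather than from the original case):

[gpadmin@mdw ~]$ gpmfr --replicate=20170626212344 --max-streams=10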

When we encounter gpmfr issues, the first thing to check is whether the remote Data Domain is still online. Listing the backups on the remote DD, as shown below, is a quick way to verify that.
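
The command (recorded as the gpmfr args in the log output that follows) is:

[gpadmin@mdw ~]$ gpmfr --list --remote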

20170627:23:31:50:181479 gpmfr:mdw:gpadmin-[INFO]:-Starting gpmfr with args: --list --remote 
Listing backups on remote( Data Domain. 
Default backup directory: DCA_EDGE_SPS_PROD/prprodedge/prprodedge 
2017-June-16 21:18:08 (20170616211808) 
2017-June-19 21:20:25 (20170619212025) 
2017-June-20 21:22:24 (20170620212224) 
2017-June-21 21:17:59 (20170621211759) 
2017-June-22 21:20:53 (20170622212053) 
2017-June-23 21:19:28 (20170623211928) 
2017-June-26 21:23:44 (20170626212344)

If the remote DD is online, the next step is to check the gpmfr verbose log to get the full picture of the issue:
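
The gpmfr log can be found under ~/gpAdminLogs on the master, following the usual Greenplum utility naming convention (assumed here; the date in the file name is illustrative):

[gpadmin@mdw ~]$ less ~/gpAdminLogs/gpmfr_20170628.log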

Identifying backup files on local( Data Domain. 
Initiating transfer for 71 files from local( to remote( Data Domain. 
20170628:01:45:56:713259 gpmfr:mdw:gpadmin-[ERROR]:-gpmfr failed. exiting... 
Traceback (most recent call last): 
File "/usr/local/greenplum-db/lib/python/gppylib/mainUtils.py", line 281, in simple_main_locked 
exitCode = commandObject.run() 
File "/usr/local/greenplum-db/lib/python/gppylib/operations/__init__.py", line 53, in run 
self.ret = self.execute() 
File "/usr/local/greenplum-db/./bin/gpmfr.py", line 1197, in execute 
File "/usr/local/greenplum-db/./bin/gpmfr.py", line 1481, in replicateBackup 
File "/usr/local/greenplum-db/./bin/gpmfr.py", line 523, in verifyLogin 
File "/usr/local/greenplum-db/./bin/gpmfr.py", line 342, in printErrorAndAbort 
raise Exception(msg) 
Exception: DD Boost error 5005: Not enough I/O streams available for file transfer. 

From the above example, we can see the Python call stack (unlike a GDB backtrace, a Python traceback reads from top to bottom, with the most recent call last); thus, the last gpmfr function called before the error-handling code is verifyLogin(). Since gpmfr is Python code, we can open the source and check whether gpmfr uses gpddboost --verify to connect to the remote DD:
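
The file and line number to inspect come straight from the traceback; for example (the line range is chosen to bracket line 523, as reported for verifyLogin):

[gpadmin@mdw ~]$ sed -n '515,535p' /usr/local/greenplum-db/bin/gpmfr.py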

    def verifyLogin(self):
        """
        "gpddboost --verify" connects to DD system using configured username and
        password.  gpddboost also creates storage unit on the DD system if one
        doesn't exist.
        """
        rc, lines = self._runDDBoost("--verify --ddboost-storage-unit %s" % self.DDStorageUnit)
        if rc != 0:
            logger.info("gpddboost --verify --ddboost-storage-unit %s: %s" % (self.DDStorageUnit, "\n".join(lines)))
            code = self._parseError(lines)
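
When the verification fails, rc is non-zero: _parseError() extracts the DD Boost error code from the output (5005 here) and, further along this path, printErrorAndAbort() raises the exception we saw at the bottom of the traceback.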

The same verification can be run manually; the command below verifies the connection to the remote Data Domain:

[gpadmin@mdw gpAdminLogs]$ gpddboost --verify --remote
20170628:02:46:10|ddboost-[DEBUG]:-Libraries were loaded successfully
20170628:02:46:10|ddboost-[INFO]:-opening LB on /home/gpadmin/DDBOOST_MFR_CONFIG
20170628:02:46:10|ddboost-[ERROR]:-ddboost create storage unit failed. Err = 5005
20170628:02:46:10|gpddboost-[ERROR]:-Could not connect to DD_host with DD_user and the DD_password.
[gpadmin@mdw gpAdminLogs]$

Check with Data Domain Support, using the above error message as your reference.

Additional Information

This error message is normally caused by one of the following:

  • Not enough DD Boost I/O streams available on the Data Domain
  • An incorrect DD Boost username or password
  • The remote Data Domain being out of space
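
If the space-full cause is suspected, capacity can be checked from the Data Domain side, for example with the DD OS command below (run on the Data Domain itself, not the GPDB master; the hostname in the prompt is illustrative):

sysadmin@dd01# filesys show space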

