Pivotal Knowledge Base


Troubleshooting Failed Uploads


Pivotal Web Services (PWS): All versions 


When pushing an application with cf, the process can fail during the upload or before the build pack runs.  In these situations, it is difficult to understand what failed because the application never fully stages, and hence no logging is available.  This article outlines some of the possible causes for a failure during the upload or prior to the build pack running.


There are many possible causes of upload failures.  This article lists some of the common ones, but other problems may occur as well.

  1. Insufficient upload bandwidth, excessively high network latency, or other networking-related problems.  These typically result in an error like "Server error, status code: 502".
  2. Your upload is too large.
  3. Your application has file names that are too long.
  4. Your application has a large number of small files.


This section lists resolutions for the problems described in the causes section.  The numbers below correspond to the causes above.

  1. The run.pivotal.io servers are located in the US (AWS US-East).  In most cases, if you are pushing from a location in the US and have sufficient upload bandwidth, you should not see any issues.  If you are pushing from a country outside the US, or from a location with limited bandwidth or a high-latency network, you may see some issues.  This is because you have only a finite amount of time to upload your application, currently limited to 15 minutes.

    If you find that you are hitting this limit, you may want to first copy your application (rsync works well for this) to a computer or server located in the US and then push from that location.  Because run.pivotal.io runs on Amazon EC2, one of Amazon's free-tier EC2 instances makes an ideal jump box for pushing your application.

    Alternatively, you can look at using a wrapper build pack like the scm build pack or the download build pack.  With these build packs, you can upload your application somewhere, perhaps to GitHub, Dropbox, or another cloud hosting service, and then just point at the public files.  The build pack will then download them, which should be very fast, and use them for your application.
  2. Your upload is too large.  As of this writing, we test uploads up to 1 GB in size.  If your upload is larger than this, you may want to look at breaking the application into smaller chunks or hosting large static assets elsewhere (S3, Akamai, a second application, etc.).

    It is also recommended that you double-check the path that you are specifying with the -p argument to cf push (or the one set in your manifest.yml file).  Because cf will upload everything under the path you specify, a path pointing to the wrong location can cause you to upload many more files than you expect.  This can both slow down your uploads and push you over the upload limit.

    For most applications, like Ruby and Node.js, the path should point to your project directory.  For Java based applications, the path should point to your packaged application (i.e. WAR or JAR file).  If there are files under the path directory that you do not want to upload, you can ignore these by placing a .cfignore file in the same directory as your manifest file.
  3. Most file systems (NTFS, ext3, HFS+, etc.) support file names up to 255 characters.  However, Cloud Foundry and Warden use a special file system called "aufs", which only supports file names up to 242 characters.  If your application has file names longer than 242 characters, it will fail prior to the build pack running.  In the output, you'll see "-----> Downloaded app package (xxM)", but you won't see any output from the build pack, and the application will fail to stage.

    This happens because CF tries to extract the application bundle into the Warden container where your application will run, but fails due to the long file names.  Unfortunately, the only generic solution is to make the file names shorter.  For more information, see the following thread.
  4. Having a large number (more than a few thousand) of small files (less than 65k each) can cause problems with the cf push process.  Small files are not subject to the same upload caching as large files, which means they are uploaded every time you push the application, rather than just the first time.  Because of this, having a large number of small files in your project will slow down the push process.

    In addition, having a large number of small files in your project may cause the following error when pushing: "Error: timed out waiting for async job".  This indicates that the cf utility stopped waiting for the push process to complete on the server because it exceeded the timeout.  It does not mean that the push failed; rather, the push just needs more time to complete.  This thread describes how you can work around this condition.

Additional Information

Please also take a look at the Application Troubleshooting Guide in the run.pivotal.io documentation.  It contains more helpful pointers for troubleshooting deployment problems.
