Pivotal Knowledge Base

How to manage build packs on PCF®

Environment

Product Version
Pivotal Cloud Foundry® (PCF) 1.5.x, 1.6.x and 1.7.x

Purpose

Each version of PCF ships with a set of build packs, and each build pack ships with a set of binaries that it supports (these are listed in each build pack's release notes; see, for example, the Ruby build pack). Because those binaries iterate often, typically to patch bugs and security issues, the build packs iterate often as well. This article explains the common strategies used to keep build packs up-to-date.

Procedure

There are three general strategies for keeping build packs up-to-date, and each has its pros and cons, which are discussed below.

1. Upgrade Elastic Runtime

The first option is to simply keep Elastic Runtime up-to-date. New versions of Elastic Runtime pull in newer build packs (typically the latest versions available at the time of the release).

Pros

  • No work is required beyond keeping Elastic Runtime up-to-date.
  • Build pack versions are always compatible with Elastic Runtime, system applications, and errands.
  • Build packs are updated in-place. This can be good as it forces users to upgrade.

Cons

  • Build packs are released more often than Elastic Runtime, so you won't get security updates and bug fixes as quickly as with the other methods.
  • This option requires staying current with Elastic Runtime releases. Falling behind on Elastic Runtime versions increases the lag time before users receive bug and security fixes from the build packs.
  • Build packs are updated in-place. This can be bad as it means an application that deployed correctly yesterday, might not deploy correctly post-update. This can confuse developers as things suddenly stop working. See the Impact/Risks section for more details.

This is a solid option and can be considered the default option. If you don't do anything beyond upgrade Elastic Runtime, then you're using this approach.

2. Upgrade build packs in-place

The second option is to manually upgrade build packs in-place. The general process is to download new build packs from Pivotal Network as they are released, then use the cf update-buildpack command to update your existing build packs to the latest version. This ensures you have the latest binaries (Java runtime, Tomcat, Ruby, HTTPD, etc.) in your environment.
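For example, assuming you have downloaded a newer offline Java build pack from Pivotal Network (the build pack name and zip file name below are illustrative), an in-place update might look like this:

```shell
# See which build packs are installed and their positions
cf buildpacks

# Replace the bits of the existing build pack in place.
# "java_buildpack_offline" and the zip file name are examples;
# substitute the names used in your environment.
cf update-buildpack java_buildpack_offline -p java-buildpack-offline-v3.6.zip
```

Applications pick up the new bits the next time they are staged, for example on the next cf push or cf restage.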

Pros

  • You always have the latest build packs and binaries available.
  • Build packs are updated in-place. This can be good as it forces users to upgrade and forces them to pull in the latest bug and security fixes.

Cons

  • Build packs are updated in-place. This can be bad as it means an application that deployed correctly yesterday, might not deploy correctly post-update. This can confuse developers as things suddenly stop working. See the Impact/Risks section for more details.
  • System applications and errands may not support the latest Elastic Runtime. See the Impact / Risks section for more details.

This is likely the best option for security-conscious users, as it delivers build pack upgrades quickly and forces users onto them the next time they stage their applications. It does have some challenges, though, so refer to the Impact/Risks section for suggestions on how to work through them.

3. Create new build packs

The third option is to version build packs and create new ones. The general process for this approach is to download new build packs from Pivotal Network as they are released. Then, instead of updating the build pack in-place, you use the cf create-buildpack command to create a new build pack with a unique name, typically one that includes the version number.
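As a sketch, assuming the same hypothetical Java build pack, you might upload each release under a versioned name (the names, file, and position below are illustrative):

```shell
# Create a new, versioned build pack alongside the existing ones.
# The final argument is its position in the detection order.
cf create-buildpack java_buildpack_offline_v3_6 java-buildpack-offline-v3.6.zip 5

# Developers who want this exact version can target it explicitly:
cf push my-app -b java_buildpack_offline_v3_6
```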

Pros

  • You always have the latest build packs and binaries available for use.
  • Developers can select a specific version of the build pack and set of binaries.  This allows them to update at their own pace.

Cons

  • The operator now needs to manage a larger number of build packs. This can become complicated and will likely involve defining lifecycle policies, enforcing a naming and versioning convention, and deleting old build packs.
  • While the latest build packs are always available, developers may choose to not upgrade and thus delay the deployment of bug and security fixes.
  • The order of build packs becomes very important, because multiple build packs can now handle the same application. It's critical to set the order so that users who don't specify a build pack get the expected one (perhaps the latest version, but whatever makes sense for your users).
  • If the number of build packs grows too large, you can start to see staging failures. See the Impact / Risks section for more details.
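Assuming the versioned naming convention sketched above, ordering and retirement might be managed like this (names and positions are illustrative):

```shell
# Review the current detection order; lower positions are tried first
cf buildpacks

# Move the newest version to the front so that apps which don't
# specify a build pack detect against it
cf update-buildpack java_buildpack_offline_v3_6 -i 1

# Retire an old version once no applications depend on it
cf delete-buildpack java_buildpack_offline_v3_4 -f
```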

This option seems to work well for operators that work with developers that naturally gravitate towards using the latest software. In that case, they'll willingly update and you can efficiently retire old versions of build packs. If you're in a situation where developers don't like to upgrade, then you will have to force them to upgrade at some point as you retire build packs and you'll end up having the additional problem where user apps can fail because of the forced build pack upgrades (See the Impact/Risks section for more details on this problem).

This option also offers the most flexibility for binaries, as it allows an operator to support more than just the current and previous versions of a dependency. If developers need support for very specific versions, this is likely the best route.

Impact/Risks

Security and Bug Fixes

Build packs are packaged with software binaries. Using older versions of build packs means you have older binaries of things like the Java Runtime, Apache Tomcat, Ruby, PHP, Python, and other dependencies. Using older versions may leave you susceptible to bugs that have not been patched or to security vulnerabilities.

Staging Failures due to build pack upgrades

When upgrading build packs in-place, you may see some users complain that an application pushed successfully yesterday and is failing today despite there being no changes to the application.

This can happen for a few reasons, but the main one is that the supported versions of each binary evolve over time. Each build pack supports only the current and previous versions of a dependency: for example, a build pack might support v2.0.0 and v2.0.1 today, while the next release supports v2.0.1 and v2.0.2. If your applications are pinned to specific runtime versions, they can fail when the expected version is no longer available. Continuing the example, an app pinned to v2.0.0 of a dependency will break when you upgrade the build pack, because that version is no longer available post-upgrade.

One strategy for working around this is to communicate with your developers. Let them know about build pack changes in advance, including which binary changes are coming (Pivotal publishes these in each build pack's release notes). Developers can then plan for the changes and know when they will happen.

System application failures

This problem is similar to Staging Failures due to Build Pack Upgrades, but it affects applications deployed by Elastic Runtime and other Pivotal tiles. The apps shipped as part of the platform (Apps Manager, for example) face the same version challenges. If you upgrade build packs repeatedly without upgrading Elastic Runtime, you can reach a point where the dependency versions available in your build packs no longer meet the needs of the platform apps, which will then fail to stage.

Fortunately, you can work around this by keeping your Elastic Runtime environments up-to-date. You don't have to deploy every maintenance release (although you certainly could); you just don't want to fall behind by many versions. There is no hard rule for how many versions you can safely fall behind before seeing problems, as it depends on how quickly binaries update. That said, the further behind you fall, the more likely you are to encounter this problem.

Staging failures due to build pack downloads

This problem happens when there are a large number of build packs, or when one build pack in particular is very large. Before an application stages on a Diego Cell, the Cell must download the build packs (assuming they are not already cached on the Cell). The staging process has a finite time limit, and time spent downloading build packs counts against it. If the build packs cannot be downloaded to the Cell within that limit, staging fails. This usually occurs when an application is deployed to a new Cell, where no build packs are cached, and the application does not specify a build pack, which forces the Cell to download all build packs. As such, the easiest solution is to specify a build pack when deploying an application.
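For example, the build pack can be pinned either on the command line or in the application's manifest (the app and build pack names below are illustrative):

```shell
# Specify the build pack at push time so the Cell downloads only one
cf push my-app -b java_buildpack_offline

# Or pin it in manifest.yml:
#   applications:
#   - name: my-app
#     buildpack: java_buildpack_offline
```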

The other issue that can occur with a large number of build packs is that the Cell can run out of disk space. As previously mentioned, the Cell caches build packs locally so they are only downloaded once. With the default settings there should be plenty of ephemeral disk space for this cache. However, if many applications are deployed to the Cell, if those applications use large amounts of disk space, if the Cell is configured with a small ephemeral disk, or if the Cell is configured with a very large amount of memory (which creates a large swap disk and shrinks the ephemeral disk), then the Cell may be unable to download your build packs. The workaround is to increase the size of the ephemeral disk.
