On Mon, Nov 9, 2015 at 7:12 PM, Shawn Nielsen <sknielse(a)gmail.com> wrote:
Thanks for the quick reply.
My question is more around using an 'online' buildpack vs. 'offline'.
When I say 'online', I'm referring to the binaries being pulled from the s3
repo at staging time.
Right, but in your question you are specifically stating that the "offline"
(or "cached") buildpack has different URLs than the "online" (or
"uncached") one, and I want to make my point clear that this is not the case.
The URLs are identical between online and offline behavior for any specific
version of the buildpack. I believe you're comparing the behaviors of two
different versions of the buildpack.
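To sketch that point: a cached ("offline") buildpack is packaged from the same manifest as the uncached ("online") one; at package time each dependency URI is downloaded and stored under a filename derived from that URI, so the URIs themselves never differ. The escaping scheme and URI below are illustrative, not the exact ones the real packaging tooling uses.

```python
import re

# Illustrative only: map a dependency URI from the manifest to the
# filename a cached buildpack might store it under. The real packager's
# escaping scheme may differ; the URI here is a made-up example.
def cached_filename(uri):
    return "dependencies/" + re.sub(r"[:/?&]", "_", uri)

uri = "https://example.com/dependencies/node/node-4.2.2-linux-x64.tgz"
print(cached_filename(uri))
# prints: dependencies/https___example.com_dependencies_node_node-4.2.2-linux-x64.tgz
```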
If we use the offline buildpack binaries, we are restricted to node
versions explicitly defined in the manifest files.
This is intentional. It's the Buildpacks team's policy, *by default*, to ship
only the two most recent releases on each major/minor branch, to help ensure
that developers using CF are updating to the most secure versions of these
binaries. This was announced earlier in the year in an extensive email
about reducing the sizes of buildpacks while improving the security level
of the default platform.
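As a rough sketch of that default policy (the version numbers are made up, and the real manifest-curation tooling is more involved than this), keeping only the two most recent patch releases on each major/minor line looks like:

```python
from collections import defaultdict

# Sketch of the default pruning policy described above: keep only the
# two most recent patch releases per major/minor line. Versions are
# hypothetical, not taken from any real buildpack manifest.
def prune_versions(versions, keep=2):
    by_line = defaultdict(list)
    for v in versions:
        major, minor, patch = (int(x) for x in v.split("."))
        by_line[(major, minor)].append((patch, v))
    kept = []
    for line in sorted(by_line):
        newest = sorted(by_line[line], reverse=True)[:keep]
        kept.extend(v for _, v in sorted(newest))
    return kept

print(prune_versions(["4.2.1", "4.2.2", "4.2.3", "5.0.0", "5.1.0", "5.1.1"]))
# prints: ['4.2.2', '4.2.3', '5.0.0', '5.1.0', '5.1.1']
```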
You're free to change this policy for your CF deployment. Details are below.
This creates some issues when projects want to do validation testing
between one version and the next or need to roll back for whatever reason.
I understand this issue, and I empathize. In my opinion, the current *default
policy* is the least-pessimal (meaning, the least bad) solution
available to us. Including more binaries increases the vulnerability
surface area of the platform and slows down deployment and staging
times. Including fewer binaries makes upgrades nearly impossible.
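For context, the restriction Shawn describes comes from the buildpack's manifest: staging can only resolve a dependency version that is explicitly listed there. A minimal sketch (the structure and versions here are illustrative, not the real manifest format):

```python
# Hypothetical, cut-down view of a buildpack manifest. Staging can only
# use a node version that appears in the dependency list, which is why
# shipping just two releases per line restricts rollback and validation
# testing against older versions.
MANIFEST = {
    "language": "nodejs",
    "dependencies": [
        {"name": "node", "version": "4.2.2", "cf_stacks": ["cflinuxfs2"]},
        {"name": "node", "version": "4.2.3", "cf_stacks": ["cflinuxfs2"]},
    ],
}

def resolve(name, version, stack="cflinuxfs2"):
    for dep in MANIFEST["dependencies"]:
        if (dep["name"], dep["version"]) == (name, version) and stack in dep["cf_stacks"]:
            return dep
    raise LookupError("%s %s is not in the manifest" % (name, version))

print(resolve("node", "4.2.3")["version"])  # listed, so staging proceeds
```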
Again, you're free to change this policy for your CF deployment. Details are below.
The community raised no objections to this policy, which leads me to
believe that this opinion is at least reasonably defensible.
It also requires us to manually update the buildpack every time there is a
new release. We've historically used the online buildpacks to prevent
these types of issues, but we'd like to better understand Pivotal's long
term strategy here.
I am an open-source PM of an open-source project, and we've been completely
transparent about the reasoning and logic behind this decision. We asked
the community for comments over an extended period of time, and received no
objections to implementing this policy.
Further, *we've open-sourced all the tooling that CF operators need to
implement a different buildpack policy*. If you feel the default policy
doesn't match your organization's requirements, you are fully enabled
to create a custom buildpack with a custom set of binaries that do.
If you'd like help implementing your own buildpack releases, the
open-source Buildpacks team would be more than happy to help.
If you'd like to know more, you might start here:
and if you want to generate custom binaries, we can help with that, too:
Let me know how this open-source team can help you meet your company's needs.
Going forward, what is Pivotal's strategy for online binaries that would
be used at staging time?
On Mon, Nov 9, 2015 at 4:03 PM, Mike Dalessio <mdalessio(a)pivotal.io>
Thanks for asking.
The actual URL used will depend on which version of the buildpack
you're using. Currently, master (v1.5.1) is only referencing CF-generated binaries.
We removed references to s3pository.heroku.com in February, for v1.2.0
of the buildpack.
Let me know if you need more context around this. Thanks!
On Mon, Nov 9, 2015 at 4:40 PM, Shawn Nielsen <sknielse(a)gmail.com> wrote:
It looks like the online NodeJS buildpack is currently pointing to
Whereas the offline NodeJS buildpack manifest files seem to be pointing
to Pivotal's repo:
Is there a reason for this discrepancy?
I would think the online NodeJS buildpack should be pointing to the same
pivotal repo as the offline:
Let me know if you have any input here. Thanks,
On Fri, Jul 17, 2015 at 11:46 PM, James Bayer <jbayer(a)pivotal.io> wrote:
mike d is the best to answer, but i'll take a crack at it
On Fri, Jul 17, 2015 at 3:24 PM, Shawn Nielsen <sknielse(a)gmail.com>
Two questions on these cf-built binaries:
1. From my understanding, the purpose of this is to compile the
binaries so they are optimized for the CF-specific stacks (e.g.
cflinuxfs2) as opposed to something that was generically compiled
(e.g. for 64-bit trusty). Can you confirm this or expound upon this if
there are other reasons?
for some buildpacks, we have been relying on heroku binaries, which is
an external dependency. we want the cf team to be in complete control of
how and when the binaries are built. this ensures cf can be in control of
our own destiny for patching security issues or bugs. additionally, it
means cf can take responsibility for how the binaries are compiled, which
increases trust in the binary contents.
2. Are these binaries available in offline mode only, or is there also
intent that they will be hosted, allowing us to consume them in an online
mode?
most/all cf buildpacks are available in online and offline mode as i
understand it. see:
On Mon, Jul 13, 2015 at 2:08 PM, Mike Dalessio <mdalessio(a)pivotal.io>
On Mon, Jul 13, 2015 at 1:08 PM, Daniel Mikusa <dmikusa(a)pivotal.io>
On Wed, Jul 8, 2015 at 1:55 PM, Mike Dalessio <mdalessio(a)pivotal.io>
Good catch, I've created a story for this:
Hi all,
Looks good mostly. One minor issue. It seems like the snmp
In the June CAB call, it was mentioned that the Buildpacks team was
working on generating CF-specific binaries to be packaged in the
buildpacks. I'm pleased to announce that the team is ready to start
shipping these binaries in the golang, nodejs, php, python, and ruby
buildpacks for the cflinuxfs2 stack.
**We're planning to cut releases of these buildpacks with the CF
binaries on Monday, 20 July.**
In the meantime, I'd like to ask the community to beta-test these
buildpacks and give us feedback.
We're successfully running [BRATs] (the buildpack runtime
acceptance tests) against these binaries, so we don't expect any problems;
but we'd love to hear if anyone has trouble deploying their apps.
Obviously, until we cut official releases, you should use judgement
when deciding where to use these beta buildpacks.
## Timeline, Versioning and Proposed Changes
Unless we hear of any blocking issues, we'll cut official releases
of the go, node, php, python and ruby buildpacks on July 20th.
When we cut these releases, we'll be bumping the major version
number, and removing the `manifest-including-unsupported.yml` file from the
repository HEAD. I'd love to hear anyone's opinion on these changes as well.
## How to deploy with the "beta" binary buildpacks
Until we cut official releases, we are maintaining a git branch in
each buildpack repository, named `cf-binary-beta`, in which the manifest
points at our CF-specific binaries.
If your CF deployment has access to the public internet, you can
push your app and specify the github repo and branch with the `-b` option.
`cf push <appname> -b https://github.com/cloudfoundry/go-buildpack#cf-binary-beta`
`cf push <appname> -b https://github.com/cloudfoundry/nodejs-buildpack#cf-binary-beta`
`cf push <appname> -b https://github.com/cloudfoundry/php-buildpack#cf-binary-beta`
extension is not loading correctly. Looks to be missing a required shared library:
2015-07-13T12:29:53.09-0400 [App/0] OUT 16:29:53 php-fpm |
[13-Jul-2015 16:29:53] NOTICE: PHP message: PHP Warning: PHP Startup:
Unable to load dynamic library
libnetsnmp.so.30: cannot open shared object file: No such file or directory
in Unknown on line 0
That's with PHP 5.4.42, 5.5.26 and 5.6.10.
Another good catch, I've created a story to look into it:
One other thing, which is not really an issue, but perhaps an
optimization. I noticed for the PHP extensions the bundle has both ".a"
and ".so" files for many of the extensions. The ".a" static libraries
should not be necessary, just the ".so" shared libraries. Seems like
removing them could save 8 to 9M. You could save another 25M by deleting
the bin/php-cgi file. You really only need bin/php for cmd line stuff and
sbin/php-fpm for web apps. Can't think of any reason you'd need / want to
do cgi. It's not a ton of space, 35M X 3 (each current version) + 35M X 3
(each of previous versions) - whatever compression will save. Just thought
I'd throw it out there though.
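Daniel's back-of-envelope estimate works out roughly as follows (every number is his approximation from the message above, not a measurement):

```python
# Rough arithmetic for the savings described above; all sizes are the
# estimates quoted in the message, not measured values.
static_libs_mb = 9        # dropping the ".a" static libraries (8-9M)
php_cgi_mb = 25           # dropping bin/php-cgi
per_version_mb = static_libs_mb + php_cgi_mb   # ~35M per PHP version
bundled_versions = 3 + 3  # three current + three previous PHP versions
total_mb = per_version_mb * bundled_versions
print(total_mb)           # prints 204 (~200M, before whatever compression saves)
```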
`cf push <appname> -b https://github.com/cloudfoundry/python-buildpack#cf-binary-beta`
`cf push <appname> -b https://github.com/cloudfoundry/ruby-buildpack#cf-binary-beta`
If your CF deployment doesn't have access to the public internet
and you'd like to try these buildpacks, please reach out directly and we'll
figure out the best way to accommodate you.
## Tooling Details
If you'd like to take a look at how we're currently building these
binaries, our tooling is open-sourced at:
https://github.com/cloudfoundry/binary-builder
Note that `binary-builder` uses the rootfs docker image to compile
each binary, which means that this tool can easily be extended to
essentially cross-compile **any** binary for a CF rootfs. We'd
love to hear your feedback on this, as well.
cf-dev mailing list