On Wed, Aug 5, 2015 at 6:53 AM, Guillaume Berche <bercheg(a)gmail.com> wrote:
Thanks Mike for your detailed response, more comments inline
On Tue, Aug 4, 2015 at 3:52 PM, Mike Dalessio wrote:
Just to clarify, applications intentionally do NOT require restaging to be placed onto a new rootfs; a droplet is compatible with all future versions of a specific rootfs. When an operator deploys an update to CF with a new rootfs, the DEA VMs get rolled, and all new application instances (when they come up) are running on the new rootfs.
Thanks for correcting me on that. I had kept in mind the GHOST
vulnerability, in which a statically linked binary in the app or buildpack would require a restaging (cf http://pivotal.io/security/cve-2015-0235), but that's likely not that common a case.
Ah, yes, when libraries are statically linked, restaging is definitely required; but I think it's a much less common use case.
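To illustrate the static-linking caveat above: a rough sketch (not part of any CF tooling; assumes little-endian ELF64 binaries) of how an operator could check whether a binary in a droplet is dynamically linked. Statically linked binaries carry no PT_INTERP program header, embed their own libc, and would need restaging after a libc fix like GHOST:

```python
import struct

PT_INTERP = 3  # program header type naming the dynamic loader path

def is_dynamically_linked(data: bytes) -> bool:
    """Return True if a little-endian ELF64 binary requests a dynamic loader.

    A binary without a PT_INTERP segment is statically linked, so a rootfs
    libc patch does not reach it and the app must be restaged.
    """
    if data[:4] != b"\x7fELF" or data[4] != 2:  # magic + ELFCLASS64
        raise ValueError("not a 64-bit ELF file")
    e_phoff = struct.unpack_from("<Q", data, 32)[0]      # program header table offset
    e_phentsize = struct.unpack_from("<H", data, 54)[0]  # size of one entry
    e_phnum = struct.unpack_from("<H", data, 56)[0]      # number of entries
    return any(
        struct.unpack_from("<I", data, e_phoff + i * e_phentsize)[0] == PT_INTERP
        for i in range(e_phnum)
    )
```

An operator could run this over the executables in an extracted droplet and flag the statically linked ones for restaging.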
The cli displays the 'detected_buildpack' or 'buildpack' field returned by the app summary endpoint [g2].
It's possible that we're conflating rootfs patches with buildpack patches.
Am I right to assume that there will be multiple distinct stack instances returned by cc api calls such as  (with distinct guids but the same entity names), and that stacks are indeed immutable (i.e. the field "updated_at" will remain null in )? A cli plugin to identify apps that need restaging following a rootfs patch (similar to  but for a minor version of a given stack) would therefore browse all stacks using , and order those with the same name to determine minor patch versions?
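For what it's worth, such a plugin could group the /v2/stacks listing client-side. A minimal sketch, assuming the usual v2 response shape of metadata/entity pairs and that a patched stack reappears under the same name with a newer created_at (the sample payload below is invented):

```python
import json
from collections import defaultdict

def group_stacks_by_name(stacks_response: str):
    """Group /v2/stacks resources by entity name, newest first.

    Several guids under one name would indicate successive minor
    versions of the same stack.
    """
    groups = defaultdict(list)
    for resource in json.loads(stacks_response)["resources"]:
        meta, entity = resource["metadata"], resource["entity"]
        groups[entity["name"]].append((meta["created_at"], meta["guid"]))
    # ISO-8601 timestamps sort lexicographically, so a plain sort suffices
    return {name: sorted(items, reverse=True) for name, items in groups.items()}

# Hypothetical sample payload, trimmed to the fields used above
sample = json.dumps({"resources": [
    {"metadata": {"guid": "aaa", "created_at": "2015-03-01T00:00:00Z"},
     "entity": {"name": "cflinuxfs2"}},
    {"metadata": {"guid": "bbb", "created_at": "2015-07-01T00:00:00Z"},
     "entity": {"name": "cflinuxfs2"}},
]})
```

The most recent guid under each name would then be the current minor version, and apps bound to older guids become candidates for restaging.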
I recall a similar discussion related to buildpack versioning, where it was mentioned that buildpacks are mutable, and the only strategy to support multiple versions of a buildpack is to append version numbers to the buildpack names (and rely on priority ordering to have the most recent version be picked first). This has the drawback that an app specifying a buildpack (e.g. to disambiguate or fix an invalid detect phase) will then need to manually update the buildpack reference to benefit from a new version (a restaging won't be sufficient).
Is this correct? On CF instances that don't version buildpack names, how can users know whether apps were staged with a vulnerable version of an offline buildpack, and require restaging? Is it by comparing the staging date with the known rootfs patch date?
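Absent explicit version numbers, the date-comparison heuristic described above could be sketched like this (field names are hypothetical; the assumption is that an app staged before the buildpack's last update may still embed the vulnerable version):

```python
def needs_restaging(app_staged_at: str, buildpack_updated_at: str) -> bool:
    """Heuristic: an app staged before the buildpack's last update is suspect.

    ISO-8601 timestamps compare correctly as plain strings, so no date
    parsing is needed.
    """
    return app_staged_at < buildpack_updated_at

def apps_needing_restaging(apps, buildpack_updated_at):
    """apps: iterable of (name, staged_at) pairs; returns the suspect names."""
    return [name for name, staged_at in apps
            if needs_restaging(staged_at, buildpack_updated_at)]
```

This is only a heuristic: it flags every app staged before the update, including apps that never used the vulnerable component.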
I tried to address rootfs patches above, and so will address buildpack patches here.
It's possible to determine the buildpack used to stage an app via the CC API, and in fact CLI versions 6.8.0 and later will display this information in the output of `cf app`.
Output from the app summary endpoint [g2] is reproduced below:
$ cf app spring-startapp
Showing health and status for app spring-startapp in [...]
requested state: stopped
last uploaded: Wed Apr 29 15:14:48 UTC 2015
t#3bd15e1 open-jdk-jre=1.8.0_45 spring-auto-reconfiguration=1.7.0_RELEASE
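The name=version pairs in that output are easy to consume programmatically. A small sketch, assuming the space-separated name=version convention shown above (as emitted by the java buildpack):

```python
def parse_buildpack_versions(line: str) -> dict:
    """Split a detected-buildpack string such as
    'open-jdk-jre=1.8.0_45 spring-auto-reconfiguration=1.7.0_RELEASE'
    into a {component: version} mapping; tokens without '=' are ignored.
    """
    versions = {}
    for token in line.split():
        if "=" in token:
            name, _, version = token.partition("=")
            versions[name] = version
    return versions
```

A cli plugin could run this over every app's detected_buildpack field and report which apps embed a given component version.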
I understand the detailed buildpack versioning info is the data returned by the buildpack detect script [g3]. So buildpacks (such as the java buildpack) that provide detailed versioning info in the detect method would help cf operators understand if some apps are running specific vulnerable versions of the buildpack.
Thanks for pushing towards more transparency around buildpack versions.
I've prioritized a story to emit version information here: https://www.pivotaltracker.com/story/show/100757820
On an app which was targeting a specific buildpack (e.g. -b java-buildpack), I understand the displayed detected buildpack would not contain as much detail, and would merely display the buildpack name (or git url). If you confirm, I'll try to send a PR on the docs-* repo related to  to suggest printing out detailed versioning info for custom buildpacks (currently it suggests displaying a "framework name" with "Ruby" as an example).
Yes, please do send a PR. Thank you!
The /v2/buildpacks endpoint (used by the "cf buildpacks" command) displays
the last update date for a buildpack, e.g.
Wouldn't it make sense to have the CC increment a version number on each update, so that it becomes easier to query than relying only on dates?
While it's great to have buildpacks themselves provide detailed versioning info for their code and their most important dependencies/remote artifacts, I feel the cf platform should provide a bit more support to help identify the versions of buildpacks used by apps, such as:
- refine the app summary endpoint [g2]:
  - for system buildpacks: include the buildpack guid (in addition to the buildpack name) so as to allow correlation with the /v2/buildpacks endpoint
  - for custom buildpacks (url): record and display the git commit hash for a buildpack url
- refine the app listing endpoints [g4] or v3 [g5] to:
  - support querying apps by system buildpack id
  - support querying apps by "package_updated_at" date or, better, by a version number as suggested above
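Until such filters exist server-side, a plugin could approximate these queries client-side from the app listing. A sketch, assuming v2-style resources whose entity carries name and detected_buildpack fields (the sample payload is invented):

```python
import json

def apps_using_buildpack(apps_response: str, buildpack_name: str):
    """Return names of apps whose detected buildpack matches, filtering
    the full app listing client-side."""
    return [r["entity"]["name"]
            for r in json.loads(apps_response)["resources"]
            if r["entity"].get("detected_buildpack") == buildpack_name]

# Hypothetical sample payload, trimmed to the fields used above
sample_apps = json.dumps({"resources": [
    {"entity": {"name": "shop", "detected_buildpack": "java_buildpack"}},
    {"entity": {"name": "blog", "detected_buildpack": "ruby_buildpack"}},
]})
```

This is obviously less efficient than a server-side query parameter (it pages through every app), which is why first-class API support would help.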
I'm wondering whether the CAPI team working on API V3 is planning some work in this area, and could comment on the suggestions above.
I'll let Dieu respond to these suggestions, as she's the CAPI PM.
Thanks Mike for detailing this promising work.
However, that doesn't easily answer the question, "who's running a
vulnerable version of the nodejs interpreter?", or even harder to answer,
"who's running a vulnerable version of the bcrypt npm package?" which I
think is more along the lines of what you're asking.
Currently, there's an exploratory track of work in the Buildpacks public
tracker that includes experimenting with hooks into the Buildpacks
staging life cycle. The intention is to provide extension points for both
application developers and operators to do things like this during staging:
* run static analysis
* run an OSS license scan
* capture the set of dependencies from the application's package manager
(pip, npm, maven, gradle, bundler, composer, etc.)
* look up the set of dependencies in the NIST vulnerability database
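The dependency-capture step, for instance, could be as simple as a hook that normalises whatever the package manager declares. A sketch for the npm case (a hypothetical hook reading a package.json; the manifest below is invented):

```python
import json

def capture_npm_dependencies(package_json: str) -> dict:
    """Flatten dependencies and devDependencies from a package.json into
    one {package: version-spec} mapping that a later scan can consume."""
    manifest = json.loads(package_json)
    captured = {}
    for section in ("dependencies", "devDependencies"):
        captured.update(manifest.get(section, {}))
    return captured

# Invented sample manifest
sample_manifest = json.dumps({
    "name": "demo-app",
    "dependencies": {"bcrypt": "0.7.5", "express": "4.13.0"},
    "devDependencies": {"mocha": "2.2.0"},
})
```

Equivalent adapters for pip, maven, gradle, bundler, and composer would emit the same flat mapping, so the downstream vulnerability lookup stays buildpack-agnostic.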
There's obviously a long way to go to get there, and it's not obvious how
we can implement some of this shared behavior across buildpacks and within
the buildpack app lifecycle; but we're putting a great deal of thought into
how we might make buildpacks much more flexible and extensible --
systemwide, and without having to fork them.
Have you considered an HTTP-based API for hooking into the staging process (as an alternative to the script-based hooks mentioned in [g6])? This would allow such steps to be independent of the buildpacks. Apcera's pluggable stager model might be inspiring [g7].
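A web-services staging hook could be as small as an endpoint that accepts the captured dependency set and answers with findings. A toy sketch with Python's stdlib HTTP server (the wire format, route, and vulnerability data are all invented for illustration):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Invented example database: package -> versions known to be vulnerable
VULNERABLE = {"bcrypt": {"0.7.5"}}

class StagingHookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Body is a JSON {package: version} mapping captured at staging time
        deps = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        findings = [p for p, v in deps.items() if v in VULNERABLE.get(p, ())]
        body = json.dumps({"vulnerable": findings}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass

def start_hook_server():
    """Bind an ephemeral port and serve the hook in a background thread."""
    server = HTTPServer(("127.0.0.1", 0), StagingHookHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def scan(port, deps):
    """What a stager would do: POST the dependency set, read the findings."""
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/scan",
        data=json.dumps(deps).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Because the hook lives behind HTTP rather than inside the buildpack, the same service could serve every buildpack without forking any of them.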
This is a great reference, and I'll ask the Buildpacks team specifically to
experiment with a web-services model. Here's the story (still unscheduled): https://www.pivotaltracker.com/story/show/100758730
One could wonder how some of the extensions you mentioned (lookup against the NIST vulnerability db) could be run periodically against running apps without requiring them to restage. I guess the recent "staged droplet download" feature [g8] would support such a use case.
Yes, ideally a scan of this nature would run regularly, using the aggregated dependency data pulled from each app at staging time. Obviously, that scope of work is broader than just the Buildpacks team, but I think it's a compelling example of what can be built on top of the buildpack lifecycle if it's made to be more easily extended.
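With the dependency sets stored at staging time, the periodic job would not need to touch the droplets at all. A sketch of the cross-referencing step (data shapes are invented; assumes per-app dependency mappings were captured during staging):

```python
def affected_apps(dependency_index, vulnerable):
    """Cross-reference staged dependency data against a vulnerability feed.

    dependency_index: {app_name: {package: version}} captured at staging.
    vulnerable: {package: set of versions known to be bad}.
    Returns {app_name: [flagged packages]} for apps needing attention.
    """
    report = {}
    for app, deps in dependency_index.items():
        hits = [p for p, v in deps.items() if v in vulnerable.get(p, ())]
        if hits:
            report[app] = hits
    return report
```

Run against a refreshed vulnerability feed on a schedule, this would answer "who's running a vulnerable version of the bcrypt npm package?" without restaging anything.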