Re: Droplets and Stacks


Mike Dalessio
 

Small correction below:

On Tue, Aug 4, 2015 at 9:52 AM, Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Hi Guillaume,

Thanks for asking these questions. Some comments inline.

On Mon, Aug 3, 2015 at 4:35 PM, Guillaume Berche <bercheg(a)gmail.com>
wrote:

Thanks, Onsi, for these clarifications.

Similar to Colin's blog question, I'm wondering how much versioning is
currently provided for the rootfs and buildpacks. In other words, when a rootfs
(say cflinuxfs2) gets patched (e.g. following a CVE such as in [1]), what
support does the platform provide to identify apps that are still running
an old version of the cflinuxfs2 rootfs and require restaging?
Just to clarify, applications intentionally do NOT require restaging to be
placed onto a new rootfs; a droplet is compatible with all future versions
of a specific rootfs.

When an operator deploys an update to CF with a new rootfs, the DEA VMs
get rolled, and all new application instances (when they come up) are
running on the new rootfs.



Am I right to assume that there will be multiple distinct stack instances
returned by CC API calls such as [3] (with distinct guids but the same entity
names), and that stacks are indeed immutable (i.e. the field "updated_at"
will remain null in [4])? A CLI plugin to identify apps that need
restaging following a rootfs patch (similar to [2], but for a minor version
of a given stack) would therefore browse all stacks using [5] and order
those with the same name to work out the minor patch version?
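For illustration, such a plugin could start from something as small as the
sketch below. It is only a sketch, assuming a logged-in `cf` CLI on the PATH
and the v2 response shape documented in [5]; whether several stack records
with the same name actually show up after a rootfs patch is exactly the open
question above.

#!/usr/bin/env python
# Sketch only: list stacks via the CC API and group the records by name,
# shelling out to a logged-in `cf` CLI (GET /v2/stacks, the endpoint in [5]).
import json
import subprocess
from collections import defaultdict

def list_stacks():
    raw = subprocess.check_output(["cf", "curl", "/v2/stacks"]).decode("utf-8")
    return json.loads(raw)["resources"]

def stacks_by_name(resources):
    grouped = defaultdict(list)
    for r in resources:
        grouped[r["entity"]["name"]].append({
            "guid": r["metadata"]["guid"],
            "created_at": r["metadata"]["created_at"],
            "updated_at": r["metadata"]["updated_at"],
        })
    return grouped

if __name__ == "__main__":
    for name, records in stacks_by_name(list_stacks()).items():
        print("%s: %d record(s)" % (name, len(records)))
        for rec in sorted(records, key=lambda r: r["created_at"]):
            print("  %(guid)s created=%(created_at)s updated=%(updated_at)s" % rec)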

I recall a similar discussion related to buildpack versioning, where it
was mentioned that buildpacks are mutable, and the only strategy to
support multiple versions of a buildpack is to append version numbers to
the buildpack names (and rely on priority ordering to have the most recent
version be picked first). This has the drawback that an app specifying a
buildpack explicitly (e.g. to disambiguate or work around an invalid detect
phase) will then need to manually update the buildpack reference to benefit
from the new version (restaging alone won't be sufficient).
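As a side note, the name-suffixing convention is at least easy to inspect.
The sketch below is again just a sketch, assuming a logged-in `cf` CLI and
the v2 buildpacks endpoint; the java_buildpack_v3_1 name is hypothetical.

#!/usr/bin/env python
# Sketch only: print buildpacks in priority (position) order via the CC API,
# using a logged-in `cf` CLI. A version-suffixed name such as a hypothetical
# java_buildpack_v3_1 next to java_buildpack would be visible here, and the
# lowest position wins the detect phase.
import json
import subprocess

raw = subprocess.check_output(["cf", "curl", "/v2/buildpacks"]).decode("utf-8")
for r in sorted(json.loads(raw)["resources"],
                key=lambda r: r["entity"]["position"]):
    e = r["entity"]
    print("%3d  %-30s enabled=%s filename=%s" % (
        e["position"], e["name"], e["enabled"], e.get("filename")))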

Is this understanding correct? On CF instances that don't version buildpack
names, how can users know whether apps were staged with a vulnerable version
of an offline buildpack, and require restaging? Is it by comparing the
staging date with the known rootfs patch date?
It's possible that we're conflating rootfs patches with buildpack patches.
I tried to address rootfs patches above, and so will address buildpack
patches here.

It's possible to determine the buildpack used to stage an app via the CC
API, and in fact CLI versions 6.8.0[7] and later will display this
information in the output of `cf app`.
Actually, this is CLI version 6.12.0 and later:
https://github.com/cloudfoundry/cli/releases/tag/v6.12.0
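For anyone who wants this from the API directly rather than `cf app`, a rough
sketch follows (assuming a logged-in `cf` CLI; it reads the detected_buildpack
field on the v2 app resource and ignores paging for brevity).

#!/usr/bin/env python
# Sketch only: report which buildpack each app was staged with, via the CC v2
# API and a logged-in `cf` CLI. Only the first page of results is read.
import json
import subprocess

raw = subprocess.check_output(["cf", "curl", "/v2/apps"]).decode("utf-8")
for app in json.loads(raw)["resources"]:
    e = app["entity"]
    print("%-25s buildpack=%s detected=%s staged=%s" % (
        e["name"], e.get("buildpack"), e.get("detected_buildpack"),
        e.get("package_updated_at")))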



However, that doesn't easily answer the question, "who's running a
vulnerable version of the nodejs interpreter?", or even harder to answer,
"who's running a vulnerable version of the bcrypt npm package?" which I
think is more along the lines of what you're asking.

Currently, there's an exploratory track of work in the Buildpacks public
tracker[8] that includes experimenting with hooks into the Buildpacks
staging life cycle. The intention is to provide extension points for both
application developers and operators to do things like this during staging:

* run static analysis
* run an OSS license scan
* capture the set of dependencies from the application's package manager
(pip, npm, maven, gradle, bundler, composer, etc.)
* look up the set of dependencies in the NIST vulnerability database (a
sketch of these last two steps follows below)
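To be concrete about those last two bullets, here is a purely hypothetical
hook; nothing like it ships in the buildpacks today, and the local
"known vulnerable" file stands in for a real vulnerability-database lookup.

#!/usr/bin/env python
# Hypothetical staging-hook sketch only. Reads the npm dependencies declared
# in an app's package.json and flags any that appear in a locally supplied
# known-vulnerable file. Version specifiers are matched literally; a real
# hook would resolve ranges against a lockfile.
import json
import sys

def declared_dependencies(package_json_path):
    with open(package_json_path) as f:
        pkg = json.load(f)
    deps = {}
    deps.update(pkg.get("dependencies", {}))
    deps.update(pkg.get("devDependencies", {}))
    return deps  # e.g. {"bcrypt": "0.8.3", ...}

def flag_vulnerable(deps, vulnerable_db_path):
    with open(vulnerable_db_path) as f:
        vulnerable = json.load(f)  # e.g. {"bcrypt": ["0.8.3"]}
    return dict((name, ver) for name, ver in deps.items()
                if ver in vulnerable.get(name, []))

if __name__ == "__main__":
    app_dir, db = sys.argv[1], sys.argv[2]
    hits = flag_vulnerable(declared_dependencies(app_dir + "/package.json"), db)
    for name, ver in hits.items():
        print("WARNING: %s %s is on the known-vulnerable list" % (name, ver))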

There's obviously a long way to go to get there, and it's not obvious how
we can implement some of this shared behavior across buildpacks and within
the buildpack app lifecycle; but we're putting a great deal of thought into
how we might make buildpacks much more flexible and extensible --
systemwide, and without having to fork them.

[7] https://www.pivotaltracker.com/story/show/96147958
[8] https://www.pivotaltracker.com/epic/show/1898760




Are there improvements planned around this stack and buildpack
versioning by the CAPI team? (Tracker story [6] seems related; is there
something more specific planned?)

Thanks,

Guillaume.

[1] https://www.pivotaltracker.com/n/projects/966314/stories/90428236
[2] https://github.com/simonleung8/cli-stack-changer
[3] http://apidocs.cloudfoundry.org/214/apps/get_app_summary.html
[4]
http://apidocs.cloudfoundry.org/214/stacks/retrieve_a_particular_stack.html
[5] http://apidocs.cloudfoundry.org/214/stacks/list_all_stacks.html
[6] https://www.pivotaltracker.com/story/show/91553650

On Wed, Jul 29, 2015 at 7:16 PM, Onsi Fakhouri <ofakhouri(a)pivotal.io>
wrote:

Hey Colin,

Good stuff. I like to draw a circle around the rootfs, the buildpacks,
the generated droplet, the Task/LRP recipes, and the lifecycle binaries
that run inside containers to stage and launch droplets. You could label
that circle an application lifecycle. Diego currently supports three
application lifecycles and is loosely coupled to those lifecycles:

1. The Linux-Buildpack App lifecycle: includes the cflinuxfs2 rootfs,
the various buildpacks (including a known interface for building custom
buildpacks), the droplets (compiled artifacts guaranteed to run with
cflinuxfs2), two binaries: the builder (performs staging) and the launcher
(runs applications), and code that can convert CC requests for staging and
running instances to Diego Task/LRP recipes.

2. The Windows App lifecycle: includes the notion of a correctly
configured Windows environment, a Windows-compatible droplet, a builder, a
launcher, and code that can generate Tasks/LRPs. In this context we do not
yet have/need the notion of a buildpack, though we are free to add one
later. The builder simply prepares the droplet from source and the
launcher knows how to invoke it.

3. The Docker App lifecycle: has no rootfs, as the Docker image provides
the entire rootfs; includes a builder to extract Docker metadata and send
it back to CC for safe-keeping, and a launcher to launch the requested
process *and* present it with a standard CF environment. Again,
there's also code that knows how to translate CC requests for a
Docker-based application into Tasks and LRPs.

The cool thing is that Diego doesn't care about any of these details, and
you are free to construct your own lifecycles and have your own contracts
within each lifecycle. You are spot on in noting that there is an implicit
contract between the buildpacks and the rootfs. I'd go further and say
that that implicit contract covers everything in the lifecycle circle (e.g.
the builder has a contract with the buildpacks: it expects `detect`,
`compile` and `release` to work a certain way; the recipes have a contract
with the builder/launcher: they expect particular command-line arguments;
etc.).
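To make the detect/compile/release part of that contract concrete: the three
entry points are just executables with a tiny interface, so a toy buildpack
can be sketched in any language. The Python below is illustrative only (real
buildpacks typically use bash or Ruby here), with all three entry points
collapsed into one file for brevity.

#!/usr/bin/env python
# Illustrative sketch of the buildpack contract only, not an official
# buildpack. In a real buildpack each entry point is a separate executable:
#
#   bin/detect  <build_dir>             -> exit 0 and print a name if this
#                                          buildpack applies, non-zero otherwise
#   bin/compile <build_dir> <cache_dir> -> prepare the droplet contents
#   bin/release <build_dir>             -> print YAML with the start command
import os
import sys

def detect(build_dir):
    if os.path.exists(os.path.join(build_dir, "hello.txt")):
        print("hello-buildpack")  # name reported as the detected buildpack
        return 0
    return 1

def compile(build_dir, cache_dir):
    # A real compile step would install the runtime and dependencies into
    # build_dir, using cache_dir to avoid repeated downloads.
    return 0

def release(build_dir):
    print("---\ndefault_process_types:\n  web: cat hello.txt && sleep 3600")
    return 0

if __name__ == "__main__":
    cmd, args = sys.argv[1], sys.argv[2:]
    sys.exit({"detect": detect, "compile": compile, "release": release}[cmd](*args))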

This is one reason why we've recently transitioned ownership of the
rootfs from the runtime team to the buildpack team, as the buildpack team
is best suited to define and maintain the contract between the buildpacks
and the rootfs. Would love to explore ways to make all these contracts
more explicit.

One last point. I didn't use the word "stack" in this e-mail until just
now. I agree that it's an overloaded concept that is easily and often
misunderstood ;)

Onsi

On Wed, Jul 29, 2015 at 9:51 AM, Colin Humphreys <colin(a)cloudcredo.com>
wrote:

Hi All,

I wrote a couple of articles about droplets and stacks.

http://www.cloudcredo.com/a-droplet-of-value/

http://www.cloudcredo.com/stacks-of-problems/

The droplet post is fairly self-explanatory, and enabling the choice of
shipping droplets or source is well under way in Cloud Foundry development.

I feel our story around stacks is far less complete. It seems to be an
overloaded concept inherited from Heroku and the contract with the stack
seems to cause issues for both app and buildpack developers.

I'd like to open the discussion on what the future should be for
stacks, or if you think they're perfect as they are.

Cheers,
Colin

CloudCredo Cheerleader

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev
