Re: Buildpacks PMC - 2015-10-12 Notes

Guillaume Berche

Thanks, Mike, for your response, and sorry for the delay in following up.
Responses inline.

On Wed, Oct 14, 2015 at 3:18 PM, Mike Dalessio <mdalessio(a)> wrote:

Related to the architecture epic: what is the outcome of this epic, and what
general direction is the buildpacks team taking for a pluggable staging
pipeline?
We're having some discussions now as to next steps. Ideally I'd like to
identify a track of feature work that will drive out a set of features for
extending the buildpack staging lifecycle. If you or anyone else has
suggestions, I'm all ears.
The following aspects of the buildpack staging lifecycle were previously
discussed and, I think, deserve fixes. What about setting up a design
proposal, as James Bayer suggested in [0], to detail them with the
community?

- buildpack versioning [b1]
- offer a standard caching mechanism for dependencies pulled from the internet [b2]
- standardize how buildpack debugging traces are enabled across buildpacks,
with support potentially added to the cf CLI and the DEA, e.g. displaying
the last git commits of a custom git repo when debugging is enabled
- Heroku compile ENV_DIR compatibility support in Diego [b3]
- support for automatically restaging vulnerable apps once the
corresponding buildpack vulnerabilities are fixed by a new buildpack release [b4]
- buildpack governance support: in some organizations there is a need to
scope some buildpacks per org/space, and possibly to restrict usage of custom
buildpacks
- somewhat related: improve the droplet download capability so the
resulting droplet can be pushed as a Docker image into a [private] Docker
registry. This may be more natural than the current flow of downloading the
droplet as a tar.gz and re-pushing it with the binary buildpack, which
suffers from symlink upload portability issues [b5]
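To illustrate the ENV_DIR point [b3]: in the Heroku buildpack API, a buildpack's bin/compile script is invoked as `bin/compile <build-dir> <cache-dir> <env-dir>`, where each file in <env-dir> is named after a config var and holds its value. A minimal sketch of how a compile script could export those variables (the demo directory and DATABASE_URL value are made up for illustration):

```shell
#!/usr/bin/env sh
# Sketch of the Heroku-style compile contract:
#   bin/compile <build-dir> <cache-dir> <env-dir>
# Each file in <env-dir> is named after a config var; its contents are the value.
export_env_dir() {
  env_dir="$1"
  [ -d "$env_dir" ] || return 0
  for f in "$env_dir"/*; do
    [ -f "$f" ] || continue
    # Export each file as VAR_NAME=file contents
    export "$(basename "$f")=$(cat "$f")"
  done
}

# Demo: simulate the env dir the platform would pass as $3.
ENV_DIR="$(mktemp -d)"
printf 'postgres://example' > "$ENV_DIR/DATABASE_URL"
export_env_dir "$ENV_DIR"
echo "$DATABASE_URL"
```

Supporting this contract in Diego would let Heroku buildpacks see user-provided config vars at staging time without modification.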


Experiment #5 [3], relying on the POST_BUILDPACK environment variable,
seems pretty promising. Would it support an ordered list of post-buildpacks?
No reason it couldn't support an ordered set of buildpacks. I'm not fully
convinced this is the best way to proceed, but it's certainly the easiest,
and we're looking at it pretty hard at this point.
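For the ordered-set idea, a hypothetical sketch (the POST_BUILDPACKS variable name, its comma-separated format, and the URLs are all assumptions, not actual CF behavior): the platform would iterate over the list in order, running each post-buildpack's compile step after the main buildpack finishes.

```shell
#!/usr/bin/env sh
# Hypothetical: POST_BUILDPACKS as an ordered, comma-separated list of
# buildpack URLs, each applied in sequence after the main buildpack.
POST_BUILDPACKS="git://example.com/bp-one.git,git://example.com/bp-two.git"

run_post_buildpacks() {
  old_ifs="$IFS"; IFS=','
  for bp in $POST_BUILDPACKS; do
    echo "running post-buildpack: $bp"
    # Here the platform would fetch $bp and invoke its bin/compile
    # against the droplet produced by the previous step.
  done
  IFS="$old_ifs"
}

run_post_buildpacks
```

Ordering matters because each post-buildpack sees the output of the previous one, so a comma-separated list preserves exactly the sequence the developer declared.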

Concerning the story "Experiment #6: Investigate using a pluggable / web
services model for extending staging to operators and developers" [1],
which we had discussed together in [2]: the story is marked as accepted,
but I can't see the result or the future work, including how this could be
exposed to CF operators or users.
This experiment was cut short, as the "web hook" model introduced too many
reliability concerns, in my opinion, especially around relying on external
services to stage an app. I'm open to revisiting it in the future, but
would like to try more pedestrian solutions first.
Can you please elaborate on the reliability concerns you perceive with an
HTTP-based API for staging pipelines? Cloud Foundry already relies heavily
on internal HTTP APIs for its internal workings, as well as on external
HTTP APIs such as the service broker and the upcoming route services.

What, then, is the preferred solution for now? Is it the S3-API-based
pipeline, with processes handling their transformation, similar to the
proof of concept vito proposed in [p1]?

[p1]

Can you share with the community a summary of the learnings from these
experiments, and where the "buildpack lifecycle" would go in the future?
Absolutely, I will do so soon.

If you or part of the buildpacks team make it to the Berlin CF Summit, it
would be great to have a community session around this, possibly in the
preceding unconference on Sunday.
