Re: Ubuntu Xenial stemcell and rootfs plans

Daniel Mikusa

On Wed, May 11, 2016 at 9:45 AM, Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Hi Mike,

I totally agree with you on all points, but there are second-order effects
that are worth discussing and understanding, as they've influenced my own
thinking around the timing of this work.

Given the current state of automation in the Buildpacks Team's CI
pipelines, we could add a Xenial-based rootfs ("cflinuxfs3"?)
Could we please, please not call it `cflinuxfs3`? A very common question I
get is: what is `cflinuxfs2`, really? I then have to explain that it is
basically Ubuntu Trusty, which invariably prompts the follow-up question of
why it's called `cflinuxfs2` then, to which I have no good answer.

Since this naming choice has evidently confused users, can we think of
something more indicative of what you actually get from the rootfs? I would
throw out `cfxenialfs`, as it indicates CF, Xenial, and a file system. This
seems more accurate, since the rootfs isn't really about "linux" if you look
at Linux as being the kernel [1]. It's about the userland packages, and those
are Ubuntu Trusty or Xenial based, so it seems like the name should reflect
that.

[1] - https://en.wikipedia.org/wiki/GNU/Linux_naming_controversy

to CF pretty quickly (and in fact have considered doing exactly this), and
could build precompiled Xenial binaries to add to each buildpack pretty
easily.

Unfortunately, this would result in doubling (or nearly so) the size of
almost all of the buildpacks, since the majority of a buildpack's payload is
the precompiled binaries for the rootfs. For example, we'd need to compile
several Ruby binaries for Xenial and vendor them in the buildpack alongside
the existing Trusty-based binaries.
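
For a rough sense of why the payload doubles: each vendored binary is tied to
the rootfs it was compiled against, so dual-stack support means carrying a
parallel set of nearly everything. A simplified, hypothetical illustration
(names and versions are made up):

```python
# Simplified, hypothetical view of a buildpack's vendored dependencies.
# Each entry stands for a precompiled tarball tied to one rootfs;
# supporting a second stack means vendoring a parallel copy of each one.
vendored = [
    {"name": "ruby", "version": "2.2.5", "stack": "trusty"},
    {"name": "ruby", "version": "2.3.1", "stack": "trusty"},
    # Xenial support would add a parallel set:
    {"name": "ruby", "version": "2.2.5", "stack": "xenial"},
    {"name": "ruby", "version": "2.3.1", "stack": "xenial"},
]
```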

Larger buildpacks mean longer staging times and longer deploy times for CF,
and are generally a burden to ship around, particularly for operators and
users who don't actually want or need two stacks.

A second solution is to ship a separate buildpack for each stack (so,
ruby_buildpack_cflinuxfs2 versus ruby_buildpack_cflinuxfs3), and have
`bin/detect` only select itself if it's running on the appropriate stack.
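
To illustrate, a per-stack `bin/detect` might key off the stack name exposed
to the staging environment. A minimal sketch in Python; treating `CF_STACK`
as the signal, and the print-name/exit-zero contract, are assumptions for
illustration, not a spec:

```python
#!/usr/bin/env python
# Hypothetical bin/detect for a stack-specific buildpack: decline unless
# we're running on the stack this build of the buildpack vendors binaries for.
import os
import sys

SUPPORTED_STACK = "cflinuxfs2"  # the only stack this buildpack supports

if os.environ.get("CF_STACK") != SUPPORTED_STACK:
    sys.exit(1)   # non-zero exit: this buildpack does not apply here

print("ruby")     # advertise the detected framework name
sys.exit(0)       # zero exit: this buildpack applies
```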

But this would simply force every buildpack to plug a leaky abstraction, and
I'd rather endeavor to make buildpacks simpler to maintain.

A third solution, and the one which I think we should pursue, is to ship
separate buildpacks for each stack, but make Cloud Controller aware of the
buildpack's "stackiness", and only invoke buildpacks that are appropriate
for that stack.

So, for example, the CC would know that the go_buildpack works on both
Trusty- and Xenial-based rootfses (as those binaries are statically
linked), and would also know that ruby_buildpack_cflinuxfs2 isn't valid for
applications running on cflinuxfs3.
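
As a concrete (purely illustrative) sketch of what that could look like
inside Cloud Controller, assuming buildpacks declared their supported stacks
in metadata; the shape and names here are hypothetical, since CC has no such
notion today, which is exactly the gap being described:

```python
# Hypothetical sketch of stack-aware buildpack selection in Cloud Controller.
BUILDPACK_STACKS = {
    "go_buildpack": {"cflinuxfs2", "cflinuxfs3"},  # statically linked binaries
    "ruby_buildpack_cflinuxfs2": {"cflinuxfs2"},
    "ruby_buildpack_cflinuxfs3": {"cflinuxfs3"},
}

def candidates_for(app_stack):
    """Only offer buildpacks declared valid for the app's stack."""
    return [bp for bp, stacks in BUILDPACK_STACKS.items() if app_stack in stacks]

# An app on cflinuxfs3 is never offered the Trusty-only Ruby buildpack:
assert "ruby_buildpack_cflinuxfs2" not in candidates_for("cflinuxfs3")
assert "go_buildpack" in candidates_for("cflinuxfs3")
```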
Has there been any thought / consideration given to just not shipping
binaries with buildpacks? I know we ship binaries with the buildpacks so
that they will work in offline environments, but doing so has the obvious
drawbacks you mentioned above (plus others). Have we considered other ways
to make buildpacks work in offline environments? If buildpacks were just
buildpack code, they would be *way* simpler to manage and could care much
less about the stack.

One idea (sorry, it's only half-baked) for enabling offline support without
bundling binaries into the buildpacks would be to package the binaries into
a separate job that runs as an HTTP server inside CF. Buildpacks could then
use that as an offline repo. Populating the repo could be done in a few
different ways:

* package the binaries with the job;
* have something (an errand, maybe?) upload binaries to the VM;
* set the HTTP server up as a caching proxy that fetches them from somewhere
else (perhaps only the proxy is allowed to access the Internet), as sketched
below; or
* have the user populate the files manually.

This would also give the user greater flexibility as to which versions of
software are used in the environment, since buildpacks would no longer be
limited to the binary versions packaged with them and would instead just
pull whatever is available from the repo. It would also turn upgrading
buildpacks into a task that is mostly just pulling the latest binaries down
to the HTTP server; you'd only need to upgrade a buildpack when there is a
problem with the buildpack itself.
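
For the caching-proxy flavor specifically, a bare-bones sketch might look
like this. Everything here, the upstream URL, the cache path, the absence of
error handling, is a placeholder for discussion, not a proposal for the
actual job:

```python
#!/usr/bin/env python3
# Rough sketch of the caching-proxy idea: an HTTP server inside CF that
# serves buildpack dependencies, fetching from an upstream mirror on a
# cache miss, so only this proxy needs Internet access.
import http.server
import pathlib
import urllib.request

UPSTREAM = "https://buildpacks.example.com"        # hypothetical binary mirror
CACHE = pathlib.Path("/var/vcap/data/buildpack-cache")  # hypothetical job dir

class CachingProxy(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        local = CACHE / self.path.lstrip("/")
        if not local.exists():
            # Cache miss: fetch from upstream and persist for next time.
            local.parent.mkdir(parents=True, exist_ok=True)
            with urllib.request.urlopen(UPSTREAM + self.path) as upstream:
                local.write_bytes(upstream.read())
        body = local.read_bytes()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    http.server.HTTPServer(("", 8080), CachingProxy).serve_forever()
```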

Anyway, I like this option, so I wanted to throw it out there for
comment. Curious to hear thoughts from others. Happy to discuss further.

Thanks,

Dan

This work, however, will require some changes to CC's behavior, and that's
the critical-path work that hasn't been scoped or prioritized yet.

Hope this helps everyone understand some of the concerns, and hopefully
explains why we haven't just shipped a Xenial-based stack.

-m


On Tue, May 10, 2016 at 1:34 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

I may not have anything that qualifies as compelling. But here are some
of the reasons I've got:

* If we skip Xenial, that gives us at most one year to transition from
Trusty to an Ubuntu 18.04-based rootfs. Let's say it takes six months to get
the new rootfs into our customers' hands and for everyone to be comfortable
enough with it to make it the default. I don't think six months is enough
time for my users to naturally transition all of their applications to the
new rootfs via pushes and restages. The more time we have with the new
rootfs as the default, the less I will need to bother my customers to test
before I force them to change.

* Xenial uses OpenSSL 1.0.2. Improving security by not statically
compiling OpenSSL into Node would be nice.

* With the Lucid rootfs, it eventually became difficult to find pre-built
libraries for Lucid. This put an increased burden on me to identify and
provide Lucid-compatible builds of some common tools; one example is
wkhtmltopdf, a commonly used tool in my organization.

I think the biggest thing for me is that the move from Lucid to Trusty
was a nightmare for me and my customers. Though better planning and adding
a couple more months to the process would help, giving my users a couple of
years to migrate would be better. :)

Mike

On Mon, May 9, 2016 at 2:05 PM, Danny Rosen <drosen(a)pivotal.io> wrote:

Hey Mike,

Thanks for reaching out. We've discussed supporting Xenial recently but
have had trouble identifying compelling reasons to do so. Our current
version of the rootfs is supported until April 2019 [1], and while we do not
plan on waiting until March 2019 :) we want to understand the compelling
reasons to take on the work sooner rather than later.


On Mon, May 9, 2016 at 12:47 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Ubuntu Xenial Xerus was released a few weeks ago. Are there any plans to
incorporate Xenial into the platform? Stemcells and/or a new rootfs?

The recent Lucid-to-Trusty rootfs fire drill was frustrating for my
customers. I'm hoping that this year we can get a Xenial rootfs out
loooong before Trusty support ends, so I don't have to put another tight
deadline on my customers to test and move.

Thoughts?

Thanks,
Mike
