We all have unique perspectives to offer each other, and I appreciate the thought and time you've put into formulating this alternative. I've not spent enough time with it to refute or agree with your proposal, but this might be the start of a compelling feature narrative. Let's discuss it in greater detail in real time on the Cloud Foundry Slack.
Future readers: The original post re: Xenial stemcell and rootfs plans has been answered earlier in this thread.
On May 12, 2016 1:53 PM, "Daniel Mikusa" <dmikusa(a)pivotal.io> wrote:

On Thu, May 12, 2016 at 12:52 PM, Danny Rosen <drosen(a)pivotal.io> wrote:
Fair enough. Though if you divorce the binary from the buildpack you greatly increase the complexity of the environment.
I respectfully disagree. Build pack binaries are nothing more than packages (just a different format than rpm or deb). Any competent Linux administrator should be familiar with package management for their distro of choice, and that includes the ability to have an internal repo (i.e. mirror) for the distro's packages. Essentially the idea here would be to do the same thing but for build pack binaries. Because having internal repos / mirrors is something a lot of large companies do, I suspect many administrators will be familiar with these concepts.
I think this change could actually simplify build packs. Currently most of the build packs have ugly hacks in them to intercept requests for external files, translate URLs and load local copies of files instead [1]. The idea that I'm proposing would simply require the build packs to pull files from a repo. It doesn't matter if you are behind a firewall or on the public Internet. You just download the files from a configured repository. Simple and straightforward.
[1] - https://github.com/cloudfoundry/compile-extensions/tree/9932bb1d352b88883d76df41e797a6fa556844f0#download_dependency
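To make the "configured repository" idea concrete, here is a minimal sketch of what a build pack's download helper might look like under this proposal. Everything here is hypothetical: `DEPENDENCY_REPO_ROOT`, the path layout, and the default URL are illustrative, not part of any shipped buildpack. The point is that the same code path works behind a firewall (pointed at an internal mirror) and on the public Internet.

```shell
#!/bin/sh
# Hypothetical helper: resolve and fetch a dependency from whatever repo
# the operator has configured. An offline environment sets
# DEPENDENCY_REPO_ROOT to an internal mirror; an online one keeps the
# default. No URL-translation hacks needed in either case.
REPO_ROOT="${DEPENDENCY_REPO_ROOT:-https://buildpacks.example.com/dependencies}"

dependency_url() {
  # dependency_url <name> <version> <stack>
  name="$1"; version="$2"; stack="$3"
  echo "${REPO_ROOT}/${name}/${name}-${version}-${stack}.tgz"
}

download_dependency() {
  # Fetch into the current directory; fail loudly on a missing artifact.
  curl --fail --silent --show-error --location --remote-name \
    "$(dependency_url "$@")"
}

dependency_url ruby 2.3.1 cflinuxfs2
```

An internal deployment would then just set `DEPENDENCY_REPO_ROOT` to its mirror, the same way apt or yum clients are pointed at an internal package repo.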
I think we can simplify this conversation a bit, though, by using our *current* architecture rather than creating new paradigms ... and more work for the buildpacks team :)
Again, I disagree. I don't think you can address them using the current architecture, because the architecture is the problem. Bundling binaries with the build packs is, at its core, a bad idea. Mike D listed some of the resulting problems earlier in this email thread. Summarizing the ones that come to mind below.
- large build packs are hard to distribute
- large build packs increase staging time and in some cases cause staging failures
- build packs are tied to the stack of the binaries they include
- build packs are tied to specific versions of the binaries they include
- supporting multiple sets of binaries requires multiple build packs or really large build packs
- customizing build packs becomes more difficult, as you now have to wrap the binaries up in your custom build pack
- build packs are forced to release to keep up with their binaries, not because the build packs themselves need to change at that pace
Separating the binaries and build packs would seem to address these issues. It would also turn binary management into a task much more similar to what Linux admins do today for their distro's packages. Perhaps we could even piggyback on existing tools in this space, like Artifactory, to manage the binaries.
As an operator, I want my users to use custom buildpacks because the official buildpacks (their binaries, their configuration, etc) don't suit my needs.
- This can be achieved today! Via:
  - a proxy
  - an internet-enabled environment
  - an internal git server
This is an oversimplification, and it only partially addresses one issue with bundling binaries into the build pack. The original issue on this thread is being able to support the addition of a new stack. Mike D made the point that supporting an additional stack would be difficult because it would cause the size of the build pack to spike. He offered one possible solution, but that looked like it would require work on the cloud controller. I offered the idea of splitting the binaries out of the build pack. It doesn't require any cloud controller work, and it would scale nicely as additional stacks are added (assuming you have an HTTP server with a large enough disk).
One idea we're throwing around is being able to use a URL pointing to a zip file, which could enable interesting solutions for operators who prefer the "bring your own buildpacks, but not from the internet, and don't ask me to upload it as an admin buildpack" approach.
I think that could be helpful. I remember back to the early days of Diego, when it could pull in a zip archive, and that was nice in certain situations. Having said that, I'm not seeing how this would help with the other issues caused by having build packs and binaries bundled together. In particular, supporting multiple stacks.
Dan
If you're interested in working with us on this solution, let's talk!
We're happy to work with the community.
On Thu, May 12, 2016 at 12:26 PM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:
On Thu, May 12, 2016 at 11:59 AM, Danny Rosen <drosen(a)pivotal.io> wrote:
Thanks. This is helpful! I'd like to get a better understanding of the following:
Why would an operator set their environment to be disconnected from the internet if they wanted to enable their users to run arbitrary binaries via a buildpack?
I don't think it would allow them to run arbitrary binaries, but it would allow them to run arbitrary build packs. If you divorce the binaries from the build pack, you can control the binaries separately in a corporate-IT-managed, non-public repository. Then users can use any build pack they want, so long as it points to the blessed internal repo of trusted binaries.
Dan
If an operator wanted to give users the flexibility of executing arbitrary binaries in buildpacks, custom buildpacks can be used in an environment with internet access *or* by providing a proxy <http://docs.cloudfoundry.org/buildpacks/proxy-usage.html> that would allow custom buildpacks to be deployed <http://docs.cloudfoundry.org/buildpacks/custom.html#deploying-with-custom-buildpacks> with an app.
On Thu, May 12, 2016 at 11:37 AM, Mike Youngstrom <youngm(a)gmail.com> wrote:
See responses inline:
On Thu, May 12, 2016 at 9:04 AM, Danny Rosen <drosen(a)pivotal.io> wrote:
* One of the key value propositions of a buildpack is the lightweight process to fork and customize a buildpack. *The inclusion of binaries makes buildpack customization a much heavier process and less end user friendly in a number of ways.* -- I'm not sure I agree with this point and would like to understand your reasoning.
I may be missing something, but it was my understanding that buildpacks with binaries included must (unless all binaries are checked into git) be added as admin buildpacks, which non-admin users of CF cannot do. Therefore, if I am a simple user of Cloud Foundry, I cannot customize a buildpack for my one-off need without involving an administrator to upload and manage the one-off buildpack. If binary dependencies were instead managed in the way Daniel proposes, the process would simply be to fork the buildpack and specify that git repo when pushing. Completely self-service, without admin intervention, making it a lighter-weight process.
* For some of my customers the binary inclusion policies are too restrictive. -- It's hard for me to understand this point as I do not know your customers' requirements. Would you mind providing details so we can better understand their needs?
I've attempted to express that need previously here: https://github.com/cloudfoundry/compile-extensions/issues/7 I don't view this as a major issue, but I think it could be something to consider if buildpack binary management is being reconsidered.
Hope those additional details help
Mike
On Wed, May 11, 2016 at 2:01 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

I really like the idea of finding a way to move away from bundling binaries with the buildpacks while continuing to not require internet access. My organization actually doesn't even use the binary-bundled buildpacks for our two main platforms (Node and Java).
Some issues we have with the offline buildpacks in addition to those already mentioned:
* One of the key value propositions of a buildpack is the lightweight process to fork and customize a buildpack. The inclusion of binaries makes buildpack customization a much heavier process and less end-user friendly in a number of ways.
* We require some java-buildpack binaries that are not packaged with the java-buildpack because of licensing issues, etc.
* For some of my customers the binary inclusion policies are too restrictive.
So, I agree with you 100%, Dan. I'd love to see more work in the direction of not including binaries, rather than making admin buildpack selection more stack-specific.
Mike
On Wed, May 11, 2016 at 11:09 AM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:
On Wed, May 11, 2016 at 9:45 AM, Mike Dalessio <mdalessio(a)pivotal.io> wrote:
Hi Mike,
I totally agree with you on all points, but there are second-order effects that are worth discussing and understanding, as they've influenced my own thinking around the timing of this work.
Given the current state of automation in the Buildpacks Team's CI pipelines, we could add a Xenial-based rootfs ("cflinuxfs3"?)
Could we please, please not call it `cflinuxfs3`? A very common question I get is: what is `cflinuxfs2`, really? I then have to explain that it is basically Ubuntu Trusty. That invariably results in the follow-up question of why it's called `cflinuxfs2`, to which I have no good answer.
Since this naming choice has resulted in confused users, can we think of something more indicative of what you actually get from the rootfs? I would throw out `cfxenialfs`, as it indicates it's CF, Xenial, and a file system. This seems more accurate, as the rootfs isn't really about "linux", if you look at Linux as being the kernel [1]. It's about userland packages, and those are Ubuntu Trusty or Xenial based, so it seems like the name should reflect that.
[1] - https://en.wikipedia.org/wiki/GNU/Linux_naming_controversy
to CF pretty quickly (and in fact have considered doing exactly this), and could build precompiled Xenial binaries to add to each buildpack pretty easily.
Unfortunately, this would result in doubling (or nearly so) the size of almost all of the buildpacks, since the majority of a buildpack's payload are the precompiled binaries for the rootfs. For example, we'd need to compile several Ruby binaries for Xenial and vendor them in the buildpack alongside the existing Trusty-based binaries.
Larger buildpacks result in longer staging times, longer deploy times for CF, and are just generally a burden to ship around, particularly for operators and users that don't actually want or need two stacks.
A second solution is to ship a separate buildpack for each stack (so, ruby_buildpack_cflinuxfs2 versus ruby_buildpack_cflinuxfs3), and have `bin/detect` only select itself if it's running on the appropriate stack.
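For what it's worth, the per-stack detect logic in this second solution would be tiny. Here is a sketch, under the assumption that a hypothetical `ruby_buildpack_cflinuxfs2` variant reads the stack from the `CF_STACK` environment variable set during staging, and that the usual detect contract applies (zero exit means "applies", non-zero means "does not"):

```shell
#!/bin/sh
# Sketch of bin/detect for a hypothetical ruby_buildpack_cflinuxfs2:
# opt out unless staging on the stack the bundled binaries were built for.
detect() {
  stack="$1"
  if [ "$stack" != "cflinuxfs2" ]; then
    return 1   # wrong stack: this buildpack does not apply
  fi
  echo "ruby"  # detect prints the buildpack name on success
}

detect "${CF_STACK:-cflinuxfs2}"
```

Every buildpack would have to carry this stanza for every stack it ships, which is exactly the leaky abstraction being objected to.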
But this would simply be forcing all buildpacks to plug a leaky abstraction, and so I'd like to endeavor to make buildpacks simpler to maintain.
A third solution, and the one which I think we should pursue, is to ship separate buildpacks for each stack, but make Cloud Controller aware of the buildpack's "stackiness", and only invoke buildpacks that are appropriate for that stack.
So, for example, the CC would know that the go_buildpack works on both Trusty- and Xenial-based rootfses (as those binaries are statically linked), and would also know that ruby_buildpack_cflinuxfs2 isn't valid for applications running on cflinuxfs3.
Has there been any thought / consideration given to just not shipping binaries with build packs? I know that we ship binaries with the build packs so that they will work in offline environments, but doing so has the obvious drawbacks you mentioned above (plus others). Have we considered other ways to make the build packs work in offline environments? If the build packs were just build pack code, it would make them *way* simpler to manage and they could care much less about the stack.
One idea (sorry, it's only half-baked) for enabling offline support without bundling binaries into the build packs would be to instead package binaries into a separate job that runs as an HTTP server inside CF. Build packs could then use that as an offline repo. Populating the repo could be done in a few different ways: you could package binaries with the job; you could have something (an errand, maybe?) that uploads binaries to the VM; you could have the HTTP server set up as a caching proxy that fetches them from somewhere else (perhaps only the proxy is allowed to access the Internet); or the user could manually populate the files. It would also give the user greater flexibility as to what versions of software are used in the environment, since build packs would no longer be limited by the binary versions packaged with them, and would instead just pull from what is available on the repo. It would also reduce upgrading build packs to a task that is mostly just pulling the latest binaries down to the HTTP server. You'd only need to upgrade build packs when there is a problem with the build pack itself.
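A rough sketch of the caching-proxy flavor of this idea, with hypothetical paths and upstream URL: the repo job serves files from local storage and only reaches out to an upstream (which may be the one host permitted Internet access) on a cache miss.

```shell
#!/bin/sh
# Hypothetical fetch logic for the repo job: serve from the local store,
# fall through to the upstream only when the artifact isn't cached.
# CACHE_DIR and UPSTREAM defaults are illustrative.
CACHE_DIR="${CACHE_DIR:-/var/vcap/store/buildpack-deps}"
UPSTREAM="${UPSTREAM:-https://deps.example.com}"

fetch() {
  rel="$1"
  local_path="${CACHE_DIR}/${rel}"
  if [ ! -f "$local_path" ]; then
    mkdir -p "$(dirname "$local_path")"
    curl --fail --silent --location -o "$local_path" "${UPSTREAM}/${rel}"
  fi
  echo "$local_path"
}
```

Pre-populating `CACHE_DIR` (by the job itself, an errand, or by hand) covers the fully offline case; leaving the upstream reachable covers the proxy case.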
Anyway, I like this option, so I wanted to throw it out there for comment. Curious to hear thoughts from others. Happy to discuss further.
Thanks,
Dan
This work, however, will require some changes to CC's behavior, and that's the critical path work that hasn't been scoped or prioritized yet.
Hope this helps everyone understand some of the concerns, and hopefully explains why we haven't just shipped a Xenial-based stack.
-m
On Tue, May 10, 2016 at 1:34 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:
I may not have anything that qualifies as compelling. But, here are some of the reasons I've got:
* If we skip Xenial, that gives us at most one year to transition from Trusty to a 2018.04-based rootfs. Let's say it takes six months to get the new rootfs into our customers' hands and for everyone to become comfortable enough with it to make it the default. I don't think six months is enough time for my users to naturally transition all of their applications to the new rootfs via pushes and restages. The more time we have with the new rootfs as the default, the less I will need to bother my customers to test before I force them to change.
* Xenial uses OpenSSL 1.0.2. Improving security by not statically compiling OpenSSL into Node would be nice.
* With the Lucid rootfs, it became difficult after a while to find pre-built libraries for Lucid. This put an increased burden on me to identify and provide Lucid-compatible builds of some common tools. One example is wkhtmltopdf, a commonly used tool in my organization.
I think the biggest thing for me is that the move from Lucid to Trusty was a nightmare for me and my customers. Better planning and adding a couple more months to the process would help, but giving my users a couple of years to migrate would be better. :)
Mike
On Mon, May 9, 2016 at 2:05 PM, Danny Rosen <drosen(a)pivotal.io> wrote:
Hey Mike,
Thanks for reaching out. We've discussed supporting Xenial recently but have had trouble identifying compelling reasons to do so. Our current version of the rootfs is supported until April 2019 [1], and while we do not plan on waiting until March 2019 :) we want to understand the compelling reasons to go forward with the work sooner rather than later.
On Mon, May 9, 2016 at 12:47 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:
Ubuntu Xenial Xerus was released a few weeks ago. Any plans to incorporate Xenial into the platform? Stemcells and/or new root fs?
The recent lucid to trusty rootfs fire drill was frustrating to my customers. I'm hoping that this year we can get a Xenial rootfs out loooong before trusty support ends so I don't have to put another tight deadline on my customers to test and move.
Thoughts?
Thanks, Mike
Mike Youngstrom <youngm@...>
I really like the idea of finding a way to move away from bundling binaries with the buildpacks while continuing to not require Internet access. My organization actually doesn't even use the binary-bundled buildpacks for our two main platforms (Node and Java).
Some issues we have with the offline buildpacks in addition to those already mentioned:
* One of the key value propositions of a buildpack is the lightweight process to fork and customize it. The inclusion of binaries makes buildpack customization a much heavier process and less end-user friendly in a number of ways.
* We require some java-buildpack binaries that are not packaged with the java-buildpack because of licensing issues, etc.
* For some of my customers, the binary inclusion policies are too restrictive.
So, I agree with you 100%, Dan. I'd love to see work move in the direction of not including binaries rather than making admin buildpack selection more stack-specific.
Mike
On Wed, May 11, 2016 at 11:09 AM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:
On Wed, May 11, 2016 at 9:45 AM, Mike Dalessio <mdalessio(a)pivotal.io> wrote:
Hi Mike,
I totally agree with you on all points, but there are second-order effects that are worth discussing and understanding, as they've influenced my own thinking around the timing of this work.
Given the current state of automation in the Buildpacks Team's CI pipelines, we could add a Xenial-based rootfs ("cflinuxfs3"?)
Could we please, please not call it `cflinuxfs3`? A very common question I get is what `cflinuxfs2` really is. I then have to explain that it is basically Ubuntu Trusty. That invariably results in the follow-up question of why it's called `cflinuxfs2` then, to which I have no good answer.
Since this naming choice has clearly confused users, can we think of something more indicative of what you actually get from the rootfs? I would throw out `cfxenialfs`, as it indicates it's CF, Xenial, and a file system. This seems more accurate, since the rootfs isn't really about "linux" if you look at Linux as being the kernel [1]. It's about user-land packages, and those are Ubuntu Trusty or Xenial based, so the name should reflect that.
[1] - https://en.wikipedia.org/wiki/GNU/Linux_naming_controversy
to CF pretty quickly (and in fact have considered doing exactly this), and
could build precompiled Xenial binaries to add to each buildpack pretty easily.
Unfortunately, this would result in doubling (or nearly so) the size of almost all of the buildpacks, since the majority of a buildpack's payload consists of precompiled binaries for the rootfs. For example, we'd need to compile several Ruby binaries for Xenial and vendor them in the buildpack alongside the existing Trusty-based binaries.
Larger buildpacks result in longer staging times, longer deploy times for CF, and are just generally a burden to ship around, particularly for operators and users that don't actually want or need two stacks.
A second solution is to ship a separate buildpack for each stack (so, ruby_buildpack_cflinuxfs2 versus ruby_buildpack_cflinuxfs3), and have `bin/detect` only select itself if it's running on the appropriate stack.
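As a sketch of what that second option implies for each buildpack, the `bin/detect` logic might look something like this (the stack name and the `CF_STACK` environment check are assumptions about the staging environment, not a confirmed interface):

```shell
# Sketch of a stack-specific bin/detect: claim the app only when staging
# on the rootfs this build pack variant's binaries were compiled for.
detect() {
  target_stack="cflinuxfs2"   # the stack this build pack variant targets
  if [ "${CF_STACK:-}" = "$target_stack" ]; then
    echo "ruby"               # detected: print the build pack name
    return 0
  fi
  return 1                    # wrong stack: decline so another variant can match
}

CF_STACK=cflinuxfs2 detect    # → ruby (exit 0)
```

Every buildpack would need to carry this boilerplate, which is exactly the leaky abstraction concern raised below it.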
But this would simply be forcing all buildpacks to plug a leaky abstraction, and so I'd like to endeavor to make buildpacks simpler to maintain.
A third solution, and the one which I think we should pursue, is to ship separate buildpacks for each stack, but make Cloud Controller aware of the buildpack's "stackiness", and only invoke buildpacks that are appropriate for that stack.
So, for example, the CC would know that the go_buildpack works on both Trusty- and Xenial-based rootfses (as those binaries are statically linked), and would also know that ruby_buildpack_cflinuxfs2 isn't valid for applications running on cflinuxfs3.
Has there been any thought / consideration given to just not shipping binaries with build packs? I know that we ship binaries with the build packs so that they will work in offline environments, but doing so has the obvious drawbacks you mentioned above (plus others). Have we considered other ways to make the build packs work in offline environments? If the build packs were just build pack code, it would make them *way* simpler to manage and they could care much less about the stack.
Mike Youngstrom <youngm@...>
Thanks Mike, that helps. Hopefully that work will get prioritized in the next year or so. :)
For the record, on the stemcell side I've been battling a non-CF issue [0] with Trusty that I'm hoping is fixed in Xenial. I could verify whether it is fixed without a stemcell; I'm just being lazy. :) Perhaps I'll verify first so I have a more concrete reason to request a Xenial stemcell.
Thanks,
Mike
[0] https://github.com/hazelcast/hazelcast/issues/5209