Re: Buildpacks PMC - 2015-05-04 Notes
Chris Sterling
Glad to see that the static and null buildpacks will be shipping with
cf-release soon.

Chris Sterling
chris.sterling(a)gmail.com
twitter: @csterwa
linkedin: http://www.linkedin.com/in/chrissterling

On Mon, May 4, 2015 at 10:50 AM, Mike Dalessio <mdalessio(a)pivotal.io> wrote:

> Hi all, [...]
Re: Buildpacks PMC - 2015-05-04 Notes
Wayne E. Seguin
The biggest issue with GDrive is that our folks in China can’t easily view them ;)
My question/feedback comes from the other recent thread about buildpack
sizing and efficiency. I did not see a bullet point in the list below for
this (unless it was covered by different wording/terminology). I would
love to see a way for buildpacks to become smaller, not bigger, whilst
still supporting the vast array of languages+versions.

Thank you for including them in this email and thank you for keeping us
all in the loop, much appreciated!

~Wayne

Wayne E. Seguin <wayneeseguin(a)starkandwayne.com>
CTO, Stark & Wayne, LLC

On May 4, 2015, at 13:50, Mike Dalessio <mdalessio(a)pivotal.io> wrote:
Re: Addressing buildpack size
Wayne E. Seguin
Because of very good compatibility between versions (post 1.X) I would like to make a motion to do the following:
Split the buildpack: have the default golang buildpack track the latest
golang version. Then handle older versions in one of two ways, either:

a) have a large secondary buildpack for older versions, or
b) have multiple buildpacks, one for each version of golang; users can
   specify a specific URL if they care about specific versions.

This would improve space/time considerations for operations. Personally I
would prefer b) because it supports older go versions out of the box by
design while still keeping each golang buildpack small.

~Wayne

Wayne E. Seguin <wayneeseguin(a)starkandwayne.com>
CTO, Stark & Wayne, LLC

On May 4, 2015, at 12:40, Mike Dalessio <mdalessio(a)pivotal.io> wrote:
Buildpacks PMC - 2015-05-04 Notes
Mike Dalessio
Hi all,
We held the first Buildpacks PMC meeting today; I'd like to share the
agenda and notes. For reference, all agendas and notes for the Buildpacks
PMC will be kept in a public Google Drive folder at this URL:
http://bit.ly/cf-buildpacks-pmc

I realize GDrive isn't the most convenient medium for some in the CF
community; I'd love to hear how we can better support transparency for
everyone. Please feel free to respond with comments and questions!

Cheers,
-m

----

Attendees:

- Chip Childers, Cloud Foundry Foundation
- Mike Dalessio, Pivotal (PMC lead)
- Christopher Ferriss, IBM
- Michael Fraenkel, IBM
- Mark Kropf, Pivotal

Recent Inception Report and Stated Goals

The Buildpacks core development team held a project inception on
2015-04-20, to gain a shared understanding of upcoming goals and tracks
of work.

Goals:

- Expand supported ecosystem to include more languages & frameworks
- Cloud Foundry ownership of buildpacks
- Leverage new primitives in Diego ("app lifecycle")
- Enable 3rd-party extensions to the developer experience
- Enable application developer extensions to the developer experience
- Set patterns for creating new buildpacks and for extending the
  developer experience
- Generate clearer diagnostics during staging
- Enable operator ease of updating common dependencies
- Keep the `bin/detect` experience: buildpacks should Just Work™
- Exert more ownership over the rootfs
- Binary buildpack support

Risks:

- java-buildpack is diverging quickly from the core buildpacks
- Lack of deep experience in some ecosystems
- Wide variety in implementations across buildpacks
- rootfs: with great power comes great responsibility (e.g., security
  response)
- Tight coupling between buildpacks and rootfs
- Versioning between buildpacks and rootfs

Current Backlog and Priorities

See https://www.pivotaltracker.com/n/projects/1042066

Notable near-term goals:

- staticfile-buildpack support in `cf-release`
- binary buildpack (a.k.a. "null buildpack") support in `cf-release`
- ability to generate and test CF rootfs-specific binaries, and tooling
  for CF operators to do the same

Proposal: Buildpack Incubation Process

Discussion today for PMC input; a draft document will be circulated for
comment to the cf-dev@ mailing list after the meeting, in a separate
thread.
Re: Addressing buildpack size
Onsi Fakhouri <ofakhouri@...>
the go community tends to move fast to adopt the latest versions of go. i
imagine we can drop 1.1 and 1.2 without impacting most people. anyone on
the list experience otherwise?

onsi

On Mon, May 4, 2015 at 9:40 AM, Mike Dalessio <mdalessio(a)pivotal.io> wrote:

> Hi Wayne, [...]
Re: Addressing buildpack size
Mike Dalessio
Hi Wayne,
On Fri, May 1, 2015 at 1:29 PM, Wayne E. Seguin
<wayneeseguin(a)starkandwayne.com> wrote:

> What an incredible step in the right direction, Awesome!!!

Thanks for asking this question. Currently we're including the following
binary dependencies in `go-buildpack`:

```
cache $ ls -lSh *_go*
-rw-r--r-- 1 flavorjones flavorjones 60M 2015-05-04 12:36 https___storage.googleapis.com_golang_go1.4.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 60M 2015-05-04 12:36 https___storage.googleapis.com_golang_go1.4.1.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 54M 2015-05-04 12:36 https___storage.googleapis.com_golang_go1.2.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 54M 2015-05-04 12:36 http___go.googlecode.com_files_go1.2.1.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 51M 2015-05-04 12:36 https___storage.googleapis.com_golang_go1.3.3.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 51M 2015-05-04 12:36 https___storage.googleapis.com_golang_go1.3.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 40M 2015-05-04 12:36 http___go.googlecode.com_files_go1.1.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 40M 2015-05-04 12:36 http___go.googlecode.com_files_go1.1.1.linux-amd64.tar.gz
```

One question we should ask, I think, is: should we still be supporting
golang 1.1 and 1.2? Dropping those versions would cut the size of the
buildpack (approximately) in half.
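As a back-of-the-envelope sanity check of that "approximately half" estimate, the tarball sizes can be summed by MAJOR.MINOR line. This is only a sketch with sizes transcribed from the `ls` listing above, not buildpack tooling:

```python
# Cached golang tarball sizes in MB, transcribed from the listing above.
cache_mb = {
    "1.4.2": 60, "1.4.1": 60,
    "1.3.3": 51, "1.3.2": 51,
    "1.2.2": 54, "1.2.1": 54,
    "1.1.2": 40, "1.1.1": 40,
}

dropped = ("1.1", "1.2")
total = sum(cache_mb.values())
kept = sum(mb for ver, mb in cache_mb.items()
           if not ver.startswith(dropped))
print(f"total {total}M -> {kept}M after dropping 1.1/1.2 "
      f"({100 * (total - kept) // total}% reduction)")
```

Dropping the 1.1 and 1.2 lines removes 188M of 410M, i.e. roughly a 46% cut, which matches the "approximately half" claim.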
Re: Addressing buildpack size
Wayne E. Seguin
What an incredible step in the right direction, Awesome!!!
Out of curiosity, why is the go buildpack still quite so large?

On May 1, 2015, at 11:54, Mike Dalessio <mdalessio(a)pivotal.io> wrote:
Addressing buildpack size
Mike Dalessio
Skinny buildpacks have been cut for go, nodejs, php, python and ruby
buildpacks.

|        | current | previous |
|--------+---------+----------|
| go     | 442MB   | 633MB    |
| nodejs | 69MB    | 417MB    |
| php    | 804MB   | 1.1GB    |
| python | 454MB   | 654MB    |
| ruby   | 365MB   | 1.3GB    |
|--------+---------+----------|
| total  | 2.1GB   | 4.1GB    |

for an aggregate 51% reduction in size. Details follow.

Next Steps

I recognize that every cloud operator may have a different policy on what
versions of interpreters and libraries they want to support, based on the
specific requirements of their users. These buildpacks reflect a "bare
minimum" policy for a cloud to be operable, and I do not expect these
buildpacks to be adopted as-is by many operators.

These buildpacks have not yet been added to cf-release, specifically so
that the community can prepare their own buildpacks if necessary. Over
the next few days, the buildpacks core team will ship documentation and
tooling to assist you in packaging specific dependencies for your
instance of CF. I'll start a new thread on this list early next week to
communicate this information.

Call to Action

In the meantime, please think about whether the policy implemented in
these buildpacks ("last two patches (or teenies) on all supported
major.minor releases") is suitable for your users; and if not, think
about what dependencies you'll ideally be supporting.

go-buildpack v1.3.0

Release notes are here
<https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.3.0>.

Size reduced 30% from 633MB
<https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.2.0> to
442MB <https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.3.0>.

Supports (full manifest here
<https://github.com/cloudfoundry/go-buildpack/blob/v1.3.0/manifest.yml>):

- golang 1.4.{1,2}
- golang 1.3.{2,3}
- golang 1.2.{1,2}
- golang 1.1.{1,2}

nodejs-buildpack v1.3.0

Full release notes are here
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.3.0>.
Size reduced 83% from 417MB
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.2.1> to
69MB <https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.3.0>.

Supports (full manifest here
<https://github.com/cloudfoundry/nodejs-buildpack/blob/v1.3.0/manifest.yml>):

- 0.8.{27,28}
- 0.9.{11,12}
- 0.10.{37,38}
- 0.11.{15,16}
- 0.12.{1,2}

php-buildpack v3.2.0

Full release notes are here
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.2.0>.

Size reduced 27% from 1.1GB
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.1.1> to
803MB <https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.2.0>.

Supports (full manifest here
<https://github.com/cloudfoundry/php-buildpack/blob/v3.2.0/manifest.yml>):

*PHP*:
- 5.6.{6,7}
- 5.5.{22,23}
- 5.4.{38,39}

*HHVM* (lucid64 stack):
- 3.2.0

*HHVM* (cflinuxfs2 stack):
- 3.5.{0,1}
- 3.6.{0,1}

*Apache HTTPD*:
- 2.4.12

*nginx*:
- 1.7.10
- 1.6.2
- 1.5.13

python-buildpack v1.3.0

Full release notes are here
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.3.0>.

Size reduced 30% from 654MB
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.2.0> to
454MB <https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.3.0>.

Supports (full manifest here
<https://github.com/cloudfoundry/python-buildpack/blob/v1.3.0/manifest.yml>):

- 2.7.{8,9}
- 3.2.{4,5}
- 3.3.{5,6}
- 3.4.{2,3}

ruby-buildpack v1.4.0

Release notes are here
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.4.0>.

Size reduced 71% from 1.3GB
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.3.1> to
365MB <https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.4.0>.
Supports (full manifest here
<https://github.com/cloudfoundry/ruby-buildpack/blob/v1.4.0/manifest.yml>):

*MRI*:
- 2.2.{1,2}
- 2.1.{5,6}
- 2.0.0p645

*JRuby*:
- ruby-1.9.3-jruby-1.7.19
- ruby-2.0.0-jruby-1.7.19
- ruby-2.2.0-jruby-9.0.0.0.pre1

---------- Forwarded message ----------
From: Mike Dalessio <mdalessio(a)pivotal.io>
Date: Wed, Apr 8, 2015 at 11:10 AM
Subject: Addressing buildpack size
To: vcap-dev(a)cloudfoundry.org

Hello vcap-dev!

This email details a proposed change to how Cloud Foundry buildpacks are
packaged, with respect to the ever-increasing number of binary
dependencies being cached within them.

This proposal's permanent residence is here:
https://github.com/cloudfoundry-incubator/buildpack-packager/issues/4
Feel free to comment there or reply to this email.

------------------------------

Buildpack Sizes

Where we are today

Many of you have seen, and possibly been challenged by, the enormous
sizes of some of the buildpacks that are currently shipping with
cf-release. Here's the state of the world right now, as of v205:

php-buildpack:    1.1G
ruby-buildpack:   922M
go-buildpack:     675M
python-buildpack: 654M
nodejs-buildpack: 403M
----------------------
total:            3.7G

These enormous sizes are the result of the current policy of packaging
every-version-of-everything-ever-supported ("EVOEES") within the
buildpack. Most recently, this problem was exacerbated by the fact that
buildpacks now contain binaries for two rootfses.

Why this is a problem

If continued, buildpacks will only increase in size, leading to longer
and longer build and deploy times, longer test times, slacker feedback
loops, and therefore less frequent buildpack releases. Additionally, this
also means that we're shipping versions of interpreters, web servers, and
libraries that are deprecated, insecure, or both. Feedback from CF users
has made it clear that many companies view this as an unnecessary
security risk. This policy is clearly unsustainable.
What we can do about it

There are many things being discussed to ameliorate the impact that
buildpack size is having on the operations of CF. Notably, Onsi has
proposed a change to buildpack caching, to improve Diego staging times
(link to proposal
<https://github.com/pivotal-cf-experimental/diego-dev-notes/blob/master/proposals/better-buildpack-caching.md>).

However, there is an immediate solution available, which addresses both
the size concerns as well as the security concern: packaging fewer binary
dependencies within the buildpack.

The proposal

I'm proposing that we reduce the binary dependencies in each buildpack in
a very specific way.

Aside on terms I'll use below:

- Versions of the form "1.2.3" are broken down as: MAJOR.MINOR.TEENY.
  Many language ecosystems refer to the "TEENY" as "PATCH"
  interchangeably, but we're going to use "TEENY" in this proposal.
- We'll assume that TEENY gets bumped for API/ABI-compatible changes.
- We'll assume that MINOR and MAJOR get bumped when there are API/ABI
  *incompatible* changes.

I'd like to move forward soon with the following changes:

1. For language interpreters/compilers, we'll package the two most-recent
   TEENY versions on each MAJOR.MINOR release.
2. For all other dependencies, we'll package only the single most-recent
   TEENY version on each MAJOR.MINOR release.
3. We will discontinue packaging versions of dependencies that have been
   deprecated.
4. We will no longer provide "EVOEES" buildpack releases.
5. We will no longer provide "online" buildpack releases, which download
   dependencies from the public internet.
6. We will document the process, and provide tooling, for CF operators to
   build their own buildpacks, choosing the dependencies that their
   organization wants to support, or creating "online" buildpacks at
   operators' discretion.

An example for #1 is that we'll go from packaging 34 versions of node
v0.10.x to only packaging two: 0.10.37 and 0.10.38.
An example for #2 is that we'll go from packaging 3 versions of nginx 1.5
in the PHP buildpack to only packaging one: 1.5.12.

An example for #3 is that we'll discontinue packaging ruby 1.9.3 in the
ruby-buildpack, which reached end-of-life in February 2015.

Outcomes

With these changes, the total buildpack size will be reduced greatly. As
an example, we expect the ruby-buildpack size to go from 922M to 338M.

We also want to set the expectation that, as new interpreter versions are
released, either for new features or (more urgently) for security fixes,
we'll release new buildpacks much more quickly than we do today. My hope
is that we'll be able to do it within 24 hours of a new release.

Planning

These changes will be relatively easy to make, since all the buildpacks
are now using a manifest.yml file to declare what's being packaged. We
expect to be able to complete this work within the next two weeks.
Stories are in the Tracker backlog under the Epic named
"skinny-buildpacks", which you can see here:
https://www.pivotaltracker.com/epic/show/1747328

------------------------------

Please let me know how these changes will impact you and your
organizations, and let me know of any counter-proposals or variations
you'd like to consider.

Thanks,
-mike
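The "last two patches (or teenies) on all supported major.minor releases" policy described in this proposal can be sketched in a few lines. The `prune` helper below is hypothetical, not part of buildpack-packager; it assumes versions parse as numeric MAJOR.MINOR.TEENY:

```python
from collections import defaultdict

def prune(versions, keep=2):
    """Keep only the `keep` most-recent TEENY versions per MAJOR.MINOR line."""
    by_line = defaultdict(list)
    for v in versions:
        major, minor, teeny = (int(part) for part in v.split("."))
        by_line[(major, minor)].append((teeny, v))
    kept = []
    for line in sorted(by_line):
        # Take the highest `keep` teenies, then restore ascending order.
        newest = sorted(by_line[line], reverse=True)[:keep]
        kept.extend(v for _, v in sorted(newest))
    return kept

# Policy item 1: interpreters keep the two most-recent teenies.
node_010 = ["0.10.%d" % t for t in range(5, 39)]  # 34 versions of node v0.10.x
print(prune(node_010))            # ['0.10.37', '0.10.38']

# Policy item 2: other dependencies keep only the single most-recent teeny.
nginx_15 = ["1.5.10", "1.5.11", "1.5.12"]
print(prune(nginx_15, keep=1))    # ['1.5.12']
```

This reproduces the two examples from the proposal: 34 versions of node v0.10.x collapse to 0.10.37 and 0.10.38, and the three packaged nginx 1.5 versions collapse to one.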
Re: Linking to individual threads?
Daniel Mikusa
+1 - I miss the link at the bottom too.
Dan

On Thu, Apr 30, 2015 at 3:38 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:

> there is, but it would be nicer if the link appeared in the email
Re: Linking to individual threads?
Filip Hanik
there is, but it would be nicer if the link appeared in the email
http://lists.cloudfoundry.org/pipermail/cf-dev/2015-April/000002.html

On Thu, Apr 30, 2015 at 1:29 PM, Quintessence Anx
<qanx(a)starkandwayne.com> wrote:

> Is there a way to link to individual email threads with the new mailing [...]
Linking to individual threads?
Quintessence Anx
Is there a way to link to individual email threads with the new mailing
list format?

Thanks!
Quinn
Re: [vcap-dev] Java OOM debugging
Lari Hotari <Lari@...>
Hi,
I created a few tools to debug OOM problems, since the application I was
responsible for running on CF was failing constantly because of OOM
problems. The problems I had turned out not to be actual memory leaks in
the Java application.

In the "cf events appname" log I would get entries like this:

2015-xx-xxTxx:xx:xx.00-0400 app.crash appname index: 1, reason: CRASHED,
exit_description: out of memory, exit_status: 255

These types of entries are produced when the container goes over its
memory resource limits. It doesn't mean that there is a memory leak in
the Java application. The container gets killed by the Linux kernel OOM
killer
(https://github.com/cloudfoundry/warden/blob/master/warden/README.md#limit-handle-mem-value)
based on the resource limits set on the warden container; the memory
limit is specified in number of bytes. In my case the process never got
killed by the killjava.sh script that gets called in the java-buildpack
when an OOM happens in Java.

This is the tool I built to debug the problems:
https://github.com/lhotari/java-buildpack-diagnostics-app

I deployed that app as part of the forked buildpack I'm using. Please
read the readme about what its limitations are. It worked for me, but it
might not work for you. It's open source and you can fork it. :)

There is a solution in my toolcase for creating a heap dump and uploading
it to S3:
https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/HeapDumpServlet.groovy

The readme explains how to set up Amazon S3 keys for this:
https://github.com/lhotari/java-buildpack-diagnostics-app#amazon-s3-setup

Once you get a dump, you can then analyse it in a Java profiler tool like
YourKit.
I also have a solution that forks the java-buildpack, modifies
killjava.sh, and adds a script that uploads the heap dump to S3 in the
case of an OOM:
https://github.com/lhotari/java-buildpack/commit/2d654b80f3bf1a0e0f1bae4f29cb85f56f5f8c46

In java-buildpack-diagnostics-app I also have other tools for getting
Linux operating-system-specific memory information, for example:

https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/MemoryInfoServlet.groovy
https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/MemorySmapServlet.groovy
https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/MallocInfoServlet.groovy

These tools are handy for looking at the details of the Java process's
RSS memory usage growth.

There is also a solution for getting an ssh shell inside your application
with tmate.io:
https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/TmateSshServlet.groovy
(this version is only compatible with the new "cflinuxfs2" stack)

It looks like there are serious problems on Cloud Foundry with the memory
sizing calculation. An application that doesn't have an OOM problem will
get killed by the OOM killer because the Java process goes over the
memory limits. I filed this issue:
https://github.com/cloudfoundry/java-buildpack/issues/157 , but that
might not cover everything.

The workaround in my case was to add a native key under memory_sizes in
open_jdk_jre.yml and set the minimum to 330M (that is for a 2GB total
memory); see this example:
https://github.com/grails-samples/java-buildpack/blob/22e0f6a/config/open_jdk_jre.yml#L25
That was how I got the app I'm running on CF to stay within the memory
bounds. I'm sure there is now also a way to set these keys without
forking the buildpack.
I could also have adjusted the percentage portions, but I wanted to set a
hard minimum for this case.

It was also required to do some other tuning. I added this to JAVA_OPTS:

-XX:CompressedClassSpaceSize=256M -XX:InitialCodeCacheSize=64M
-XX:CodeCacheExpansionSize=1M -XX:CodeCacheMinimumFreeSpace=1M
-XX:ReservedCodeCacheSize=200M -XX:MinMetaspaceExpansion=1M
-XX:MaxMetaspaceExpansion=8M -XX:MaxDirectMemorySize=96M

while trying to keep the Java process from growing in RSS memory size.

The memory overhead of a 64-bit Java process on Linux can be reduced by
specifying these environment variables in the application manifest:

  stack: cflinuxfs2
  ...
  env:
    MALLOC_ARENA_MAX: 2
    MALLOC_MMAP_THRESHOLD_: 131072
    MALLOC_TRIM_THRESHOLD_: 131072
    MALLOC_TOP_PAD_: 131072
    MALLOC_MMAP_MAX_: 65536

MALLOC_ARENA_MAX works only on the cflinuxfs2 stack (the lucid64 stack
has a buggy version of glibc).

Explanation of MALLOC_ARENA_MAX from Heroku:
https://devcenter.heroku.com/articles/tuning-glibc-memory-behavior
Some measurement data on how it reduces memory consumption:
https://devcenter.heroku.com/articles/testing-cedar-14-memory-use

I have created a PR to add this to the CF java-buildpack:
https://github.com/cloudfoundry/java-buildpack/pull/160
I also created
https://github.com/cloudfoundry/java-buildpack/issues/163 and
https://github.com/cloudfoundry/java-buildpack/pull/159 .

I hope this information helps others struggling with OOM problems in CF.
I'm not saying that this is a ready-made solution just for you. YMMV. It
worked for me.

-Lari

On 15-04-29 10:53 AM, Head-Rapson, David wrote:
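To illustrate why that native floor matters, one can sum the worst-case caps against the container limit. The heap and metaspace figures below are made-up placeholders for a 2GB container (the actual java-buildpack sizing heuristics differ); the point is that the worst-case sum can exceed the container limit even with no leak, which is exactly the failure mode described above:

```python
# Worst-case JVM memory budget vs. container limit (illustrative numbers).
caps_mb = {
    "heap": 1024,                    # placeholder -Xmx for a 2GB container
    "metaspace": 256,                # placeholder metaspace cap
    "compressed class space": 256,   # -XX:CompressedClassSpaceSize=256M
    "code cache": 200,               # -XX:ReservedCodeCacheSize=200M
    "direct memory": 96,             # -XX:MaxDirectMemorySize=96M
    "native floor": 330,             # minimum set in open_jdk_jre.yml
}
container_mb = 2048

total = sum(caps_mb.values())
for region, size in caps_mb.items():
    print(f"{region:24s} {size:5d}M")
print(f"{'worst case total':24s} {total:5d}M "
      f"(limit {container_mb}M, over by {total - container_mb}M)")
```

With these placeholder numbers the worst case overshoots a 2GB container by over 100M, so the kernel OOM killer can fire even though each individual region stays within its own cap.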
Re: [vcap-dev] Java OOM debugging
Daniel Mikusa
Dave,
I've found using SSH tunnels can make debugging the JVM easier. I haven't
tried using YourKit via a tunnel, but I suspect it would work. Maybe
worth a try. Some more details on setting up the tunnel here:
http://mikusa.blogspot.com/2014/08/debugging-java-applications-on.html

Dan

On Wed, Apr 29, 2015 at 10:53 AM, Head-Rapson, David
<David.Head-Rapson(a)fil.com> wrote:

> Hi, [...]