
Re: stdout.log and stderr.log not showing in CF197 with loggregator enabled

James Bayer
 

I believe those files were removed since loggregator gives you access to
the logs (and you can get the content via syslog). You may be able to
adjust the start command to write them out again.
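For illustration, a minimal sketch of both routes with the cf CLI; the app name "myapp" and start script "./start.sh" are placeholders, and the push command below is a workaround sketch rather than a documented feature:

```
# Fetch the recent application logs via loggregator, which replaces the
# on-disk files in CF197:
cf logs myapp --recent

# Hypothetical workaround: wrap the start command so the files are written
# again; note that plain redirection diverts output away from loggregator.
cf push myapp -c 'mkdir -p logs && ./start.sh 1>logs/stdout.log 2>logs/stderr.log'
```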

On Tue, May 5, 2015 at 4:31 PM, Zhang, Yuan <Yuan.Zhang(a)emc.com> wrote:

Hi,



We upgraded from CF172 to CF197 and enabled loggregator on CF197. But for
applications deployed to CF197 (with loggregator enabled), we DO NOT see
stdout.log and stderr.log in the application logs directory anymore. We
could see logs/stdout.log and logs/stderr.log in CF172.



CF197:

cf file <app> logs

Getting file contents... OK



staging_task.log 1.3K



Can you tell us which setting in CF197 affects whether stdout.log and
stderr.log show up? How can we make logs/stdout.log and logs/stderr.log
show up again?



Thanks,

Tina Zhang



_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

--
Thank you,

James Bayer


Re: Purge files on NFS or S3?

James Bayer
 

John, I think the resource files may grow forever right now without
intervention.

I'm pretty confident that when apps are deleted, their droplets are
deleted with them and that proper garbage collection occurs.

I'm unaware of any tooling for migrating blobs from an NFS file system to
S3. You would need to update the CC_DB references too, I'm pretty sure. I'm
interested if you find out more.
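As a starting point for sizing the problem, here is a rough sketch of inspecting blobstore usage on the NFS server VM; the job name, index, and directory layout under /var/vcap/store are assumptions that vary by deployment:

```
# SSH onto the NFS server job via BOSH (job name and index are placeholders):
bosh ssh nfs_z1 0

# Summarize space used per blobstore directory before deciding what, if
# anything, is safe to purge:
sudo du -sh /var/vcap/store/shared/* 2>/dev/null | sort -h
```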

On Tue, May 5, 2015 at 1:14 PM, John Wong <gokoproject(a)gmail.com> wrote:

Hi

I just looked at our disk usage on the NFS server. We have used about 200G so
far, and I wonder if there's a systematic way to purge files we don't need
(or how do I know which ones I don't need)?

Similarly, if I were to replace the NFS server with S3, does the
existing process (if any) work with S3?

Thanks.

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

--
Thank you,

James Bayer


Re: Addressing buildpack size

Josh Ghiloni <ghiloni@...>
 

How does that jibe with offline buildpacks? Would it be a matter of the
operator building the buildpack with a certain version of the binaries and
then uploading them combined?
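For context, a minimal sketch of the upload half of that workflow; the buildpack name, zip filename, and position are placeholders:

```
# Inspect which buildpacks (and positions) the platform currently offers:
cf buildpacks

# Upload an operator-built bundle (buildpack code plus the chosen binaries)
# as an admin user:
cf create-buildpack my-go-buildpack go_buildpack-offline.zip 5
```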

On Fri, May 8, 2015 at 7:01 PM, Patrick Mueller <pmuellr(a)gmail.com> wrote:

Yeah, it doesn't seem to make a lot of sense to me to bundle the buildpacks
with their typical binaries. Take io.js for instance [1]; the buildpack
probably doesn't need to change as often as io.js itself is released.

[1] https://github.com/iojs/io.js/blob/master/CHANGELOG.md

On Tue, May 5, 2015 at 1:33 PM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:

I'm happy to see the size of the build packs dropping, but I have to ask:
why do we bundle the build packs with a fixed set of binaries?

The build packs themselves are very small; it's the binaries that are
huge. It seems like it would make sense to handle them as separate
concerns.

I don't want to come off too harsh, but in addition to the size of the
build packs when bundled with binaries, there are some other disadvantages
to doing things this way.

- Binaries and build packs are updated at different rates. Binaries
are usually updated often, to pick up new runtime versions & security
fixes; build packs are generally changed at a slower pace, as features or
bug fixes for them are needed. Bundling the two together requires an
operator to update the build packs more often, just to get updated
binaries. It's been my experience that users don't update (or forget to
update) build packs, which means they're likely running with older,
possibly insecure runtimes.

- It's difficult to bundle a set of runtime binaries that suits
everyone's needs; different users will update at different rates and will
want different sets of binaries. If build packs and binaries are packaged
together, users will end up needing to find a specific build pack bundle
that contains the runtime they want, or they will need to build their own
custom bundles. If build packs and binaries are handled separately, there
will be more flexibility in what binaries a build pack has available, as an
operator can manage binaries independently. Wayne's post seems to hit on
this point.

- At some point (I think this has already happened with JRuby & Java),
build packs are going to start having overlapping sets of binaries. If the
binaries are bundled with the build pack, there's no way that build packs
could ever share binaries.

My personal preference would be to see build packs bundled without
binaries, and some other solution (which probably merits a separate
thread) for managing the binaries.

I'm curious to hear what others think or if I've missed something and
bundling build packs and binaries is clearly the way to go.

Dan

PS. If this is something that came up in the PMC, I apologize. I
skimmed the notes, but may have missed it.



On Mon, May 4, 2015 at 2:10 PM, Wayne E. Seguin <
wayneeseguin(a)starkandwayne.com> wrote:

Because of the very good compatibility between versions (post 1.x), I would
like to make a motion to do the following:

Split the buildpack:

have the default golang buildpack track the latest golang version

Then handle older versions in one of two ways, either:

a) have a large secondary buildpack for older versions

or

b) have multiple buildpacks, one for each version of golang; users can
specify a specific URL if they care about specific versions.

This would improve space/time considerations for operations. Personally,
I would prefer b) because it allows you to support older Go versions out
of the box by design while still keeping each golang buildpack small.
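For what it's worth, option b) is already expressible per application today; "myapp" is a placeholder and the tag is only an example:

```
# Pin an app to a specific buildpack release by git URL and tag:
cf push myapp -b https://github.com/cloudfoundry/go-buildpack.git#v1.3.0
```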

~Wayne

Wayne E. Seguin <wayneeseguin(a)starkandwayne.com>
CTO ; Stark & Wayne, LLC

On May 4, 2015, at 12:40 , Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Hi Wayne,

On Fri, May 1, 2015 at 1:29 PM, Wayne E. Seguin <
wayneeseguin(a)starkandwayne.com> wrote:

What an incredible step in the right direction, Awesome!!!

Out of curiosity, why is the go buildpack still quite so large?

Thanks for asking this question.

Currently we're including the following binary dependencies in
`go-buildpack`:

```
cache $ ls -lSh *_go*
-rw-r--r-- 1 flavorjones flavorjones 60M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.4.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 60M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.4.1.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 54M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.2.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 54M 2015-05-04 12:36
http___go.googlecode.com_files_go1.2.1.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 51M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.3.3.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 51M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.3.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 40M 2015-05-04 12:36
http___go.googlecode.com_files_go1.1.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 40M 2015-05-04 12:36
http___go.googlecode.com_files_go1.1.1.linux-amd64.tar.gz
```

One question we should ask, I think, is: should we still be supporting
golang 1.1 and 1.2? Dropping those versions would cut the size of the
buildpack in (approximately) half.





On May 1, 2015, at 11:54 , Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Skinny buildpacks have been cut for go, nodejs, php, python and ruby
buildpacks.

| | current | previous |
|--------+---------+----------|
| go | 442MB | 633MB |
| nodejs | 69MB | 417MB |
| php | 804MB | 1.1GB |
| python | 454MB | 654MB |
| ruby | 365MB | 1.3GB |
|--------+---------+----------|
| total | 2.1GB | 4.1GB |

for an aggregate 51% reduction in size. Details follow.
Next Steps

I recognize that every cloud operator may have a different policy on
what versions of interpreters and libraries they want to support, based on
the specific requirements of their users.

These buildpacks reflect a "bare minimum" policy for a cloud to be
operable, and I do not expect these buildpacks to be adopted as-is by many
operators.

These buildpacks have not yet been added to cf-release, specifically
so that the community can prepare their own buildpacks if necessary.

Over the next few days, the buildpacks core team will ship
documentation and tooling to assist you in packaging specific dependencies
for your instance of CF. I'll start a new thread on this list early next
week to communicate this information.
Call to Action

In the meantime, please think about whether the policy implemented in
these buildpacks ("last two patches (or teenies) on all supported
major.minor releases") is suitable for your users; and if not, think about
what dependencies you'll ideally be supporting.
go-buildpack v1.3.0

Release notes are here
<https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.3.0>.

Size reduced 30% from 633MB
<https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.2.0> to
442MB
<https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.3.0>.

Supports (full manifest here
<https://github.com/cloudfoundry/go-buildpack/blob/v1.3.0/manifest.yml>
):

- golang 1.4.{1,2}
- golang 1.3.{2,3}
- golang 1.2.{1,2}
- golang 1.1.{1,2}

nodejs-buildpack v1.3.0

Full release notes are here
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.3.0>.

Size reduced 83% from 417MB
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.2.1>
to 69MB
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.3.0>.

Supports (full manifest here
<https://github.com/cloudfoundry/nodejs-buildpack/blob/v1.3.0/manifest.yml>
):

- 0.8.{27,28}
- 0.9.{11,12}
- 0.10.{37,38}
- 0.11.{15,16}
- 0.12.{1,2}

php-buildpack v3.2.0

Full release notes are here
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.2.0>.

Size reduced 27% from 1.1GB
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.1.1> to
803MB
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.2.0>.

Supports: (full manifest here
<https://github.com/cloudfoundry/php-buildpack/blob/v3.2.0/manifest.yml>
)

*PHP*:

- 5.6.{6,7}
- 5.5.{22,23}
- 5.4.{38,39}

*HHVM* (lucid64 stack):

- 3.2.0

*HHVM* (cflinuxfs2 stack):

- 3.5.{0,1}
- 3.6.{0,1}

*Apache HTTPD*:

- 2.4.12

*nginx*:

- 1.7.10
- 1.6.2
- 1.5.13

python-buildpack v1.3.0

Full release notes are here
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.3.0>.

Size reduced 30% from 654MB
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.2.0>
to 454MB
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.3.0>.

Supports: (full manifest here
<https://github.com/cloudfoundry/python-buildpack/blob/v1.3.0/manifest.yml>
)

- 2.7.{8,9}
- 3.2.{4,5}
- 3.3.{5,6}
- 3.4.{2,3}

ruby-buildpack v1.4.0

Release notes are here
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.4.0>.

Size reduced 71% from 1.3GB
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.3.1>
to 365MB
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.4.0>.

Supports: (full manifest here
<https://github.com/cloudfoundry/ruby-buildpack/blob/v1.4.0/manifest.yml>
)

*MRI*:

- 2.2.{1,2}
- 2.1.{5,6}
- 2.0.0p645

*JRuby*:

- ruby-1.9.3-jruby-1.7.19
- ruby-2.0.0-jruby-1.7.19
- ruby-2.2.0-jruby-9.0.0.0.pre1


---------- Forwarded message ----------
From: Mike Dalessio <mdalessio(a)pivotal.io>
Date: Wed, Apr 8, 2015 at 11:10 AM
Subject: Addressing buildpack size
To: vcap-dev(a)cloudfoundry.org


Hello vcap-dev!

This email details a proposed change to how Cloud Foundry buildpacks
are packaged, with respect to the ever-increasing number of binary
dependencies being cached within them.

This proposal's permanent residence is here:

https://github.com/cloudfoundry-incubator/buildpack-packager/issues/4

Feel free to comment there or reply to this email.
------------------------------
Buildpack Sizes
Where we are today

Many of you have seen, and possibly been challenged by, the enormous
sizes of some of the buildpacks that are currently shipping with cf-release.

Here's the state of the world right now, as of v205:

php-buildpack: 1.1G
ruby-buildpack: 922M
go-buildpack: 675M
python-buildpack: 654M
nodejs-buildpack: 403M
----------------------
total: 3.7G

These enormous sizes are the result of the current policy of packaging
every-version-of-everything-ever-supported ("EVOEES") within the buildpack.

Most recently, this problem was exacerbated by the fact that buildpacks
now contain binaries for two rootfses.
Why this is a problem

If continued, buildpacks will only continue to increase in size,
leading to longer and longer build and deploy times, longer test times,
slacker feedback loops, and therefore less frequent buildpack releases.

Additionally, this also means that we're shipping versions of
interpreters, web servers, and libraries that are deprecated, insecure, or
both. Feedback from CF users has made it clear that many companies view
this as an unnecessary security risk.

This policy is clearly unsustainable.
What we can do about it

There are many things being discussed to ameliorate the impact that
buildpack size is having on the operations of CF.

Notably, Onsi has proposed a change to buildpack caching, to improve
Diego staging times (link to proposal
<https://github.com/pivotal-cf-experimental/diego-dev-notes/blob/master/proposals/better-buildpack-caching.md>
).

However, there is an immediate solution available, which addresses both
the size concerns as well as the security concern: packaging fewer binary
dependencies within the buildpack.
The proposal

I'm proposing that we reduce the binary dependencies in each buildpack
in a very specific way.

Aside on terms I'll use below:

- Versions of the form "1.2.3" are broken down as:
MAJOR.MINOR.TEENY. Many language ecosystems refer to the "TEENY" as "PATCH"
interchangeably, but we're going to use "TEENY" in this proposal.
- We'll assume that TEENY gets bumped for API/ABI compatible
changes.
- We'll assume that MINOR and MAJOR get bumped when there are
API/ABI *incompatible* changes.

I'd like to move forward soon with the following changes:

1. For language interpreters/compilers, we'll package the two
most-recent TEENY versions on each MAJOR.MINOR release.
2. For all other dependencies, we'll package only the single
most-recent TEENY version on each MAJOR.MINOR release.
3. We will discontinue packaging versions of dependencies that have
been deprecated.
4. We will no longer provide "EVOEES" buildpack releases.
5. We will no longer provide "online" buildpack releases, which
download dependencies from the public internet.
6. We will document the process, and provide tooling, for CF
operators to build their own buildpacks, choosing the dependencies that
their organization wants to support or creating "online" buildpacks at
operators' discretion.

An example for #1 is that we'll go from packaging 34 versions of node v0.10.x
to only packaging two: 0.10.37 and 0.10.38.

An example for #2 is that we'll go from packaging 3 versions of nginx 1.5
in the PHP buildpack to only packaging one: 1.5.12.

An example for #3 is that we'll discontinue packaging ruby 1.9.3 in the
ruby-buildpack, which reached end-of-life in February 2015.
Outcomes

With these changes, the total buildpack size will be reduced greatly.
As an example, we expect the ruby-buildpack size to go from 922M to 338M.

We also want to set the expectation that, as new interpreter versions
are released, either for new features or (more urgently) for security
fixes, we'll release new buildpacks much more quickly than we do today. My
hope is that we'll be able to do it within 24 hours of a new release.
Planning

These changes will be relatively easy to make, since all the buildpacks
are now using a manifest.yml file to declare what's being packaged. We
expect to be able to complete this work within the next two weeks.

Stories are in the Tracker backlog under the Epic named
"skinny-buildpacks", which you can see here:

https://www.pivotaltracker.com/epic/show/1747328

------------------------------

Please let me know how these changes will impact you and your
organizations, and let me know of any counter-proposals or variations you'd
like to consider.

Thanks,

-mike



_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev



_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Patrick Mueller
http://muellerware.org

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: Addressing buildpack size

Patrick Mueller <pmuellr@...>
 

Yeah, it doesn't seem to make a lot of sense to me to bundle the buildpacks
with their typical binaries. Take io.js for instance [1]; the buildpack
probably doesn't need to change as often as io.js itself is released.

[1] https://github.com/iojs/io.js/blob/master/CHANGELOG.md

On Tue, May 5, 2015 at 1:33 PM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:

I'm happy to see the size of the build packs dropping, but I have to ask:
why do we bundle the build packs with a fixed set of binaries?

The build packs themselves are very small; it's the binaries that are
huge. It seems like it would make sense to handle them as separate
concerns.

I don't want to come off too harsh, but in addition to the size of the
build packs when bundled with binaries, there are some other disadvantages
to doing things this way.

- Binaries and build packs are updated at different rates. Binaries
are usually updated often, to pick up new runtime versions & security
fixes; build packs are generally changed at a slower pace, as features or
bug fixes for them are needed. Bundling the two together requires an
operator to update the build packs more often, just to get updated
binaries. It's been my experience that users don't update (or forget to
update) build packs, which means they're likely running with older,
possibly insecure runtimes.

- It's difficult to bundle a set of runtime binaries that suits
everyone's needs; different users will update at different rates and will
want different sets of binaries. If build packs and binaries are packaged
together, users will end up needing to find a specific build pack bundle
that contains the runtime they want, or they will need to build their own
custom bundles. If build packs and binaries are handled separately, there
will be more flexibility in what binaries a build pack has available, as an
operator can manage binaries independently. Wayne's post seems to hit on
this point.

- At some point (I think this has already happened with JRuby & Java),
build packs are going to start having overlapping sets of binaries. If the
binaries are bundled with the build pack, there's no way that build packs
could ever share binaries.

My personal preference would be to see build packs bundled without
binaries, and some other solution (which probably merits a separate
thread) for managing the binaries.

I'm curious to hear what others think or if I've missed something and
bundling build packs and binaries is clearly the way to go.

Dan

PS. If this is something that came up in the PMC, I apologize. I skimmed
the notes, but may have missed it.



On Mon, May 4, 2015 at 2:10 PM, Wayne E. Seguin <
wayneeseguin(a)starkandwayne.com> wrote:

Because of the very good compatibility between versions (post 1.x), I would
like to make a motion to do the following:

Split the buildpack:

have the default golang buildpack track the latest golang version

Then handle older versions in one of two ways, either:

a) have a large secondary buildpack for older versions

or

b) have multiple buildpacks, one for each version of golang; users can
specify a specific URL if they care about specific versions.

This would improve space/time considerations for operations. Personally,
I would prefer b) because it allows you to support older Go versions out
of the box by design while still keeping each golang buildpack small.

~Wayne

Wayne E. Seguin <wayneeseguin(a)starkandwayne.com>
CTO ; Stark & Wayne, LLC

On May 4, 2015, at 12:40 , Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Hi Wayne,

On Fri, May 1, 2015 at 1:29 PM, Wayne E. Seguin <
wayneeseguin(a)starkandwayne.com> wrote:

What an incredible step in the right direction, Awesome!!!

Out of curiosity, why is the go buildpack still quite so large?

Thanks for asking this question.

Currently we're including the following binary dependencies in
`go-buildpack`:

```
cache $ ls -lSh *_go*
-rw-r--r-- 1 flavorjones flavorjones 60M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.4.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 60M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.4.1.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 54M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.2.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 54M 2015-05-04 12:36
http___go.googlecode.com_files_go1.2.1.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 51M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.3.3.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 51M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.3.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 40M 2015-05-04 12:36
http___go.googlecode.com_files_go1.1.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 40M 2015-05-04 12:36
http___go.googlecode.com_files_go1.1.1.linux-amd64.tar.gz
```

One question we should ask, I think, is: should we still be supporting
golang 1.1 and 1.2? Dropping those versions would cut the size of the
buildpack in (approximately) half.





On May 1, 2015, at 11:54 , Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Skinny buildpacks have been cut for go, nodejs, php, python and ruby
buildpacks.

| | current | previous |
|--------+---------+----------|
| go | 442MB | 633MB |
| nodejs | 69MB | 417MB |
| php | 804MB | 1.1GB |
| python | 454MB | 654MB |
| ruby | 365MB | 1.3GB |
|--------+---------+----------|
| total | 2.1GB | 4.1GB |

for an aggregate 51% reduction in size. Details follow.
Next Steps

I recognize that every cloud operator may have a different policy on
what versions of interpreters and libraries they want to support, based on
the specific requirements of their users.

These buildpacks reflect a "bare minimum" policy for a cloud to be
operable, and I do not expect these buildpacks to be adopted as-is by many
operators.

These buildpacks have not yet been added to cf-release, specifically so
that the community can prepare their own buildpacks if necessary.

Over the next few days, the buildpacks core team will ship documentation
and tooling to assist you in packaging specific dependencies for your
instance of CF. I'll start a new thread on this list early next week to
communicate this information.
Call to Action

In the meantime, please think about whether the policy implemented in
these buildpacks ("last two patches (or teenies) on all supported
major.minor releases") is suitable for your users; and if not, think about
what dependencies you'll ideally be supporting.
go-buildpack v1.3.0

Release notes are here
<https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.3.0>.

Size reduced 30% from 633MB
<https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.2.0> to
442MB <https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.3.0>
.

Supports (full manifest here
<https://github.com/cloudfoundry/go-buildpack/blob/v1.3.0/manifest.yml>
):

- golang 1.4.{1,2}
- golang 1.3.{2,3}
- golang 1.2.{1,2}
- golang 1.1.{1,2}

nodejs-buildpack v1.3.0

Full release notes are here
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.3.0>.

Size reduced 83% from 417MB
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.2.1>
to 69MB
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.3.0>.

Supports (full manifest here
<https://github.com/cloudfoundry/nodejs-buildpack/blob/v1.3.0/manifest.yml>
):

- 0.8.{27,28}
- 0.9.{11,12}
- 0.10.{37,38}
- 0.11.{15,16}
- 0.12.{1,2}

php-buildpack v3.2.0

Full release notes are here
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.2.0>.

Size reduced 27% from 1.1GB
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.1.1> to
803MB
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.2.0>.

Supports: (full manifest here
<https://github.com/cloudfoundry/php-buildpack/blob/v3.2.0/manifest.yml>
)

*PHP*:

- 5.6.{6,7}
- 5.5.{22,23}
- 5.4.{38,39}

*HHVM* (lucid64 stack):

- 3.2.0

*HHVM* (cflinuxfs2 stack):

- 3.5.{0,1}
- 3.6.{0,1}

*Apache HTTPD*:

- 2.4.12

*nginx*:

- 1.7.10
- 1.6.2
- 1.5.13

python-buildpack v1.3.0

Full release notes are here
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.3.0>.

Size reduced 30% from 654MB
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.2.0>
to 454MB
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.3.0>.

Supports: (full manifest here
<https://github.com/cloudfoundry/python-buildpack/blob/v1.3.0/manifest.yml>
)

- 2.7.{8,9}
- 3.2.{4,5}
- 3.3.{5,6}
- 3.4.{2,3}

ruby-buildpack v1.4.0

Release notes are here
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.4.0>.

Size reduced 71% from 1.3GB
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.3.1> to
365MB
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.4.0>.

Supports: (full manifest here
<https://github.com/cloudfoundry/ruby-buildpack/blob/v1.4.0/manifest.yml>
)

*MRI*:

- 2.2.{1,2}
- 2.1.{5,6}
- 2.0.0p645

*JRuby*:

- ruby-1.9.3-jruby-1.7.19
- ruby-2.0.0-jruby-1.7.19
- ruby-2.2.0-jruby-9.0.0.0.pre1


---------- Forwarded message ----------
From: Mike Dalessio <mdalessio(a)pivotal.io>
Date: Wed, Apr 8, 2015 at 11:10 AM
Subject: Addressing buildpack size
To: vcap-dev(a)cloudfoundry.org


Hello vcap-dev!

This email details a proposed change to how Cloud Foundry buildpacks are
packaged, with respect to the ever-increasing number of binary dependencies
being cached within them.

This proposal's permanent residence is here:

https://github.com/cloudfoundry-incubator/buildpack-packager/issues/4

Feel free to comment there or reply to this email.
------------------------------
Buildpack Sizes
Where we are today

Many of you have seen, and possibly been challenged by, the enormous
sizes of some of the buildpacks that are currently shipping with cf-release.

Here's the state of the world right now, as of v205:

php-buildpack: 1.1G
ruby-buildpack: 922M
go-buildpack: 675M
python-buildpack: 654M
nodejs-buildpack: 403M
----------------------
total: 3.7G

These enormous sizes are the result of the current policy of packaging
every-version-of-everything-ever-supported ("EVOEES") within the buildpack.

Most recently, this problem was exacerbated by the fact that buildpacks
now contain binaries for two rootfses.
Why this is a problem

If continued, buildpacks will only continue to increase in size, leading
to longer and longer build and deploy times, longer test times, slacker
feedback loops, and therefore less frequent buildpack releases.

Additionally, this also means that we're shipping versions of
interpreters, web servers, and libraries that are deprecated, insecure, or
both. Feedback from CF users has made it clear that many companies view
this as an unnecessary security risk.

This policy is clearly unsustainable.
What we can do about it

There are many things being discussed to ameliorate the impact that
buildpack size is having on the operations of CF.

Notably, Onsi has proposed a change to buildpack caching, to improve
Diego staging times (link to proposal
<https://github.com/pivotal-cf-experimental/diego-dev-notes/blob/master/proposals/better-buildpack-caching.md>
).

However, there is an immediate solution available, which addresses both
the size concerns as well as the security concern: packaging fewer binary
dependencies within the buildpack.
The proposal

I'm proposing that we reduce the binary dependencies in each buildpack
in a very specific way.

Aside on terms I'll use below:

- Versions of the form "1.2.3" are broken down as:
MAJOR.MINOR.TEENY. Many language ecosystems refer to the "TEENY" as "PATCH"
interchangeably, but we're going to use "TEENY" in this proposal.
- We'll assume that TEENY gets bumped for API/ABI compatible changes.
- We'll assume that MINOR and MAJOR get bumped when there are
API/ABI *incompatible* changes.

I'd like to move forward soon with the following changes:

1. For language interpreters/compilers, we'll package the two
most-recent TEENY versions on each MAJOR.MINOR release.
2. For all other dependencies, we'll package only the single
most-recent TEENY version on each MAJOR.MINOR release.
3. We will discontinue packaging versions of dependencies that have
been deprecated.
4. We will no longer provide "EVOEES" buildpack releases.
5. We will no longer provide "online" buildpack releases, which
download dependencies from the public internet.
6. We will document the process, and provide tooling, for CF
operators to build their own buildpacks, choosing the dependencies that
their organization wants to support or creating "online" buildpacks at
operators' discretion.

An example for #1 is that we'll go from packaging 34 versions of node v0.10.x
to only packaging two: 0.10.37 and 0.10.38.

An example for #2 is that we'll go from packaging 3 versions of nginx 1.5
in the PHP buildpack to only packaging one: 1.5.12.

An example for #3 is that we'll discontinue packaging ruby 1.9.3 in the
ruby-buildpack, which reached end-of-life in February 2015.
Outcomes

With these changes, the total buildpack size will be reduced greatly. As
an example, we expect the ruby-buildpack size to go from 922M to 338M.

We also want to set the expectation that, as new interpreter versions
are released, either for new features or (more urgently) for security
fixes, we'll release new buildpacks much more quickly than we do today. My
hope is that we'll be able to do it within 24 hours of a new release.
Planning

These changes will be relatively easy to make, since all the buildpacks
are now using a manifest.yml file to declare what's being packaged. We
expect to be able to complete this work within the next two weeks.

Stories are in the Tracker backlog under the Epic named
"skinny-buildpacks", which you can see here:

https://www.pivotaltracker.com/epic/show/1747328

------------------------------

Please let me know how these changes will impact you and your
organizations, and let me know of any counter-proposals or variations you'd
like to consider.

Thanks,

-mike



_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev



_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

--
Patrick Mueller
http://muellerware.org


Re: Meeting Minutes for Services PMC 2015-05-07

Duncan Johnston-Watt <duncan.johnstonwatt@...>
 

Shannon

That's tremendous news.

We look forward to working with the Services PMC.

See you at CF Summit.

Best

Duncan

On 8 May 2015 at 00:46, Shannon Coen <scoen(a)pivotal.io> wrote:


https://docs.google.com/document/d/10aOoLF_FPxuHYQfI813VCTF9z2VRC_dIuYDIGND3xbU/edit?usp=sharing

Highlights:
1. Two projects were approved for incubation: the Brooklyn service broker from
Cloudsoft, and an MSSQL Server service broker from HP.
2. Updates were provided for the Service Enablement and Notifications projects.

Best,

Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

--
Duncan Johnston-Watt
CEO | Cloudsoft Corporation

Twitter | @duncanjw
Mobile | +44 777 190 2653
Skype | duncan_johnstonwatt
Linkedin | www.linkedin.com/in/duncanjohnstonwatt

Cloudsoft Corporation Limited, Registered in Scotland No: SC349230.
Registered Office: 13 Dryden Place, Edinburgh, EH9 1RP



Re: Addressing buildpack size

Mike Dalessio
 

Hey Dan,


On Tue, May 5, 2015 at 1:33 PM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:

I'm happy to see the size of the build packs dropping, but I have to ask:
why do we bundle the build packs with a fixed set of binaries?

The build packs themselves are very small; it's the binaries that are
huge. It seems like it would make sense to handle them as separate
concerns.

You've nailed it. Yes, it makes a ton of sense to handle binaries as
separate concerns, and we're heading in that direction.

At one point very recently, we started doing some planning around how we
might cache buildpack assets in a structured way (like a blob store) and
seamlessly have everything Just Work™.

The first step towards separating these concerns was to extract the use of
dependencies out of the (generally upstream) buildpack code and into a
buildpack manifest file. Having done that, the dependencies are now
first-class artifacts that can be managed by operators.
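For a concrete picture, a minimal sketch of what one dependency entry in such a manifest looks like; the field names are recalled from the current format and the checksum is a placeholder, so treat the details as assumptions:

```
$ cat manifest.yml
language: go
dependencies:
  - name: go
    version: 1.4.2
    uri: https://storage.googleapis.com/golang/go1.4.2.linux-amd64.tar.gz
    md5: <md5-of-the-tarball>   # placeholder
    cf_stacks:
      - cflinuxfs2
```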

We stopped there, at least for the time being, as it's not terribly clear
how to jam buildpack asset caching into the current API, CC buildpack
model, and staging process (though, again, the manifest is the best first
step, as it enables us to trap network calls and thus redirect them to a
cache either on disk or over the network).

It's also quite possible that the remaining pain will be further
ameliorated by the proposed Diego feature to attach persistent disk (on
which, presumably, the buildpacks and their assets are cached), which means
we're deferring further work until we've got more user feedback and data.




I don't want to come off too harsh, but in addition to the size of the
build packs when bundled with binaries, there are some other disadvantages
to doing things this way.

- Binaries and build packs are updated at different rates. Binaries
are usually updated often, to pick up new runtime versions & security
fixes; build packs are generally changed at a slower pace, as features or
bug fixes for them are needed. Bundling the two together requires an
operator to update the build packs more often, just to get updated
binaries. It's been my experience that users don't update (or forget to
update) build packs, which means they're likely running with older,
possibly insecure runtimes.

- It's difficult to bundle a set of runtime binaries that suits
everyone's needs; different users will update at different rates and will
want different sets of binaries. If build packs and binaries are packaged
together, users will end up needing to find a specific build pack bundle
that contains the runtime they want, or they will need to build their own
custom bundles. If build packs and binaries are handled separately, there
will be more flexibility in what binaries a build pack has available, as an
operator can manage binaries independently. Wayne's post seems to hit on
this point.

- At some point (I think this has already happened with JRuby & Java),
build packs are going to start having overlapping sets of binaries. If the
binaries are bundled with the build pack, there's no way that build packs
could ever share binaries.

My personal preference would be to see build packs bundled without
binaries, and some other solution (which probably merits a separate
thread) for managing the binaries.

I'm curious to hear what others think or if I've missed something and
bundling build packs and binaries is clearly the way to go.

Dan

PS. If this is something that came up in the PMC, I apologize. I skimmed
the notes, but may have missed it.



On Mon, May 4, 2015 at 2:10 PM, Wayne E. Seguin <
wayneeseguin(a)starkandwayne.com> wrote:

Because of the very good compatibility between versions (post 1.x), I would
like to make a motion to do the following:

Split the buildpack:

have the default golang buildpack track the latest golang version

Then handle older versions in one of two ways, either:

a) have a large secondary buildpack for older versions

or

b) have multiple buildpacks, one for each version of golang; users can
specify a specific URL if they care about specific versions.

This would improve space/time considerations for operations. Personally,
I would prefer b) because it allows you to support older Go versions out
of the box by design while still keeping each golang buildpack small.

~Wayne

Wayne E. Seguin <wayneeseguin(a)starkandwayne.com>
CTO ; Stark & Wayne, LLC

On May 4, 2015, at 12:40 , Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Hi Wayne,

On Fri, May 1, 2015 at 1:29 PM, Wayne E. Seguin <
wayneeseguin(a)starkandwayne.com> wrote:

What an incredible step in the right direction, Awesome!!!

Out of curiosity, why is the go buildpack still quite so large?

Thanks for asking this question.

Currently we're including the following binary dependencies in
`go-buildpack`:

```
cache $ ls -lSh *_go*
-rw-r--r-- 1 flavorjones flavorjones 60M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.4.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 60M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.4.1.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 54M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.2.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 54M 2015-05-04 12:36
http___go.googlecode.com_files_go1.2.1.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 51M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.3.3.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 51M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.3.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 40M 2015-05-04 12:36
http___go.googlecode.com_files_go1.1.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 40M 2015-05-04 12:36
http___go.googlecode.com_files_go1.1.1.linux-amd64.tar.gz
```

One question we should ask, I think, is: should we still be supporting
golang 1.1 and 1.2? Dropping those versions would cut the size of the
buildpack in (approximately) half.





On May 1, 2015, at 11:54 , Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Skinny buildpacks have been cut for go, nodejs, php, python and ruby
buildpacks.

| | current | previous |
|--------+---------+----------|
| go | 442MB | 633MB |
| nodejs | 69MB | 417MB |
| php | 804MB | 1.1GB |
| python | 454MB | 654MB |
| ruby | 365MB | 1.3GB |
|--------+---------+----------|
| total | 2.1GB | 4.1GB |

for an aggregate 51% reduction in size. Details follow.
Next Steps

I recognize that every cloud operator may have a different policy on
what versions of interpreters and libraries they want to support, based on
the specific requirements of their users.

These buildpacks reflect a "bare minimum" policy for a cloud to be
operable, and I do not expect these buildpacks to be adopted as-is by many
operators.

These buildpacks have not yet been added to cf-release, specifically so
that the community can prepare their own buildpacks if necessary.

Over the next few days, the buildpacks core team will ship documentation
and tooling to assist you in packaging specific dependencies for your
instance of CF. I'll start a new thread on this list early next week to
communicate this information.
Call to Action

In the meantime, please think about whether the policy implemented in
these buildpacks ("last two patches (or teenies) on all supported
major.minor releases") is suitable for your users; and if not, think about
what dependencies you'll ideally be supporting.
go-buildpack v1.3.0

Release notes are here
<https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.3.0>.

Size reduced 30% from 633MB
<https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.2.0> to
442MB <https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.3.0>
.

Supports (full manifest here
<https://github.com/cloudfoundry/go-buildpack/blob/v1.3.0/manifest.yml>
):

- golang 1.4.{1,2}
- golang 1.3.{2,3}
- golang 1.2.{1,2}
- golang 1.1.{1,2}

nodejs-buildpack v1.3.0

Full release notes are here
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.3.0>.

Size reduced 83% from 417MB
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.2.1>
to 69MB
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.3.0>.

Supports (full manifest here
<https://github.com/cloudfoundry/nodejs-buildpack/blob/v1.3.0/manifest.yml>
):

- 0.8.{27,28}
- 0.9.{11,12}
- 0.10.{37,38}
- 0.11.{15,16}
- 0.12.{1,2}

php-buildpack v3.2.0

Full release notes are here
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.2.0>.

Size reduced 27% from 1.1GB
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.1.1> to
803MB
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.2.0>.

Supports: (full manifest here
<https://github.com/cloudfoundry/php-buildpack/blob/v3.2.0/manifest.yml>
)

*PHP*:

- 5.6.{6,7}
- 5.5.{22,23}
- 5.4.{38,39}

*HHVM* (lucid64 stack):

- 3.2.0

*HHVM* (cflinuxfs2 stack):

- 3.5.{0,1}
- 3.6.{0,1}

*Apache HTTPD*:

- 2.4.12

*nginx*:

- 1.7.10
- 1.6.2
- 1.5.13

python-buildpack v1.3.0

Full release notes are here
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.3.0>.

Size reduced 30% from 654MB
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.2.0>
to 454MB
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.3.0>.

Supports: (full manifest here
<https://github.com/cloudfoundry/python-buildpack/blob/v1.3.0/manifest.yml>
)

- 2.7.{8,9}
- 3.2.{4,5}
- 3.3.{5,6}
- 3.4.{2,3}

ruby-buildpack v1.4.0

Release notes are here
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.4.0>.

Size reduced 71% from 1.3GB
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.3.1> to
365MB
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.4.0>.

Supports: (full manifest here
<https://github.com/cloudfoundry/ruby-buildpack/blob/v1.4.0/manifest.yml>
)

*MRI*:

- 2.2.{1,2}
- 2.1.{5,6}
- 2.0.0p645

*JRuby*:

- ruby-1.9.3-jruby-1.7.19
- ruby-2.0.0-jruby-1.7.19
- ruby-2.2.0-jruby-9.0.0.0.pre1


---------- Forwarded message ----------
From: Mike Dalessio <mdalessio(a)pivotal.io>
Date: Wed, Apr 8, 2015 at 11:10 AM
Subject: Addressing buildpack size
To: vcap-dev(a)cloudfoundry.org


Hello vcap-dev!

This email details a proposed change to how Cloud Foundry buildpacks are
packaged, with respect to the ever-increasing number of binary dependencies
being cached within them.

This proposal's permanent residence is here:

https://github.com/cloudfoundry-incubator/buildpack-packager/issues/4

Feel free to comment there or reply to this email.
------------------------------
Buildpack Sizes
Where we are today

Many of you have seen, and possibly been challenged by, the enormous
sizes of some of the buildpacks that are currently shipping with cf-release.

Here's the state of the world right now, as of v205:

php-buildpack: 1.1G
ruby-buildpack: 922M
go-buildpack: 675M
python-buildpack: 654M
nodejs-buildpack: 403M
----------------------
total: 3.7G

These enormous sizes are the result of the current policy of packaging
every-version-of-everything-ever-supported ("EVOEES") within the buildpack.

Most recently, this problem was exacerbated by the fact that buildpacks
now contain binaries for two rootfses.
Why this is a problem

If continued, buildpacks will only continue to increase in size, leading
to longer and longer build and deploy times, longer test times, slacker
feedback loops, and therefore less frequent buildpack releases.

Additionally, this also means that we're shipping versions of
interpreters, web servers, and libraries that are deprecated, insecure, or
both. Feedback from CF users has made it clear that many companies view
this as an unnecessary security risk.

This policy is clearly unsustainable.
What we can do about it

There are many things being discussed to ameliorate the impact that
buildpack size is having on the operations of CF.

Notably, Onsi has proposed a change to buildpack caching, to improve
Diego staging times (link to proposal
<https://github.com/pivotal-cf-experimental/diego-dev-notes/blob/master/proposals/better-buildpack-caching.md>
).

However, there is an immediate solution available, which addresses both
the size concerns as well as the security concern: packaging fewer binary
dependencies within the buildpack.
The proposal

I'm proposing that we reduce the binary dependencies in each buildpack
in a very specific way.

Aside on terms I'll use below:

- Versions of the form "1.2.3" are broken down as:
MAJOR.MINOR.TEENY. Many language ecosystems refer to the "TEENY" as "PATCH"
interchangeably, but we're going to use "TEENY" in this proposal.
- We'll assume that TEENY gets bumped for API/ABI compatible changes.
- We'll assume that MINOR and MAJOR get bumped when there are
API/ABI *incompatible* changes.

I'd like to move forward soon with the following changes:

1. For language interpreters/compilers, we'll package the two
most-recent TEENY versions on each MAJOR.MINOR release.
2. For all other dependencies, we'll package only the single
most-recent TEENY version on each MAJOR.MINOR release.
3. We will discontinue packaging versions of dependencies that have
been deprecated.
4. We will no longer provide "EVOEES" buildpack releases.
5. We will no longer provide "online" buildpack releases, which
download dependencies from the public internet.
6. We will document the process, and provide tooling, for CF
operators to build their own buildpacks, choosing the dependencies that
their organization wants to support or creating "online" buildpacks at
operators' discretion.

An example for #1 is that we'll go from packaging 34 versions of node v0.10.x
to only packaging two: 0.10.37 and 0.10.38.

An example for #2 is that we'll go from packaging 3 versions of nginx 1.5
in the PHP buildpack to only packaging one: 1.5.12.

An example for #3 is that we'll discontinue packaging ruby 1.9.3 in the
ruby-buildpack, which reached end-of-life in February 2015.
Outcomes

With these changes, the total buildpack size will be reduced greatly. As
an example, we expect the ruby-buildpack size to go from 922M to 338M.

We also want to set the expectation that, as new interpreter versions
are released, either for new features or (more urgently) for security
fixes, we'll release new buildpacks much more quickly than we do today. My
hope is that we'll be able to do it within 24 hours of a new release.
Planning

These changes will be relatively easy to make, since all the buildpacks
are now using a manifest.yml file to declare what's being packaged. We
expect to be able to complete this work within the next two weeks.

Stories are in the Tracker backlog under the Epic named
"skinny-buildpacks", which you can see here:

https://www.pivotaltracker.com/epic/show/1747328

------------------------------

Please let me know how these changes will impact you and your
organizations, and let me know of any counter-proposals or variations you'd
like to consider.

Thanks,

-mike



_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev



_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: Addressing buildpack size

Mike Dalessio
 

Jack,

On Mon, May 4, 2015 at 2:43 PM, Jack Cai <greensight(a)gmail.com> wrote:

+1

Thanks for the great work!

Over the next few days, the buildpacks core team will ship documentation
and tooling to assist you in packaging specific dependencies for your
instance of CF. I'll start a new thread on this list early next week to
communicate this information.

I hope this will be easy to customize as part of a BOSH release
configuration. Specifically, it would be even better if the cloud operator
could customize some of the binary download URLs in the configuration, so
that they can use their own binaries. As far as I know, many enterprises
only use legally cleared binary versions of open source components, hosted
inside their firewall. I understand that today this can be achieved by
modifying the manifest.yml in each buildpack. But it would be even better
if it could be done through some build/package configuration.

You're absolutely right; it would be tremendous if it were possible to do
this at the BOSH manifest level. I'm sure we'll get there eventually, but
there is obviously quite a bit of work to get there.

The good news, though, is that the best first step has already been made,
which was to extract dependencies out of the upstream buildpack code and
declare them in a buildpack manifest.

In the meantime, we'll do our best to make sure operator tools are
available and easy to use for manipulating the buildpack manifests and
creating custom buildpacks.
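As a rough sketch of what that operator workflow could look like with the current tooling (the repository, packager invocation, and zip filename below are assumptions and may differ by buildpack version):

```
# Start from the upstream buildpack and trim manifest.yml down to the
# dependencies your organization supports:
git clone https://github.com/cloudfoundry/go-buildpack && cd go-buildpack
$EDITOR manifest.yml

# Build a cached/offline zip with buildpack-packager (flags vary by version):
BUNDLE_GEMFILE=cf.Gemfile bundle install
BUNDLE_GEMFILE=cf.Gemfile bundle exec buildpack-packager offline

# Upload the result (filename is a placeholder) as an admin user:
cf create-buildpack go_buildpack go_buildpack-offline-custom.zip 1
```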




Jack





On Mon, May 4, 2015 at 1:28 PM, Onsi Fakhouri <ofakhouri(a)pivotal.io>
wrote:

The Go community tends to move fast to adopt the latest versions of Go.
I imagine we can drop 1.1 and 1.2 without impacting most people.

Has anyone on the list experienced otherwise?

onsi

On Mon, May 4, 2015 at 9:40 AM, Mike Dalessio <mdalessio(a)pivotal.io>
wrote:

Hi Wayne,

On Fri, May 1, 2015 at 1:29 PM, Wayne E. Seguin <
wayneeseguin(a)starkandwayne.com> wrote:

What an incredible step in the right direction, Awesome!!!

Out of curiosity, why is the go buildpack still quite so large?

Thanks for asking this question.

Currently we're including the following binary dependencies in
`go-buildpack`:

```
cache $ ls -lSh *_go*
-rw-r--r-- 1 flavorjones flavorjones 60M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.4.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 60M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.4.1.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 54M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.2.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 54M 2015-05-04 12:36
http___go.googlecode.com_files_go1.2.1.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 51M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.3.3.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 51M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.3.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 40M 2015-05-04 12:36
http___go.googlecode.com_files_go1.1.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 40M 2015-05-04 12:36
http___go.googlecode.com_files_go1.1.1.linux-amd64.tar.gz
```

One question we should ask, I think, is: should we still be supporting
golang 1.1 and 1.2? Dropping those versions would cut the size of the
buildpack in (approximately) half.





On May 1, 2015, at 11:54 , Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Skinny buildpacks have been cut for go, nodejs, php, python and ruby
buildpacks.

| | current | previous |
|--------+---------+----------|
| go | 442MB | 633MB |
| nodejs | 69MB | 417MB |
| php | 804MB | 1.1GB |
| python | 454MB | 654MB |
| ruby | 365MB | 1.3GB |
|--------+---------+----------|
| total | 2.1GB | 4.1GB |

for an aggregate 51% reduction in size. Details follow.
Next Steps

I recognize that every cloud operator may have a different policy on
what versions of interpreters and libraries they want to support, based on
the specific requirements of their users.

These buildpacks reflect a "bare minimum" policy for a cloud to be
operable, and I do not expect these buildpacks to be adopted as-is by many
operators.

These buildpacks have not yet been added to cf-release, specifically
so that the community can prepare their own buildpacks if necessary.

Over the next few days, the buildpacks core team will ship
documentation and tooling to assist you in packaging specific dependencies
for your instance of CF. I'll start a new thread on this list early next
week to communicate this information.
Call to Action

In the meantime, please think about whether the policy implemented in
these buildpacks ("last two patches (or teenies) on all supported
major.minor releases") is suitable for your users; and if not, think about
what dependencies you'll ideally be supporting.
go-buildpack v1.3.0

Release notes are here
<https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.3.0>.

Size reduced 30% from 633MB
<https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.2.0> to
442MB
<https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.3.0>.

Supports (full manifest here
<https://github.com/cloudfoundry/go-buildpack/blob/v1.3.0/manifest.yml>
):

- golang 1.4.{1,2}
- golang 1.3.{2,3}
- golang 1.2.{1,2}
- golang 1.1.{1,2}

nodejs-buildpack v1.3.0

Full release notes are here
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.3.0>.

Size reduced 83% from 417MB
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.2.1>
to 69MB
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.3.0>.

Supports (full manifest here
<https://github.com/cloudfoundry/nodejs-buildpack/blob/v1.3.0/manifest.yml>
):

- 0.8.{27,28}
- 0.9.{11,12}
- 0.10.{37,38}
- 0.11.{15,16}
- 0.12.{1,2}

php-buildpack v3.2.0

Full release notes are here
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.2.0>.

Size reduced 27% from 1.1GB
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.1.1> to
803MB
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.2.0>.

Supports: (full manifest here
<https://github.com/cloudfoundry/php-buildpack/blob/v3.2.0/manifest.yml>
)

*PHP*:

- 5.6.{6,7}
- 5.5.{22,23}
- 5.4.{38,39}

*HHVM* (lucid64 stack):

- 3.2.0

*HHVM* (cflinuxfs2 stack):

- 3.5.{0,1}
- 3.6.{0,1}

*Apache HTTPD*:

- 2.4.12

*nginx*:

- 1.7.10
- 1.6.2
- 1.5.13

python-buildpack v1.3.0

Full release notes are here
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.3.0>.

Size reduced 30% from 654MB
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.2.0>
to 454MB
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.3.0>.

Supports: (full manifest here
<https://github.com/cloudfoundry/python-buildpack/blob/v1.3.0/manifest.yml>
)

- 2.7.{8,9}
- 3.2.{4,5}
- 3.3.{5,6}
- 3.4.{2,3}

ruby-buildpack v1.4.0

Release notes are here
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.4.0>.

Size reduced 71% from 1.3GB
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.3.1>
to 365MB
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.4.0>.

Supports: (full manifest here
<https://github.com/cloudfoundry/ruby-buildpack/blob/v1.4.0/manifest.yml>
)

*MRI*:

- 2.2.{1,2}
- 2.1.{5,6}
- 2.0.0p645

*JRuby*:

- ruby-1.9.3-jruby-1.7.19
- ruby-2.0.0-jruby-1.7.19
- ruby-2.2.0-jruby-9.0.0.0.pre1


---------- Forwarded message ----------
From: Mike Dalessio <mdalessio(a)pivotal.io>
Date: Wed, Apr 8, 2015 at 11:10 AM
Subject: Addressing buildpack size
To: vcap-dev(a)cloudfoundry.org


Hello vcap-dev!

This email details a proposed change to how Cloud Foundry buildpacks
are packaged, with respect to the ever-increasing number of binary
dependencies being cached within them.

This proposal's permanent residence is here:

https://github.com/cloudfoundry-incubator/buildpack-packager/issues/4

Feel free to comment there or reply to this email.
------------------------------
Buildpack Sizes

Where we are today

Many of you have seen, and possibly been challenged by, the enormous
sizes of some of the buildpacks that are currently shipping with cf-release.

Here's the state of the world right now, as of v205:

php-buildpack: 1.1G
ruby-buildpack: 922M
go-buildpack: 675M
python-buildpack: 654M
nodejs-buildpack: 403M
----------------------
total: 3.7G

These enormous sizes are the result of the current policy of packaging
every-version-of-everything-ever-supported ("EVOEES") within the buildpack.

Most recently, this problem was exacerbated by the fact that buildpacks
now contain binaries for two rootfses.
Why this is a problem

If continued, buildpacks will only continue to increase in size,
leading to longer and longer build and deploy times, longer test times,
slacker feedback loops, and therefore less frequent buildpack releases.

Additionally, this also means that we're shipping versions of
interpreters, web servers, and libraries that are deprecated, insecure, or
both. Feedback from CF users has made it clear that many companies view
this as an unnecessary security risk.

This policy is clearly unsustainable.
What we can do about it

There are many things being discussed to ameliorate the impact that
buildpack size is having on the operations of CF.

Notably, Onsi has proposed a change to buildpack caching, to improve
Diego staging times (link to proposal
<https://github.com/pivotal-cf-experimental/diego-dev-notes/blob/master/proposals/better-buildpack-caching.md>
).

However, there is an immediate solution available, which addresses both
the size concerns as well as the security concern: packaging fewer binary
dependencies within the buildpack.
The proposal

I'm proposing that we reduce the binary dependencies in each buildpack
in a very specific way.

Aside on terms I'll use below:

- Versions of the form "1.2.3" are broken down as:
MAJOR.MINOR.TEENY. Many language ecosystems refer to the "TEENY" as "PATCH"
interchangeably, but we're going to use "TEENY" in this proposal.
- We'll assume that TEENY gets bumped for API/ABI compatible
changes.
- We'll assume that MINOR and MAJOR get bumped when there are
API/ABI *incompatible* changes.

I'd like to move forward soon with the following changes:

1. For language interpreters/compilers, we'll package the two
most-recent TEENY versions on each MAJOR.MINOR release.
2. For all other dependencies, we'll package only the single
most-recent TEENY version on each MAJOR.MINOR release.
3. We will discontinue packaging versions of dependencies that have
been deprecated.
4. We will no longer provide "EVOEES" buildpack releases.
5. We will no longer provide "online" buildpack releases, which
download dependencies from the public internet.
6. We will document the process, and provide tooling, for CF
operators to build their own buildpacks, choosing the dependencies that
their organization wants to support or creating "online" buildpacks at
operators' discretion.

An example for #1 is that we'll go from packaging 34 versions of node v0.10.x
to only packaging two: 0.10.37 and 0.10.38.

An example for #2 is that we'll go from packaging 3 versions of nginx 1.5
in the PHP buildpack to only packaging one: 1.5.12.

An example for #3 is that we'll discontinue packaging ruby 1.9.3 in the
ruby-buildpack, which reached end-of-life in February 2015.
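
A sketch for #6, using the buildpack-packager tooling mentioned in the proposal
(the flag and file names here are assumptions; the exact workflow will be in the
forthcoming documentation):

```
# Keep only the dependency versions your organization supports, then repackage.
git clone https://github.com/cloudfoundry/ruby-buildpack && cd ruby-buildpack
$EDITOR manifest.yml                  # trim the dependencies list
buildpack-packager --cached           # flag name is an assumption; builds an offline zip
cf create-buildpack ruby_buildpack ruby_buildpack-cached-v1.4.0.zip 1
```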
Outcomes

With these changes, the total buildpack size will be reduced greatly.
As an example, we expect the ruby-buildpack size to go from 922M to 338M.

We also want to set the expectation that, as new interpreter versions
are released, either for new features or (more urgently) for security
fixes, we'll release new buildpacks much more quickly than we do today. My
hope is that we'll be able to do it within 24 hours of a new release.
Planning

These changes will be relatively easy to make, since all the buildpacks
are now using a manifest.yml file to declare what's being packaged. We
expect to be able to complete this work within the next two weeks.
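
For context, a dependency entry in one of these manifest.yml files looks roughly
like the following (field names and the URI are illustrative, not authoritative);
packaging fewer versions is then a matter of trimming this list before the
buildpack is built:

```
language: ruby
dependencies:
  - name: ruby
    version: 2.2.2
    uri: https://example.com/dependencies/ruby-2.2.2-linux-x64.tgz
    md5: 0123456789abcdef0123456789abcdef
    cf_stacks:
      - cflinuxfs2
```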

Stories are in the Tracker backlog under the Epic named
"skinny-buildpacks", which you can see here:

https://www.pivotaltracker.com/epic/show/1747328

------------------------------

Please let me know how these changes will impact you and your
organizations, and let me know of any counter-proposals or variations you'd
like to consider.

Thanks,

-mike



_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev




Re: Addressing buildpack size

Mike Dalessio
 

Hi Wayne,

Thanks for thinking about this problem.

On Mon, May 4, 2015 at 2:10 PM, Wayne E. Seguin <
wayneeseguin(a)starkandwayne.com> wrote:

Because of very good compatibility between versions (post 1.X) I would
like to make a motion to do the following:

Split the buildpack:

have the default golang buildpack track the latest golang version

Then handle older versions in one of two ways, either:

a) have a large secondary for older versions

or

b) have multiple, one for each version of golang, users can specify a
specific URL if they care about specific versions.

This would improve space/time considerations for operations.

Which operations did you have in mind? Currently the DEAs download *all*
the buildpacks, so it won't save that operation when DEAs roll. Let me know
if you're thinking of something else?


Personally I would prefer b) because it allows you to enable supporting
older go versions out of the box by design but still keeping each golang
buildpack small.

I personally would like to see buildpacks have the option of being
stack-specific.

So the Ruby buildpack, for example, wouldn't have to package binaries for
both `cflinuxfs2` and `lucid64` (though this is complicated by the
additional presence of stack-agnostic packages like JRuby).

But if we did this, then you only have to use the buildpacks for the
stack(s) in your CF deployment. Because, really, asking a buildpack to
contain binaries for every supported stack isn't really a scalable
practice; though we get away with it in a world with only 1 or 2 stacks.



~Wayne

Wayne E. Seguin <wayneeseguin(a)starkandwayne.com>
CTO ; Stark & Wayne, LLC

On May 4, 2015, at 12:40 , Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Hi Wayne,

On Fri, May 1, 2015 at 1:29 PM, Wayne E. Seguin <
wayneeseguin(a)starkandwayne.com> wrote:

What an incredible step in the right direction, Awesome!!!

Out of curiosity, why is the go buildpack still quite so large?

Thanks for asking this question.

Currently we're including the following binary dependencies in
`go-buildpack`:

```
cache $ ls -lSh *_go*
-rw-r--r-- 1 flavorjones flavorjones 60M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.4.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 60M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.4.1.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 54M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.2.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 54M 2015-05-04 12:36
http___go.googlecode.com_files_go1.2.1.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 51M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.3.3.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 51M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.3.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 40M 2015-05-04 12:36
http___go.googlecode.com_files_go1.1.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 40M 2015-05-04 12:36
http___go.googlecode.com_files_go1.1.1.linux-amd64.tar.gz
```

One question we should ask, I think, is: should we still be supporting
golang 1.1 and 1.2? Dropping those versions would cut the size of the
buildpack in (approximately) half.








Deploying CF in AWS

Alberto A. Flores
 

Hi Team,

I’ve been following the instructions to deploy CF found here:

http://docs.cloudfoundry.org/deploying/ec2/deploy_aws_cf.html

and when doing step 3, it takes a good amount of time. It appears that passing a YML file to the “upload release” command causes the tarball to be created locally and then an upload to happen. This is described here (http://bosh.io/docs/uploading-releases.html). My question is about the download that occurs. I get a “FOUND REMOTE” on each package. Is this getting downloaded from somewhere? If so, from where?


-- 
Alberto Flores
Twitter: @albertoaflores


Re: [vcap-dev] Java OOM debugging

Lari Hotari <Lari@...>
 

In my case, it turned out to be essential to reserve enough memory for
"native" in the JBP. For the 2GB total memory, I set the minimum to
330M. With that setting I have been able to get over two weeks of uptime
so far.

I mentioned this in my previous email:
The workaround for that in my case was to add a native key under
memory_sizes in open_jdk_jre.yml and set the minimum to 330M (that is
for a 2GB total memory).
See the example at
https://github.com/grails-samples/java-buildpack/blob/22e0f6a/config/open_jdk_jre.yml#L25
That was how I got the app I'm running on CF to stay within the memory
bounds. I'm sure there is now also a way to get the keys without
forking the buildpack. I could have also adjusted the percentage
portions, but I wanted to set a hard minimum for this case.
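
For reference, a minimal sketch of the fragment being described, per the linked
example (treat the exact range syntax as an assumption to verify against the
buildpack version in use):

```
# config/open_jdk_jre.yml in a forked java-buildpack -- illustrative fragment
memory_sizes:
  native: 330m..   # reserve at least 330M of the 2GB container for native memory
```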
I've been trying to get some insight by diffing the reports gathered
from the meminfo servlet
https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/MemoryInfoServlet.groovy


Here is such an example of a diff:
https://gist.github.com/lhotari/ee77decc2585f56cf3ad#file-meminfo_diff_example-txt

meminfo has pmap output included to get the report of the memory map of
the process. I have just noticed that most of the memory has already
been mmap:ed from the OS and it's just growing in RSS size. For example:

< 00000000a7600000 1471488 1469556 1469556 rw--- [ anon ]
> 00000000a7600000 1471744 1470444 1470444 rw--- [ anon ]

The pmap output from lucid64 didn't include the RSS size, so you have to
use cflinuxfs2 for this. It's also better because of other reasons. The
glibc in lucid64 is old and has some bugs around the MALLOC_ARENA_MAX.

I was able to manually estimate the maximum RSS size that the Java
process will consume by simply picking the large anon blocks from the
pmap report and counting those blocks at their allocated virtual size
(VSS).
Based on this calculation, I picked the minimum of 330M for "native" in
open_jdk_jre.yml as I mentioned before.

It looks like these rows are for the Heap size:

< 00000000a7600000 1471488 1469556 1469556 rw--- [ anon ]
> 00000000a7600000 1471744 1470444 1470444 rw--- [ anon ]

It looks like the JVM doesn't fully allocate that block in RSS initially
and most of the growth of RSS size comes from that in my case. In your
case, it might be something different.

I also added a servlet for getting glibc malloc_info statistics in XML
format. I haven't really analysed that information because of time
constraints and because I don't have a pressing problem any more. By the
way, the malloc_info XML report is missing some key elements that have
been added in later glibc versions
(https://github.com/bminor/glibc/commit/4d653a59ffeae0f46f76a40230e2cfa9587b7e7e).

If killjava.sh never fires and the app crashes with Warden out of memory
errors, then I believe it's the kernel's cgroups OOM killer that has
killed the container processes. I have found the location where the Warden
OOM notifier gets the OOM notification event:
https://github.com/cloudfoundry/warden/blob/ad18bff/warden/lib/warden/container/features/mem_limit.rb#L70
This is the oom.c source code:
https://github.com/cloudfoundry/warden/blob/ad18bff7dc56acbc55ff10bcc6045ebdf0b20c97/warden/src/oom/oom.c
It reads the cgroups control files and receives events from the kernel
that way.
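
To check that theory on a DEA host, the container's memory cgroup can be inspected
directly. A rough sketch, assuming the cgroup v1 memory controller; the exact
cgroup path for a warden container is an assumption and depends on the deployment:

```
# Illustrative only: the cgroup path layout for a warden container is assumed.
handle=some-container-handle
CGROUP="/sys/fs/cgroup/memory/instance-${handle}"
cat "$CGROUP/memory.limit_in_bytes"   # enforced limit
cat "$CGROUP/memory.usage_in_bytes"   # current usage
cat "$CGROUP/memory.failcnt"          # greater than zero means the limit has been hit
grep -E 'total_rss|total_cache' "$CGROUP/memory.stat"
```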

I'd suggest that you use pmap for the Java process after it has started
and estimate the maximum RSS size by summing the VSS size of the large
anon blocks (instead of their current RSS) that the Java process has
reserved for its different memory areas. You should, however, exclude
the VSS of the CompressedClassSpaceSize block from this sum.
After this calculation, add enough memory to the "native" parameter in
the JBP until the RSS size calculated this way stays under the limit.
That's the only "method" I have come up with so far.
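
A rough shell sketch of that estimate (it assumes the procps `pmap -x` column
layout of Address/Kbytes/RSS/Dirty/Mode/Mapping, and the 10MB cut-off for
"large" blocks is arbitrary):

```
# Sum the virtual size (Kbytes column) of large anonymous mappings of the Java
# process, as a crude upper bound on the RSS it may eventually grow into.
pid=$(pgrep -f java | head -n 1)
pmap -x "$pid" | awk '/anon/ && $2 > 10240 { sum += $2 } END { printf "~%d MB\n", sum / 1024 }'
```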

It might also be necessary to account for some RSS space for any zip/jar
files read by the Java process. I think that Java memory-maps files for
zip/jar reading by default, and that might go on top of all other limits.
To test this theory, I'd suggest adding the
-Dsun.zip.disableMemoryMapping=true system property to
JAVA_OPTS. That disables the native mmap for zip/jar file reading. I
haven't had time to test this assumption.

I guess the only way to understand how Java allocates memory is to look
at the source code.
From http://openjdk.java.net/projects/jdk8u/ , the instructions to get
the source code of JDK 8 are:
hg clone http://hg.openjdk.java.net/jdk8u/jdk8u;cd jdk8u;sh get_source.sh
This tool is really good for grepping and searching the source code:
http://geoff.greer.fm/ag/
On Ubuntu it's in the silversearcher-ag package ("apt-get install
silversearcher-ag") and on Mac OS X with Homebrew it's "brew install
the_silver_searcher".
This alias is pretty useful:
alias codegrep='ag --color --group --pager less -C 5'
Then you just search for the right location in the code by starting with
the tokens you know about:
codegrep MaxMetaspaceSize
This gives pretty good starting points for seeing how the JDK allocates
memory.

So the JDK source code is only a few commands away.

It would be interesting to hear more about this if someone has the time
to dig into it. This is about how far I got, and I hope sharing this
information helps someone continue. :)


Lari
github/twitter: lhotari



Re: [vcap-dev] Java OOM debugging

Daniel Jones
 

Hi Lari et al,

Thanks for your help Lari.

David and I are pairing on this issue, and we're yet to resolve it. We're
in the process of creating a repeatable test case (our most crashy app
makes calls to external services that need mocking), but in the meantime,
here's what we've seen.

Between Java Buildpack commit e89e546 and 17162df, we see apps crashing
with Warden out of memory errors. killjava.sh never fires, and this has led
us to believe that the kernel is shooting a cgroup process in the head
after the cgroup oversteps its memory limit. We cannot find any evidence of
the OOM killer firing in any logs, but we may not be looking in the right
place.

The JBP is setting heap to be 70%, metaspace to be 15% (with max set to the
same as initial), 5% for "stack", 5% for "normalised stack" and 10% for
"native". We do not understand why this adds up to 105%, but haven't looked
into the JBP algorithm yet. Any pointers on what "normalised stack" is
would be much appreciated, as this doesn't appear in the list of heuristics
supplied via app env.

Other team members tried applying the same settings that you suggested -
thanks for this. Apps still crash with these settings, albeit less
frequently.

After reading the blog you linked to (
http://java.dzone.com/articles/java-8-permgen-metaspace) we wondered
whether the increased *reserved* metaspace claimed after metaspace GC might
be causing a problem; however we reused the test code to create a metaspace
leak in a CF app and saw metaspace GCs occur correctly, and memory usage
never grow over MaxMetaspaceSize. This figures, as the committed metaspace
is still less than MaxMetaspaceSize, and the reserved appears to be
whatever RAM is free across the whole DEA.

We noted that an Oracle blog (
https://blogs.oracle.com/poonam/entry/about_g1_garbage_collector_permanent)
mentions that the metaspace size parameters are approximate. We're
currently wondering if native allocations by Tomcat (APR, NIO) are taking
up more container memory, and so when the metaspace fills, it's creeping
slightly over the limit and triggering the kernel's OOM killer.

Any suggestions would be much appreciated. We've tried to resist tweaking
heuristics blindly, but are running out of options as we're struggling to
figure out how the Java process is using *committed* memory. pmap seems to
show virtual memory, and so it's hard to see if things like the metaspace
or NIO ByteBuffers are nabbing too much and trigger the kernel's OOM killer.

Thanks for all your help,

Daniel Jones & David Head-Rapson

On Wed, Apr 29, 2015 at 8:07 PM, Lari Hotari <Lari(a)hotari.net> wrote:

Hi,

I created a few tools to debug OOM problems since the application I was
responsible for running on CF was failing constantly because of OOM
problems. The problems I had, turned out not to be actual memory leaks in
the Java application.

In the "cf events appname" log I would get entries like this:
2015-xx-xxTxx:xx:xx.00-0400 app.crash appname index: 1,
reason: CRASHED, exit_description: out of memory, exit_status: 255

These types of entries are produced when the container goes over its
memory resource limits. It doesn't mean that there is a memory leak in the
Java application. The container gets killed by the Linux kernel oom killer (
https://github.com/cloudfoundry/warden/blob/master/warden/README.md#limit-handle-mem-value)
based on the resource limits set to the warden container.

The memory limit is specified in number of bytes. It is enforced using the
control group associated with the container. When a container exceeds this
limit, one or more of its processes will be killed by the kernel.
Additionally, the Warden will be notified that an OOM happened and it
subsequently tears down the container.

In my case it never got killed by the killjava.sh script that gets called
in the java-buildpack when an OOM happens in Java.

This is the tool I built to debug the problems:
https://github.com/lhotari/java-buildpack-diagnostics-app
I deployed that app as part of the forked buildpack I'm using.
Please read the readme about what its limitations are. It worked for me,
but it might not work for you. It's opensource and you can fork it. :)

There is a solution in my toolcase for creating a heapdump and uploading
that to S3:

https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/HeapDumpServlet.groovy
The readme explains how to setup Amazon S3 keys for this:
https://github.com/lhotari/java-buildpack-diagnostics-app#amazon-s3-setup
Once you get a dump, you can then analyse the dump in a java profiler tool
like YourKit.

I also have a solution that forks the java-buildpack, modifies killjava.sh,
and adds a script that uploads the heapdump to S3 in the case of OOM:

https://github.com/lhotari/java-buildpack/commit/2d654b80f3bf1a0e0f1bae4f29cb85f56f5f8c46

In java-buildpack-diagnostics-app I also have other tools for getting
Linux operating-system-specific memory information, for example:


https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/MemoryInfoServlet.groovy

https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/MemorySmapServlet.groovy

https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/MallocInfoServlet.groovy

These tools are handy for looking at details of the Java process RSS
memory usage growth.

There is also a solution for getting ssh shell access inside your
application with tmate.io:

https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/TmateSshServlet.groovy
(this version is only compatible with the new "cflinuxfs2" stack)

It looks like there are serious problems on CloudFoundry with the memory
sizing calculation. An application that doesn't have an OOM problem will get
killed by the oom killer because the Java process will go over the memory
limits.
I filed this issue:
https://github.com/cloudfoundry/java-buildpack/issues/157 , but that
might not cover everything.

The workaround for that in my case was to add a native key under
memory_sizes in open_jdk_jre.yml and set the minimum to 330M (that is for a
2GB total memory).
see example
https://github.com/grails-samples/java-buildpack/blob/22e0f6a/config/open_jdk_jre.yml#L25
that was how I got the app I'm running on CF to stay within the memory
bounds. I'm sure there is now also a way to get the keys without forking
the buildpack. I could have also adjusted the percentage portions, but I
wanted to set a hard minimum for this case.

It was also required to do some other tuning.

I added this to JAVA_OPTS:
-XX:CompressedClassSpaceSize=256M -XX:InitialCodeCacheSize=64M
-XX:CodeCacheExpansionSize=1M -XX:CodeCacheMinimumFreeSpace=1M
-XX:ReservedCodeCacheSize=200M -XX:MinMetaspaceExpansion=1M
-XX:MaxMetaspaceExpansion=8M -XX:MaxDirectMemorySize=96M
while trying to keep the Java process from growing in RSS memory size.

The memory overhead of a 64 bit Java process on Linux can be reduced by
specifying these environment variables:

stack: cflinuxfs2
.
.
.
env:
MALLOC_ARENA_MAX: 2
MALLOC_MMAP_THRESHOLD_: 131072
MALLOC_TRIM_THRESHOLD_: 131072
MALLOC_TOP_PAD_: 131072
MALLOC_MMAP_MAX_: 65536

MALLOC_ARENA_MAX works only on cflinuxfs2 stack (the lucid64 stack has a
buggy version of glibc).

explanation about MALLOC_ARENA_MAX from Heroku:
https://devcenter.heroku.com/articles/tuning-glibc-memory-behavior
some measurement data how it reduces memory consumption:
https://devcenter.heroku.com/articles/testing-cedar-14-memory-use

I have created a PR to add this to CF java-buildpack:
https://github.com/cloudfoundry/java-buildpack/pull/160

I also created issues
https://github.com/cloudfoundry/java-buildpack/issues/163 and
https://github.com/cloudfoundry/java-buildpack/pull/159 .

I hope this information helps others struggling with OOM problems in CF.
I'm not saying that this is a ready made solution just for you. YMMV. It
worked for me.

-Lari




On 15-04-29 10:53 AM, Head-Rapson, David wrote:

Hi,

I’m after some guidance on how to profile Java apps in CF, in order to
get to the bottom of memory issues.

We have an app that’s crashing every few hours with OOM error, most likely
it’s a memory leak.

I’d like to profile the JVM and work out what’s eating memory, however
tools like yourkit require connectivity INTO the JVM server (i.e. the
warden container), either via host / port or via SSH.

Since warden containers cannot be connected to on ports other than for
HTTP and cannot be SSHd to, neither of these works for me.



I tried installing a standalone JDK onto the warden container; however, as
soon as I ran ‘jmap’ to invoke the dump, warden cleaned up the container –
most likely for memory over-consumption.



I had previously found a hack in the Weblogic buildpack (
https://github.com/pivotal-cf/weblogic-buildpack/blob/master/docs/container-wls-monitoring.md)
for modifying the start script which, when used with
–XX:HeapDumpOnOutOfMemoryError, should copy any heapdump files to a file
share somewhere. I have my own custom buildpack so I could use something
similar.

Has anyone got a better solution than this?



We would love to use newrelic / app dynamics for this however we’re not
allowed. And I’m not 100% certain they could help with this either.



Dave






_______________________________________________
Cf-dev mailing list
Cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

--
Regards,

Daniel Jones
EngineerBetter.com


Re: Is there an auto-completion script?

Takeshi Morikawa
 

Hi Daniel

I found this

cf(cli) completion
https://github.com/cf-buildpacks/cf_completion

bosh cli completion
https://github.com/anfernee/bosh-completion

Is my answer what you're hoping for?

2015-05-08 14:28 GMT+09:00 Daniel Kaplan <dkaplan(a)pivotal.io>:

Hi DevList,

I think it would be extra convenient if there was Cloud Foundry
auto-completion script that worked similar to the way git's git-completion
<https://github.com/git/git/blob/master/contrib/completion/git-completion.bash>
works.

Does one already exist? If not, I might write it in my free time. Let me
know your thoughts.

Thanks,
Dan

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Is there an auto-completion script?

Daniel Kaplan
 

Hi DevList,

I think it would be extra convenient if there was Cloud Foundry
auto-completion script that worked similar to the way git's git-completion
<https://github.com/git/git/blob/master/contrib/completion/git-completion.bash>
works.

Does one already exist? If not, I might write it in my free time. Let me
know your thoughts.

Thanks,
Dan
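

For anyone who wants to experiment in the meantime, a minimal bash sketch along
these lines (hypothetical: it naively scrapes command names from `cf help`
output, so expect some noise; the cf_completion project linked in the reply
above is a more complete option):

```
# Minimal cf tab-completion sketch for bash; source it from your shell profile.
# It only completes top-level command names, not flags or arguments.
_cf_complete() {
  local cur cmds
  cur="${COMP_WORDS[COMP_CWORD]}"
  cmds=$(cf help 2>/dev/null | awk '$1 ~ /^[a-z][a-z-]+$/ { print $1 }')
  COMPREPLY=( $(compgen -W "${cmds}" -- "${cur}") )
}
complete -F _cf_complete cf
```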


Meeting Minutes for Services PMC 2015-05-07

Shannon Coen
 

https://docs.google.com/document/d/10aOoLF_FPxuHYQfI813VCTF9z2VRC_dIuYDIGND3xbU/edit?usp=sharing

Highlights:
1. Two projects were approved for incubation: Brooklyn service broker from
Cloudsoft, and an MSSQL Server service broker from HP.
2. Updates provided for the Service Enablement and Notifications projects.

Best,

Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.


Re: Logging Infrastructure for CF components

Erik Jasiak <ejasiak@...>
 

Hi Ronak,

We're trying to make sure we understand the question here -

* Component logs do not go through loggregator today, only to the rsyslog
endpoint designated.

* All app logs would be available to an operator via the firehose.

* " it is more like pulling model from end component side so if components
before it are slow in sending logs then there will be buffers" - component
logs again are not part of the firehose - even then, there would be
buffers, but all messages are timestamped (if you're worried about
ordering?)

* "is there a need to implement something in between doppler and traffic
controller to push all the application logs " - not sure I understand, but
though you need at least a doppler in every AZ that a metron is in today,
doppler can speak to traffic controllers cross-AZ.

Did that help?
Erik

On Wed, May 6, 2015 at 7:48 PM, ronak banka <ronakbanka.cse(a)gmail.com>
wrote:

Hi everyone,

I have some queries regarding persistent storage of application logs and
cf component logs.

As per my understanding

-->For application logs:
we can send the application logs to doppler with help of metron agent and
further stream using traffic controller (User Side).

-->For CF component syslog:
We can send cf component syslog via metron to custom syslog endpoint
(followed by parsing and other mining stuff)

On the operator side how can we store "Application logs for all the
applications" to a persistent storage??

If I look at the firehose (or using noaa to get all the logs), it is more like
pulling model from end component side so if components before it are slow
in sending logs then there will be buffers .

Application logs are distributed on different doppler nodes based on AZ of
metron and doppler itself , so is there a need to implement something in
between doppler and traffic controller to push all the application logs ??

Thanks
Ronak Banka

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: Utilities PMC - 2015-05-05 Notes

Mike Dalessio
 

Hi all,

In response to several suggestions, I've moved the Utilities PMC notes into
markdown files in a github repo.

I've created this public github repo:

https://github.com/cloudfoundry/pmc-notes


and the Utilities PMC notes will be within it, at:

https://github.com/cloudfoundry/pmc-notes/tree/master/Utilities


I've added a document to the GDrive directing visitors to the Github repo.


Cheers,
-m

On Wed, May 6, 2015 at 3:24 PM, Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Hey everyone,

We held the first Utilities PMC meeting yesterday; I'd like to share the
agenda and notes.

For reference, all agendas and notes for the Utilities PMC will be kept in a
public Google Drive folder at this URL:

http://bit.ly/cf-utilities-pmc


I realize GDrive isn't the most convenient medium for some in the CF
community; I'd love to hear how we can better support transparency for
everyone.

Please feel free to respond with comments and questions!

Cheers,
-m

---

*Attendees:*

- Chip Childers, Cloud Foundry Foundation
- Mike Dalessio, Pivotal (PMC lead)
- Christopher Ferriss, IBM
- Michael Fraenkel, IBM
- James Bayer, Pivotal
- Greg Oehmen, Pivotal
- Ryan Morgan, PIvotal


Utilities PMC Agenda and Notes - 2015-05-05


1. Update on CI tools (Mike Dalessio)
2. Update on CLI (Greg Oehmen)
3. Update on Eclipse plugin and Java tools (Ryan Morgan)
4. Open Discussion



Update on CI tools (Mike Dalessio)

GoCD <http://www.go.cd/> still in use for some projects, but there’s
movement towards Concourse <http://concourse.ci/> and teams are
enthusiastic about it. Currently Diego, Garden, BOSH-lite, Loggregator, and
CLI have converted to Concourse; and BOSH, Services API, and Buildpacks are
in progress.

Timeline is open for individual teams to move to Concourse; some teams may
decide not to. Having a heterogenous CI environment is OK, as both GoCD and
Concourse can integrate via S3 buckets, which is where generated artifacts
are generally kept.


Update on CLI (Greg Oehmen)


Released 6.11.0 - 4/17

Released 6.11.1 - 4/20

Released 6.11.2 - 4/28

Big uptick in issues/PRs

Plugin API feature

Look Ahead:

1. help refactor work,

- refactor help

- invert syntax (object - action)

- tab/bash completion

2. support the move to cc API 3.0 and services api changes

3. the user security work (pwd expiration, inactivity-based session
timeout, RBAC maturation, etc.)

4. installer emphasis

- Auto-update within CLI

- signed mac installer

- signed windows installer

- etc

5. APM integration - something like blessed-contrib:
https://github.com/yaronn/blessed-contrib


Update on Eclipse plugin and Java tools (Ryan Morgan)

CF Eclipse Tooling: (1 dev at Pivotal, 4 splitting time at IBM)

- 1.8.0 (Released Feb 13th)
  - New Service wizard allowing for multiple service creation
  - Remote debug support via ngrok.com
- 1.8.1 (Released March 25th)
  - Map/Unmap project feature to map an existing eclipse workspace to an app
  - Update password fixes
  - Free service plans now marked in the UI and preferred over paid plans
- 1.8.2 (Release imminent)
  - JRebel support
  - Working on some last minute UI changes
- Working on a proposal to move the Eclipse tooling to the Eclipse Foundation
  - Should have a proposal for review mid-late May. Targeting Eclipse 4.5 SR1 update in the fall. Lots of work to be done to make that deadline.


CF Java Client: (1 dev at Pivotal, splitting time)

- 1.1.2 Released April 13th
- No active development, PRs and Issues reviewed on-demand
- Support of CC v3
- Removal of Spring dependencies (v2.0 item)



Open Discussion

Please add any other suggested agenda topics for discussion here:

*Imminent additions to the Utilities PMC from HP (Chip).*

Voting took place via email on 2015-05-05 with unanimous consent to add
the following to the Utilities PMC as incubating projects:


- CF .NET SDK https://github.com/hpcloud/cf-dotnet-sdk
- CF Visual Studio Extension
https://github.com/hpcloud/cf-vs-extension-wpf
(will be renamed to https://github.com/hpcloud/cf-vs-extension)
- CF MSBuild Tasks https://github.com/hpcloud/cf-msbuild-tasks





Re: Buildpacks PMC - 2015-05-04 Notes

Mike Dalessio
 

Hi all,

In response to several suggestions, I've moved the Buildpacks notes into
markdown files in a github repo.

I've created this public github repo:

https://github.com/cloudfoundry/pmc-notes


and the Buildpacks PMC notes will be within it, at:

https://github.com/cloudfoundry/pmc-notes/tree/master/Buildpacks


I've added a document to the GDrive directing visitors to the Github repo.


Cheers,
-m

On Mon, May 4, 2015 at 1:50 PM, Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Hi all,

We held the first Buildpacks PMC meeting today; I'd like to share the
agenda and notes.

For reference, all agendas and notes for the Buildpacks PMC will be kept in a
public Google Drive folder at this URL:

http://bit.ly/cf-buildpacks-pmc


I realize GDrive isn't the most convenient medium for some in the CF
community; I'd love to hear how we can better support transparency for
everyone.

Please feel free to respond with comments and questions!

Cheers,
-m

----

Attendees:

- Chip Childers, Cloud Foundry Foundation
- Mike Dalessio, Pivotal (PMC lead)
- Christopher Ferriss, IBM
- Michael Fraenkel, IBM
- Mark Kropf, Pivotal



Recent Inception Report and Stated Goals

The Buildpacks core development team held a project inception on
2015-04-20, to gain a shared understanding of upcoming goals and tracks of
work.


Goals


- Expand supported ecosystem to include more languages & frameworks
- Cloud Foundry ownership of Buildpacks
- Leverage new primitives in Diego (“app lifecycle”)
- Enable 3rd party extensions to the Developer experience
- Enable application developer extensions to the Developer
experience
- Set patterns for creating new buildpacks and for extending the
Developer experience
- Generate clearer diagnostics during staging
- Enable Operator ease of updating common dependencies
- Keep the `bin/detect` experience: buildpacks should Just Work™
- Exert more ownership over the rootfs
- Binary buildpack support


Risks


- java-buildpack is diverging quickly from the core buildpacks
- Lack of deep experience in some ecosystems
- Wide variety in implementations across buildpacks
- rootfs: with great power comes great responsibility (e.g.,
security response)
- tight coupling between buildpacks and rootfs
- versioning between buildpacks and rootfs


Current Backlog and Priorities

See https://www.pivotaltracker.com/n/projects/1042066

Notable near-term goals:


- staticfile-buildpack support in `cf-release`
- binary buildpack (a.k.a. “null buildpack”) support in `cf-release`
- ability to generate and test CF rootfs-specific binaries; and tooling for CF operators to do the same



Proposal: Buildpack Incubation Process

Discussion today for PMC input; a draft document will be circulated for
comment to the cf-dev@ mailing list after the meeting, in a separate thread.




Re: Failed to start Native apps in CF using null-build pack

JT Archie <jarchie@...>
 

Balaramaraju,

It looks from your *CF logs* like you are pushing the app with an incorrect
start command; most likely it still carries the "web:" meta information from
a Procfile.

Please try deploying the application like this:

cf push helloWorld2 -b https://github.com/ryandotsmith/null-buildpack --no-route -c "helloWorld2.sh" -s lucid64

If you experience further problems with the app not starting, please try the
new binary-buildpack <https://github.com/cloudfoundry/binary-buildpack>
that cf-release will start supporting.
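
Something along these lines should then do it (a sketch only; it assumes the
buildpack can be referenced by its Git URL and that helloWorld2.sh sits at
the root of the directory you push):

# push the prebuilt binary with the binary buildpack and run it from the app root
cf push helloWorld2 -b https://github.com/cloudfoundry/binary-buildpack --no-route -c "./helloWorld2.sh"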

Kind regards,

JT

On Thu, May 7, 2015 at 5:34 AM, Balaramaraju JLSP <balaramaraju(a)gmail.com>
wrote:

Hi All,


We are unable to push a sample C++ application using null-buildpack; it seems
to have worked for others (as documented here:
https://groups.google.com/a/cloudfoundry.org/forum/#!searchin/vcap-dev/null-buildpack/vcap-dev/oTYbHg_JJXU/_e30a2m3qr4J),
but we are not able to get it to work yet.



Steps followed :



1. Build a sample C++ application using the g++ compiler on a Linux VM;

2. Transfer that file to a Windows system;

3. Push that application using null-buildpack to both Pivotal CF


Source :-


#include <stdio.h>

int main(int argc, char* argv[]) {
    while (1 == 1) {
        printf("Hello World\n");
    }
    return 0;
}

Build Command :- gcc -Wall helloWorld.c -o bin/helloWorld.sh

OS :- Cent OS 6.5 x64

CF command [from win 7] :- D:\Cloud\Native>cf push helloWorld2 -b
https://github.com/ryandotsmith/null-buildpack --no-route -c "helloWorld2.sh"



CF logs :-


D:\Cloud\Native>cf push helloWorld2 -b https://github.com/ryandotsmith/null-buildpack --no-route -c "web: helloWorld2.sh" -s lucid64

Using stack lucid64...
OK
Updating app helloWorld2 in org rootOrg / space development as .
OK

App helloWorld2 is a worker, skipping route creation
Uploading helloWorld2...
Uploading app files from: D:\Cloud\Native
Uploading 6.6K, 2 files
OK

Stopping app helloWorld2 in org rootOrg / space development as .
OK

Starting app helloWorld2 in org rootOrg / space development as ...
OK
-----> Downloaded app package (4.0K)
-----> Downloaded app buildpack cache (4.0K)
       Cloning into '/tmp/buildpacks/null-buildpack'...
       -----> Nothing to do.
-----> Uploading droplet (4.0K)

0 of 1 instances running, 1 down
0 of 1 instances running, 1 down
0 of 1 instances running, 1 down


Any help offered is greatly appreciated!!!

Does this native application need to be built on Ubuntu alone, since that is
what CF uses?
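
Would statically linking the binary sidestep the libc question altogether?
Something like this, perhaps (untested on our side, and assuming gcc is
available on the CentOS box):

# build a self-contained binary; static linking avoids depending on the stack's glibc
gcc -Wall -static helloWorld.c -o helloWorld2.sh
chmod +x helloWorld2.sh   # the executable bit must survive into the droplet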

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: Discrepancy between `cf apps` memory usage and cgroup's memory.usage_in_bytes in CF 2.13.0

Daniel Jones
 

Aha - just found it, and it does indeed tally up. Thanks again!
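
For anyone else following along: as far as I can tell, warden fills in those
counters from the container cgroup's memory.stat file, and (if I'm reading
stat_collector.rb right) the DEA reports total_rss + total_cache -
total_inactive_file. Since memory.usage_in_bytes also counts reclaimable
inactive page cache, it hugs the limit while the reported figure stays lower.
A rough cross-check on the DEA, using the same path as my earlier `cat`
(treat this as a sketch, not gospel):

# recompute the DEA-style figure straight from the cgroup's memory.stat
awk '$1=="total_rss" || $1=="total_cache" {sum+=$2}
     $1=="total_inactive_file"            {sum-=$2}
     END {print sum}' tmp/warden/cgroup/memory/instance-id/memory.stat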

On Thu, May 7, 2015 at 3:06 PM, Daniel Jones <
daniel.jones(a)engineerbetter.com> wrote:

Thanks for the reply Matthew!

We saw that bit of code when we were following it through. When we got as
far as the call from the DEA's Warden client to the Warden server, we
struggled to find where those stats (total_rss, total_cache,
total_inactive_file) came from. Do you happen to know what the source of
truth is for those data?

Thanks for your help.

On Thu, May 7, 2015 at 1:58 PM, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

The reported statistic is calculated here:


https://github.com/cloudfoundry/dea_ng/blob/310797e1097dcd5531bff4077ccd8f02f6091219/lib/dea/stat_collector.rb#L92-L94

On Thu, May 7, 2015 at 8:28 AM, Daniel Jones <
daniel.jones(a)engineerbetter.com> wrote:

Hi all,

Whilst investigating the Java Buildpack out-of-memory issues David
Head-Rapson mailed about the other day, we discovered a discrepancy between
the memory usage stat provided by `cf app` and the value stored in the
corresponding cgroup's `memory.usage_in_bytes` file. The latter seems to be
bumping right along the maximum allowed.


- We did a `cf app`, and got a memory stat of 847.6MiB of 896MiB.
- We got the appId from CF_TRACE, `bosh ssh`'d onto the right DEA
- We then did `cat
tmp/warden/cgroup/memory/instance-id/memory.usage_in_bytes` and got
939,515,904, which equates to 895.99ish MiB.

Does anyone know why the latter is so high, and why it would differ from
what the DEA reports back to the Cloud Controller? There's clearly a gap in
our understanding somewhere, so any help would be much appreciated.

Many thanks,

Daniel Jones
EngineerBetter.com

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Matthew Sykes
matthew.sykes(a)gmail.com

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Regards,

Daniel Jones
EngineerBetter.com


--
Regards,

Daniel Jones
EngineerBetter.com


Re: Discrepancy between `cf apps` memory usage and cgroup's memory.usage_in_bytes in CF 2.13.0

Daniel Jones
 

Thanks for the reply Matthew!

We saw that bit of code when we were following it through. When we got as
far as the call from the DEA's Warden client to the Warden server, we
struggled to find where those stats (total_rss, total_cache,
total_inactive_file) came from. Do you happen to know what the source of
truth is for those data?

Thanks for your help.

On Thu, May 7, 2015 at 1:58 PM, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

The reported statistic is calculated here:


https://github.com/cloudfoundry/dea_ng/blob/310797e1097dcd5531bff4077ccd8f02f6091219/lib/dea/stat_collector.rb#L92-L94

On Thu, May 7, 2015 at 8:28 AM, Daniel Jones <
daniel.jones(a)engineerbetter.com> wrote:

Hi all,

Whilst investigating the Java Buildpack out-of-memory issues David
Head-Rapson mailed about the other day, we discovered a discrepancy between
the memory usage stat provided by `cf app` and the value stored in the
corresponding cgroup's `memory.usage_in_bytes` file. The latter seems to be
bumping right along the maximum allowed.


- We did a `cf app`, and got a memory stat of 847.6MiB of 896MiB.
- We got the appId from CF_TRACE, `bosh ssh`'d onto the right DEA
- We then did `cat
tmp/warden/cgroup/memory/instance-id/memory.usage_in_bytes` and got
939,515,904, which equates to 895.99ish MiB.

Does anyone know why the latter is so high, and why it would differ from
what the DEA reports back to the Cloud Controller? There's clearly a gap in
our understanding somewhere, so any help would be much appreciated.

Many thanks,

Daniel Jones
EngineerBetter.com

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Matthew Sykes
matthew.sykes(a)gmail.com

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Regards,

Daniel Jones
EngineerBetter.com
