
Re: [abacus-perf] Persisting Metrics performance

Jean-Sebastien Delfino
 

Hi Kevin,

One of the things I want to do is to persist the performance metrics
collected by abacus-perf.

Interesting feature! It'd be good to understand what you're trying to do
with that data (I think a similar question was asked before), as that'll
help us provide better implementation suggestions.

The first solution is to use Turbine...
...
The problem with this is that Turbine persists the metrics data
when there is no activity in the application, and it flushes the metrics
data when new stats come in.

Not sure I'm following here. Can you give a bit more details to help us
understand how Turbine alters the stats and what problems that causes in
your collection + store logic?

Another solution is to mimic what the hystrix module is doing: Instead of streaming
the metrics to the hystrix dashboard, I would post to the database.

The abacus-hystrix module responds to GET /hystrix.stream requests, and
doesn't do anything unless a monitor requests the stats. I'm not sure that
pro-actively posting the stats to a DB from each app instance will work so
well... as IMO that'll generate a lot of DB traffic from all these app
instances, will slow down these apps, and won't give you aggregated stats
at the app level anyway (more on that below, however).

Here are a few more suggestions for you:

a) Give us a bit more context on how you intend to use the data you're
storing... if this is for use with Graphite, for example, there are already
a number of blog posts out there that cover that; if you'd like to store
the data in ELK for searching, then you might want to log these metrics and
flow them to ELK as part of your logs; if you'd like to store the data in
your own DB and render it using custom-made dashboards later, then we can
explore other solutions...

b) Try to leverage the current flow (with app instances providing stats on
demand at a /hystrix.stream endpoint and an external monitoring app
collecting these stats) rather than creating yet another completely
different flow; looking at the Hystrix wiki, it looks like that's what most
Hystrix integrations do (incl. the ones used to collect and store stats
into Graphite, for example).
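
To make (b) concrete: a monitoring app in this flow just consumes
server-sent events from each instance's /hystrix.stream endpoint. A minimal
Node.js sketch of parsing those events before storing them (the field names
in the sample event are illustrative, not the full Hystrix schema):

```javascript
// Parse one line from a /hystrix.stream response into an event object,
// or return null for keepalives ("ping:") and blank lines.
const parseHystrixLine = (line) => {
  const trimmed = line.trim();
  if (!trimmed.startsWith('data:')) return null; // skip pings and blanks
  return JSON.parse(trimmed.slice('data:'.length).trim());
};

// Example: split a chunk of the stream into events ready to store.
const chunk = [
  'ping: ',
  'data: {"type":"HystrixCommand","name":"db-op","requestCount":3}',
  ''
].join('\n');

const events = chunk.split('\n').map(parseHystrixLine).filter(Boolean);
console.log(events.length, events[0].name); // prints: 1 db-op
```

A real collector would wrap this in an HTTP client that keeps the stream
open per instance and batches the parsed events into DB writes.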

c) Decide whether you want to store aggregations of stats from multiple
app instances (in that case, understand how you can configure or 'fix'
Turbine so it doesn't alter the semantics of the original instance-level
stats, or understand how/when to store the aggregated Turbine stats), or
whether it's actually better to store stats from individual app
instances... I'd probably favor the latter: collect and store the
individual instance data in a DB and aggregate/interpret at rendering time
later.

d) Investigate the CF firehose to see if it could help flow the metrics
you've collected to consumers that'll store them in your DBs; the firehose
will definitely be in the loop if you decide to flow the metrics with your
logs, in which case you can probably just connect a firehose nozzle to it
that will store the selected metrics to your DB.

HTH

- Jean-Sebastien

On Thu, Nov 12, 2015 at 2:36 PM, KRuelY <kevinyudhiswara(a)gmail.com> wrote:

Hi,

One of the things I want to do is to persist the performance metrics
collected by abacus-perf. What would be the best way to do this? I've been
through some solutions, but none of them seems to be the "correct"
one.

The scenario is this: I have an application running, and there are 2
instances of this application currently running.

To collect my application's performance metrics, I need to aggregate the
metrics data collected by each instance's abacus-perf and store them in a
database.

The first solution is to use Turbine. Using Eureka to keep track of each
instance's IP address, I can configure Turbine to use Eureka instance
discovery. This way Turbine will have the aggregated metrics data collected
by each instance's abacus-perf. The next thing to do is to have a separate
application 'peek' at the Turbine stream at some interval and post the data
to the database. The problem with this is that Turbine persists the metrics
data when there is no activity in the application, and it flushes the
metrics data when new stats come in. This means that every time I peek into
the Turbine stream, I have to check whether I have already posted the data
to the database.

The second solution is to have each instance post independently. By using
abacus-perf's 'all()', I can set an interval that calls all(), checks the
time window, and posts accordingly. The restriction is that I can only post
the previous time window (since the current window is not yet done), and I
need to filter out empty data. Another restriction is that my interval
cannot exceed perf's interval. The problem with this is that I am playing
with the time interval: there would be some occasions where I might lose
some data. I'm also not sure that this would cover the case where perf
flushes out old metrics when new ones come in. I need to make sure that I
save the data before perf flushes.
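
For what it's worth, the deduplication this interval-based approach needs
could look roughly like the following sketch. The window shape here is a
simplification for illustration; abacus-perf's actual all() output differs:

```javascript
// Keep only windows that are complete, non-empty, and not yet posted.
// Each window is assumed (for this sketch) to carry a start timestamp
// and an event count.
const selectWindowsToPost = (windows, now, windowSize, lastPosted) =>
  windows.filter((w) =>
    w.start + windowSize <= now &&  // window is closed
    w.start > lastPosted &&         // not posted before
    w.count > 0);                   // skip empty windows

const windows = [
  { start: 0, count: 2 },    // already posted
  { start: 100, count: 5 },  // closed and new -> post
  { start: 200, count: 0 },  // closed but empty -> skip
  { start: 300, count: 1 }   // still open -> wait for next tick
];
const toPost = selectWindowsToPost(windows, 350, 100, 0);
console.log(toPost.map((w) => w.start)); // prints: [ 100 ]
```

After a successful post, the caller would advance lastPosted to the newest
posted window's start, so a crash between ticks can at worst re-post, not
lose, a window.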

Another solution is to mimic what the hystrix module is doing: Instead of
streaming the metrics to the hystrix dashboard, I would post to the
database. I have yet to try this solution.

Currently I'm not sure what the best way is to accurately persist the
performance metrics collected by abacus-perf, and I would like some
input/suggestions on how to persist the metrics. Thanks!





--
View this message in context:
http://cf-dev.70369.x6.nabble.com/abacus-perf-Persisting-Metrics-performance-tp2693.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: Acceptance tests assume a multi level wildcard ssl cert

Felix Friedrich
 

Hello Christopher,

thanks for your reply. We are stumbling over the very same test again.
Just to confirm, the tests haven't been fixed according to [1], have
they? Can I help you in any way with fixing this test?


Best regards from Berlin,


Felix


[1] https://www.pivotaltracker.com/n/projects/1358110/stories/105340048

On Mon, 19 Oct 2015, at 17:46, Christopher Piraino wrote:
Hi Felix,

You are right; we have found this issue in one of our own environments as
well, and we have a story here
<https://www.pivotaltracker.com/story/show/105340048> to address it by
skipping verification explicitly for this test only. Previously, I believe
that test only used an HTTP URL when curling; recent updates allowing
configuration of the protocol exposed this issue. We do not assume
multi-level wildcard certs.

The curl helper was also changed recently to set SSL verification
internally for all curl commands
<https://github.com/cloudfoundry/cf-acceptance-tests/commit/06c83fa5641785ebca1c6dedb36c2370415e3005>,
so the skip_ssl_validation configuration should still work correctly.

If you want to see the tests pass, you could either set
"skip_ssl_validation" to true or "use_http" to true, and the test should
work as intended. In any case, we are sorry for the failures; hopefully we
can get a fix out soon.

- Chris

On Mon, Oct 19, 2015 at 7:32 AM, Felix Friedrich <felix(a)fri.edri.ch>
wrote:

Hello,

we've just upgraded our CF deployment from v215 to v220. Unfortunately the
acceptance tests fail: http://pastebin.com/rWrXX1HA
They fail for a good reason: the test expects a valid SSL cert, but our
cert is only valid for *.test.cf.springer-sbm.com, not for
*.*.test.cf.springer-sbm.com. The tests seem to expect a multi-level
wildcard SSL cert; I am not sure whether that's reasonable or not.

However, I wondered why this exact test did not fail in v215. I suspected
that the way curl gets executed changed in the v220 tests, and it seems
that I am right [1]. Thus I assume (!) that curl's return codes previously
did not get propagated, while they are now. (Return code 51 is "The peer's
SSL certificate or SSH MD5 fingerprint was not OK," according to the man
page.)
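
For reference, curl's man page documents these exit codes; a small
illustrative helper (hypothetical, not part of the acceptance tests) that
maps the SSL-related ones to their documented meanings:

```shell
# Map curl(1) exit codes seen in these failures to the man-page text.
# The codes and descriptions are from the curl man page; this helper
# function itself is just an illustration.
explain_curl_exit() {
  case "$1" in
    0)  echo "OK" ;;
    51) echo "The peer's SSL certificate or SSH MD5 fingerprint was not OK" ;;
    60) echo "Peer certificate cannot be authenticated with known CA certificates" ;;
    *)  echo "other curl error: $1" ;;
  esac
}

explain_curl_exit 51
```

(Exit code 60, the "unknown CA" case, is the one `curl -k` / skipped
validation would normally suppress; 51 is a name-mismatch-style failure of
the presented certificate itself.)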

Also, the new way of executing ("curlCmd := runner.Curl(uri)") does not
look like it picks up the skipSslValidation value. In fact, running the
acceptance tests with the skip_ssl_validation option still leads to this
test failing. However, the library used looks like it is able to skip SSL
validation:

https://github.com/cloudfoundry-incubator/cf-test-helpers/blob/master/runner/run.go

Even if skip_ssl_validation worked, I am not very keen on activating that
option, since it also applies to all other tests, which are not using
multi-level wildcard certs.

Besides the fact that curl seems to validate SSL certs no matter whether
skip_ssl_validation is true or false, did you intentionally assume that CF
uses a multi-level wildcard cert?


Felix



[1]

https://github.com/cloudfoundry/cf-acceptance-tests/compare/353e06565a6a1a0d6b4c417f57b00eeecec604fa...72496c6fabd1c8ec51ae932d13a597a62ccf30dd


Re: [vcap-dev] Addressing buildpack size

Jack Cai
 

Thanks JT. Except the Java buildpack though :-)

Jack

On Fri, Nov 13, 2015 at 8:42 AM, JT Archie <jarchie(a)pivotal.io> wrote:

Jack,

This is correct.

The online version and the offline version of the buildpack differ in only
one way: the offline version has the dependencies, defined in the
`manifest.yml`, packaged with it.

They both keep the same contract that we'll only support certain versions
of runtimes (Ruby, Python, etc).


-- JT

On Thu, Nov 12, 2015 at 2:46 PM, Jack Cai <greensight(a)gmail.com> wrote:

Thanks Mike. Is it true that even for the online version (i.e. when doing
"cf push -b https://github.com/cloudfoundry/nodejs-buildpack.git"),
users are now limited to the runtime versions defined in the manifest.yml?

Jack


On Thu, Nov 12, 2015 at 12:05 PM, Mike Dalessio <mdalessio(a)pivotal.io>
wrote:

This is a bug in the nodejs-buildpack v1.5.1, which we should have a fix
for later today.

Github issue is here:
https://github.com/cloudfoundry/nodejs-buildpack/issues/35

Tracker story is here:
https://www.pivotaltracker.com/story/show/107946000

Apologies for the inconvenience.

On Thu, Nov 12, 2015 at 12:03 PM, Jack Cai <greensight(a)gmail.com> wrote:


For the cached packages of the buildpacks, I thought they would refuse to
provide a runtime version that's not cached. Yesterday I was playing with
the Node.js buildpack and found that it will actually download a non-cached
Node.js runtime. Does this mean we have kind of moved to the "hybrid" model
I suggested earlier in this thread? Does it work the same way for the
Java/Go/PHP/Ruby/Python buildpacks as well?

Jack




On Mon, Apr 13, 2015 at 3:08 PM, Mike Dalessio <mdalessio(a)pivotal.io>
wrote:

Hi Jack,

Thanks so much for your feedback!

Based on my conversations with CF users to date, this is definitely
something that we would want to be "opt-in" behavior; the consensus-desired
default appears to be to disallow the downloading of old/deprecated
versions.

Notably, though, what we'll continue to support is the specification
of a buildpack using the `cf push` `-b` option:

```
-b Custom buildpack by name (e.g. my-buildpack) or GIT URL
```

Buildpacks used in this manner will behave in "online" mode, meaning
they'll attempt to download dependencies from the public internet. Does
that satisfy your needs, at least in the short-term?

-m


On Mon, Apr 13, 2015 at 1:59 PM, Jack Cai <greensight(a)gmail.com>
wrote:

We will no longer provide "online" buildpack releases, which
download dependencies from the public internet.

I think it would make sense to retain the ability to download additional
runtime versions on demand (ones not packaged in the buildpack) if the user
explicitly requests it. So basically it would be a hybrid model, where the
most recent versions are "cached" while old versions are still available.

Jack


On Wed, Apr 8, 2015 at 11:36 PM, Onsi Fakhouri <ofakhouri(a)pivotal.io>
wrote:

Hey Patrick,

Sorry about that - the diego-dev-notes is an internal documentation
repo that the Diego team uses to stay on the same page and toss ideas
around.

There isn't much that's terribly interesting at that link - just
some ideas on how to extend diego's existing caching capabilities to avoid
copying cached artifacts into containers (we'd mount them in
directly instead).

Happy to share more detail if there is interest.

Onsi

On Wednesday, April 8, 2015, Patrick Mueller <pmuellr(a)gmail.com>
wrote:

I got a 404 on
https://github.com/pivotal-cf-experimental/diego-dev-notes/blob/master/proposals/better-buildpack-caching.md

On Wed, Apr 8, 2015 at 11:10 AM, Mike Dalessio <
mdalessio(a)pivotal.io> wrote:

Hello vcap-dev!

This email details a proposed change to how Cloud Foundry
buildpacks are packaged, with respect to the ever-increasing number of
binary dependencies being cached within them.

This proposal's permanent residence is here:


https://github.com/cloudfoundry-incubator/buildpack-packager/issues/4

Feel free to comment there or reply to this email.
------------------------------
Buildpack Sizes

Where we are today

Many of you have seen, and possibly been challenged by, the
enormous sizes of some of the buildpacks that are currently shipping with
cf-release.

Here's the state of the world right now, as of v205:

php-buildpack: 1.1G
ruby-buildpack: 922M
go-buildpack: 675M
python-buildpack: 654M
nodejs-buildpack: 403M
----------------------
total: 3.7G

These enormous sizes are the result of the current policy of
packaging every-version-of-everything-ever-supported ("EVOEES") within the
buildpack.

Most recently, this problem was exacerbated by the fact that
buildpacks now contain binaries for two rootfses.
Why this is a problem

If this continues, buildpacks will only increase in size, leading to longer
and longer build and deploy times, longer test times, slower feedback
loops, and therefore less frequent buildpack releases.

Additionally, this also means that we're shipping versions of
interpreters, web servers, and libraries that are deprecated, insecure, or
both. Feedback from CF users has made it clear that many companies view
this as an unnecessary security risk.

This policy is clearly unsustainable.
What we can do about it

There are many things being discussed to ameliorate the impact
that buildpack size is having on the operations of CF.

Notably, Onsi has proposed a change to buildpack caching, to
improve Diego staging times (link to proposal
<https://github.com/pivotal-cf-experimental/diego-dev-notes/blob/master/proposals/better-buildpack-caching.md>
).

However, there is an immediate solution available, which addresses
both the size concerns as well as the security concern: packaging fewer
binary dependencies within the buildpack.
The proposal

I'm proposing that we reduce the binary dependencies in each
buildpack in a very specific way.

Aside on terms I'll use below:

- Versions of the form "1.2.3" are broken down as:
MAJOR.MINOR.TEENY. Many language ecosystems refer to the "TEENY" as "PATCH"
interchangeably, but we're going to use "TEENY" in this proposal.
- We'll assume that TEENY gets bumped for API/ABI compatible
changes.
- We'll assume that MINOR and MAJOR get bumped when there are
API/ABI *incompatible* changes.

I'd like to move forward soon with the following changes:

1. For language interpreters/compilers, we'll package the two
most-recent TEENY versions on each MAJOR.MINOR release.
2. For all other dependencies, we'll package only the single
most-recent TEENY version on each MAJOR.MINOR release.
3. We will discontinue packaging versions of dependencies that
have been deprecated.
4. We will no longer provide "EVOEES" buildpack releases.
5. We will no longer provide "online" buildpack releases,
which download dependencies from the public internet.
6. We will document the process, and provide tooling, for CF
operators to build their own buildpacks, choosing the dependencies that
their organization wants to support or creating "online" buildpacks at
operators' discretion.

An example for #1 is that we'll go from packaging 34 versions of
node v0.10.x to only packaging two: 0.10.37 and 0.10.38.

An example for #2 is that we'll go from packaging 3 versions of
nginx 1.5 in the PHP buildpack to only packaging one: 1.5.12.

An example for #3 is that we'll discontinue packaging ruby 1.9.3
in the ruby-buildpack, which reached end-of-life in February 2015.
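
Rule #1 above can be sketched as a small selection function (hypothetical
code, not part of buildpack-packager), keeping the N most recent TEENY
versions within each MAJOR.MINOR line:

```javascript
// Keep the n newest TEENY versions within each MAJOR.MINOR line.
// Illustrative only; assumes plain numeric MAJOR.MINOR.TEENY strings.
const keepRecentTeeny = (versions, n) => {
  const byLine = {};
  for (const v of versions) {
    const [major, minor, teeny] = v.split('.').map(Number);
    const line = `${major}.${minor}`;
    (byLine[line] = byLine[line] || []).push(teeny);
  }
  return Object.entries(byLine).flatMap(([line, teenies]) =>
    teenies.sort((a, b) => b - a)      // newest TEENY first
      .slice(0, n)                     // keep only n of them
      .map((t) => `${line}.${t}`).sort());
};

// e.g. trimming a node v0.10.x list down to the two newest TEENYs:
const kept = keepRecentTeeny(['0.10.36', '0.10.37', '0.10.38', '0.12.2'], 2);
console.log(kept);
```

With n = 2 this reproduces the interpreter rule (#1); with n = 1 it
reproduces the rule for all other dependencies (#2).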
Outcomes

With these changes, the total buildpack size will be reduced
greatly. As an example, we expect the ruby-buildpack size to go from 922M
to 338M.

We also want to set the expectation that, as new interpreter
versions are released, either for new features or (more urgently) for
security fixes, we'll release new buildpacks much more quickly than we do
today. My hope is that we'll be able to do it within 24 hours of a new
release.
Planning

These changes will be relatively easy to make, since all the
buildpacks are now using a manifest.yml file to declare what's
being packaged. We expect to be able to complete this work within the next
two weeks.

Stories are in the Tracker backlog under the Epic named
"skinny-buildpacks", which you can see here:

https://www.pivotaltracker.com/epic/show/1747328

------------------------------

Please let me know how these changes will impact you and your
organizations, and let me know of any counter-proposals or variations you'd
like to consider.

Thanks,

-mike


--
You received this message because you are subscribed to the Google
Groups "Cloud Foundry Developers" group.
To view this discussion on the web visit
https://groups.google.com/a/cloudfoundry.org/d/msgid/vcap-dev/CAGeQLZwDbON2B6cAynyJY12tCWXO8XPKSCmhCc%3D%3DBu4KsHe%3DhA%40mail.gmail.com
<https://groups.google.com/a/cloudfoundry.org/d/msgid/vcap-dev/CAGeQLZwDbON2B6cAynyJY12tCWXO8XPKSCmhCc%3D%3DBu4KsHe%3DhA%40mail.gmail.com?utm_medium=email&utm_source=footer>
.

To unsubscribe from this group and stop receiving emails from it,
send an email to vcap-dev+unsubscribe(a)cloudfoundry.org.


--
Patrick Mueller
http://muellerware.org



Changing CF Encryption Keys (was Re: Re: Re: Re: Cloud Controller - s3 encryption for droplets)

Sandy Cash Jr <lhcash@...>
 

Hi,

I'm not sure what strategies exist either. This same topic came up in the
context of my resubmitted FIPS proposal, and I was curious: is it worth
creating an issue (or even a separate feature proposal/blueprint) for
tooling to rotate encryption keys? It's nontrivial to do (unless there is
tooling of which I am unaware), and a good solution in this space would
IMHO fill a significant operational need.

Thoughts?

-Sandy


--
Sandy Cash
Certified Senior IT Architect/Senior SW Engineer
IBM BlueMix
lhcash(a)us.ibm.com
(919) 543-0209

“I skate to where the puck is going to be, not to where it has been.” -
Wayne Gretzky



From: Dieu Cao <dcao(a)pivotal.io>
To: "Discussions about Cloud Foundry projects and the system
overall." <cf-dev(a)lists.cloudfoundry.org>
Date: 11/12/2015 02:19 PM
Subject: [cf-dev] Re: Re: Re: Cloud Controller - s3 encryption for
droplets



Hi William,

Thanks for the links.
We don't have support for client side encryption currently.
Cloud Controller and Diego's blobstore clients would need to be modified to
encrypt and decrypt for client side encryption and I'm not clear what
strategies exist for rotation of keys in these scenarios.

If you're very interested in this feature and are open to working through
requirements with me and submitting a PR, please open up an issue on github
and we can discuss this further.

-Dieu

On Tue, Nov 10, 2015 at 4:16 PM, William C Penrod <wcpenrod(a)gmail.com>
wrote:
I first ran across it here:
http://cloudfoundryjp.github.io/docs/running/bosh/components/blobstore.html


and checked here for additional info:
https://github.com/cloudfoundry/bosh/blob/master/blobstore_client/lib/blobstore_client/s3_blobstore_client.rb


Re: [vcap-dev] Addressing buildpack size

JT Archie <jarchie@...>
 

Jack,

This is correct.

The online version and the offline version of the buildpack differ in only
one way: the offline version has the dependencies, defined in the
`manifest.yml`, packaged with it.

They both keep the same contract that we'll only support certain versions
of runtimes (Ruby, Python, etc).


-- JT



Re: Deploying a shell script driven java application to cf

Juan Antonio Breña Moral <bren at juanantonio.info...>
 

What is the purpose of that Shell Script?

In my humble opinion, a buildpack interacts with a single software artifact; in your case, a Java one.

Another approach could be to develop a Docker container with your system and later deploy it on a Diego Cell.

Juan Antonio


Deploying a shell script driven java application to cf

dammina sahabandu
 

Hi All,
I have a Java application which is composed of several jars in the running environment. The main jar is invoked by a shell script. Is it possible to push such an application to Cloud Foundry? If it is possible, can you please help me with how to achieve that? A simple guide would be really helpful.

Thank you in advance,
Dammina
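[Editor's note: one common approach, sketched below; the app name, path layout, and run.sh are placeholders for your own artifacts, and this assumes the Java buildpack can stage the pushed directory. The start command in manifest.yml can point at the shell script:]

```yaml
# manifest.yml -- a sketch; names and paths are placeholders
applications:
- name: my-java-app
  path: ./dist              # directory containing main.jar, the other jars and run.sh
  buildpack: java_buildpack
  command: ./run.sh         # the shell script that invokes the main jar
  memory: 512M
```

Then `cf push` from the directory containing manifest.yml; the script must be executable and should reference the jars with relative paths, since it runs from the app's root directory in the container.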


Re: [abacus] Eureka vs gorouter

Saravanakumar A. Srinivasan
 

I believe that Assk (@sasrin) has started to document the beginning of that monitoring setup as well in doc/monitor.md [1]
Yes, we have started to write down the steps to set up the Hystrix Dashboard to monitor Abacus, and thanks to @Hristo, we now have steps to configure the Hystrix Dashboard in a Cloud Foundry environment as well.

There are several ways to set up Hystrix to monitor Cloud apps, but Eureka comes in handy when you don't know their IP addresses ahead of time. The usual setup is then to use Eureka + Turbine + Hystrix (as described in [2]). You get your apps to register with Eureka, set up Turbine to get their IPs from Eureka, and serve an aggregated performance data stream to your Hystrix dashboard for all your apps.

For the last few days, I have been working on getting the Hystrix Dashboard to use Turbine + Eureka to monitor Abacus, and I will be updating the document with the steps needed to get this going.


Thanks,
Saravanakumar Srinivasan (Assk),




Re: Pluggable Resource Scheduling

Zhang Lei <harryzhang@...>
 

You can add a different scheduling strategy to Diego by implementing a scheduler plugin.


But not Mesos; that would be a huge task and another story.

The reason Kubernetes can integrate Mesos as a scheduler (it can work, though not perfectly) is that Mesosphere is doing that part, I'm afraid...

On 2015-11-13 03:57:52, "Deepak Vij (A)" <deepak.vij(a)huawei.com> wrote:


I did not mean to replace the whole “Diego” environment itself. What I was thinking was more in terms of pluggability within Diego itself, so that the “Auctioneer” component can be turned into a “Mesos Framework” as one of the scheduling options. By doing that, the “Auctioneer” can start accepting “Mesos Offers” instead of the native auction-based Diego resource scheduling. The rest of the runtime environment, including Garden, Rep, etc., stays the same. Nothing else changes. I hope this makes sense.



- Deepak



From: Gwenn Etourneau [mailto:getourneau(a)pivotal.io]
Sent: Wednesday, November 11, 2015 5:10 PM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Pluggable Resource Scheduling



Hi,



Interesting proposition; I'm wondering whether it makes more sense to hook into Diego or into CF.

Diego is connected to CF by the CC-Bridge (big picture), so why not create a CC-Bridge for other scheduling systems?







Thanks



On Thu, Nov 12, 2015 at 5:13 AM, Deepak Vij (A) <deepak.vij(a)huawei.com> wrote:

Hi folks, I would like to start a discussion thread and get the community's thoughts regarding the availability of pluggable resource scheduling within CF/Diego. Just as Kubernetes does, wouldn't it be nice to have the option of choosing Diego's native scheduling or another uber/global resource management environment, specifically Mesos?



Look forward to comments and feedback from the community. Thanks.



Regards,

Deepak Vij

(Huawei Software Lab., Santa Clara)


Re: [abacus-perf] Persisting Metrics performance

Saravanakumar A. Srinivasan
 

I would like to add one more to the list of possible solutions for further discussion:

How about extending abacus-perf to optionally persist the collected performance metrics into a database?
In my opinion, writing to a database at the source of the collected data would drastically reduce the programming complexity and would help keep the data consistent with the source.

However, I always wonder why one would need to persist this data at all. Any reasons?


Thanks,
Saravanakumar Srinivasan (Assk),


-----KRuelY <kevinyudhiswara@...> wrote: -----
To: cf-dev@...
From: KRuelY <kevinyudhiswara@...>
Date: 11/12/2015 02:45PM
Subject: [cf-dev] [abacus-perf] Persisting Metrics performance

Hi,

One of the things that I want to do is to persist the performance metrics
collected by abacus-perf. What would be the best way to do this? I've been
through some solutions, but none of them seems to be the "correct" solution.

The scenario is this: I have an application running, and there are 2
instances of this application currently running.

To collect the metrics performance of my application, I need to aggregate
the metrics data collected by each instance's abacus-perf and store them in
a database.

The first solution is to use Turbine. Using Eureka to keep track of each
instance's IP address, I can configure Turbine to use Eureka instance
discovery. This way Turbine will have the aggregated metrics data collected
by each instance's abacus-perf. The next thing to do is to have a separate
application 'peek' at the Turbine stream at some interval and post the data
to the database. The problem with this is that Turbine persists the metrics
data when there is no activity in the application, and flushes the metrics
data when new stats come in. This means that every time I peek into the
Turbine stream, I have to check whether I have already posted this data to
the database.

The second solution is to have each instance post independently. By using
abacus-perf's 'all()' I can set an interval that would call all(), check the
time window, and post accordingly. The restriction is that I can only post
the previous time window (since the current window is not yet done), and I
need to filter out zero data. Another restriction is that my interval cannot
exceed perf's interval. The problem with this is that I am playing with the
time interval, so on some occasions I might lose some data. I'm not sure this
would cover the time when perf flushes out old metrics as new ones come in;
I need to make sure that I save the data before perf flushes.

Another solution is to mimic what the hystrix module is doing: instead of
streaming the metrics to the hystrix dashboard, I would post them to the
database. I have yet to try this solution.

Currently I'm not sure what the best way is to accurately persist the
performance metrics collected by abacus-perf, and I would like some
input/suggestions on how to persist them. Thanks!
 




--
View this message in context: http://cf-dev.70369.x6.nabble.com/abacus-perf-Persisting-Metrics-performance-tp2693.html
Sent from the CF Dev mailing list archive at Nabble.com.



[abacus-perf] Persisting Metrics performance

KRuelY <kevinyudhiswara@...>
 

Hi,

One of the things that I want to do is to persist the performance metrics
collected by abacus-perf. What would be the best way to do this? I've been
through some solutions, but none of them seems to be the "correct" solution.

The scenario is this: I have an application running, and there are 2
instances of this application currently running.

To collect the metrics performance of my application, I need to aggregate
the metrics data collected by each instance's abacus-perf and store them in
a database.

The first solution is to use Turbine. Using Eureka to keep track of each
instance's IP address, I can configure Turbine to use Eureka instance
discovery. This way Turbine will have the aggregated metrics data collected
by each instance's abacus-perf. The next thing to do is to have a separate
application 'peek' at the Turbine stream at some interval and post the data
to the database. The problem with this is that Turbine persists the metrics
data when there is no activity in the application, and flushes the metrics
data when new stats come in. This means that every time I peek into the
Turbine stream, I have to check whether I have already posted this data to
the database.
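[Editor's note: the "already posted?" check can be sketched as below; the stats shape and timestamps are hypothetical, not Turbine's actual stream format.]

```javascript
// Remember the timestamp of the last window posted to the database and
// only forward windows newer than that, so peeking repeatedly at the
// stream never re-posts the same data.
const newWindows = (stats, lastPosted) =>
  stats.filter((s) => s.timestamp > lastPosted);

// Example: only the window at t=2000 has not been posted yet
const stats = [{ timestamp: 1000, ok: 5 }, { timestamp: 2000, ok: 7 }];
const fresh = newWindows(stats, 1000);
```

After a successful post, the poster would advance `lastPosted` to the newest timestamp it stored.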

The second solution is to have each instance post independently. By using
abacus-perf's 'all()' I can set an interval that would call all(), check the
time window, and post accordingly. The restriction is that I can only post
the previous time window (since the current window is not yet done), and I
need to filter out zero data. Another restriction is that my interval cannot
exceed perf's interval. The problem with this is that I am playing with the
time interval, so on some occasions I might lose some data. I'm not sure this
would cover the time when perf flushes out old metrics as new ones come in;
I need to make sure that I save the data before perf flushes.
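[Editor's note: the previous-window selection and zero-data filtering described above might look like the sketch below; the window shape and the fixed interval are hypothetical, not abacus-perf's actual API.]

```javascript
// From a snapshot of per-window stats, keep only the window that just
// completed (the one before the current, still-open window) and drop
// entries with no activity.
const previousWindows = (snapshot, now, intervalMs) => {
  const currentStart = Math.floor(now / intervalMs) * intervalMs;
  return snapshot.filter((w) =>
    w.start < currentStart &&               // completed windows only
    w.start >= currentStart - intervalMs && // just the one before current
    w.count > 0);                           // filter out zero data
};

const snapshot = [
  { start: 0, count: 3 },      // older window, already handled
  { start: 60000, count: 5 },  // previous (completed) window
  { start: 120000, count: 7 }, // current window, still open
];
const toPost = previousWindows(snapshot, 130000, 60000);
```

A `setInterval` shorter than the perf interval would call this and post `toPost` to the database, which is exactly where the timing fragility the author describes comes from.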

Another solution is to mimic what the hystrix module is doing: instead of
streaming the metrics to the hystrix dashboard, I would post them to the
database. I have yet to try this solution.

Currently I'm not sure what the best way is to accurately persist the
performance metrics collected by abacus-perf, and I would like some
input/suggestions on how to persist them. Thanks!





--
View this message in context: http://cf-dev.70369.x6.nabble.com/abacus-perf-Persisting-Metrics-performance-tp2693.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: service broker user delegation beyond service-dashboard

Guillaume Berche
 

Thanks Brian for your feedback.

Can you elaborate on the use cases for which some scopes would need to be
auto-approved by the platform (i.e. without users providing their consent)?

Do you rather see that as a syntactic shorthand for CF users to avoid
repetitively providing their consent? In that case, would the following
approaches address your use cases?
$ cf create-service service-name service-plan service-instance -grant-requested-scopes
or
$ cf config --always-grant-broker-scopes="openid"

Guillaume.

On Wed, Nov 11, 2015 at 12:46 AM, Brian Martin <bkmartin(a)gmail.com> wrote:

One addition: I would like the ability to have some scopes be auto-approved
by the platform (e.g. openid).

Brian K Martin


On Nov 10, 2015, at 5:42 PM, Brian Martin <bkmartin(a)gmail.com> wrote:

This proposal looks good and addresses many of the same concerns we have
been seeing in Bluemix. I recently reached out to Max to bring forward a
similar proposal.

Brian K Martin


On Nov 10, 2015, at 4:33 PM, Guillaume Berche <bercheg(a)gmail.com> wrote:

Hi,

We are seeing an increasing number of cases where service brokers need to
act on behalf of CF users, and where the service dashboard support is too
limited:
- the dashboard URL is not exposed to requesters until the end of the
provision phase
- the dashboard URL requires users to browse to it, making it hard to preserve
headless interactions (such as scripts or CI) using the CLI or CC API.

As suggested by Dieu and Shannon in exchanges we had, I formalized my
perception of the problem to be solved and possible ways to address it in:


https://docs.google.com/document/d/1DoAbJa_YiGIJbOZ_zPzakh7sc4TB9Tmadq41cfSX0dw/edit?usp=sharing

I'd be interested in hearing the CF PMs' and the community's feedback on
whether solving the above problems would benefit the CF community, or whether
other workarounds/solutions exist that I may have missed.

Thanks in advance,

Guillaume.


Re: [abacus] Handling Notifications

Benjamin Cheng
 

How about keying by criteria as well to know when to trigger a Webhook call? and maybe allow multiple registrations per URL? (e.g please call me back at http://foo.com/bar on new usage for org abc, and also when usage for app xyz goes over 1000, would probably be two separate registration docs for a single URL)
I think that would work well too. I guess the main tradeoff between the two approaches is the number of docs versus the size of a doc. Building a query for either wouldn't be too hard, but putting multiple registrations inside a single doc would probably add additional complexity to the notification logic.

+1 for a sort of quarantine on an unreachable Webhook. How about slow Webhooks causing back pressure problems? Would we quarantine these too?
This is kind of hard for me to figure out. If they continue to cause problems, leaving the slow webhooks in place would probably compound the problems as time goes on, but at the same time they aren't quite in the same category as an unreachable webhook, since they're actually reachable. Somehow the client needs to know that their webhook is causing issues or is unreachable.

- should we let the rating service app do this or have a separate notification service app?
I don't have a concrete opinion on this, but both sides have their merits. With a separate notification app, the notification logic stays outside of something like the aggregator, so the aggregator keeps doing aggregator logic and doesn't expand into something it might not want to do. One thing to consider is dealing with the load of everything coming to a single notification app. I'm also not sure whether this approach would keep the logic pluggable into other apps, so that someone would not have to run the notification application to use it.
I don't have as much to say about putting the logic inside of rating. The only thing I can bring up is my previous point about rating doing notification logic instead of strictly rating logic, though it does give a bigger separation of logic in terms of determining whether a criterion is met or not.

- with partitioning of the orgs across multiple deployments of our apps (for scalability or regional deployments for example) do I need to first find the right service to register with (e.g. register with the us-south notification service or the eu-west notification service)? or can I register with a central notification service that will then figure out which deployment instance will call me back?
It might work better as a central notification service, to prevent things like a client registering in an incorrect region. If we keep it separate and the configurable criteria continue to evolve, wouldn't a central notification service help along the lines of something like "notify me if all my organizations across all the regions exceed X amount"?

- how do we secure the registration calls and Webhook callbacks?
The security on registration would probably just validate whether the user's token has access to that organization/space/etc. For the webhook, I'm not sure whether this falls within abacus.usage.read or abacus.usage.write, or whether we would need a new scope to handle this case.

- do we replay notifications when we can't deliver them?
Ideally, it would make sense, whether it's a simple series of retries or replay logic that would later play through the set of unsent notifications. I think this is related to the quarantine logic: some notifications, when replayed, will probably go through; some will fail and may continue to fail. Those that continue to fail may be failing because of the webhook they're attached to, and thus it may make sense to quarantine that webhook based upon these replay failures.
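[Editor's note: the retry-then-quarantine idea can be sketched as below; `send` stands in for the actual HTTP POST to the webhook, and the retry counts and backoff are illustrative, not a decided design.]

```javascript
// Retry a delivery attempt with exponential backoff; after `retries`
// failures, rethrow so the caller can mark the webhook for quarantine.
const deliver = async (send, retries, delayMs) => {
  for (let attempt = 0; ; attempt++) {
    try {
      return await send();
    } catch (err) {
      if (attempt >= retries) throw err; // give up: quarantine candidate
      await new Promise((res) => setTimeout(res, delayMs * 2 ** attempt));
    }
  }
};
```

A replay job could walk the set of unsent notifications calling `deliver` for each; webhooks that still fail after the final attempt are the ones worth quarantining.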

- can I register to receive all notifications from a certain point in the logical stream of notifications matching a criteria (e.g. call me back if this org consumed too much per hour at any point since last week)
I'm not sure I completely understand this question. Is this essentially setting an ad-hoc time range for which notifications should be sent?


Re: Pluggable Resource Scheduling

Deepak Vij
 

I did not mean to replace the whole “Diego” environment itself. What I was thinking was more in terms of pluggability within Diego itself, so that the “Auctioneer” component can be turned into a “Mesos Framework” as one of the scheduling options. By doing that, the “Auctioneer” can start accepting “Mesos Offers” instead of the native auction-based Diego resource scheduling. The rest of the runtime environment, including Garden, Rep, etc., stays the same. Nothing else changes. I hope this makes sense.


- Deepak

From: Gwenn Etourneau [mailto:getourneau(a)pivotal.io]
Sent: Wednesday, November 11, 2015 5:10 PM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Pluggable Resource Scheduling

Hi,

Interesting proposition; I'm wondering whether it makes more sense to hook into Diego or into CF.
Diego is connected to CF by the CC-Bridge (big picture), so why not create a CC-Bridge for other scheduling systems?



Thanks

On Thu, Nov 12, 2015 at 5:13 AM, Deepak Vij (A) <deepak.vij(a)huawei.com<mailto:deepak.vij(a)huawei.com>> wrote:
Hi folks, I would like to start a discussion thread and get the community's thoughts regarding the availability of pluggable resource scheduling within CF/Diego. Just as Kubernetes does, wouldn't it be nice to have the option of choosing Diego's native scheduling or another uber/global resource management environment, specifically Mesos?

Look forward to comments and feedback from the community. Thanks.

Regards,
Deepak Vij
(Huawei Software Lab., Santa Clara)


Re: [vcap-dev] Addressing buildpack size

Jack Cai
 

Thanks Mike. Is it true that even for the online version (i.e. when doing
"cf push -b https://github.com/cloudfoundry/nodejs-buildpack.git"), users
are now limited to the runtime versions defined in the manifest.yml?

Jack


On Thu, Nov 12, 2015 at 12:05 PM, Mike Dalessio <mdalessio(a)pivotal.io>
wrote:

This is a bug in the nodejs-buildpack v1.5.1, which we should have a fix
for later today.

Github issue is here:
https://github.com/cloudfoundry/nodejs-buildpack/issues/35

Tracker story is here: https://www.pivotaltracker.com/story/show/107946000

Apologies for the inconvenience.

On Thu, Nov 12, 2015 at 12:03 PM, Jack Cai <greensight(a)gmail.com> wrote:


For the cached packages of the buildpacks, I thought they would refuse to
provide a runtime version that's not cached. Yesterday I was playing with
the node.js buildpack and found that it will actually download a non-cached
node.js runtime. Does this mean we have kind of moved to the "hybrid" model I
suggested earlier in this thread? Does it work the same way for the
java/go/php/ruby/python buildpacks as well?

Jack




On Mon, Apr 13, 2015 at 3:08 PM, Mike Dalessio <mdalessio(a)pivotal.io>
wrote:

Hi Jack,

Thanks so much for your feedback!

Based on my conversations with CF users to date, this is definitely
something that we would want to be "opt-in" behavior; the consensus-desired
default appears to be to disallow the downloading of old/deprecated
versions.

Notably, though, what we'll continue to support is the specification of
a buildpack using the `cf push` `-b` option:

```
-b Custom buildpack by name (e.g. my-buildpack) or GIT URL
```

Buildpacks used in this manner will behave in "online" mode, meaning
they'll attempt to download dependencies from the public internet. Does
that satisfy your needs, at least in the short-term?

-m


On Mon, Apr 13, 2015 at 1:59 PM, Jack Cai <greensight(a)gmail.com> wrote:

We will no longer provide "online" buildpack releases, which download
dependencies from the public internet.

I think it would make sense to retain the ability to download additional
runtime versions on demand (ones that are not packaged in the buildpack) if
the user explicitly requests it. So basically it would be a hybrid model,
where the most recent versions are "cached", while old versions are still
available.

Jack


On Wed, Apr 8, 2015 at 11:36 PM, Onsi Fakhouri <ofakhouri(a)pivotal.io>
wrote:

Hey Patrick,

Sorry about that - the diego-dev-notes is an internal documentation
repo that the Diego team uses to stay on the same page and toss ideas
around.

There isn't much that's terribly interesting at that link - just some
ideas on how to extend diego's existing caching capabilities to avoid
copying cached artifacts into containers (we'd mount them in
directly instead).

Happy to share more detail if there is interest.

Onsi

On Wednesday, April 8, 2015, Patrick Mueller <pmuellr(a)gmail.com>
wrote:

I got a 404 on
https://github.com/pivotal-cf-experimental/diego-dev-notes/blob/master/proposals/better-buildpack-caching.md

On Wed, Apr 8, 2015 at 11:10 AM, Mike Dalessio <mdalessio(a)pivotal.io>
wrote:

Hello vcap-dev!

This email details a proposed change to how Cloud Foundry buildpacks
are packaged, with respect to the ever-increasing number of binary
dependencies being cached within them.

This proposal's permanent residence is here:

https://github.com/cloudfoundry-incubator/buildpack-packager/issues/4

Feel free to comment there or reply to this email.
------------------------------
Buildpack Sizes

Where we are today

Many of you have seen, and possibly been challenged by, the enormous
sizes of some of the buildpacks that are currently shipping with cf-release.

Here's the state of the world right now, as of v205:

php-buildpack: 1.1G
ruby-buildpack: 922M
go-buildpack: 675M
python-buildpack: 654M
nodejs-buildpack: 403M
----------------------
total: 3.7G

These enormous sizes are the result of the current policy of
packaging every-version-of-everything-ever-supported ("EVOEES") within the
buildpack.

Most recently, this problem was exacerbated by the fact that
buildpacks now contain binaries for two rootfses.
Why this is a problem

If continued, buildpacks will only continue to increase in size,
leading to longer and longer build and deploy times, longer test times,
slower feedback loops, and therefore less frequent buildpack releases.

Additionally, this also means that we're shipping versions of
interpreters, web servers, and libraries that are deprecated, insecure, or
both. Feedback from CF users has made it clear that many companies view
this as an unnecessary security risk.

This policy is clearly unsustainable.
What we can do about it

There are many things being discussed to ameliorate the impact that
buildpack size is having on the operations of CF.

Notably, Onsi has proposed a change to buildpack caching, to improve
Diego staging times (link to proposal
<https://github.com/pivotal-cf-experimental/diego-dev-notes/blob/master/proposals/better-buildpack-caching.md>
).

However, there is an immediate solution available, which addresses
both the size concerns as well as the security concern: packaging fewer
binary dependencies within the buildpack.
The proposal

I'm proposing that we reduce the binary dependencies in each
buildpack in a very specific way.

Aside on terms I'll use below:

- Versions of the form "1.2.3" are broken down as:
MAJOR.MINOR.TEENY. Many language ecosystems refer to the "TEENY" as "PATCH"
interchangeably, but we're going to use "TEENY" in this proposal.
- We'll assume that TEENY gets bumped for API/ABI compatible
changes.
- We'll assume that MINOR and MAJOR get bumped when there are
API/ABI *incompatible* changes.

I'd like to move forward soon with the following changes:

1. For language interpreters/compilers, we'll package the two
most-recent TEENY versions on each MAJOR.MINOR release.
2. For all other dependencies, we'll package only the single
most-recent TEENY version on each MAJOR.MINOR release.
3. We will discontinue packaging versions of dependencies that
have been deprecated.
4. We will no longer provide "EVOEES" buildpack releases.
5. We will no longer provide "online" buildpack releases, which
download dependencies from the public internet.
6. We will document the process, and provide tooling, for CF
operators to build their own buildpacks, choosing the dependencies that
their organization wants to support or creating "online" buildpacks at
operators' discretion.

An example for #1 is that we'll go from packaging 34 versions of
node v0.10.x to only packaging two: 0.10.37 and 0.10.38.

An example for #2 is that we'll go from packaging 3 versions of
nginx 1.5 in the PHP buildpack to only packaging one: 1.5.12.

An example for #3 is that we'll discontinue packaging ruby 1.9.3 in
the ruby-buildpack, which reached end-of-life in February 2015.
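[Editor's note: the TEENY-trimming policy in #1 and #2 above can be sketched as below; this is illustrative JavaScript, not the actual buildpack-packager code.]

```javascript
// Keep only the `keep` most recent TEENY versions per MAJOR.MINOR line.
// Versions are plain "MAJOR.MINOR.TEENY" strings.
const skinny = (versions, keep) => {
  const byLine = {};
  for (const v of versions) {
    const [major, minor, teeny] = v.split('.').map(Number);
    const line = `${major}.${minor}`;
    (byLine[line] = byLine[line] || []).push(teeny);
  }
  return Object.entries(byLine).flatMap(([line, teenies]) =>
    teenies.sort((a, b) => b - a).slice(0, keep).map((t) => `${line}.${t}`));
};

// e.g. node 0.10.x reduces to its two most recent TEENY versions,
// while other MAJOR.MINOR lines keep their own most recent ones
const kept = skinny(['0.10.36', '0.10.37', '0.10.38', '0.12.2'], 2);
```

Rule #2 is the same function with `keep` set to 1.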
Outcomes

With these changes, the total buildpack size will be reduced
greatly. As an example, we expect the ruby-buildpack size to go from 922M
to 338M.

We also want to set the expectation that, as new interpreter
versions are released, either for new features or (more urgently) for
security fixes, we'll release new buildpacks much more quickly than we do
today. My hope is that we'll be able to do it within 24 hours of a new
release.
Planning

These changes will be relatively easy to make, since all the
buildpacks are now using a manifest.yml file to declare what's
being packaged. We expect to be able to complete this work within the next
two weeks.

Stories are in the Tracker backlog under the Epic named
"skinny-buildpacks", which you can see here:

https://www.pivotaltracker.com/epic/show/1747328

------------------------------

Please let me know how these changes will impact you and your
organizations, and let me know of any counter-proposals or variations you'd
like to consider.

Thanks,

-mike




Re: Cloud Controller - s3 encryption for droplets

Dieu Cao <dcao@...>
 

Hi William,

Thanks for the links.
We don't have support for client-side encryption currently.
Cloud Controller's and Diego's blobstore clients would need to be modified
to encrypt and decrypt for client-side encryption, and I'm not clear what
strategies exist for rotation of keys in these scenarios.

If you're very interested in this feature and are open to working through
requirements with me and submitting a PR, please open up an issue on github
and we can discuss this further.

-Dieu

On Tue, Nov 10, 2015 at 4:16 PM, William C Penrod <wcpenrod(a)gmail.com>
wrote:

I first ran across it here:
http://cloudfoundryjp.github.io/docs/running/bosh/components/blobstore.html

and checked here for additional info:

https://github.com/cloudfoundry/bosh/blob/master/blobstore_client/lib/blobstore_client/s3_blobstore_client.rb


Re: [vcap-dev] Addressing buildpack size

Mike Dalessio
 

This is a bug in the nodejs-buildpack v1.5.1, which we should have a fix
for later today.

Github issue is here:
https://github.com/cloudfoundry/nodejs-buildpack/issues/35

Tracker story is here: https://www.pivotaltracker.com/story/show/107946000

Apologies for the inconvenience.

On Thu, Nov 12, 2015 at 12:03 PM, Jack Cai <greensight(a)gmail.com> wrote:


For the cached packages of the buildpacks, I thought they would refuse to
provide a runtime version that's not cached. Yesterday I was playing with
the node.js buildpack and found that it will actually download a non-cached
node.js runtime. Does this mean we have kind of moved to the "hybrid" model I
suggested earlier in this thread? Does it work the same way for the
java/go/php/ruby/python buildpacks as well?

Jack




On Mon, Apr 13, 2015 at 3:08 PM, Mike Dalessio <mdalessio(a)pivotal.io>
wrote:

Hi Jack,

Thanks so much for your feedback!

Based on my conversations with CF users to date, this is definitely
something that we would want to be "opt-in" behavior; the consensus-desired
default appears to be to disallow the downloading of old/deprecated
versions.

Notably, though, what we'll continue to support is the specification of a
buildpack using the `cf push` `-b` option:

```
-b Custom buildpack by name (e.g. my-buildpack) or GIT URL
```

Buildpacks used in this manner will behave in "online" mode, meaning
they'll attempt to download dependencies from the public internet. Does
that satisfy your needs, at least in the short-term?

-m


On Mon, Apr 13, 2015 at 1:59 PM, Jack Cai <greensight(a)gmail.com> wrote:

We will no longer provide "online" buildpack releases, which download
dependencies from the public internet.

I think it would make sense to retain the ability to download additional
runtime versions on demand (ones that are not packaged in the buildpack) if
the user explicitly requests it. So basically it would be a hybrid model,
where the most recent versions are "cached", while old versions are still
available.

Jack


On Wed, Apr 8, 2015 at 11:36 PM, Onsi Fakhouri <ofakhouri(a)pivotal.io>
wrote:

Hey Patrick,

Sorry about that - the diego-dev-notes is an internal documentation
repo that the Diego team uses to stay on the same page and toss ideas
around.

There isn't much that's terribly interesting at that link - just some
ideas on how to extend diego's existing caching capabilities to avoid
copying cached artifacts into containers (we'd mount them in
directly instead).

Happy to share more detail if there is interest.

Onsi

On Wednesday, April 8, 2015, Patrick Mueller <pmuellr(a)gmail.com> wrote:

I got a 404 on
https://github.com/pivotal-cf-experimental/diego-dev-notes/blob/master/proposals/better-buildpack-caching.md

On Wed, Apr 8, 2015 at 11:10 AM, Mike Dalessio <mdalessio(a)pivotal.io>
wrote:

Hello vcap-dev!

This email details a proposed change to how Cloud Foundry buildpacks
are packaged, with respect to the ever-increasing number of binary
dependencies being cached within them.

This proposal's permanent residence is here:

https://github.com/cloudfoundry-incubator/buildpack-packager/issues/4

Feel free to comment there or reply to this email.
------------------------------
Buildpack Sizes

Where we are today

Many of you have seen, and possibly been challenged by, the enormous
sizes of some of the buildpacks that are currently shipping with cf-release.

Here's the state of the world right now, as of v205:

php-buildpack: 1.1G
ruby-buildpack: 922M
go-buildpack: 675M
python-buildpack: 654M
nodejs-buildpack: 403M
----------------------
total: 3.7G

These enormous sizes are the result of the current policy of
packaging every-version-of-everything-ever-supported ("EVOEES") within the
buildpack.

Most recently, this problem was exacerbated by the fact that
buildpacks now contain binaries for two rootfses.
Why this is a problem

If this continues, buildpacks will only keep increasing in size,
leading to longer and longer build and deploy times, longer test times,
slower feedback loops, and therefore less frequent buildpack releases.

Additionally, this also means that we're shipping versions of
interpreters, web servers, and libraries that are deprecated, insecure, or
both. Feedback from CF users has made it clear that many companies view
this as an unnecessary security risk.

This policy is clearly unsustainable.
What we can do about it

There are many things being discussed to ameliorate the impact that
buildpack size is having on the operations of CF.

Notably, Onsi has proposed a change to buildpack caching, to improve
Diego staging times (link to proposal
<https://github.com/pivotal-cf-experimental/diego-dev-notes/blob/master/proposals/better-buildpack-caching.md>
).

However, there is an immediate solution available, which addresses
both the size concerns as well as the security concern: packaging fewer
binary dependencies within the buildpack.
The proposal

I'm proposing that we reduce the binary dependencies in each
buildpack in a very specific way.

Aside on terms I'll use below:

- Versions of the form "1.2.3" are broken down as:
MAJOR.MINOR.TEENY. Many language ecosystems refer to the "TEENY" as "PATCH"
interchangeably, but we're going to use "TEENY" in this proposal.
- We'll assume that TEENY gets bumped for API/ABI compatible
changes.
- We'll assume that MINOR and MAJOR get bumped when there are
API/ABI *incompatible* changes.

I'd like to move forward soon with the following changes:

1. For language interpreters/compilers, we'll package the two
most-recent TEENY versions on each MAJOR.MINOR release.
2. For all other dependencies, we'll package only the single
most-recent TEENY version on each MAJOR.MINOR release.
3. We will discontinue packaging versions of dependencies that
have been deprecated.
4. We will no longer provide "EVOEES" buildpack releases.
5. We will no longer provide "online" buildpack releases, which
download dependencies from the public internet.
6. We will document the process, and provide tooling, for CF
operators to build their own buildpacks, choosing the dependencies that
their organization wants to support or creating "online" buildpacks at
operators' discretion.

An example for #1 is that we'll go from packaging 34 versions of node v0.10.x
to only packaging two: 0.10.37 and 0.10.38.

An example for #2 is that we'll go from packaging 3 versions of nginx 1.5
in the PHP buildpack to only packaging one: 1.5.12.

An example for #3 is that we'll discontinue packaging ruby 1.9.3 in
the ruby-buildpack, which reached end-of-life in February 2015.
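The selection rule in #1 can be sketched in code. This is an illustrative sketch only (it is not the actual buildpack-packager implementation, and the function name is hypothetical): group versions by their MAJOR.MINOR line and keep the two highest TEENY versions in each line.

```javascript
// Illustrative sketch (not the actual buildpack-packager code):
// keep only the two most-recent TEENY versions per MAJOR.MINOR line.
const keepTwoLatestTeeny = (versions) => {
  const byLine = {};
  versions.forEach((v) => {
    const [major, minor, teeny] = v.split('.').map(Number);
    const line = major + '.' + minor;
    (byLine[line] = byLine[line] || []).push(teeny);
  });
  // For each MAJOR.MINOR line, sort TEENY descending and keep the top two
  return Object.keys(byLine).sort().reduce((kept, line) => {
    const teenies = byLine[line].sort((a, b) => b - a).slice(0, 2);
    return kept.concat(teenies.map((t) => line + '.' + t));
  }, []);
};

console.log(keepTwoLatestTeeny(['0.10.36', '0.10.37', '0.10.38', '0.12.0']));
// [ '0.10.38', '0.10.37', '0.12.0' ]
```

Applied to the node example above, 34 packaged versions of v0.10.x would collapse to just 0.10.37 and 0.10.38.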
Outcomes

With these changes, the total buildpack size will be reduced greatly.
As an example, we expect the ruby-buildpack size to go from 922M to 338M.

We also want to set the expectation that, as new interpreter versions
are released, either for new features or (more urgently) for security
fixes, we'll release new buildpacks much more quickly than we do today. My
hope is that we'll be able to do it within 24 hours of a new release.
Planning

These changes will be relatively easy to make, since all the
buildpacks are now using a manifest.yml file to declare what's being
packaged. We expect to be able to complete this work within the next two
weeks.

Stories are in the Tracker backlog under the Epic named
"skinny-buildpacks", which you can see here:

https://www.pivotaltracker.com/epic/show/1747328

------------------------------

Please let me know how these changes will impact you and your
organizations, and let me know of any counter-proposals or variations you'd
like to consider.

Thanks,

-mike


--
You received this message because you are subscribed to the Google
Groups "Cloud Foundry Developers" group.
To view this discussion on the web visit
https://groups.google.com/a/cloudfoundry.org/d/msgid/vcap-dev/CAGeQLZwDbON2B6cAynyJY12tCWXO8XPKSCmhCc%3D%3DBu4KsHe%3DhA%40mail.gmail.com
<https://groups.google.com/a/cloudfoundry.org/d/msgid/vcap-dev/CAGeQLZwDbON2B6cAynyJY12tCWXO8XPKSCmhCc%3D%3DBu4KsHe%3DhA%40mail.gmail.com?utm_medium=email&utm_source=footer>
.

To unsubscribe from this group and stop receiving emails from it,
send an email to vcap-dev+unsubscribe(a)cloudfoundry.org.


--
Patrick Mueller
http://muellerware.org



[vcap-dev] Addressing buildpack size

Jack Cai
 

For the cached package of the buildpacks, I thought it would refuse to
provide a runtime version that's not cached. Yesterday I was playing with
the node.js buildpack and found that it will actually download a non-cached
node.js runtime. Does this mean we have effectively moved to the "hybrid"
model I suggested earlier in this thread? Does it work the same way for the
java/go/php/ruby/python buildpacks as well?

Jack



Re: abacus collector doesn't work

MaggieMeng
 

Hi, Sebastien

Yes, the url with protocol specified works. Thanks a lot for your great help!

Regards,
Maggie

From: Jean-Sebastien Delfino [mailto:jsdelfino(a)gmail.com]
Sent: November 7, 2015 8:12
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Re: Re: Re: abacus collector doesn't work

Hi Maggie,

Good to hear that you've been able to make progress.

You're correct that we're defaulting to https if you don't specify a protocol in the METER, COUCHDB etc env variables.

I believe that we'll use the protocol you want if you set one. Can you try to include a protocol in your env variables like this:
METER=http://abacus-usage-meter.bjngiscf-dev.dctmlabs.com
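The defaulting behavior described above can be sketched as follows. This is a hypothetical helper for illustration (not Abacus's actual code, and the host name is a placeholder): a host from an env variable like METER keeps its protocol if one is present, and falls back to https otherwise.

```javascript
// Hypothetical sketch of the protocol-defaulting behavior described above
// (not the actual Abacus implementation): if an env variable value has no
// protocol, default to https; an explicit protocol is kept as-is.
const resolveURL = (host) =>
  /^https?:\/\//.test(host) ? host : 'https://' + host;

console.log(resolveURL('abacus-usage-meter.example.com'));
// https://abacus-usage-meter.example.com
console.log(resolveURL('http://abacus-usage-meter.example.com'));
// http://abacus-usage-meter.example.com
```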

If that doesn't work, then we'll be happy to work with you to improve this. Thanks!

- Jean-Sebastien

On Fri, Nov 6, 2015 at 2:11 AM, Meng, Xiangyi <xiangyi.meng(a)emc.com> wrote:
Hi, Jean

The root cause is that Abacus takes “https” as the default protocol when connecting between Abacus components like abacus-eureka-stub and abacus-dbserver, but in my CF env I don’t have any proxy server.

It may not be an issue, but I would like the protocol to be configurable. Would you consider that as a future improvement?

Thanks a lot for your and Hristo’s help!

Thanks,
Maggie

From: Jean-Sebastien Delfino [mailto:jsdelfino(a)gmail.com]
Sent: November 6, 2015 0:15
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Re: abacus collector doesn't work

Hi Maggie,

Which level of abacus are you using? Are you using the v0.0.2 release or a specific commit from the Abacus Github repository?

You can get a more verbose log with the following env variable:
DEBUG=e-abacus-*,abacus-request,abacus-router

You can set it like this:
cf set-env abacus-usage-collector DEBUG "e-abacus-*,abacus-request,abacus-router"
cf restage abacus-usage-collector

BTW, with the latest version of the Abacus master branch that DEBUG variable is already set to "e-abacus-*" (log all errors) in our default CF manifest.yml files.
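The wildcard matching in that DEBUG value can be illustrated with a small sketch. The real matching is done by the `debug` npm module that Abacus builds on; this hypothetical helper only shows how a comma-separated spec like "e-abacus-*,abacus-request,abacus-router" selects log namespaces.

```javascript
// Illustrative sketch of how a DEBUG value such as
// "e-abacus-*,abacus-request,abacus-router" selects log namespaces.
// The actual matching lives in the 'debug' npm module; this is a
// simplified stand-in: '*' matches any sequence of characters.
const debugEnabled = (spec, namespace) =>
  spec.split(',').some((pattern) => {
    const re = new RegExp('^' + pattern.replace(/\*/g, '.*') + '$');
    return re.test(namespace);
  });

console.log(debugEnabled('e-abacus-*,abacus-request', 'e-abacus-usage-collector'));
// true
console.log(debugEnabled('e-abacus-*,abacus-request', 'abacus-router'));
// false
```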

Would you mind creating a Github issue including the log from the abacus-usage-collector app with the DEBUG variable set as above? We'll take a look.

Thanks!

- Jean-Sebastien

On Wed, Nov 4, 2015 at 7:48 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com> wrote:
Hi, Hristo

I think I am using bosh-lite. I tried to change the environment variable as below, but I still got the same error. Which application is the collector trying to connect to?

dmadmin(a)dmadmin-Lenovo-Product:~/cloudfoundry/cf-abacus/cf-abacus$ cf env abacus-usage-collector
Getting env variables for app abacus-usage-collector in org test / space space as admin...
OK

System-Provided:
...
User-Provided:
CONF: default
COUCHDB: abacus-dbserver.bjngiscf-dev.dctmlabs.com
DEBUG: e-abacus-*
EUREKA: abacus-eureka-stub.bjngiscf-dev.dctmlabs.com
METER: abacus-usage-meter.bjngiscf-dev.dctmlabs.com
PROVISIONING: abacus-provisioning-stub.bjngiscf-dev.dctmlabs.com
SECURED: false

dmadmin(a)dmadmin-Lenovo-Product:~/cloudfoundry/cf-abacus/cf-abacus$ cf apps
Getting apps in org test / space space as admin...
OK

name requested state instances memory disk urls
abacus-account-stub started 1/1 512M 512M abacus-account-stub.bjngiscf-dev.dctmlabs.com
abacus-usage-reporting started 1/1 512M 512M abacus-usage-reporting.bjngiscf-dev.dctmlabs.com
abacus-usage-meter started 1/1 512M 512M abacus-usage-meter.bjngiscf-dev.dctmlabs.com
abacus-usage-accumulator started 1/1 512M 512M abacus-usage-accumulator.bjngiscf-dev.dctmlabs.com
abacus-dbserver started 1/1 1G 512M abacus-dbserver.bjngiscf-dev.dctmlabs.com
abacus-eureka-stub started 1/1 512M 512M abacus-eureka-stub.bjngiscf-dev.dctmlabs.com
abacus-usage-rate started 1/1 512M 512M abacus-usage-rate.bjngiscf-dev.dctmlabs.com
abacus-authserver-stub started 1/1 512M 512M abacus-authserver-stub.bjngiscf-dev.dctmlabs.com
abacus-provisioning-stub started 1/1 512M 512M abacus-provisioning-stub.bjngiscf-dev.dctmlabs.com
abacus-usage-aggregator started 1/1 512M 512M abacus-usage-aggregator.bjngiscf-dev.dctmlabs.com
abacus-usage-collector started 1/1 512M 512M abacus-usage-collector.bjngiscf-dev.dctmlabs.com

dmadmin(a)dmadmin-Lenovo-Product:~/cloudfoundry/cf-abacus/cf-abacus$ cf security-group abacus
Getting info for security group abacus as admin
OK

Name abacus
Rules
[
  {
    "destination": "0.0.0.0/0",
    "ports": "1-65535",
    "protocol": "tcp"
  },
  {
    "destination": "0.0.0.0/0",
    "ports": "1-65535",
    "protocol": "udp"
  }
]

Organization Space
#0 test space

Thanks,
Maggie


Re: Cloud Controller - s3 encryption for droplets

Noburou TANIGUCHI
 

William and all,

Sorry for an off-topic post.

http://cloudfoundryjp.github.io/ is totally outdated and shouldn't be
referred to as a reliable source of information.

I had asked a member of the owning organization to delete the repository,
and it has now been deleted.


William C Penrod wrote
I first ran across it here:
http://cloudfoundryjp.github.io/docs/running/bosh/components/blobstore.html

and checked here for additional info:
https://github.com/cloudfoundry/bosh/blob/master/blobstore_client/lib/blobstore_client/s3_blobstore_client.rb




-----
I'm not a ...
noburou taniguchi
--
View this message in context: http://cf-dev.70369.x6.nabble.com/cf-dev-Cloud-Controller-s3-encryption-for-droplets-tp2637p2684.html
Sent from the CF Dev mailing list archive at Nabble.com.
