
Re: Loggregator has updated protobufs definitions and compiler for dropsonde

Jim CF Campbell
 

Protobufs is smart. Given how we added the map, it should just work either
way.
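
For illustration, the kind of addition in question looks roughly like this (the field name and number are assumptions, not the actual dropsonde definition):

    message Envelope {
      // ... existing fields unchanged ...
      map<string, string> tags = 100;  // hypothetical name and field number
    }

Because the wire format simply skips fields a consumer doesn't recognize, an old consumer ignores the new map, and a new consumer reading old data just sees an empty map.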

On Wed, May 11, 2016 at 9:54 AM, Mike Youngstrom <youngm(a)gmail.com> wrote:

You mention that this change is non-breaking for metrics users. I don't
know much about protobufs backwards compatibility story. Can you detail a
little more the implications from a compatibility standpoint with this
change? Some questions:

* If I compile the new protocol via protobuf3 can I still communicate with
an old loggregator deployment?
* If I don't use the new protocol, can I still communicate with a
loggregator that is using the new protocol? (absent the new field, of course)

Thanks,
Mike

On Wed, May 11, 2016 at 8:58 AM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

Hopefully it's OK because you're bought into the value of tagged
metrics...

On Tue, May 10, 2016 at 5:32 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Thanks for the heads up Jim,

I use wire (https://github.com/square/wire) for a number of my
projects. Unfortunately, it appears the map<string, string> syntax is too
new for wire. Looks like I have some rewriting to do. :(

Mike

On Tue, May 10, 2016 at 3:59 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

Hi cf-dev,

In support of the epic to add tagging to CF metrics
<https://www.pivotaltracker.com/epic/show/2362529>, we recently added
a map to the dropsonde-protocol
<https://github.com/cloudfoundry/dropsonde-protocol> envelope type.
This is non-breaking to metric users. However if you compile in dropsonde,
this message applies to you. This change forced us to update to a newer
protobuf compiler. If you are using the .proto definitions directly you
will need to update to the new compiler as well. You can find the latest
protobuf compiler release on github
<https://github.com/google/protobuf/releases>.
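
For anyone regenerating bindings directly from the .proto definitions, the flow is roughly the following (paths are illustrative, and the Go output assumes the protoc-gen-go plugin is installed):

    protoc --version                     # should now report the newer compiler, e.g. libprotoc 3.x
    protoc --go_out=. ./events/*.proto   # regenerate language bindings from the updated definitions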

Thanks, The Loggregator Team
--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963
--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963


Re: aligning cf push health-check default value

Nicholas Calugar
 

Hi Dies,

I spoke with Eric and he explained that this could be the desired UX for a
majority of the apps pushed with --no-route. There are more advanced
deployment strategies that might set --no-route and bind to routes later,
but I think we can expect these users to be explicit with their health
check as well. I think my discomfort with this arose when you mentioned to
me that we might want to do this in the Cloud Controller. As long as this
continues to be explicit from the API perspective, I'm fine with changing
the UX of the CLI per your above proposal.


Thanks,

Nick

On Wed, May 4, 2016 at 1:13 PM, Shannon Coen <scoen(a)pivotal.io> wrote:

Hi Dies,

IMO the healthcheck of the app should be determined independently of
whether a developer wants their app routable.

My understanding of the implications of your proposal is that a developer
could not have a port-based healthcheck without mapping a route. This seems
unnecessarily restrictive. Soon developers will be able to specify http
healthchecks. Would these be prevented also?

Best,

Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.

On Wed, May 4, 2016 at 12:26 PM, Nicholas Calugar <ncalugar(a)pivotal.io>
wrote:

Hi Dies,

I considered this a bit more after we chatted yesterday and I don't think
we should try to create parity between DEAs and Diego in this case. My
personal opinion is that behavior should be explicit and these two flags
provide a more correct experience with the Diego backend.


Thanks,

Nick

On Mon, Apr 25, 2016 at 6:09 PM, Koper, Dies <diesk(a)fast.au.fujitsu.com>
wrote:

Hi CLI users,



With apps deployed to DEAs, a health check is performed at application
start-up targeting the app’s port, unless you specified `--no-route`, in
which case the process is monitored.

With Diego, the health check is performed continuously and the type of
check was exposed through an option to the `cf push` command.

This option defaults to `port`, which isn't always appropriate for apps
pushed without a route, such as worker apps.



We propose fixing the `--health-check-type` option’s default value to
align with the behaviour seen for DEAs, i.e. to use “none” if option
`--no-route` is used:



--health-check-type, -u Application health check type (Default:
 'none' if '--no-route' is set, otherwise 'port')
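
For example, under this proposal the defaults would play out like this (illustrative commands; -u is the health-check-type shorthand shown above):

    cf push worker-app --no-route          # health check would default to 'none'
    cf push web-app                        # health check still defaults to 'port'
    cf push worker-app --no-route -u port  # an explicit value always wins over the default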

Would anyone object to such a change?



Cheers,

Dies Koper
Cloud Foundry CLI PM





Re: Ubuntu Xenial stemcell and rootfs plans

Mike Youngstrom
 

I really like the idea of finding a way to move away from bundling binaries
with the buildpacks while still not requiring internet access. My
organization actually doesn't even use the binary bundled buildpacks for
our 2 main platforms (node and java).

Some issues we have with the offline buildpacks in addition to those
already mentioned:

* One of the key value propositions of a buildpack is the lightweight
process to fork and customize a buildpack. The inclusion of binaries makes
buildpack customization a much heavier process and less end user friendly
in a number of ways.
* We require some java-buildpack binaries that are not packaged with the
java-buildpack because of licensing issues, etc.
* For some of my customers the binary inclusion policies are too restrictive.

So, I agree with you 100%, Dan. I'd love to see work move more in the
direction of not including binaries rather than making admin buildpack
selection more stack-specific.

Mike

On Wed, May 11, 2016 at 11:09 AM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:

On Wed, May 11, 2016 at 9:45 AM, Mike Dalessio <mdalessio(a)pivotal.io>
wrote:

Hi Mike,

I totally agree with you on all points, but there are second-order
effects that are worth discussing and understanding, as they've influenced
my own thinking around the timing of this work.

Given the current state of automation in the Buildpacks Team's CI
pipelines, we could add a Xenial-based rootfs ("cflinuxfs3"?)
Could we please, please not call it `cflinuxfs3`? A very common question
I get is what is `cflinuxfs2` really? I then have to explain that it is
basically Ubuntu Trusty. That invariably results in the follow up
question, why it's called `cflinuxfs2` then, to which I have no good answer.

Since it would seem that this naming choice has resulted in confused
users, can we think of something that is more indicative of what you
actually get from the rootfs? I would throw out cfxenialfs as it indicates
it's CF, Xenial and a file system. This seems more accurate as the rootfs
isn't really about "linux", if you look at linux as being the kernel [1].
It's about user land packages and those are Ubuntu Trusty or Xenial based,
so it seems like the name should reflect that.

[1] - https://en.wikipedia.org/wiki/GNU/Linux_naming_controversy

to CF pretty quickly (and in fact have considered doing exactly this), and
could build precompiled Xenial binaries to add to each buildpack pretty
easily.

Unfortunately, this would result in doubling (or nearly so) the size of
almost all of the buildpacks, since the majority of a buildpack's payload
are the precompiled binaries for the rootfs. For example, we'd need to
compile several Ruby binaries for Xenial and vendor them in the buildpack
alongside the existing Trusty-based binaries.

Larger buildpacks result in longer staging times, longer deploy times for
CF, and are just generally a burden to ship around, particularly for
operators and users that don't actually want or need two stacks.

A second solution is to ship a separate buildpack for each stack (so,
ruby_buildpack_cflinuxfs2 versus ruby_buildpack_cflinuxfs3), and have
`bin/detect` only select itself if it's running on the appropriate stack.
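
A rough sketch of what that per-stack gating might look like in a buildpack's bin/detect (assuming the CF_STACK environment variable is available at staging time; everything else here is hypothetical):

    #!/usr/bin/env bash
    # bin/detect for a hypothetical ruby_buildpack_cflinuxfs2
    # Refuse to claim the app unless we are staging on the stack we were built for.
    if [ "${CF_STACK:-}" != "cflinuxfs2" ]; then
      exit 1   # not our stack; let some other buildpack detect
    fi
    # ...normal language detection would follow here...
    echo "ruby"
    exit 0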

But this would simply be forcing all buildpacks to plug a leaky
abstraction, and so I'd like to endeavor to make buildpacks simpler to
maintain.

A third solution, and the one which I think we should pursue, is to ship
separate buildpacks for each stack, but make Cloud Controller aware of the
buildpack's "stackiness", and only invoke buildpacks that are appropriate
for that stack.

So, for example, the CC would know that the go_buildpack works on both
Trusty- and Xenial-based rootfses (as those binaries are statically
linked), and would also know that ruby_buildpack_cflinuxfs2 isn't valid for
applications running on cflinuxfs3.
Has there been any thought / consideration given to just not shipping
binaries with build packs? I know that we ship binaries with the build
packs so that they will work in offline environments, but doing so has the
obvious drawbacks you mentioned above (plus others). Have we considered
other ways to make the build packs work in offline environments? If the
build packs were just build pack code, it would make them *way* simpler to
manage and they could care much less about the stack.

One idea (sorry it's only half-baked) for enabling offline support but not
bundling binaries with the build packs would be to instead package binaries
into a separate job that runs as an HTTP server inside CF. Build packs
could then use that as an offline repo. Populating the repo could be done
in a few different ways. You could package binaries with the job, you
could have something (an errand maybe?) that uploads binaries to the VM,
you could have the HTTP server setup as a caching proxy that would fetch
them from some where else (perhaps just the proxy is allowed to access the
Internet) or the user could manually populate the files. It would also
give the user greater flexibility as to what versions of software are being
used in the environment, since build packs would no longer be limited by
the binary versions packaged with them, and instead just pull from what is
available on the repo. It would also change upgrading build packs to a
task that is mostly just pulling down the latest binaries to the HTTP
server. You'd only need to upgrade build packs when there is a problem
with the build pack itself.
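
To make that a bit more concrete, here is a hedged sketch of how a buildpack's compile step could resolve a dependency against such an operator-run repo instead of a vendored binary (the repo URL, env var name, and paths are all hypothetical):

    # somewhere in a buildpack's bin/compile
    # BINARY_REPO_URL would be configured by the operator, e.g. an internal HTTP job
    RUBY_VERSION=2.3.1
    mkdir -p "$BUILD_DIR/vendor/ruby"
    curl -fsSL "${BINARY_REPO_URL}/ruby/ruby-${RUBY_VERSION}-linux-x64.tgz" \
      | tar -xz -C "$BUILD_DIR/vendor/ruby"   # fail fast if the repo doesn't have this version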

Anyway, I like this option so I wanted to throw it out there for
comment. Curious to hear thoughts from others. Happy to discuss further.

Thanks,

Dan




This work, however, will require some changes to CC's behavior, and
that's the critical path work that hasn't been scoped or prioritized yet.

Hope this helps everyone understand some of the concerns, and hopefully
explains why we haven't just shipped a Xenial-based stack.

-m


On Tue, May 10, 2016 at 1:34 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

I may not have anything that qualifies as compelling. But, here are
some of the reasons I've got:

* If we skip Xenial, that gives us at most 1 year to transition from
Trusty to an 18.04-based rootfs. Let's say it takes 6 months to get the
new rootfs into our customers' hands and for everyone to be comfortable
enough with it to make it the default. I don't think 6 months is enough
time for my users to naturally transition all of their applications via
pushes and restages to the new rootfs. The more time we have with the new
rootfs as the default the less I will need to bother my customers to test
before I force them to change.

* Xenial uses OpenSSL 1.0.2. Improving security by not statically
compiling OpenSSL into Node would be nice.

* With the Lucid rootfs, after a while it became difficult to find
pre-built libraries for Lucid. This put an increased burden on me to identify
and provide Lucid-compatible builds for some common tools. One example of
this is wkhtmltopdf, a commonly used tool in my organization.

I think the biggest thing for me is that the move from Lucid to Trusty
was a nightmare for me and my customers. Though better planning and adding
a couple more months to the process would help, giving my users a couple
of years to migrate would be better. :)

Mike

On Mon, May 9, 2016 at 2:05 PM, Danny Rosen <drosen(a)pivotal.io> wrote:

Hey Mike,

Thanks for reaching out. We've discussed supporting Xenial recently but
have had trouble identifying compelling reasons to do so. Our current
version of the rootfs is supported until April 2019 [1] and while we do not
plan on waiting until March 2019 :) we want to understand compelling
reasons to go forward with the work sooner than later.


On Mon, May 9, 2016 at 12:47 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Ubuntu Xenial Xerus was released a few weeks ago. Any plans to
incorporate Xenial into the platform? Stemcells and/or new root fs?

The recent lucid to trusty rootfs fire drill was frustrating to my
customers. I'm hoping that this year we can get a Xenial rootfs out
loooong before trusty support ends so I don't have to put another tight
deadline on my customers to test and move.

Thoughts?

Thanks,
Mike


Re: Announcing support for TCP Routing

Shannon Coen
 

Hello Ruben,

We're currently adding support for quota management of route ports, as they
can be a limited resource in some environments. This is particularly bad on
AWS, where an ELB can be configured to listen on a maximum of 100 ports.

We're nearly done with quota support for route ports, and the CLI team is
adding support for the new quota attribute. Once the API and CLI are
delivered, we'll look at offering TCP routing on PWS to a limited audience.
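
Once that CLI support lands, reserving TCP route ports for an org will presumably look something like this (the flag name is an assumption based on the quota attribute being added, not a confirmed interface):

    cf update-quota default --reserved-route-ports 10   # allow up to 10 TCP route ports in this quota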

Best,

Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.

On Wed, May 11, 2016 at 2:54 AM, Ruben Koster <superruup(a)gmail.com> wrote:

Really nice!! When will we be able to play with this functionality on PWS?


Re: Ubuntu Xenial stemcell and rootfs plans

Daniel Mikusa
 

On Wed, May 11, 2016 at 9:45 AM, Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Hi Mike,

I totally agree with you on all points, but there are second-order effects
that are worth discussing and understanding, as they've influenced my own
thinking around the timing of this work.

Given the current state of automation in the Buildpacks Team's CI
pipelines, we could add a Xenial-based rootfs ("cflinuxfs3"?)
Could we please, please not call it `cflinuxfs3`? A very common question I
get is what is `cflinuxfs2` really? I then have to explain that it is
basically Ubuntu Trusty. That invariably results in the follow up
question, why it's called `cflinuxfs2` then, to which I have no good answer.

Since it would seem that this naming choice has resulted in confused users,
can we think of something that is more indicative of what you actually get
from the rootfs? I would throw out cfxenialfs as it indicates it's CF,
Xenial and a file system. This seems more accurate as the rootfs isn't
really about "linux", if you look at linux as being the kernel [1]. It's
about user land packages and those are Ubuntu Trusty or Xenial based, so it
seems like the name should reflect that.

[1] - https://en.wikipedia.org/wiki/GNU/Linux_naming_controversy

to CF pretty quickly (and in fact have considered doing exactly this), and
could build precompiled Xenial binaries to add to each buildpack pretty
easily.

Unfortunately, this would result in doubling (or nearly so) the size of
almost all of the buildpacks, since the majority of a buildpack's payload
are the precompiled binaries for the rootfs. For example, we'd need to
compile several Ruby binaries for Xenial and vendor them in the buildpack
alongside the existing Trusty-based binaries.

Larger buildpacks result in longer staging times, longer deploy times for
CF, and are just generally a burden to ship around, particularly for
operators and users that don't actually want or need two stacks.

A second solution is to ship a separate buildpack for each stack (so,
ruby_buildpack_cflinuxfs2 versus ruby_buildpack_cflinuxfs3), and have
`bin/detect` only select itself if it's running on the appropriate stack.

But this would simply be forcing all buildpacks to plug a leaky
abstraction, and so I'd like to endeavor to make buildpacks simpler to
maintain.

A third solution, and the one which I think we should pursue, is to ship
separate buildpacks for each stack, but make Cloud Controller aware of the
buildpack's "stackiness", and only invoke buildpacks that are appropriate
for that stack.

So, for example, the CC would know that the go_buildpack works on both
Trusty- and Xenial-based rootfses (as those binaries are statically
linked), and would also know that ruby_buildpack_cflinuxfs2 isn't valid for
applications running on cflinuxfs3.
Has there been any thought / consideration given to just not shipping
binaries with build packs? I know that we ship binaries with the build
packs so that they will work in offline environments, but doing so has the
obvious drawbacks you mentioned above (plus others). Have we considered
other ways to make the build packs work in offline environments? If the
build packs were just build pack code, it would make them *way* simpler to
manage and they could care much less about the stack.

One idea (sorry it's only half-baked) for enabling offline support but not
bundling binaries with the build packs would be to instead package binaries
into a separate job that runs as an HTTP server inside CF. Build packs
could then use that as an offline repo. Populating the repo could be done
in a few different ways. You could package binaries with the job, you
could have something (an errand maybe?) that uploads binaries to the VM,
you could have the HTTP server setup as a caching proxy that would fetch
them from some where else (perhaps just the proxy is allowed to access the
Internet) or the user could manually populate the files. It would also
give the user greater flexibility as to what versions of software are being
used in the environment, since build packs would no longer be limited by
the binary versions packaged with them, and instead just pull from what is
available on the repo. It would also change upgrading build packs to a
task that is mostly just pulling down the latest binaries to the HTTP
server. You'd only need to upgrade build packs when there is a problem
with the build pack itself.

Anyway, I like this option so I wanted to throw it out there for
comment. Curious to hear thoughts from others. Happy to discuss further.

Thanks,

Dan




This work, however, will require some changes to CC's behavior, and that's
the critical path work that hasn't been scoped or prioritized yet.

Hope this helps everyone understand some of the concerns, and hopefully
explains why we haven't just shipped a Xenial-based stack.

-m


On Tue, May 10, 2016 at 1:34 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

I may not have anything that qualifies as compelling. But, here are some
of the reasons I've got:

* If we skip Xenial, that gives us at most 1 year to transition from
Trusty to an 18.04-based rootfs. Let's say it takes 6 months to get the
new rootfs into our customers' hands and for everyone to be comfortable
enough with it to make it the default. I don't think 6 months is enough
time for my users to naturally transition all of their applications via
pushes and restages to the new rootfs. The more time we have with the new
rootfs as the default the less I will need to bother my customers to test
before I force them to change.

* Xenial uses OpenSSL 1.0.2. Improving security by not statically
compiling OpenSSL into Node would be nice.

* With the Lucid rootfs, after a while it became difficult to find
pre-built libraries for Lucid. This put an increased burden on me to identify
and provide Lucid-compatible builds for some common tools. One example of
this is wkhtmltopdf, a commonly used tool in my organization.

I think the biggest thing for me is that the move from Lucid to Trusty
was a nightmare for me and my customers. Though better planning and adding
a couple more months to the process would help, giving my users a couple
of years to migrate would be better. :)

Mike

On Mon, May 9, 2016 at 2:05 PM, Danny Rosen <drosen(a)pivotal.io> wrote:

Hey Mike,

Thanks for reaching out. We've discussed supporting Xenial recently but
have had trouble identifying compelling reasons to do so. Our current
version of the rootfs is supported until April 2019 [1] and while we do not
plan on waiting until March 2019 :) we want to understand compelling
reasons to go forward with the work sooner than later.


On Mon, May 9, 2016 at 12:47 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Ubuntu Xenial Xerus was released a few weeks ago. Any plans to
incorporate Xenial into the platform? Stemcells and/or new root fs?

The recent lucid to trusty rootfs fire drill was frustrating to my
customers. I'm hoping that this year we can get a Xenial rootfs out
loooong before trusty support ends so I don't have to put another tight
deadline on my customers to test and move.

Thoughts?

Thanks,
Mike


Re: Loggregator has updated protobufs definitions and compiler for dropsonde

Mike Youngstrom
 

You mention that this change is non-breaking for metrics users. I don't
know much about protobufs backwards compatibility story. Can you detail a
little more the implications from a compatibility standpoint with this
change? Some questions:

* If I compile the new protocol via protobuf3 can I still communicate with
an old loggregator deployment?
* If I don't use the new protocol, can I still communicate with a
loggregator that is using the new protocol? (absent the new field, of course)

Thanks,
Mike

On Wed, May 11, 2016 at 8:58 AM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

Hopefully it's OK because you're bought into the value of tagged metrics...

On Tue, May 10, 2016 at 5:32 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Thanks for the heads up Jim,

I use wire (https://github.com/square/wire) for a number of my
projects. Unfortunately, it appears the map<string, string> syntax is too
new for wire. Looks like I have some rewriting to do. :(

Mike

On Tue, May 10, 2016 at 3:59 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

Hi cf-dev,

In support of the epic to add tagging to CF metrics
<https://www.pivotaltracker.com/epic/show/2362529>, we recently added a
map to the dropsonde-protocol
<https://github.com/cloudfoundry/dropsonde-protocol> envelope type.
This is non-breaking to metric users. However if you compile in dropsonde,
this message applies to you. This change forced us to update to a newer
protobuf compiler. If you are using the .proto definitions directly you
will need to update to the new compiler as well. You can find the latest
protobuf compiler release on github
<https://github.com/google/protobuf/releases>.

Thanks, The Loggregator Team
--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963


Re: Ubuntu Xenial stemcell and rootfs plans

Mike Youngstrom
 

Thanks Mike, that helps. Hopefully that work will get prioritized in the
next year or so. :)

For the record, on the stemcell side I've been battling a non-CF issue [0]
with Trusty that I'm hoping is fixed in Xenial. I could verify if it is
fixed without a stemcell. I'm just being lazy. :) Perhaps I'll verify
first so I have a more concrete reason to request a Xenial stemcell.

Thanks,
Mike

[0] https://github.com/hazelcast/hazelcast/issues/5209

On Wed, May 11, 2016 at 7:45 AM, Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Hi Mike,

I totally agree with you on all points, but there are second-order effects
that are worth discussing and understanding, as they've influenced my own
thinking around the timing of this work.

Given the current state of automation in the Buildpacks Team's CI
pipelines, we could add a Xenial-based rootfs ("cflinuxfs3"?) to CF pretty
quickly (and in fact have considered doing exactly this), and could build
precompiled Xenial binaries to add to each buildpack pretty easily.

Unfortunately, this would result in doubling (or nearly so) the size of
almost all of the buildpacks, since the majority of a buildpack's payload
are the precompiled binaries for the rootfs. For example, we'd need to
compile several Ruby binaries for Xenial and vendor them in the buildpack
alongside the existing Trusty-based binaries.

Larger buildpacks result in longer staging times, longer deploy times for
CF, and are just generally a burden to ship around, particularly for
operators and users that don't actually want or need two stacks.

A second solution is to ship a separate buildpack for each stack (so,
ruby_buildpack_cflinuxfs2 versus ruby_buildpack_cflinuxfs3), and have
`bin/detect` only select itself if it's running on the appropriate stack.

But this would simply be forcing all buildpacks to plug a leaky
abstraction, and so I'd like to endeavor to make buildpacks simpler to
maintain.

A third solution, and the one which I think we should pursue, is to ship
separate buildpacks for each stack, but make Cloud Controller aware of the
buildpack's "stackiness", and only invoke buildpacks that are appropriate
for that stack.

So, for example, the CC would know that the go_buildpack works on both
Trusty- and Xenial-based rootfses (as those binaries are statically
linked), and would also know that ruby_buildpack_cflinuxfs2 isn't valid for
applications running on cflinuxfs3.

This work, however, will require some changes to CC's behavior, and that's
the critical path work that hasn't been scoped or prioritized yet.

Hope this helps everyone understand some of the concerns, and hopefully
explains why we haven't just shipped a Xenial-based stack.

-m


On Tue, May 10, 2016 at 1:34 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

I may not have anything that qualifies as compelling. But, here are some
of the reasons I've got:

* If we skip Xenial, that gives us at most 1 year to transition from
Trusty to an 18.04-based rootfs. Let's say it takes 6 months to get the
new rootfs into our customers' hands and for everyone to be comfortable
enough with it to make it the default. I don't think 6 months is enough
time for my users to naturally transition all of their applications via
pushes and restages to the new rootfs. The more time we have with the new
rootfs as the default the less I will need to bother my customers to test
before I force them to change.

* Xenial uses OpenSSL 1.0.2. Improving security by not statically
compiling OpenSSL into Node would be nice.

* With the Lucid rootfs, after a while it became difficult to find
pre-built libraries for Lucid. This put an increased burden on me to identify
and provide Lucid-compatible builds for some common tools. One example of
this is wkhtmltopdf, a commonly used tool in my organization.

I think the biggest thing for me is that the move from Lucid to Trusty
was a nightmare for me and my customers. Though better planning and adding
a couple more months to the process would help, giving my users a couple
of years to migrate would be better. :)

Mike

On Mon, May 9, 2016 at 2:05 PM, Danny Rosen <drosen(a)pivotal.io> wrote:

Hey Mike,

Thanks for reaching out. We've discussed supporting Xenial recently but
have had trouble identifying compelling reasons to do so. Our current
version of the rootfs is supported until April 2019 [1] and while we do not
plan on waiting until March 2019 :) we want to understand compelling
reasons to go forward with the work sooner than later.


On Mon, May 9, 2016 at 12:47 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Ubuntu Xenial Xerus was released a few weeks ago. Any plans to
incorporate Xenial into the platform? Stemcells and/or new root fs?

The recent lucid to trusty rootfs fire drill was frustrating to my
customers. I'm hoping that this year we can get a Xenial rootfs out
loooong before trusty support ends so I don't have to put another tight
deadline on my customers to test and move.

Thoughts?

Thanks,
Mike


Re: Loggregator has updated protobufs definitions and compiler for dropsonde

Jim CF Campbell
 

Hopefully it's OK because you're bought into the value of tagged metrics...

On Tue, May 10, 2016 at 5:32 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Thanks for the heads up Jim,

I use wire (https://github.com/square/wire) for a number of my projects.
Unfortunately, it appears the map<string, string> syntax is too new for
wire. Looks like I have some rewriting to do. :(

Mike

On Tue, May 10, 2016 at 3:59 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

Hi cf-dev,

In support of the epic to add tagging to CF metrics
<https://www.pivotaltracker.com/epic/show/2362529>, we recently added a
map to the dropsonde-protocol
<https://github.com/cloudfoundry/dropsonde-protocol> envelope type. This
is non-breaking to metric users. However if you compile in dropsonde, this
message applies to you. This change forced us to update to a newer protobuf
compiler. If you are using the .proto definitions directly you will need to
update to the new compiler as well. You can find the latest protobuf
compiler release on github <https://github.com/google/protobuf/releases>.

Thanks, The Loggregator Team
--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963
--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963


Re: HTTP request status text is changed

Ben Hale <bhale@...>
 

I’m unsure why this is, although I haven’t been able to deploy the docker image you’ve created yet. I’m still attempting to get the image to start on a CF instance for validation and diagnostics.


-Ben Hale
Cloud Foundry Java Experience

On May 11, 2016, at 6:57 AM, Stanley Shen <meteorping(a)gmail.com> wrote:

Thanks for your information, I will give it a try.

Just wondering why it's also reproducible when it's a docker image?
In that case, it has nothing to do with the Java buildpack.


Re: HTTP request status text is changed

Stanley Shen <meteorping@...>
 

Thanks for your information, I will give it a try.

Just wondering why it's also reproducible when it's a docker image?
In that case, it has nothing to do with the Java buildpack.


Re: Ubuntu Xenial stemcell and rootfs plans

Mike Dalessio
 

Hi Mike,

I totally agree with you on all points, but there are second-order effects
that are worth discussing and understanding, as they've influenced my own
thinking around the timing of this work.

Given the current state of automation in the Buildpacks Team's CI
pipelines, we could add a Xenial-based rootfs ("cflinuxfs3"?) to CF pretty
quickly (and in fact have considered doing exactly this), and could build
precompiled Xenial binaries to add to each buildpack pretty easily.

Unfortunately, this would result in doubling (or nearly so) the size of
almost all of the buildpacks, since the majority of a buildpack's payload
are the precompiled binaries for the rootfs. For example, we'd need to
compile several Ruby binaries for Xenial and vendor them in the buildpack
alongside the existing Trusty-based binaries.

Larger buildpacks result in longer staging times, longer deploy times for
CF, and are just generally a burden to ship around, particularly for
operators and users that don't actually want or need two stacks.

A second solution is to ship a separate buildpack for each stack (so,
ruby_buildpack_cflinuxfs2 versus ruby_buildpack_cflinuxfs3), and have
`bin/detect` only select itself if it's running on the appropriate stack.

But this would simply be forcing all buildpacks to plug a leaky
abstraction, and so I'd like to endeavor to make buildpacks simpler to
maintain.

A third solution, and the one which I think we should pursue, is to ship
separate buildpacks for each stack, but make Cloud Controller aware of the
buildpack's "stackiness", and only invoke buildpacks that are appropriate
for that stack.

So, for example, the CC would know that the go_buildpack works on both
Trusty- and Xenial-based rootfses (as those binaries are statically
linked), and would also know that ruby_buildpack_cflinuxfs2 isn't valid for
applications running on cflinuxfs3.

This work, however, will require some changes to CC's behavior, and that's
the critical path work that hasn't been scoped or prioritized yet.

Hope this helps everyone understand some of the concerns, and hopefully
explains why we haven't just shipped a Xenial-based stack.

-m

On Tue, May 10, 2016 at 1:34 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

I may not have anything that qualifies as compelling. But, here are some
of the reasons I've got:

* If we skip Xenial, that gives us at most 1 year to transition from
Trusty to an 18.04-based rootfs. Let's say it takes 6 months to get the
new rootfs into our customers' hands and for everyone to be comfortable
enough with it to make it the default. I don't think 6 months is enough
time for my users to naturally transition all of their applications via
pushes and restages to the new rootfs. The more time we have with the new
rootfs as the default the less I will need to bother my customers to test
before I force them to change.

* Xenial uses OpenSSL 1.0.2. Improving security by not statically
compiling OpenSSL into Node would be nice.

* With the Lucid rootfs, after a while it became difficult to find
pre-built libraries for Lucid. This put an increased burden on me to identify
and provide Lucid-compatible builds for some common tools. One example of
this is wkhtmltopdf, a commonly used tool in my organization.

I think the biggest thing for me is that the move from Lucid to Trusty was
a nightmare for me and my customers. Though better planning and adding a
couple more months to the process would help, giving my users a couple
of years to migrate would be better. :)

Mike

On Mon, May 9, 2016 at 2:05 PM, Danny Rosen <drosen(a)pivotal.io> wrote:

Hey Mike,

Thanks for reaching out. We've discussed supporting Xenial recently but
have had trouble identifying compelling reasons to do so. Our current
version of the rootfs is supported until April 2019 [1] and while we do not
plan on waiting until March 2019 :) we want to understand compelling
reasons to go forward with the work sooner than later.


On Mon, May 9, 2016 at 12:47 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Ubuntu Xenial Xerus was released a few weeks ago. Any plans to
incorporate Xenial into the platform? Stemcells and/or new root fs?

The recent lucid to trusty rootfs fire drill was frustrating to my
customers. I'm hoping that this year we can get a Xenial rootfs out
loooong before trusty support ends so I don't have to put another tight
deadline on my customers to test and move.

Thoughts?

Thanks,
Mike


Re: HTTP request status text is changed

Ben Hale <bhale@...>
 

As I mentioned earlier, the difference in behavior is down to the command that is being run when the application is pushed to Cloud Foundry. When you push your WAR to Cloud Foundry, it is run in a Tomcat container that is not configured to allow custom status messages. If you want to run the application in its embedded Jetty container, you’ll need to ensure that it has a `Main-Class` entry in the JAR’s manifest that will start the embedded Jetty container.
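
For reference, the manifest entry Ben is describing goes in META-INF/MANIFEST.MF inside the pushed JAR and looks roughly like this (the launcher class name is hypothetical; it would be whatever class boots the embedded Jetty server):

    Main-Class: com.example.JettyLauncher

With that in place, the buildpack should run the launcher rather than deploying the artifact into Tomcat, matching the behavior Stanley sees locally with jetty-runner.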


-Ben Hale
Cloud Foundry Java Experience

On May 11, 2016, at 12:45 AM, Stanley Shen <meteorping(a)gmail.com> wrote:

I created the docker image, and you can find it "meteorping2/hello".

You can push the image as an APP like "cf push stanley -o meteorping2/hello"
When the APP is ready, you can access the servlet like:
wget http://stanley.test.io/hello

And I got result
=======================================
:~ stanleyshen$ wget http://stanley.test.io/hello
--2016-05-11 15:23:16-- http://stanley.test.io/hello
Resolving stanley.test.io... 11.22.33.44
Connecting to stanley.test.io|11.22.33.44|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2016-05-11 15:23:17 ERROR 403: Forbidden.
=======================================

But if you access the servlet deployed in local env via "java -jar jetty-runner.jar testId.war" you will get below result:
=======================================
--2016-05-11 07:04:31-- http://127.0.0.1:8080/hello
Connecting to 127.0.0.1:8080... connected.
HTTP request sent, awaiting response... 403 my customized 413 error message
2016-05-11 07:04:31 ERROR 403: my customized 413 error message.
=======================================

And here is the Dockerfile definition:
=======================================
FROM java:8
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp

CMD java -jar jetty-runner.jar testId.war
=======================================
Thanks for investigating; let me know if you still cannot reproduce the issue.

Regards,
Stanley


Re: Announcing support for TCP Routing

Ruben Koster (VMware)
 

Really nice!! When will we be able to play with this functionality on PWS?


Re: HTTP request status text is changed

Stanley Shen <meteorping@...>
 

I created the docker image, and you can find it "meteorping2/hello".

You can push the image as an APP like "cf push stanley -o meteorping2/hello"
When the APP is ready, you can access the servlet like:
wget http://stanley.test.io/hello

And I got result
=======================================
:~ stanleyshen$ wget http://stanley.test.io/hello
--2016-05-11 15:23:16-- http://stanley.test.io/hello
Resolving stanley.test.io... 11.22.33.44
Connecting to stanley.test.io|11.22.33.44|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2016-05-11 15:23:17 ERROR 403: Forbidden.
=======================================

But if you access the servlet deployed in local env via "java -jar jetty-runner.jar testId.war" you will get below result:
=======================================
--2016-05-11 07:04:31-- http://127.0.0.1:8080/hello
Connecting to 127.0.0.1:8080... connected.
HTTP request sent, awaiting response... 403 my customized 413 error message
2016-05-11 07:04:31 ERROR 403: my customized 413 error message.
=======================================

And here is the Dockerfile definition:
=======================================
FROM java:8
COPY . /usr/src/myapp
WORKDIR /usr/src/myapp

CMD java -jar jetty-runner.jar testId.war
=======================================
Thanks for investigating; let me know if you still cannot reproduce the issue.

Regards,
Stanley


Re: Announcing support for TCP Routing

Sam Ramji
 

And the crowd goes wild. Great news.

Looking forward to UDP support in the future for more IoT protocols.


Sam Ramji | sramji(a)cloudfoundry.org | +1-510-913-6495

On Tue, May 10, 2016 at 6:28 PM, Shannon Coen <scoen(a)pivotal.io> wrote:

On behalf of the CF Routing team I invite you to bring your non-http
workloads to Cloud Foundry.

By deploying the Routing Release [1] alongside Cloud Foundry and the Diego
runtime backend, operators can enable developers to create TCP routes based
on reservable ports.

The developer UX for pushing an app and mapping a TCP route is as simple
as this:

cf p myapp -d tcp.bosh-lite.com --random-route

The response includes a port associated with the TCP route. Client
requests to these ports will be routed to applications running on CF
through a layer-4 protocol-agnostic routing tier.
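
(Illustratively, if the CLI came back with port 60017 for the route above, any plain TCP client could then reach the app directly, e.g. `nc tcp.bosh-lite.com 60017`; the port number here is made up, the real one is returned by the CLI/API.)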

The Routing Release is still very much a work in progress. We are focusing
on mitigating stale routing information in the event of network partition
with system components, and on streamlining the deployment workflow.

Please review our README for deployment instructions, give it a go. We are
looking forward to your feedback.

Thank you!

[1] https://github.com/cloudfoundry-incubator/cf-routing-release

Note:
- UDP protocols are not supported.
- Both HTTP and TCP routes will be directed to the same application port,
identified by environment variable $PORT. Support for multiple application
ports is coming soon.


Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.


Re: Request for Multibuildpack Use Cases

Noburou TANIGUCHI
 

Hi Danny,

We (Japan Cloud Foundry Group) tried to deploy six applications with
ddollar's heroku-buildpack-multi
(https://github.com/ddollar/heroku-buildpack-multi, now deprecated).
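
For context, heroku-buildpack-multi reads a .buildpacks file in the app root: just a list of buildpack URLs, one per line, run in order. A hypothetical node + ruby combination would look like:

    https://github.com/heroku/heroku-buildpack-nodejs
    https://github.com/heroku/heroku-buildpack-ruby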

Here are our blog posts (sorry that all the posts are in Japanese, but I
think you may see the actual deployment procedures in "pre" sections (in
black background)):

* Yabitz http://blog.cloudfoundry.gr.jp/2015/06/cf100apps-014-yabitz.html
* jqplay http://blog.cloudfoundry.gr.jp/2015/07/jqplay-cloud-foundry.html
* Mattermost
http://blog.cloudfoundry.gr.jp/2015/09/cf100apps-066-mattermost.html
* Wizard Warz
http://blog.cloudfoundry.gr.jp/2015/10/cf100apps-079-wizardwarz.html
* Jekyll http://blog.cloudfoundry.gr.jp/2015/10/cf100apps-087-jekyll.html
* Mconf-Web
http://blog.cloudfoundry.gr.jp/2015/11/cf100apps-098-mconf-web.html

I also have some experience using James Bayer's scm-buildpack with
heroku-buildpack-multi, but those don't seem to be the cases you're after.

Regards,


gberche wrote
Danny,

Some additional related use-cases:

1- our corporate CF platform provider wants to scan the content of all apps
pre-staging and post-staging, for instance to blacklist known-to-be-vulnerable
php libraries pulled in by production apps, or to perform static code
analysis. We're currently considering a php buildpack fork to insert an
extension to do this. Being able to do such scanning without forking the
buildpack would be useful.

This seemed a good match for https://www.pivotaltracker.com/story/show/100758730

2- add language-independent features to the set of supported buildpacks
without forking them (e.g. an agent colocated with the app, or a command
executed prior to app start).

A concrete example of this is securely fetching secrets from a HashiCorp
Vault before app startup and exposing them as transient env vars (i.e. env
vars not stored in the CC db). More details in
https://github.com/Orange-OpenSource/static-creds-broker/issues/11
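
A hedged sketch of what such a "decorator" buildpack could emit: CF sources scripts dropped into the app's .profile.d directory before the start command runs, so a buildpack could write something like the following (the Vault path, variable names, and the jq dependency are all assumptions):

    # .profile.d/vault-secrets.sh, written by a hypothetical decorator buildpack
    # Sourced in the app container just before the start command, so the secret
    # lives only in the process environment, not in the CC database.
    export DB_PASSWORD="$(curl -s -H "X-Vault-Token: $VAULT_TOKEN" \
      "$VAULT_ADDR/v1/secret/myapp/db" | jq -r .data.password)"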

As a starting point we have decided to work with officially provided
buildpacks as their behavior is known and controlled by the buildpacks
team.

I worry that only supporting a combination of the officially provided
buildpacks will drastically limit the number of use cases addressed by such
a "multi-buildpack" first step.

Guillaume.




On Wed, Apr 13, 2016 at 12:16 AM, Danny Rosen <drosen@...> wrote:

As a starting point we have decided to work with officially provided
buildpacks as their behavior is known and controlled by the buildpacks
team. By discovering use cases (thank you John, Jack and David for your
examples) we can start work towards implementing multibuildpack solutions
that would be open to the community to consume and iterate on.
On Apr 12, 2016 1:06 PM, "Mike Youngstrom" <youngm@...> wrote:

John,

It sounds like the buildpack team is thinking the multi buildpack feature
would only work for buildpacks they provide, not a custom
"dependency-resolution" buildpack. Or at least that is how I understood
the message from Danny Rosen earlier in the thread.

Mike

On Tue, Apr 12, 2016 at 10:45 AM, John Feminella <jxf@...> wrote:

Multibuildpack is absolutely useful and I'm excited for this proposal.

I encounter a lot of use cases for this. The most common is that an
application wants to pull in private dependencies during a future
dependency-resolution step of a later buildpack, but the dependency
resolver needs to be primed in some specific way. If you wait until
buildpack time it's too late.

On Heroku, for example, this is accomplished by having something like
the netrc buildpack (
https://github.com/timshadel/heroku-buildpack-github-netrc), adding a
GITHUB_TOKEN environment variable, and then running your "real"
buildpack.
The netrc BP runs first, allowing Bundler to see the private
dependencies.

best,
~ jf

On Tue, Apr 12, 2016 at 12:36 PM Jack Cai <greensight@...> wrote:

It would be more useful if the multi-buildpack could reference an admin
buildpack in addition to a remote git-hosted buildpack. :-)

Jack


On Tue, Apr 12, 2016 at 6:38 AM, David Illsley <davidillsley@...> wrote:

In the past we've used the multi-buildpack to be able to use ruby sass
to compile SCSS for non-ruby projects (node and Java). In that case we used
the multi-buildpack and a .buildpacks file, which worked reasonably well
(and was very clear).

On Mon, Apr 11, 2016 at 1:15 AM, Danny Rosen <drosen@...> wrote:

Hi there,

The CF Buildpacks team is considering taking on a line of work to
provide more formal support for multibuildpacks. Before we start, we
would
be interested in learning if any community users have compelling use
cases
they could share with us.

For more information on multibuildpacks, see Heroku's documentation
[1]

[1] -
https://devcenter.heroku.com/articles/using-multiple-buildpacks-for-an-app




-----
I'm not a ...
noburou taniguchi


Re: Announcing support for TCP Routing

Onsi Fakhouri <ofakhouri@...>
 

Well done y'all! Huge milestone for CF and the routing team!

Onsi

On Tue, May 10, 2016 at 7:53 PM, Benjamin Black <bblack(a)pivotal.io> wrote:

Great work, everyone!
On May 10, 2016 18:29, "Shannon Coen" <scoen(a)pivotal.io> wrote:

On behalf of the CF Routing team I invite you to bring your non-http
workloads to Cloud Foundry.

By deploying the Routing Release [1] alongside Cloud Foundry and the
Diego runtime backend, operators can enable developers to create TCP routes
based on reservable ports.

The developer UX for pushing an app and mapping a TCP route is as simple
as this:

cf p myapp -d tcp.bosh-lite.com --random-route

The response includes a port associated with the TCP route. Client
requests to these ports will be routed to applications running on CF
through a layer-4 protocol-agnostic routing tier.

The Routing Release is still very much a work in progress. We are
focusing on mitigating stale routing information in the event of network
partition with system components, and on streamlining the deployment
workflow.

Please review our README for deployment instructions, give it a go. We
are looking forward to your feedback.

Thank you!

[1] https://github.com/cloudfoundry-incubator/cf-routing-release

Note:
- UDP protocols are not supported.
- Both HTTP and TCP routes will be directed to the same application port,
identified by environment variable $PORT. Support for multiple application
ports is coming soon.


Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.


Re: Announcing support for TCP Routing

Benjamin Black
 

Great work, everyone!

On May 10, 2016 18:29, "Shannon Coen" <scoen(a)pivotal.io> wrote:

On behalf of the CF Routing team I invite you to bring your non-http
workloads to Cloud Foundry.

By deploying the Routing Release [1] alongside Cloud Foundry and the Diego
runtime backend, operators can enable developers to create TCP routes based
on reservable ports.

The developer UX for pushing an app and mapping a TCP route is as simple
as this:

cf p myapp -d tcp.bosh-lite.com --random-route

The response includes a port associated with the TCP route. Client
requests to these ports will be routed to applications running on CF
through a layer-4 protocol-agnostic routing tier.

The Routing Release is still very much a work in progress. We are focusing
on mitigating stale routing information in the event of network partition
with system components, and on streamlining the deployment workflow.

Please review our README for deployment instructions, give it a go. We are
looking forward to your feedback.

Thank you!

[1] https://github.com/cloudfoundry-incubator/cf-routing-release

Note:
- UDP protocols are not supported.
- Both HTTP and TCP routes will be directed to the same application port,
identified by environment variable $PORT. Support for multiple application
ports is coming soon.


Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.


Re: Announcing support for TCP Routing

Dr Nic Williams <drnicwilliams@...>
 

Great work finishing the end-to-end integration with the CLI etc!!

On Tue, May 10, 2016 at 6:54 PM -0700, "Chris Sterling" <chris.sterling(a)gmail.com> wrote:

Wow, this is great news! Congratulations, team!

Chris Sterling
chris.sterling(a)gmail.com
twitter: @csterwa
linkedin: http://www.linkedin.com/in/chrissterling

On Tue, May 10, 2016 at 6:28 PM, Shannon Coen <scoen(a)pivotal.io> wrote:
On behalf of the CF Routing team I invite you to bring your non-http workloads to Cloud Foundry.
By deploying the Routing Release [1] alongside Cloud Foundry and the Diego runtime backend, operators can enable developers to create TCP routes based on reservable ports. 
The developer UX for pushing an app and mapping a TCP route is as simple as this:
cf p myapp -d tcp.bosh-lite.com --random-route

The response includes a port associated with the TCP route. Client requests to these ports will be routed to applications running on CF through a layer-4 protocol-agnostic routing tier. 
The Routing Release is still very much a work in progress. We are focusing on mitigating stale routing information in the event of network partition with system components, and on streamlining the deployment workflow.
Please review our README for deployment instructions, give it a go. We are looking forward to your feedback. 
Thank you!
[1] https://github.com/cloudfoundry-incubator/cf-routing-release
Note:
- UDP protocols are not supported.
- Both HTTP and TCP routes will be directed to the same application port, identified by environment variable $PORT. Support for multiple application ports is coming soon.

Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.


Re: Announcing support for TCP Routing

Chris Sterling
 

Wow, this is great news! Congratulations, team!

Chris Sterling
chris.sterling(a)gmail.com
twitter: @csterwa
linkedin: http://www.linkedin.com/in/chrissterling

On Tue, May 10, 2016 at 6:28 PM, Shannon Coen <scoen(a)pivotal.io> wrote:

On behalf of the CF Routing team I invite you to bring your non-http
workloads to Cloud Foundry.

By deploying the Routing Release [1] alongside Cloud Foundry and the Diego
runtime backend, operators can enable developers to create TCP routes based
on reservable ports.

The developer UX for pushing an app and mapping a TCP route is as simple
as this:

cf p myapp -d tcp.bosh-lite.com --random-route

The response includes a port associated with the TCP route. Client
requests to these ports will be routed to applications running on CF
through a layer-4 protocol-agnostic routing tier.

The Routing Release is still very much a work in progress. We are focusing
on mitigating stale routing information in the event of network partition
with system components, and on streamlining the deployment workflow.

Please review our README for deployment instructions, give it a go. We are
looking forward to your feedback.

Thank you!

[1] https://github.com/cloudfoundry-incubator/cf-routing-release

Note:
- UDP protocols are not supported.
- Both HTTP and TCP routes will be directed to the same application port,
identified by environment variable $PORT. Support for multiple application
ports is coming soon.


Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.
