
Re: Java Buildpack v3.7.1

Duncan Winn
 

Ben, perfect timing - I'm working with a company requiring JCE right now.

Thanks for this!

On Fri, May 13, 2016 at 8:12 AM Josh Long <starbuxman(a)gmail.com> wrote:

Congratulations and well done!

I like it when anything has the title "unlimited strength"
On Fri, May 13, 2016 at 19:41 Ben Hale <bhale(a)pivotal.io> wrote:

I'm pleased to announce the release of the java-buildpack, version 3.7.1.
This release is a dependency update highlighted by the availability of JREs
with JCE Unlimited Strength encryption.

* OpenJDK JCE Unlimited Strength Encryption JREs (via Laura Kerksiek)
* Removal of Precise Artifacts

For a more detailed look at the changes in 3.7.1, please see the commit
log[1]. Packaged versions of the buildpack, suitable for use with
create-buildpack and update-buildpack, can be found attached to this
release.


-Ben Hale
Cloud Foundry Java Experience


## Packaged Dependencies

AppDynamics 4.2.1_8
Dynatrace 6.3.0_1305
GemFire Modules Tomcat7 8.2.0
GemFire Modules 8.2.0
GemFire Security 8.2.0
GemFire 8.2.0
Groovy 2.4.6
JRebel 6.4.3
Log4j API 2.1.0
Log4j Core 2.1.0
Log4j Jcl 2.1.0
Log4j Jul 2.1.0
Log4j Slf4j 2.1.0
MariaDB JDBC 1.4.4
Memory Calculator (mountainlion) 2.0.2_RELEASE
Memory Calculator (trusty) 2.0.2_RELEASE
New Relic Agent 3.28.0
OpenJDK JRE (mountainlion) 1.8.0_91-unlimited-crypto
OpenJDK JRE (trusty) 1.8.0_91-unlimited-crypto
Play Framework JPA Plugin 1.10.0_RELEASE
PostgreSQL JDBC 9.4.1208
RedisStore 1.2.0_RELEASE
Ruxit 1.91.271
SLF4J API 1.7.7
SLF4J JDK14 1.7.7
Spring Auto-reconfiguration 1.10.0_RELEASE
Spring Boot CLI 1.3.5_RELEASE
Spring Boot Container Customizer 1.0.0_RELEASE
Tomcat Access Logging Support 2.5.0_RELEASE
Tomcat Lifecycle Support 2.5.0_RELEASE
Tomcat Logging Support 2.5.0_RELEASE
Tomcat 8.0.33
YourKit Profiler (mountainlion) 2016.02.36
YourKit Profiler (trusty) 2016.02.36


[1]: https://github.com/cloudfoundry/java-buildpack/compare/v3.7...v3.7.1
--
Duncan Winn
Cloud Foundry PCF Services




Re: Spiff reloaded...

Krueger, Uwe <uwe.krueger@...>
 

Hi Alex,

Thanks for your feedback.

I'm still maintaining it. So far we have everything we need here, so I'll only be adding some minor things. There will be a new release next week.
I've tried to submit pull requests to the original project, but they no longer accept feature development; only my first contributions, including
bug fixes, were accepted. As soon as development opens up again, I will try to bring the changes back to the original project.

In the meantime I'll continue on my fork. If there are proposals for new features (or even pull requests), I'll try my very best to handle them.

Best regards
Uwe



From: Alexander Lomov [mailto:alexander.lomov(a)altoros.com]
Sent: Thursday, 12 May 2016 19:05
To: cf-dev(a)lists.cloudfoundry.org
Subject: [cf-dev] Re: Spiff reloaded...

Spiff++ looks like a good idea. I've already tried it on my home projects.

Are there any updates on your project? Do you plan to merge changes to commonly used spiff?


------------------------
Alex Lomov
Altoros — Cloud Foundry deployment, training and integration
Twitter: @code1n<https://twitter.com/code1n> GitHub: @allomov<https://gist.github.com/allomov>


Re: Questions of how to add projects to Cloud Foundry Community repo

Layne Peng
 

Thank you for your reply. I will send you a separate email to discuss the details of the project I am proposing.


Re: Brokered route services only receiving traffic for routes mapped to started apps

Guillaume Berche
 

Shannon,

What are your current thoughts on "maintaining routes with no backends in
the routing table"? I quickly scanned the routing backlog a few days ago
without finding any trace of it.

I wish we could have used the opportunity of the CF Summit "project office
hours" routing session [1] for some interactive exchanges around these use
cases. Unfortunately, my autosleep session [2] is scheduled in the exact
same timeslot.
If the CF Foundation organizers were able to swap sessions, that would be
great. I'll send a separate email to events(a)cloudfoundry.org in case
other community members are suffering from the same conflict.


Thanks in advance,

Guillaume.

[1] http://sched.co/71aq
[2] http://sched.co/6aNp



On Sun, May 1, 2016 at 12:03 AM, Stefan Mayr <stefan(a)mayr-stefan.de> wrote:

Hi

On 28.04.2016 at 23:08, Mike Youngstrom wrote:

Here is another minor use case. My users are often confused that a
stopped app returns a 404 instead of a 503. So, we implement that
functionality for the user using an app mapped to wildcard routes that
constantly asks the CC for valid routes. This works for wildcard
domains, but not one-off domains.

It might be better if the router returned a 503. At least for routes
bound to apps. Not sure if this should extend to routes not bound to
apps.
+1 for that proposal. A 404 also causes issues when crawlers remove pages
from their index; a 503 has fewer side effects. I would also prefer a 503
Service Unavailable when a route is not bound, because there is no service
for this route. IMHO the meaning is much closer to what has happened.

Stefan

Mike
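The behavior proposed above (404 only when a route is unknown, 503 when the route exists but has no healthy backends) can be sketched as a toy lookup. This is only an illustration of the idea, not actual gorouter code:

```java
import java.util.*;

public class RouteLookup {
    // Toy routing table: hostname -> list of healthy backends.
    static int statusFor(Map<String, List<String>> table, String host) {
        List<String> backends = table.get(host);
        if (backends == null) return 404;   // route was never registered
        if (backends.isEmpty()) return 503; // route exists, app is stopped
        return 200;                         // forward to a backend
    }

    public static void main(String[] args) {
        Map<String, List<String>> table = new HashMap<>();
        table.put("stopped-app.example.com", Collections.emptyList());
        table.put("running-app.example.com", Arrays.asList("10.0.0.5:8080"));
        System.out.println(statusFor(table, "stopped-app.example.com")); // 503
        System.out.println(statusFor(table, "nosuchroute.example.com")); // 404
    }
}
```

The hostnames here are placeholders; the point is only the three-way distinction the router would have to make.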

On Thu, Apr 28, 2016 at 1:32 PM, Shannon Coen <scoen(a)pivotal.io
<mailto:scoen(a)pivotal.io>> wrote:

Hello Guillaume,

Thank you for sharing your thoughts on these use cases. I can see how
having a route service field requests for an app, whether the app is up or
not, could be useful.

However, enabling this would significantly change how routes are
registered for apps on Cloud Foundry, and how the router handles the route
lookup. Routes are not currently enabled in the routing tier unless they
are mapped to an app, and only when the app is determined healthy.

You are proposing that the router maintain routes which have no backends,
and that instead of a failed lookup determining whether a 404 is returned,
the router should figure out whether a route has any backends or a route
service.

I'll chew on your use case and keep my ear out for additional use cases
for maintaining routes with no backends in the routing table.

Best,
Shannon






"Killing oom-notifier process" but without OOM

John Wong
 

Hi

One of our NodeJS applications constantly crashes with exit status 1.

I was reviewing the warden log on our DEA and I found this.

{"timestamp":1463013061.7575436,"message":"Killing oom-notifier
process","log_level":"debug","source":"Warden::Container::Features::MemLimit::OomNotifier","data":{},"thread_id":70306976232240,"fiber_id":70306988476180,"process_id":314,"file":"/var/vcap/data/packages/warden/88b0ad837f313990ce408e50cd904f7025983213.1-12d37c30c75c53ce5158f1b3d97cd1be85956f85/warden/lib/warden/container/features/mem_limit.rb","lineno":51,"method":"kill"}

Is this normal or does this indicate an OOM? If so, how is cf events' OOM
(we didn't see that entry) different from this?

Thanks.

John


Re: Ubuntu Xenial stemcell and rootfs plans

Danny Rosen
 

We all have unique perspectives to offer each other, and I appreciate the
thought and time you've put into formulating this alternative. I've not
spent enough time with it to refute or agree with your proposal. This might
be the start of a compelling feature narrative. Let's discuss it in greater
detail in real time on the Cloud Foundry Slack.

Future readers:
The original post re: Xenial stemcell and rootfs plans has been answered
earlier in this thread.

On May 12, 2016 1:53 PM, "Daniel Mikusa" <dmikusa(a)pivotal.io> wrote:

On Thu, May 12, 2016 at 12:52 PM, Danny Rosen <drosen(a)pivotal.io> wrote:

Fair enough. Though if you divorce the binary from the buildpack you
greatly increase the complexity of the environment.
I respectfully disagree. Build pack binaries are nothing more than
packages (just a different format than rpm or deb). Any competent Linux
administrator should be familiar with package management for their distro
of choice, and that includes the ability to have an internal repo (i.e.
mirror) for the distro's packages. Essentially the idea here would be to
do the same thing but for build pack binaries. Because having internal
repos / mirrors is something a lot of large companies do, I suspect many
administrators will be familiar with these concepts.

I think this change could actually simplify build packs. Currently most
of the build packs have ugly hacks in them to intercept requests for
external files, translate URLs and load local copies of files instead [1].
The idea that I'm proposing would simply require the build packs to pull
files from a repo. It doesn't matter if you are behind a firewall or on
the public Internet. You just download the files from a configured
repository. Simple and straightforward.

[1] -
https://github.com/cloudfoundry/compile-extensions/tree/9932bb1d352b88883d76df41e797a6fa556844f0#download_dependency
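The "configured repository" idea could be as simple as resolving every dependency URL against an operator-supplied root. A minimal sketch follows; the `REPOSITORY_ROOT` variable and the artifact paths are hypothetical, not an existing buildpack API:

```java
public class RepoResolver {
    // Join an operator-configured repository root with an artifact path,
    // tolerating a trailing slash on the root.
    static String resolve(String repoRoot, String artifactPath) {
        return repoRoot.replaceAll("/+$", "") + "/" + artifactPath;
    }

    public static void main(String[] args) {
        // Hypothetical env var: the same buildpack code runs behind a
        // firewall or on the public internet; only the root changes.
        String root = System.getenv().getOrDefault(
                "REPOSITORY_ROOT", "https://public-repo.example.com/binaries");
        System.out.println(
                resolve(root, "openjdk/trusty/x86_64/openjdk-1.8.0_91.tar.gz"));
    }
}
```

Pointing `REPOSITORY_ROOT` at an internal mirror would then be the whole customization story, much like swapping a deb/rpm mirror.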


I think we can simplify this conversation a bit though using our
*current* architecture rather than creating new paradigms ... and more
work for the buildpacks team :)
Again, I disagree. I don't think you can address these issues using the
current architecture, because the architecture itself is the problem.
Bundling binaries with the build packs is at its core a bad idea. Mike D
listed some of these issues earlier in this email thread; I've summarized
the ones that come to mind below.

- large build packs are hard to distribute
- large build packs increase staging time and in some cases cause staging
failures
- build packs are tied to the stack of the binaries they include
- build packs are tied to specific versions of the binaries they include
- supporting multiple sets of binaries requires multiple build packs or
really large build packs
- customizing build packs becomes more difficult as you now have to wrap
up the binaries in your custom build pack
- build packs are forced to release to keep up with their binaries, not
because the build packs need to change at that pace

Separating the binaries and build packs would seem to address these
issues. It would also turn binary management into a task that is more
similar to what Linux admins do today for their distro's packages. Perhaps
we could even piggy back on existing tools in this space to manage the
binaries like Artifactory.



As an operator, I want my users to use custom buildpacks because the
official buildpacks (their binaries, their configuration, etc) don't suit
my needs.

- This can be achieved today! Via:
- a proxy
- an internet enabled environment
- an internal git server

This is an oversimplification and only partially addresses one issue
with bundling binaries into the build pack. The original issue on this
thread is being able to support the addition of a new stack. Mike D made
the point that supporting an additional stack would be difficult because it
would cause the size of the build pack to spike. He offered one possible
solution, but that looked like it would require work to the cloud
controller. I offered the idea of splitting the binaries out of the build
pack. It doesn't require any cloud controller work and it would scale
nicely as additional stacks are added (assuming you have an HTTP server
with a large enough disk).

One idea we're throwing around is being able to use a URL pointing to a
zip file which could enable interesting solutions for operators who prefer
the "bring your own buildpacks but not from the internet and don't ask me
to upload it as an admin buildpack" solution.
I think that could be helpful. I remember back to the early days of Diego
when it could pull in a zip archive and it was nice in certain situations.
Having said that, I'm not seeing how this would help with the other issues
caused by having build packs and binaries bundled together. In particular,
the one of supporting multiple stacks.

Dan


If you're interested in working with us on this solution, let's talk!
We're happy to work with the community.

On Thu, May 12, 2016 at 12:26 PM, Daniel Mikusa <dmikusa(a)pivotal.io>
wrote:

On Thu, May 12, 2016 at 11:59 AM, Danny Rosen <drosen(a)pivotal.io> wrote:

Thanks. This is helpful! I'd like to get a better understanding of the
following:

Why would an operator set their environment to be disconnected from the
internet if they wanted to enable their users to run arbitrary binaries via
a buildpack?
I don't think it would allow them to run arbitrary binaries, but it would
allow them to run arbitrary build packs. If you divorce the binaries from
the build pack, you can control the binaries separately in a corporate
IT-managed, non-public repository. Then users can use any build pack they
want, so long as it points to the blessed internal repo of trusted
binaries.

Dan


If an operator wanted to provide users the flexibility of executing
arbitrary binaries in buildpacks, custom buildpacks can be implemented via
an environment with internet access *or* by providing a proxy
<http://docs.cloudfoundry.org/buildpacks/proxy-usage.html> that would
allow custom buildpacks to be deployed
<http://docs.cloudfoundry.org/buildpacks/custom.html#deploying-with-custom-buildpacks>
with an app.




On Thu, May 12, 2016 at 11:37 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

See responses inline:

On Thu, May 12, 2016 at 9:04 AM, Danny Rosen <drosen(a)pivotal.io>
wrote:

* One of the key value propositions of a buildpack is the lightweight
process to fork and customize a buildpack.
*The inclusion of binaries makes buildpack customization a much
heavier process and less end user friendly in a number of ways.*
-- I'm not sure I agree with this point and would like to understand
your reasoning.
I may be missing something, but it was my understanding that buildpacks
with binaries included must (unless all binaries are checked into git) be
added as admin buildpacks, which non-admin users of CF cannot do.
Therefore, if I am a simple user of Cloud Foundry, I cannot customize a
buildpack for a one-off need without involving an administrator to upload
and manage the one-off buildpack. If binary dependencies were instead
managed in the way Daniel proposes, the process would simply be to fork
the buildpack and specify that git repo when pushing. Completely
self-service, without admin intervention, making it a lighter-weight process.

* For some of my customers the binary inclusion policies are too
restrictive.
-- It's hard for me to understand this point as I do not know your
customers' requirements. Would you mind providing details so we can better
understand their needs?
I've attempted to express that need previously here:
https://github.com/cloudfoundry/compile-extensions/issues/7 I don't
view this as a major issue but I think it could be something to consider if
buildpacks binary management is being reconsidered.

Hope those additional details help

Mike


Re: Questions of how to add projects to Cloud Foundry Community repo

Dr Nic Williams <drnicwilliams@...>
 

Layne, if you want to start with cloudfoundry-community then you can ask
any existing member/owner to add you, or me if you don't know anyone else
from the 160+ list.

Send me/them your github ID.

If you also want to create BOSH releases and would like some AWS
credentials for Amazon S3, then let me know.

Nic

On Fri, May 13, 2016 at 2:51 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi Layne,

There are three main Cloud Foundry-related organizations in GitHub:
cloudfoundry, cloudfoundry-incubator, and cloudfoundry-community. Each
organization has many repositories. If there is a specific repository you
wish to contribute to, open an issue on GitHub stating your intent, and
the core maintainers can hopefully discuss options with you. Make sure to
look for a repository with recent activity.

Cheers,
Amit

On Thu, May 12, 2016 at 3:15 AM, Peng, Layne <Layne.Peng(a)emc.com> wrote:

Hi,

I found there are three repos related to Cloud Foundry on GitHub, and I
would prefer to contribute some projects to the Cloud Foundry Community
one. Whom should I contact, and what steps should I follow?

- Layne
--
Dr Nic Williams
Stark & Wayne LLC - consultancy for Cloud Foundry users
http://drnicwilliams.com
http://starkandwayne.com
cell +1 (415) 860-2185
twitter @drnic








Re: Questions of how to add projects to Cloud Foundry Community repo

Amit Kumar Gupta
 

Hi Layne,

There are three main Cloud Foundry-related organizations in GitHub:
cloudfoundry, cloudfoundry-incubator, and cloudfoundry-community. Each
organization has many repositories. If there is a specific repository you
wish to contribute to, and open a issue on GitHub stating your intent, and
hopefully the core maintainers can discuss options with you. Make sure to
look for a repository with recent activity.

Cheers,
Amit

On Thu, May 12, 2016 at 3:15 AM, Peng, Layne <Layne.Peng(a)emc.com> wrote:

Hi,

I found there are three repos related to Cloud Foundry on GitHub, and I
would prefer to contribute some projects to the Cloud Foundry Community one.
So, whom should I contact? And what steps should I follow?

- Layne


Re: Ubuntu Xenial stemcell and rootfs plans

Daniel Mikusa
 

On Thu, May 12, 2016 at 11:59 AM, Danny Rosen <drosen(a)pivotal.io> wrote:

Thanks. This is helpful! I'd like to get a better understanding of the
following:

Why would an operator set their environment to be disconnected from the
internet if they wanted to enable their users to run arbitrary binaries via
a buildpack?
I don't think it would allow them to run arbitrary binaries, but it
would allow them to run arbitrary build packs. If you divorce the binaries
from the build pack, you can manage the binaries separately in a corporate-
IT-managed (not public) repository of binaries. Then users can use any
build pack they want, so long as it points to the blessed internal repo of
trusted binaries.

Dan


If an operator wanted to provide users the flexibility of executing
arbitrary binaries in buildpacks, custom buildpacks can be used in
an environment with internet access *or* by providing a proxy
<http://docs.cloudfoundry.org/buildpacks/proxy-usage.html> that would
allow custom buildpacks to be deployed
<http://docs.cloudfoundry.org/buildpacks/custom.html#deploying-with-custom-buildpacks>
with an app.




On Thu, May 12, 2016 at 11:37 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

See responses inline:

On Thu, May 12, 2016 at 9:04 AM, Danny Rosen <drosen(a)pivotal.io> wrote:

* One of the key value propositions of a buildpack is the lightweight
process to fork and customize a buildpack.
*The inclusion of binaries makes buildpack customization a much heavier
process and less end user friendly in a number of ways.*
-- I'm not sure I agree with this point and would like to understand
your reasoning.
I may be missing something, but it was my understanding that buildpacks
with binaries included must (unless all binaries are checked into git) be
added as admin buildpacks, which non-admin users of CF cannot do.
Therefore, if I am a simple user of Cloud Foundry, I cannot customize a
buildpack for my one-off need without involving an administrator to upload
and manage the one-off buildpack. If binary dependencies were instead
managed in the way Daniel proposes, the process would simply be to fork
the buildpack and specify that git repo when pushing: completely self-service,
without admin intervention, making it a lighter-weight process.

* For some of my customers the binary inclusion policies are too
restrictive.
-- It's hard for me to understand this point as I do not know your
customers' requirements. Would you mind providing details so we can better
understand their needs?
I've attempted to express that need previously here:
https://github.com/cloudfoundry/compile-extensions/issues/7. I don't view
this as a major issue, but I think it is something to consider if
buildpack binary management is being reconsidered.

Hope those additional details help.

Mike


Re: Ubuntu Xenial stemcell and rootfs plans

Danny Rosen
 

Thanks. This is helpful! I'd like to get a better understanding of the
following:

Why would an operator set their environment to be disconnected from the
internet if they wanted to enable their users to run arbitrary binaries via
a buildpack? If an operator wanted to provide users the flexibility of
executing arbitrary binaries in buildpacks, custom buildpacks can be
used in an environment with internet access *or* by providing a proxy
<http://docs.cloudfoundry.org/buildpacks/proxy-usage.html> that would
allow custom buildpacks to be deployed
<http://docs.cloudfoundry.org/buildpacks/custom.html#deploying-with-custom-buildpacks>
with an app.

On Thu, May 12, 2016 at 11:37 AM, Mike Youngstrom <youngm(a)gmail.com> wrote:

See responses inline:

On Thu, May 12, 2016 at 9:04 AM, Danny Rosen <drosen(a)pivotal.io> wrote:

* One of the key value propositions of a buildpack is the lightweight
process to fork and customize a buildpack.
*The inclusion of binaries makes buildpack customization a much heavier
process and less end user friendly in a number of ways.*
-- I'm not sure I agree with this point and would like to understand your
reasoning.
I may be missing something, but it was my understanding that buildpacks
with binaries included must (unless all binaries are checked into git) be
added as admin buildpacks, which non-admin users of CF cannot do.
Therefore, if I am a simple user of Cloud Foundry, I cannot customize a
buildpack for my one-off need without involving an administrator to upload
and manage the one-off buildpack. If binary dependencies were instead
managed in the way Daniel proposes, the process would simply be to fork
the buildpack and specify that git repo when pushing: completely self-service,
without admin intervention, making it a lighter-weight process.

* For some of my customers the binary inclusion policies are too
restrictive.
-- It's hard for me to understand this point as I do not know your
customers' requirements. Would you mind providing details so we can better
understand their needs?
I've attempted to express that need previously here:
https://github.com/cloudfoundry/compile-extensions/issues/7. I don't view
this as a major issue, but I think it is something to consider if
buildpack binary management is being reconsidered.

Hope those additional details help.

Mike


Re: Ubuntu Xenial stemcell and rootfs plans

Mike Youngstrom
 

See responses inline:

On Thu, May 12, 2016 at 9:04 AM, Danny Rosen <drosen(a)pivotal.io> wrote:

* One of the key value propositions of a buildpack is the lightweight
process to fork and customize a buildpack.
*The inclusion of binaries makes buildpack customization a much heavier
process and less end user friendly in a number of ways.*
-- I'm not sure I agree with this point and would like to understand your
reasoning.
I may be missing something, but it was my understanding that buildpacks with
binaries included must (unless all binaries are checked into git) be added as
admin buildpacks, which non-admin users of CF cannot do. Therefore, if I am
a simple user of Cloud Foundry, I cannot customize a buildpack for my one-off
need without involving an administrator to upload and manage the one-off
buildpack. If binary dependencies were instead managed in the way
Daniel proposes, the process would simply be to fork the buildpack and
specify that git repo when pushing: completely self-service, without
admin intervention, making it a lighter-weight process.

* For some of my customers the binary inclusion policies are too restrictive.
-- It's hard for me to understand this point as I do not know your
customers' requirements. Would you mind providing details so we can better
understand their needs?
I've attempted to express that need previously here:
https://github.com/cloudfoundry/compile-extensions/issues/7. I don't view
this as a major issue, but I think it is something to consider if
buildpack binary management is being reconsidered.

Hope those additional details help.

Mike


Re: Ubuntu Xenial stemcell and rootfs plans

Danny Rosen
 

* One of the key value propositions of a buildpack is the lightweight
process to fork and customize a buildpack.
*The inclusion of binaries makes buildpack customization a much heavier
process and less end user friendly in a number of ways.*
-- I'm not sure I agree with this point and would like to understand your
reasoning.

* For some of my customers the binary inclusion policies are too restrictive.
-- It's hard for me to understand this point as I do not know your
customers' requirements. Would you mind providing details so we can better
understand their needs?

On Wed, May 11, 2016 at 2:01 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

I really like the idea of finding a way to move away from bundling
binaries with the buildpacks while continuing to not require internet
access. My organization actually doesn't even use the binary-bundled
buildpacks for our two main platforms (Node and Java).

Some issues we have with the offline buildpacks in addition to those
already mentioned:

* One of the key value propositions of a buildpack is the lightweight
process to fork and customize a buildpack. The inclusion of binaries makes
buildpack customization a much heavier process and less end-user-friendly
in a number of ways.
* We require some java-buildpack binaries that are not packaged with the
java-buildpack because of licensing issues, etc.
* For some of my customers the binary inclusion policies are too
restrictive.

So, I agree with you 100%, Dan. I'd love to see more work in the
direction of not including binaries rather than making admin buildpack
selection more stack-specific.

Mike

On Wed, May 11, 2016 at 11:09 AM, Daniel Mikusa <dmikusa(a)pivotal.io>
wrote:

On Wed, May 11, 2016 at 9:45 AM, Mike Dalessio <mdalessio(a)pivotal.io>
wrote:

Hi Mike,

I totally agree with you on all points, but there are second-order
effects that are worth discussing and understanding, as they've influenced
my own thinking around the timing of this work.

Given the current state of automation in the Buildpacks Team's CI
pipelines, we could add a Xenial-based rootfs ("cflinuxfs3"?)
Could we please, please not call it `cflinuxfs3`? A very common question
I get is: what is `cflinuxfs2`, really? I then have to explain that it is
basically Ubuntu Trusty. That invariably results in the follow-up
question of why it's called `cflinuxfs2`, to which I have no good answer.

Since it would seem that this naming choice has resulted in confused
users, can we think of something that is more indicative of what you
actually get from the rootfs? I would throw out cfxenialfs, as it indicates
it's CF, Xenial, and a file system. This seems more accurate, as the rootfs
isn't really about "linux" if you look at Linux as being the kernel [1].
It's about userland packages, and those are Ubuntu Trusty or Xenial based,
so it seems like the name should reflect that.

[1] - https://en.wikipedia.org/wiki/GNU/Linux_naming_controversy

to CF pretty quickly (and in fact have considered doing exactly this),
and could build precompiled Xenial binaries to add to each buildpack pretty
easily.

Unfortunately, this would result in doubling (or nearly so) the size of
almost all of the buildpacks, since the majority of a buildpack's payload
is the precompiled binaries for the rootfs. For example, we'd need to
compile several Ruby binaries for Xenial and vendor them in the buildpack
alongside the existing Trusty-based binaries.

Larger buildpacks result in longer staging times, longer deploy times
for CF, and are just generally a burden to ship around, particularly for
operators and users that don't actually want or need two stacks.

A second solution is to ship a separate buildpack for each stack (so,
ruby_buildpack_cflinuxfs2 versus ruby_buildpack_cflinuxfs3), and have
`bin/detect` only select itself if it's running on the appropriate stack.

But this would simply be forcing all buildpacks to plug a leaky
abstraction, and so I'd like to endeavor to make buildpacks simpler to
maintain.
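For context, the second option amounts to a stack-gated `bin/detect`. A
rough sketch follows (not the buildpacks team's actual implementation; the
function name and buildpack name are invented, and a real script would read
the `CF_STACK` environment variable set during staging rather than take an
argument):

```shell
# Hypothetical stack-gated detect logic. A cflinuxfs2-only buildpack
# claims the app on its own stack and declines on any other, so the
# platform moves on to the next buildpack in the admin list.
detect_for_stack() {
  if [ "$1" = "cflinuxfs2" ]; then
    echo "ruby_buildpack_cflinuxfs2"   # claim the app on our stack
    return 0
  fi
  return 1                             # decline on any other stack
}
```

Every buildpack repeating this gate is exactly the leaky abstraction being
objected to above.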

A third solution, and the one which I think we should pursue, is to ship
separate buildpacks for each stack, but make Cloud Controller aware of the
buildpack's "stackiness", and only invoke buildpacks that are appropriate
for that stack.

So, for example, the CC would know that the go_buildpack works on both
Trusty- and Xenial-based rootfses (as those binaries are statically
linked), and would also know that ruby_buildpack_cflinuxfs2 isn't valid for
applications running on cflinuxfs3.
Has there been any thought / consideration given to just not shipping
binaries with build packs? I know that we ship binaries with the build
packs so that they will work in offline environments, but doing so has the
obvious drawbacks you mentioned above (plus others). Have we considered
other ways to make the build packs work in offline environments? If the
build packs were just build pack code, it would make them *way* simpler to
manage and they could care much less about the stack.

One idea (sorry, it's only half-baked) for enabling offline support without
bundling binaries with the build packs would be to instead package
binaries into a separate job that runs as an HTTP server inside CF. Build
packs could then use that as an offline repo. Populating the repo could be
done in a few different ways: you could package binaries with the job, you
could have something (an errand, maybe?) that uploads binaries to the VM,
you could have the HTTP server set up as a caching proxy that fetches
them from somewhere else (perhaps only the proxy is allowed to access the
Internet), or the user could manually populate the files. It would also
give the user greater flexibility as to what versions of software are being
used in the environment, since build packs would no longer be limited by
the binary versions packaged with them, and would instead just pull from what
is available on the repo. It would also change upgrading build packs into a
task that is mostly just pulling down the latest binaries to the HTTP
server. You'd only need to upgrade build packs when there is a problem
with the build pack itself.
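As an equally half-baked sketch of that idea: a build pack could resolve
dependency download URLs against an operator-configured repo instead of a
bundled binary. Everything here is invented for illustration (the
`BUILDPACK_DEPS_REPO` variable, the host name, and the URL layout):

```shell
# All names hypothetical. The operator points staging at an internal,
# IT-managed binary repo; build packs construct download URLs against
# it rather than unpacking binaries bundled inside the build pack.
DEPS_REPO="${BUILDPACK_DEPS_REPO:-http://buildpack-deps.service.internal:8080}"

dependency_url() {  # dependency_url <name> <version>
  echo "${DEPS_REPO}/$1/$1-$2-linux-x64.tgz"
}

# At staging time a build pack would then fetch, e.g.:
#   curl -sfO "$(dependency_url ruby 2.3.1)"
```

Upgrading the platform's binaries then means updating the repo's contents,
not repackaging and re-uploading every build pack.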

Anyway, I like this option, so I wanted to throw it out there for
comment. Curious to hear thoughts from others. Happy to discuss further.

Thanks,

Dan




This work, however, will require some changes to CC's behavior, and
that's the critical path work that hasn't been scoped or prioritized yet.

Hope this helps everyone understand some of the concerns, and hopefully
explains why we haven't just shipped a Xenial-based stack.

-m


On Tue, May 10, 2016 at 1:34 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

I may not have anything that qualifies as compelling, but here are
some of the reasons I've got:

* If we skip Xenial, that gives at most one year to transition from
Trusty to a 2018.04-based rootfs. Let's say it takes 6 months to get the
new rootfs into our customers' hands and for everyone to be comfortable
enough with it to make it the default. I don't think 6 months is enough
time for my users to naturally transition all of their applications, via
pushes and restages, to the new rootfs. The more time we have with the new
rootfs as the default, the less I will need to bother my customers to test
before I force them to change.

* Xenial uses OpenSSL 1.0.2. Improving security by not statically
compiling OpenSSL into Node would be nice.

* With the Lucid rootfs, it became difficult after a while to find
pre-built libraries for Lucid. This put an increased burden on me to identify
and provide Lucid-compatible builds of some common tools. One example
is wkhtmltopdf, a commonly used tool in my organization.

I think the biggest thing for me is that the move from Lucid to Trusty
was a nightmare for me and my customers. Though better planning and adding
a couple more months to the process would help, giving my users a couple
of years to migrate would be better. :)

Mike

On Mon, May 9, 2016 at 2:05 PM, Danny Rosen <drosen(a)pivotal.io> wrote:

Hey Mike,

Thanks for reaching out. We've discussed supporting Xenial recently
but have had trouble identifying compelling reasons to do so. Our current
version of the rootfs is supported until April 2019 [1], and while we do not
plan on waiting until March 2019 :) we want to understand compelling
reasons to go forward with the work sooner rather than later.


On Mon, May 9, 2016 at 12:47 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Ubuntu Xenial Xerus was released a few weeks ago. Any plans to
incorporate Xenial into the platform? Stemcells and/or a new rootfs?

The recent Lucid-to-Trusty rootfs fire drill was frustrating to my
customers. I'm hoping that this year we can get a Xenial rootfs out
loooong before Trusty support ends, so I don't have to put another tight
deadline on my customers to test and move.

Thoughts?

Thanks,
Mike


Re: How to install things on specific APP container

Michal Tekel
 

Hi,

Depending on which buildpack you use, it might be easier or more complicated
to launch custom scripts at app runtime. In these scripts you can install
packages, but only in "userspace", that is, not as root. This is possible for
various Ubuntu packages, but it involves manual resolution of dependencies,
all of which need to be installed in the same userspace.

In our case we have run nmap to do a port scan from within the app container
(to verify what is reachable by deployed apps). We used an install
script [1], which we added into bin/post_compile (Python buildpack); it
runs at the end of staging and puts the installed packages into the final
app droplet, making them available inside the app container on launch. We
then run the scan using another script [2], where we explicitly set
LD_LIBRARY_PATH to point to the dependencies that we have also installed in
"userspace".

This is quite cumbersome, but at least it can be done this way. Some other
PaaSes support direct installation of package dependencies in their
buildpacks [3].

[1]
https://github.com/alphagov/paas-cf/blob/c0db1e38a9294112b8ecbfd7e0eee3dea5cf94ac/tests/example-apps/port-scan/nmap_portable.sh
[2]
https://github.com/alphagov/paas-cf/blob/c0db1e38a9294112b8ecbfd7e0eee3dea5cf94ac/tests/example-apps/port-scan/scan.sh
[3] https://docs.tsuru.io/stable/using/python.html - see requirements.apt
file description
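For readers who just want the shape of the trick without reading the full
scripts, the approach boils down to extracting a package into the droplet at
staging time and wiring up the loader paths at launch. The sketch below is
illustrative only (the paths and the .deb file name are examples; see the
linked scripts for the working version):

```shell
# Extract a .deb into the droplet's vendor dir instead of installing
# it system-wide (no root needed), then point PATH and the dynamic
# loader at the extracted tree so the tool runs inside the container.
PKG_DIR="${HOME}/app/vendor/nmap"
mkdir -p "${PKG_DIR}"
# dpkg -x nmap_6.40-4_amd64.deb "${PKG_DIR}"   # extract, don't install

export PATH="${PKG_DIR}/usr/bin:${PATH}"
export LD_LIBRARY_PATH="${PKG_DIR}/usr/lib:${LD_LIBRARY_PATH:-}"
```

The manual part is that every shared library the extracted binary links
against must also be extracted into the same tree.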

On 7 May 2016 at 03:52, Stanley Shen <meteorping(a)gmail.com> wrote:

Yes, the file is actually stored in a database; we don't rely on the FS of
the container. We just want to do a virus scan and other checks before we
accept it and store it in the database.


[abacus] Separate time-based from discrete usage metrics

Hristo Iliev
 

Hi,

We're trying to fix Abacus issue 88: Missing aggregated usage for the running application [1].

Background
=========

See the jsdelfino comment in the GitHub issue [2]. TL;DR: Resource providers have to send a 'ping' doc per month for time-based metrics.

Proposed solution
==============

We decided to implement a solution in Abacus that frees the usage providers from sending the 'ping' submission.

To fix the issue we decided to:
1. Distinguish between time-based (linux-container) and discrete usage metrics (the rest basically)
2. Store the time-based metrics in a separate DB(s)

We already drafted a proposal for adding measurement type in the usage plans with PR #320 [3].

We're about to spike on storing the time-based metrics in their own database, but we wanted to get the community's opinion on the topic.

Motivation
========

The discrete usage submitted to Abacus is:
* stored in partitioned databases, due to their size/number
* like an event log, storing the history of the usage/resources

In contrast, the current time-based metrics are:
* limited in number (usually around 2 million on a loaded CF system)
* storing just the app resource usage state (GB/h consumed so far, GB/h currently being consumed)

Therefore it looks like a good idea to separate the two usage metrics types and store the time-based metrics in a separate database. This will allow us not only to solve the issue, but also to store and query the data more effectively.

We may still need to maintain two databases and swap new/old (now-irrelevant) metrics at month boundaries to reduce the DB size.
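To make the split concrete, here is a hedged sketch of how usage documents
could be routed to the two stores (the database naming scheme is invented;
the real layout would come out of the spike):

```shell
# Hypothetical routing of usage docs: discrete usage keeps its
# partitioned event-log databases; time-based usage goes to a small
# per-month state database that can be swapped at month boundaries.
db_for_usage() {  # db_for_usage <metric-type> <month: YYYYMM>
  if [ "$1" = "time-based" ]; then
    echo "abacus-timebased-$2"   # small state DB, swapped monthly
  else
    echo "abacus-usage-$2"       # partitioned event log
  fi
}
```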


Regards,
Hristo & Adriana

[1] https://github.com/cloudfoundry-incubator/cf-abacus/issues/88
[2] https://github.com/cloudfoundry-incubator/cf-abacus/issues/88#issuecomment-148498164
[3] https://github.com/cloudfoundry-incubator/cf-abacus/pull/320


Questions of how to add projects to Cloud Foundry Community repo

Layne Peng
 

Hi,

I found there are three repos related to Cloud Foundry on GitHub, and I would prefer to contribute some projects to the Cloud Foundry Community one. So, whom should I contact? And what steps should I follow?

- Layne
