Re: [abacus] Refactor Aggregated Usage and Aggregated Rated Usage data model

Saravanakumar A. Srinivasan
 

c) a middle-ground approach where we'll store the aggregated usage per app in separate docs, but maintain the aggregated usage at the upper levels (org, space, resource, plan) in the parent doc linking the app usage docs together, and explore what constraints or limitations that would impose on our ability to trigger real-time usage limit alerts at any org, space, resource, plan, app, etc. level.
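
To make the middle-ground layout a bit more concrete, here is a rough
TypeScript sketch of the intended doc shapes (field names here are
illustrative only, not the actual Abacus schema):

    // Hypothetical shapes: one doc per app, plus a parent doc that keeps
    // the org/space/resource/plan level aggregates and links the per-app
    // docs together instead of embedding them.
    interface MetricUsage {
      metric: string;
      quantity: number;
    }

    interface AppUsageDoc {
      app_usage_doc_id: string;
      consumer_id: string;            // the app
      resource_id: string;
      plan_id: string;
      aggregated_usage: MetricUsage[];
    }

    interface OrgAggregationDoc {
      organization_id: string;
      resources: {
        resource_id: string;
        plans: { plan_id: string; aggregated_usage: MetricUsage[] }[];
      }[];
      spaces: {
        space_id: string;
        app_usage_doc_ids: string[];  // links to the separate per-app docs
      }[];
    }
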
As a first step (refer to [1] for more details) toward refactoring the usage data model using the middle-ground approach, we have removed the Usage Rating Service from the Abacus pipeline (refer to the commit at [2]) and moved the entire rating implementation from the Usage Rating Service to the Usage Aggregator (refer to the commit at [3]).

With these commits, if you are using Abacus, be aware that the Abacus pipeline has become shorter and you have one fewer application (the Usage Rating Service) to manage.

[1] https://github.com/cloudfoundry-incubator/cf-abacus/issues/184
[2] https://github.com/cloudfoundry-incubator/cf-abacus/commit/1488e1ae2e4547a010151ad2245f3a3f1ff2e488
[3] https://github.com/cloudfoundry-incubator/cf-abacus/commit/c661b7bdd35e70e985583570cb9920b90ced44a8


Re: Dev and Production environment inconsistent

CF Runtime
 

Hi Juan,

That is correct. The "production" flag on apps in the CC should not be
used. Instead, you should push apps to different spaces, with separate
domains associated with each app/space, to create a staging/production
pattern as you described.

Best,
Zak Auerbach, CF Release Integration

On Fri, Nov 27, 2015 at 4:31 AM, Juan Antonio Breña Moral <
bren(a)juanantonio.info> wrote:

Hi Alex,

when I get the summary from an app:
http://apidocs.cloudfoundry.org/214/apps/get_app_summary.html

I see a deprecated element (example: production: false), so is the best
practice to create virtual environments using spaces?

Example:

+ CERT
+ PRE
+ PRO

Is that the idea with spaces?

Juan Antonio


Re: Passwords visible in infrastructure logs

Amit Kumar Gupta
 

Hey Momchil,

Do you know whether it's the DEA or Warden that's logging that sensitive
data when you say "runner"?

I would recommend opening issues against the relevant projects:

API: https://github.com/cloudfoundry/cloud_controller_ng/issues
DEA or Warden: https://github.com/cloudfoundry/dea_ng/issues or
https://github.com/cloudfoundry/warden/issues

As for NATS, you may be able to change the logging level? Alternatively,
NATS is not a Cloud Foundry project but you could ask over there about
encrypting log output: https://github.com/nats-io/gnatsd

In Pivotal's production environments, we run 100% on Diego, so we are not
concerned with DEA/Warden logging, and this move also removes NATS from
flows like create-user-provided-service. CC is likely still an issue, so
it would be a good one to raise against their GitHub project.

Best,
Amit

On Fri, Dec 4, 2015 at 2:09 AM, Momchil Atanassov <momchil.atanassov(a)sap.com>
wrote:
Hi,

We are using the `syslog_daemon_config` property to stream all of our CF
infrastructure logs to an external stack for later processing.

We have noticed that operations like `cf create-user-provided-service`,
`cf bind-service`, and others are logged by multiple components in CF. That
would normally not be a problem, except that these commands often involve
passwords and those passwords get logged as well, ending up in the log
files on the VM and the target log processing stack, which allows operators
of the system to view end-user passwords.

We have noticed that the following jobs are responsible for the logs:

* api
* runner
* nats

Increasing the log level from the default `debug` / `debug2` to `info`
solves the problem for the first two, at the cost of making troubleshooting
tasks more difficult on the system.
The last one can only be solved by removing the `nats_stream_forwarder`
component from the `nats` job, again making troubleshooting more difficult.

I believe the ideal solution is to have those components not log the
payload of commands holding confidential information. Maybe they could
replace it with some pattern.
This would help for the first two but might not help for nats, where some
other means would be needed (encryption of the private content?).
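
For what it's worth, here is a minimal sketch of the kind of payload
redaction I mean (TypeScript, purely illustrative; the real components are
Ruby/Go, and the key names below are just examples):

    // Replace likely credential fields in a payload with a fixed pattern
    // before the payload is logged. Key names here are examples only.
    const SENSITIVE_KEYS = ['password', 'credentials', 'api_key', 'token'];

    function redact(payload: unknown): unknown {
      if (Array.isArray(payload)) {
        return payload.map(redact);
      }
      if (payload !== null && typeof payload === 'object') {
        const out: Record<string, unknown> = {};
        for (const [key, value] of Object.entries(payload as Record<string, unknown>)) {
          out[key] = SENSITIVE_KEYS.includes(key.toLowerCase())
            ? '[REDACTED]'
            : redact(value);
        }
        return out;
      }
      return payload;
    }

    // redact({ name: 'my-ups', credentials: { password: 's3cret' } })
    // -> { name: 'my-ups', credentials: '[REDACTED]' }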

How are you solving this issue on your productive system? What are your
thoughts on this matter?

Thanks in advance!

Regards,
Momchil Atanassov


Re: Garden Port Assignment Story

Mike Youngstrom
 

Thanks for the update Will. I'll keep waiting patiently. :)

On Fri, Dec 4, 2015 at 10:44 AM, Will Pragnell <wpragnell(a)pivotal.io> wrote:

Hi Mike,

Just to let you know we've not forgotten this. We're giving the matter
some thought, and we'll report back here with a proposal once we've figured
out some of the fiddly details.

Thanks,
Will

On 24 November 2015 at 15:19, Mike Youngstrom <youngm(a)gmail.com> wrote:

Yes Will, that summary is essentially correct. But, for even more
clarity, let me restate the complete story again and the reason I want
92085170 to work across stemcell upgrades. :)

Today, if NATS goes down, after 2 minutes the routers will drop their
routing tables and my entire CF deployment goes down. The routers behave
this way because of an experience Dieu had [0]. I don't like this; I would
prefer for the routers not to drop their routing tables if they cannot
connect to NATS. Therefore, the routing team is adding
'prune_on_config_unavailable'. I plan to set this to false to make my
deployment less sensitive to NATS failure. In doing so I am incurring more
risk of misrouted stale routes. I am hoping that 92085170 will help reduce
some of that risk. Since one of the times I personally have experienced
stale-route misrouting was during a deploy, I hope that Garden will
consider a port selection technique that will help ensure uniqueness
across stemcell upgrades, something we frequently do as part of a deploy.

Consequently a stateless solution like random assignment or a consistent
hash will work across stemcell upgrades.

Thanks,
Mike

[0]
https://groups.google.com/a/cloudfoundry.org/d/msg/vcap-dev/yuVYCZkMLG8/7t8FHnFzWEsJ

On Tue, Nov 24, 2015 at 3:44 AM, Will Pragnell <wpragnell(a)pivotal.io>
wrote:

Hi Mike,

What I think you're saying is that once the new
`prune_on_config_unavailable` property is available in the router, and if
it's set to `false`, there's a case when NATS is not reachable from the
router in which potentially stale routes will continue to exist until the
router can reach NATS again. Is that correct?

(Sorry to repeat you back at yourself, just want to make sure I've
understood you correctly.)

Will

On 23 November 2015 at 19:02, Mike Youngstrom <youngm(a)gmail.com> wrote:

Hi Will,

Though I see the main reason for the issue assumes a healthy running
environment, I've also experienced a deploy-related issue that more unique
port assignment could help defend against. During one of our deploys the
routers finished deploying before the DEAs. When the DEAs started rolling,
for some reason some of our routers stopped getting route updates from
NATS. This caused their route tables to go stale, and as apps started
rolling, new apps started getting assigned ports previously held by other
apps, which caused a number of our hosts to be misrouted.

Though the root cause was probably some bug in the NATS client in
GoRouter, the runtime team had apparently experienced a similar issue in
the past [0], which caused them to implement code that would delete stale
routes even when a router couldn't connect to NATS. The Router team is now
planning to optionally remove this failsafe [1]. I'm hoping that with the
removal of this failsafe (which I'm planning to take advantage of) this
tracker story will help prevent the problem we experienced before from
happening again.

If the ports simply reset on a stemcell upgrade, this story provides no
defense against the problem we had before.

Does that make sense Will?

Mike

[0]
https://groups.google.com/a/cloudfoundry.org/d/msg/vcap-dev/yuVYCZkMLG8/7t8FHnFzWEsJ
[1] https://www.pivotaltracker.com/story/show/108659764

On Mon, Nov 23, 2015 at 11:11 AM, Will Pragnell <wpragnell(a)pivotal.io>
wrote:

Hi Mike,

What's the motivation for wanting rolling port assignment to persist
across e.g. stemcell upgrade? The motivation for this story is to prevent
stale routes from sending traffic to the wrong containers. Our assumption
is that stale routes won't ever exist for anything close to the amount of
time it takes BOSH to destroy and recreate a VM. Have we missed something
in making that assumption?

On your second point, I see your concern. We've talked about the
possibility of implementing FIFO semantics on free ports (when a port that
was in use becomes free, it goes to the end of the queue of available
ports) to decrease the chances of traffic reaching the wrong container as
far as possible. It's possible that the rolling ports approach is "good
enough" though. We're still trying to understand whether that's actually
the case.
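
To make the FIFO idea above concrete, here is a rough sketch (TypeScript,
with a made-up port range; not how Garden actually implements anything
today):

    // Free ports are handed out from the front of a queue; a released
    // port goes to the back, so a just-freed port is reused as late as
    // possible, shrinking the window in which a stale route could reach
    // a new container on the old port.
    class FifoPortPool {
      private free: number[] = [];

      constructor(start: number, count: number) {
        for (let i = 0; i < count; i++) {
          this.free.push(start + i);
        }
      }

      acquire(): number {
        const port = this.free.shift();
        if (port === undefined) {
          throw new Error('no free ports');
        }
        return port;
      }

      release(port: number): void {
        this.free.push(port); // back of the queue, not the front
      }
    }

    // const pool = new FifoPortPool(61001, 5000);
    // const p = pool.acquire(); /* ... */ pool.release(p);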

The consistent hashing idea is interesting, but a few folks have
suggested that with a relatively small range of available ports (5000 by
default) the chances of collision are actually higher than we'd want.
I'll see if someone wants to lay down some maths to give that idea some
credence.
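
(As a rough sketch of that maths, assuming ports were drawn independently
and uniformly at random from a pool of N = 5000, the birthday-problem
approximation gives P(collision among n containers) ≈ 1 - e^(-n(n-1)/(2N)),
so even n = 100 simultaneously assigned ports would already give roughly
1 - e^(-0.99) ≈ 0.63.)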

Cheers,
Will

On 23 November 2015 at 08:47, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Since I cannot comment in tracker I'm starting this thread to discuss
story:
https://www.pivotaltracker.com/n/projects/1158420/stories/92085170

Some comments I have:

* Although I can see how a rolling port assignment could be
maintained across garden/diego restarts, I'd also like the story to ensure
that the rolling port assignments get maintained across a stemcell upgrade
without the need for persistent disks on each cell. Perhaps etcd?

* Another thing to keep in mind: although a rolling port value may
not duplicate ports 100% of the time for a short-lived container, for a
long-lived container it seems to me that a rolling port assignment becomes
no more successful than a random port assignment if the container lives
long enough for the port assignment loop to wrap around a few times.

* Has there been any consideration given to using an incremental
consistent hash of the app_guid to assign ports? A consistent hash would
have the benefit of being stateless. It would also have the benefit of
increasing the likelihood that if a request is sent to a stale route it
may be to the correct app anyway.
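
To make that last suggestion concrete, here's a minimal sketch of what I
have in mind (TypeScript; FNV-1a is used purely as an example hash, the
port range is made up, and this is a plain deterministic hash rather than
a full consistent-hash ring):

    // Derive a port deterministically from the app guid plus the instance
    // index, so the same app instance tends to get the same port across
    // restarts and stemcell upgrades, with no state kept on the cell.
    const PORT_RANGE_START = 61001; // example values only
    const PORT_RANGE_SIZE = 5000;

    function fnv1a(input: string): number {
      let hash = 0x811c9dc5;
      for (let i = 0; i < input.length; i++) {
        hash ^= input.charCodeAt(i);
        hash = Math.imul(hash, 0x01000193) >>> 0;
      }
      return hash >>> 0;
    }

    function portFor(appGuid: string, instanceIndex: number, attempt = 0): number {
      // 'attempt' lets the caller probe forward if the preferred port is taken.
      const h = fnv1a(appGuid + '/' + instanceIndex + '/' + attempt);
      return PORT_RANGE_START + (h % PORT_RANGE_SIZE);
    }

    // portFor('some-app-guid', 0)     -> preferred port for instance 0
    // portFor('some-app-guid', 0, 1)  -> next candidate if that port is taken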

Thoughts?

Mike


[cf-env] [abacus] Changing how resources are organized

dmangin <dmangin@...>
 

With the current way resource_ids are used for metering, accumulation,
and aggregation, we have to have a resource definition for every
resource_id that people are using. Then, when we have custom buildpacks,
we will have to start creating a resource definition for each one of them,
inflating the number of resource definitions that we have. So we will be
adding a new field called resource_type_id on top of the resource_id; the
resource_type_id will be the resource definition Abacus uses, allowing
custom buildpacks to fall under the resource type. So we are thinking of
changing the hierarchy so that resource_id sits underneath
resource_type_id.
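
A rough sketch of the shape we have in mind (TypeScript, with made-up
field values; this is not the current Abacus schema):

    // Usage docs would carry both ids: resource_type_id points at the
    // single resource definition Abacus meters against, while resource_id
    // stays specific, e.g. the particular custom buildpack that was used.
    interface UsageDoc {
      organization_id: string;
      space_id: string;
      resource_type_id: string; // e.g. 'runtime' -> one shared definition
      resource_id: string;      // e.g. 'my-custom-java-buildpack'
      plan_id: string;
      measured_usage: { measure: string; quantity: number }[];
    }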

When we implement this, we will have to change Abacus to use the
resource_type_id rather than the resource_id to do the calculations. What
changes will this cause in Abacus right now, and how will this affect the
previous reports Abacus has created?

Regards,
Daniel Mangin





Re: Garden Port Assignment Story

Will Pragnell <wpragnell@...>
 

Hi Mike,

Just to let you know we've not forgotten this. We're giving the matter some
thought, and we'll report back here with a proposal once we've figured out
some of the fiddly details.

Thanks,
Will

On 24 November 2015 at 15:19, Mike Youngstrom <youngm(a)gmail.com> wrote:

Yes Will, that summary is essentially correct. But, for even more clarity,
let me restate the complete story again and the reason I want 92085170 to
work across stemcell upgrades. :)

Today, if NATS goes down, after 2 minutes the routers will drop their
routing tables and my entire CF deployment goes down. The routers behave
this way because of an experience Dieu had [0]. I don't like this; I would
prefer for the routers not to drop their routing tables if they cannot
connect to NATS. Therefore, the routing team is adding
'prune_on_config_unavailable'. I plan to set this to false to make my
deployment less sensitive to NATS failure. In doing so I am incurring more
risk of misrouted stale routes. I am hoping that 92085170 will help reduce
some of that risk. Since one of the times I personally have experienced
stale-route misrouting was during a deploy, I hope that Garden will
consider a port selection technique that will help ensure uniqueness
across stemcell upgrades, something we frequently do as part of a deploy.

Consequently a stateless solution like random assignment or a consistent
hash will work across stemcell upgrades.

Thanks,
Mike

[0]
https://groups.google.com/a/cloudfoundry.org/d/msg/vcap-dev/yuVYCZkMLG8/7t8FHnFzWEsJ

On Tue, Nov 24, 2015 at 3:44 AM, Will Pragnell <wpragnell(a)pivotal.io>
wrote:

Hi Mike,

What I think you're saying is that once the new
`prune_on_config_unavailable` property is available in the router, and if
it's set to `false`, there's a case when NATS is not reachable from the
router in which potentially stale routes will continue to exist until the
router can reach NATS again. Is that correct?

(Sorry to repeat you back at yourself, just want to make sure I've
understood you correctly.)

Will

On 23 November 2015 at 19:02, Mike Youngstrom <youngm(a)gmail.com> wrote:

Hi Will,

Though I see the main reason for the issue assumes a healthy running
environment, I've also experienced a deploy-related issue that more unique
port assignment could help defend against. During one of our deploys the
routers finished deploying before the DEAs. When the DEAs started rolling,
for some reason some of our routers stopped getting route updates from
NATS. This caused their route tables to go stale, and as apps started
rolling, new apps started getting assigned ports previously held by other
apps, which caused a number of our hosts to be misrouted.

Though the root cause was probably some bug in the NATS client in
GoRouter, the runtime team had apparently experienced a similar issue in
the past [0], which caused them to implement code that would delete stale
routes even when a router couldn't connect to NATS. The Router team is now
planning to optionally remove this failsafe [1]. I'm hoping that with the
removal of this failsafe (which I'm planning to take advantage of) this
tracker story will help prevent the problem we experienced before from
happening again.

If the ports simply reset on a stemcell upgrade, this story provides no
defense against the problem we had before.

Does that make sense Will?

Mike

[0]
https://groups.google.com/a/cloudfoundry.org/d/msg/vcap-dev/yuVYCZkMLG8/7t8FHnFzWEsJ
[1] https://www.pivotaltracker.com/story/show/108659764

On Mon, Nov 23, 2015 at 11:11 AM, Will Pragnell <wpragnell(a)pivotal.io>
wrote:

Hi Mike,

What's the motivation for wanting rolling port assignment to persist
across e.g. stemcell upgrade? The motivation for this story is to prevent
stale routes from sending traffic to the wrong containers. Our assumption
is that stale routes won't ever exist for anything close to the amount of
time it takes BOSH to destroy and recreate a VM. Have we missed something
in making that assumption?

On your second point, I see your concern. We've talked about the
possibility of implementing FIFO semantics on free ports (when a port that
was in use becomes free, it goes to the end of the queue of available
ports) to decrease the chances of traffic reaching the wrong container as
far as possible. It's possible that the rolling ports approach is "good
enough" though. We're still trying to understand whether that's actually
the case.

The consistent hashing idea is interesting, but a few folks have
suggested that with a relatively small range of available ports (5000 by
default) the chances of collision are actually higher than we'd want.
I'll see if someone wants to lay down some maths to give that idea some
credence.

Cheers,
Will

On 23 November 2015 at 08:47, Mike Youngstrom <youngm(a)gmail.com> wrote:

Since I cannot comment in tracker I'm starting this thread to discuss
story:
https://www.pivotaltracker.com/n/projects/1158420/stories/92085170

Some comments I have:

* Although I can see how a rolling port assignment could be maintained
across garden/diego restarts, I'd also like the story to ensure that the
rolling port assignments get maintained across a stemcell upgrade without
the need for persistent disks on each cell. Perhaps etcd?

* Another thing to keep in mind: although a rolling port value may
not duplicate ports 100% of the time for a short-lived container, for a
long-lived container it seems to me that a rolling port assignment becomes
no more successful than a random port assignment if the container lives
long enough for the port assignment loop to wrap around a few times.

* Has there been any consideration given to using an incremental
consistent hash of the app_guid to assign ports? A consistent hash would
have the benefit of being stateless. It would also have the benefit of
increasing the likelihood that if a request is sent to a stale route it
may be to the correct app anyway.

Thoughts?

Mike


Re: - Reducing number of instances on a cloud foundry deployment

Kinjal Doshi
 

Thanks a lot for directing me towards this project. When I try to deploy
CF using this project, I run into some issues, as below:


1. 'bosh prepare deployment' fails because of a checksum failure on the
package buildpack_python
2. Alternatively, I tried to upload the release as directed in the README
of this project, but there is a version mismatch: the README suggests
uploading version 196 whereas the deployment manifest mentions version 222.
3. There are also some spiff merge issues, as I get the below error when
using 'bosh deploy'

I have been able to resolve issues 1 and 2 but need help with point 3, as
I am not well versed with spiff. Kindly help me resolve this error:

ubuntu(a)ip-172-31-38-159:~/cf-boshworkspace$ bosh deploy
Generating deployment manifest
Command failed: 'spiff merge
/home/ubuntu/cf-boshworkspace/templates/cf/cf-deployment.yml
/home/ubuntu/cf-boshworkspace/templates/cf/cf-resource-pools.yml
/home/ubuntu/cf-boshworkspace/templates/tiny/cf-tiny-scalable.yml
/home/ubuntu/cf-boshworkspace/templates/cf-uaa-port.yml
/home/ubuntu/cf-boshworkspace/templates/cf-allow-services-access.yml
/home/ubuntu/cf-boshworkspace/templates/cf/cf-properties.yml
/home/ubuntu/cf-boshworkspace/templates/cf/cf-infrastructure-aws.yml
/home/ubuntu/cf-boshworkspace/templates/cf-properties.yml
/home/ubuntu/cf-boshworkspace/templates/tiny/cf-jobs-nfs.yml
/home/ubuntu/cf-boshworkspace/templates/tiny/cf-jobs-uaa.yml
/home/ubuntu/cf-boshworkspace/templates/tiny/cf-jobs-base.yml
/home/ubuntu/cf-boshworkspace/templates/parallel.yml
/home/ubuntu/cf-boshworkspace/templates/cf-no-ssl.yml
/home/ubuntu/cf-boshworkspace/templates/cf-secrets.yml
/home/ubuntu/cf-boshworkspace/templates/tiny/cf-use-nfs.yml
/home/ubuntu/cf-boshworkspace/templates/tiny/cf-use-haproxy.yml
/home/ubuntu/cf-boshworkspace/templates/cf-networking.yml
/home/ubuntu/cf-boshworkspace/.stubs/cf-aws-tiny.yml 2>&1'
2015/12/04 14:02:53 error generating manifest: unresolved nodes:
(( merge )) in
/home/ubuntu/cf-boshworkspace/templates/cf/cf-properties.yml
properties.cc.staging_upload_user
(( merge )) in
/home/ubuntu/cf-boshworkspace/templates/cf/cf-properties.yml
properties.cc.staging_upload_password
(( bulk_api_password )) in dynaml
properties.cc.internal_api_password
(( merge )) in
/home/ubuntu/cf-boshworkspace/templates/cf/cf-properties.yml
properties.cc.db_encryption_key
(( merge )) in
/home/ubuntu/cf-boshworkspace/templates/cf/cf-properties.yml
properties.cc.bulk_api_password

Thanks in Advance,
Kinjal

On Thu, Dec 3, 2015 at 12:37 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

The cf-boshworkspace project from Stark and Wayne:
https://github.com/cloudfoundry-community/cf-boshworkspace provides a
cf-aws-tiny.yml template, which I believe gets it down to 4 instances.

In principle, everything should be able to run on 1 VM, but my
understanding is there are port collisions that aren't configurable (yet),
and so 4 was the lower bound S&W found.

On Wed, Dec 2, 2015 at 4:33 AM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:

Hi,

I am looking at minimal-aws.yml in the release
https://github.com/cloudfoundry/cf-release and realized that this
deployment will consume 13 instances on AWS.

Is it possible to reduce the number of instances for this deployment?

Thanks a lot for the help in advance.

Regards,
Kinjal


Re: Java Buildpack v3.4

Christopher Frost
 

You are correct, my apologies.

The release is at
https://github.com/cloudfoundry/java-buildpack/releases/tag/v3.4

The link to the commit log is correct.

Chris.


Christopher Frost - Pivotal UK
Java Buildpack Team

On Fri, Dec 4, 2015 at 1:39 PM, James Bayer <jbayer(a)pivotal.io> wrote:

i'm happy to see the new debug, profiling and jmx functionality and all of
the community contributions. also happy to see spring boot 1.3 support
picked up so quickly.

the link to the release in your post pointed to java-buildpack 3.3, and i
think you meant for it to be this URL:
https://github.com/cloudfoundry/java-buildpack/releases/tag/v3.4

congrats on the release!

thanks,

james

On Fri, Dec 4, 2015 at 2:22 AM Christopher Frost <cfrost(a)pivotal.io>
wrote:


I'm pleased to announce the release of the java-buildpack, version 3.4. This
release focuses on developer diagnostic tools.

- JMX Support with cf ssh
- Debugging Support with cf ssh (via Mike Youngstrom
<https://github.com/cloudfoundry/java-buildpack/issues/238>)
- YourKit Profiling Support with cf ssh
- Improved Tomcat documentation (via Violeta Georgieva
<https://github.com/cloudfoundry/java-buildpack/pull/247>)
- Improved Tomcat testing (via Violeta Georgieva
<https://github.com/cloudfoundry/java-buildpack/pull/261>)
- Improved AppDynamics config (via Nikhil Katre
<https://github.com/cloudfoundry/java-buildpack/pull/245>)

For a more detailed look at the changes in 3.4, please take a look at
the commit log
<https://github.com/cloudfoundry/java-buildpack/compare/v3.3...v3.4>.
Packaged versions of the buildpack, suitable for use with create-
buildpack and update-buildpack, can be found attached to this release
<https://github.com/cloudfoundry/java-buildpack/releases/tag/v3.3>.
*Packaged Dependencies*

- AppDynamics Agent: 4.1.7_1
- GemFire 8.2.0
- GemFire Modules 8.2.0
- GemFire Modules Tomcat7 8.2.0
- GemFire Security 8.2.0
- Groovy: 2.4.5
- JRebel 6.3.0
- MariaDB JDBC: 1.3.2
- Memory Calculator (mountainlion): 2.0.1.RELEASE
- Memory Calculator (precise): 2.0.1.RELEASE
- Memory Calculator (trusty): 2.0.1.RELEASE
- New Relic Agent: 3.22.0
- OpenJDK JRE (mountainlion): 1.8.0_65
- OpenJDK JRE (precise): 1.8.0_65
- OpenJDK JRE (trusty): 1.8.0_65
- Play Framework JPA Plugin: 1.10.0.RELEASE
- PostgreSQL JDBC: 9.4.1206
- RedisStore: 1.2.0_RELEASE
- Spring Auto-reconfiguration: 1.10.0_RELEASE
- Spring Boot CLI: 1.3.0_RELEASE
- Tomcat Access Logging Support: 2.4.0_RELEASE
- Tomcat Lifecycle Support: 2.4.0_RELEASE
- Tomcat Logging Support: 2.4.0_RELEASE
- Tomcat: 8.0.29
- YourKit: 2015.15080


Christopher Frost - Pivotal UK
Java Buildpack Team


Re: Java Buildpack v3.4

James Bayer
 

i'm happy to see the new debug, profiling and jmx functionality and all of
the community contributions. also happy to see spring boot 1.3 support
picked up so quickly.

the link to the release in your post pointed to java-buildpack 3.3, and i
think you meant for it to be this URL:
https://github.com/cloudfoundry/java-buildpack/releases/tag/v3.4

congrats on the release!

thanks,

james

On Fri, Dec 4, 2015 at 2:22 AM Christopher Frost <cfrost(a)pivotal.io> wrote:


I'm pleased to announce the release of the java-buildpack, version 3.4. This
release focuses on developer diagnostic tools.

- JMX Support with cf ssh
- Debugging Support with cf ssh (via Mike Youngstrom
<https://github.com/cloudfoundry/java-buildpack/issues/238>)
- YourKit Profiling Support with cf ssh
- Improved Tomcat documentation (via Violeta Georgieva
<https://github.com/cloudfoundry/java-buildpack/pull/247>)
- Improved Tomcat testing (via Violeta Georgieva
<https://github.com/cloudfoundry/java-buildpack/pull/261>)
- Improved AppDynamics config (via Nikhil Katre
<https://github.com/cloudfoundry/java-buildpack/pull/245>)

For a more detailed look at the changes in 3.4, please take a look at the commit
log <https://github.com/cloudfoundry/java-buildpack/compare/v3.3...v3.4>.
Packaged versions of the buildpack, suitable for use with create-buildpack
and update-buildpack, can be found attached to this release
<https://github.com/cloudfoundry/java-buildpack/releases/tag/v3.3>.
*Packaged Dependencies*

- AppDynamics Agent: 4.1.7_1
- GemFire 8.2.0
- GemFire Modules 8.2.0
- GemFire Modules Tomcat7 8.2.0
- GemFire Security 8.2.0
- Groovy: 2.4.5
- JRebel 6.3.0
- MariaDB JDBC: 1.3.2
- Memory Calculator (mountainlion): 2.0.1.RELEASE
- Memory Calculator (precise): 2.0.1.RELEASE
- Memory Calculator (trusty): 2.0.1.RELEASE
- New Relic Agent: 3.22.0
- OpenJDK JRE (mountainlion): 1.8.0_65
- OpenJDK JRE (precise): 1.8.0_65
- OpenJDK JRE (trusty): 1.8.0_65
- Play Framework JPA Plugin: 1.10.0.RELEASE
- PostgreSQL JDBC: 9.4.1206
- RedisStore: 1.2.0_RELEASE
- Spring Auto-reconfiguration: 1.10.0_RELEASE
- Spring Boot CLI: 1.3.0_RELEASE
- Tomcat Access Logging Support: 2.4.0_RELEASE
- Tomcat Lifecycle Support: 2.4.0_RELEASE
- Tomcat Logging Support: 2.4.0_RELEASE
- Tomcat: 8.0.29
- YourKit: 2015.15080


Christopher Frost - Pivotal UK
Java Buildpack Team


Java Buildpack v3.4

Christopher Frost
 

I'm pleased to announce the release of the java-buildpack, version 3.4. This
release focuses on developer diagnostic tools.

- JMX Support with cf ssh
- Debugging Support with cf ssh (via Mike Youngstrom
<https://github.com/cloudfoundry/java-buildpack/issues/238>)
- YourKit Profiling Support with cf ssh
- Improved Tomcat documentation (via Violeta Georgieva
<https://github.com/cloudfoundry/java-buildpack/pull/247>)
- Improved Tomcat testing (via Violeta Georgieva
<https://github.com/cloudfoundry/java-buildpack/pull/261>)
- Improved AppDynamics config (via Nikhil Katre
<https://github.com/cloudfoundry/java-buildpack/pull/245>)

For a more detailed look at the changes in 3.4, please take a look at
the commit
log <https://github.com/cloudfoundry/java-buildpack/compare/v3.3...v3.4>.
Packaged versions of the buildpack, suitable for use with create-buildpack
and update-buildpack, can be found attached to this release
<https://github.com/cloudfoundry/java-buildpack/releases/tag/v3.3>.
*Packaged Dependencies*

- AppDynamics Agent: 4.1.7_1
- GemFire 8.2.0
- GemFire Modules 8.2.0
- GemFire Modules Tomcat7 8.2.0
- GemFire Security 8.2.0
- Groovy: 2.4.5
- JRebel 6.3.0
- MariaDB JDBC: 1.3.2
- Memory Calculator (mountainlion): 2.0.1.RELEASE
- Memory Calculator (precise): 2.0.1.RELEASE
- Memory Calculator (trusty): 2.0.1.RELEASE
- New Relic Agent: 3.22.0
- OpenJDK JRE (mountainlion): 1.8.0_65
- OpenJDK JRE (precise): 1.8.0_65
- OpenJDK JRE (trusty): 1.8.0_65
- Play Framework JPA Plugin: 1.10.0.RELEASE
- PostgreSQL JDBC: 9.4.1206
- RedisStore: 1.2.0_RELEASE
- Spring Auto-reconfiguration: 1.10.0_RELEASE
- Spring Boot CLI: 1.3.0_RELEASE
- Tomcat Access Logging Support: 2.4.0_RELEASE
- Tomcat Lifecycle Support: 2.4.0_RELEASE
- Tomcat Logging Support: 2.4.0_RELEASE
- Tomcat: 8.0.29
- YourKit: 2015.15080


Christopher Frost - Pivotal UK
Java Buildpack Team


Passwords visible in infrastructure logs

Momchil Atanassov
 

Hi,

We are using the `syslog_daemon_config` property to stream all of our CF infrastructure logs to an external stack for later processing.

We have noticed that operations like `cf create-user-provided-service`, `cf bind-service`, and others are logged by multiple components in CF. That would normally not be a problem, except that these commands often involve passwords and those passwords get logged as well, ending up in the log files on the VM and the target log processing stack, which allows operators of the system to view end-user passwords.

We have noticed that the following jobs are responsible for the logs:

* api
* runner
* nats

Increasing the log level from the default `debug` / `debug2` to `info` solves the problem for the first two, at the cost of making troubleshooting tasks more difficult on the system.
The last one can only be solved by removing the `nats_stream_forwarder` component from the `nats` job, again making troubleshooting more difficult.

I believe the ideal solution is to have those components not log the payload of commands holding confidential information. Maybe they could replace it with some pattern.
This would help for the first two but might not help for nats, where some other means would be needed (encryption of the private content?).

How are you solving this issue on your productive system? What are your thoughts on this matter?

Thanks in advance!

Regards,
Momchil Atanassov


Security and Governance for cloud foundry vm's

Surekha Bejgam (sbejgam) <sbejgam@...>
 

Hi All,

We are deploying the Cloud Foundry PaaS on an OpenStack IaaS. Currently all VMs deployed using BOSH have vcap as the username. DevOps on-call engineers need to log in to some of the BOSH VMs to diagnose issues. Since all DevOps engineers use the same username "vcap" with a private key to access the VMs, we are having a hard time figuring out who did what. Is there a standard way to deal with SSH security for BOSH VMs?

Any suggestions will help.

Thanks,
Surekha


Re: Questions about purge app usage event API

Hristo Iliev
 

Hi,

You can do one or more of these:
1) Start billing epoch just once
2) Compare real application state with the events and provide
compensation logic. We tried this in the Abacus cf-bridge prototype [1]
3) Help improve Abacus cf-bridge [2]

Probably the easiest thing to do is #1, since your implementation would be
much simpler and you won't need to synchronize polling of events with
purging (these might be the responsibility of different teams, which makes
things harder).

Even with all three of these approaches you might end up with problems.
The correct thing to do, imho, would be to implement the idea of Piotr
Przybylski [3]: continuously pull the events and store a copy in a
separate retention DB that can be used for audit, re-calculation, legal,
or other reasons.

Regards,
Hristo Iliev

[1]
https://github.com/cloudfoundry-incubator/cf-abacus/blob/master/lib/cf/bridge/src/index.js#L246
[2]
https://github.com/cloudfoundry-incubator/cf-abacus/tree/master/lib/cf/bridge
[3] https://github.com/cloudfoundry-incubator/cf-abacus/issues/30

2015-12-03 12:10 GMT+02:00 Nitta, Minoru <minoru.nitta(a)jp.fujitsu.com>:

Hi Hristo,



Thank you for your response. If the API does not handle the race
condition, it would not be realistic to create a billing epoch safely,
especially after providing a (commercial) service, because there are
customers' applications installed.

I would like to issue the purge API periodically, specifically once a
month, to calculate application working time easily. I can calculate
application working time during a month even if I do not issue the API,
but application start events will expire and be deleted after one month
(the default), so eventually I will not be able to calculate application
working time.

Regards,

Minoru Nitta



From: Hristo Iliev [mailto:hsiliev(a)gmail.com]
Sent: Monday, November 30, 2015 4:48 PM
To: Discussions about Cloud Foundry projects and the system overall. <
cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Re: Questions about purge app usage event API



Hi Minoru,

AFAIK the API does not handle the race condition and this is documented in
the link you provided: "There is the potential race condition if apps are
currently being started, stopped, or scaled"

The purging has to be called only once, before you start/connect your
billing infrastructure. Have a look at this blog for more detailed
explanation: https://www.cloudfoundry.org/how-to-bill-on-cloud-foundry/

Regards,

Hristo Iliev



2015-11-30 3:56 GMT+02:00 Nitta, Minoru <minoru.nitta(a)jp.fujitsu.com>:

Hi,

I have some questions about 'Purge and reseed App Usage Events' API.

https://apidocs.cloudfoundry.org/212/app_usage_events/purge_and_reseed_app_usage_events.html

I am wondering about app start events. The API populates new events for
started apps. This may cause problems if the API call and an app stop
occur at the same time (race condition), e.g. the app is stopped after the
API execution, which populates a new app start event, but the app stop
event never occurs.

How can I work around this problem? Or does Cloud Foundry handle such
race conditions internally, so that I do not have to consider a workaround
for them?

Regards,
Minoru



Re: Proposal: container networking for applications

Onsi Fakhouri <ofakhouri@...>
 

Great work all! Looking forward to the discussion on the doc!

Onsi

On Thu, Dec 3, 2015 at 10:02 AM, Jason Sherron <jsherron(a)pivotal.io> wrote:

Hi, CF-dev community members!

Our cross-company team is happy to present a proposal to support direct
container-to-container networking and communication. We aim to provide
value to developers and admins by enabling new capabilities while providing
network access controls, and by providing first-class network-operations
flexibility.

The problems
- The current network implementation in Cloud Foundry prevents developers
and admins from establishing secure, performant network communications
directly between containers. To support new service architectures,
customers often need fast, direct container-to-container communication
while maintaining granular control of network security in CF.
- Physical network configuration is inflexible with one addressing and
routing topology, while customers are demanding support for a variety of
network configurations and virtualization stacks, often driven by security
and IT standards.

The proposal
We propose an improved container networking infrastructure, rooted in two
principles: declarative network policy, and modular network topology. Our
goal is to allow developers and admins to define container-to-container
network graphs that make sense for their business in a high-level,
build-time manner, and then to map that logical topology onto supported
network stacks, enabled by the modular network capabilities in libnetwork
from the Docker project.

Help wanted
We specifically request feedback on potential service discovery mechanisms
to support this container-to-container capability. As containers and
microservices gain the ability to communicate directly, how should they
locate their peers or each other?

We invite your comments on all aspects of the proposal, here and in the
document.


https://docs.google.com/document/d/1zQJqIEk4ldHH5iE5zat_oKIK8Ejogkgd_lySpg_oV_s/edit?usp=sharing

Jason Sherron on behalf of the working group


Re: No suitable ServiceConnectorCreator found trying to connect to RabbitMQ

Jason Brown
 

It turned out to be dependencies. I removed the *amqp* stuff and added spring-rabbit and now it is working like a charm (still using your simplified bean declaration). Between Maven, Spring, and CF, I feel like it is pretty hard to nail down all the dependencies that are needed.


Proposal: container networking for applications

Jason Sherron
 

Hi, CF-dev community members!

Our cross-company team is happy to present a proposal to support direct
container-to-container networking and communication. We aim to provide
value to developers and admins by enabling new capabilities while providing
network access controls, and by providing first-class network-operations
flexibility.

The problems
- The current network implementation in Cloud Foundry prevents developers
and admins from establishing secure, performant network communications
directly between containers. To support new service architectures,
customers often need fast, direct container-to-container communication
while maintaining granular control of network security in CF.
- Physical network configuration is inflexible with one addressing and
routing topology, while customers are demanding support for a variety of
network configurations and virtualization stacks, often driven by security
and IT standards.

The proposal
We propose an improved container networking infrastructure, rooted in two
principles: declarative network policy, and modular network topology. Our
goal is to allow developers and admins to define container-to-container
network graphs that make sense for their business in a high-level,
build-time manner, and then to map that logical topology onto supported
network stacks, enabled by the modular network capabilities in libnetwork
from the Docker project.

Help wanted
We specifically request feedback on potential service discovery mechanisms
to support this container-to-container capability. As containers and
microservices gain the ability to communicate directly, how should they
locate their peers or each other?

We invite your comments on all aspects of the proposal, here and in the
document.

https://docs.google.com/document/d/1zQJqIEk4ldHH5iE5zat_oKIK8Ejogkgd_lySpg_oV_s/edit?usp=sharing

Jason Sherron on behalf of the working group


Re: Downloading Buildpack Bits from Cloud Foundry

Matthew Sykes <matthew.sykes@...>
 

The blob names in the store should match the buildpack key from the
buildpacks table in the CCDB. (The key is basically the buildpack guid
and the sha1 hash of the buildpack bits, joined by an underscore.)
If you can access the blob store directly, you can just get them out that
way.
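
In code terms, something like this tiny sketch (TypeScript/Node; the guid
and bits below are placeholders):

    // Blob name = buildpack guid + '_' + sha1 of the buildpack zip bits,
    // as described above.
    import { createHash } from 'crypto';

    function buildpackBlobKey(buildpackGuid: string, buildpackBits: Buffer): string {
      const sha1 = createHash('sha1').update(buildpackBits).digest('hex');
      return buildpackGuid + '_' + sha1;
    }

    // buildpackBlobKey('<buildpack-guid>', bits)
    // -> '<buildpack-guid>_<40-char sha1 hex>'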

The other option (if you have the credentials for the staging user and
direct access to a cloud controller) is to hit the endpoint the DEAs use:
`/v2/buildpacks/${buildpack-guid}/download`. The URLs are in the NATS
staging messages and look like this for bosh-lite:

http://upload-user:upload-password(a)10.244.0.134:9022/v2/buildpacks/f2f882e5-7295-4613-a749-2bf94ff45927/download

Hope that helps a bit.

On Thu, Dec 3, 2015 at 11:56 AM, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

Sorry - misread the question. You're looking for the actual buildpacks
instead of the droplets.

You can construct the file names from the information in the admin
buildpack table. I'll post details in a moment.

On Thu, Dec 3, 2015 at 11:51 AM, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

If you're on a new enough level, you can try the droplet download
endpoint [1].

[1]:
https://apidocs.cloudfoundry.org/225/apps/downloads_the_staged_droplet_for_an_app.html

On Thu, Dec 3, 2015 at 10:39 AM, Mohamed, Owais <
Owais.Mohamed(a)covisint.com> wrote:

Hello,

Is there any way to download the bits of an uploaded buildpack from a
Cloud Foundry installation? Due to an internal mess-up, we are not able to
have a one-to-one mapping from buildpack version to code version in SCM.

I do see packaged files related to buildpacks in the Cloud Foundry
blobstore, but I can't make out which file belongs to which buildpack
because GUIDs are used for all file names in the blob store.

Regards,
Owais


--
Matthew Sykes
matthew.sykes(a)gmail.com


--
Matthew Sykes
matthew.sykes(a)gmail.com


--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: Downloading Buildpack Bits from Cloud Foundry

Matthew Sykes <matthew.sykes@...>
 

Sorry - misread the question. You're looking for the actual buildpacks
instead of the droplets.

You can construct the file names from the information in the admin
buildpack table. I'll post details in a moment.

On Thu, Dec 3, 2015 at 11:51 AM, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

If you're on a new enough level, you can try the droplet download endpoint
[1].

[1]:
https://apidocs.cloudfoundry.org/225/apps/downloads_the_staged_droplet_for_an_app.html

On Thu, Dec 3, 2015 at 10:39 AM, Mohamed, Owais <
Owais.Mohamed(a)covisint.com> wrote:

Hello,

Is there any way to download the bits of an uploaded buildpack from a
Cloud Foundry installation? Due to an internal mess-up, we are not able to
have a one-to-one mapping from buildpack version to code version in SCM.

I do see packaged files related to buildpacks in the Cloud Foundry
blobstore, but I can't make out which file belongs to which buildpack
because GUIDs are used for all file names in the blob store.

Regards,
Owais


--
Matthew Sykes
matthew.sykes(a)gmail.com


--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: Downloading Buildpack Bits from Cloud Foundry

Matthew Sykes <matthew.sykes@...>
 

If you're on a new enough level, you can try the droplet download endpoint
[1].

[1]:
https://apidocs.cloudfoundry.org/225/apps/downloads_the_staged_droplet_for_an_app.html

On Thu, Dec 3, 2015 at 10:39 AM, Mohamed, Owais <Owais.Mohamed(a)covisint.com>
wrote:

Hello,

Is there any way to download the bits of an uploaded buildpack from a
Cloud Foundry installation? Due to an internal mess-up, we are not able to
have a one-to-one mapping from buildpack version to code version in SCM.

I do see packaged files related to buildpacks in the Cloud Foundry
blobstore, but I can't make out which file belongs to which buildpack
because GUIDs are used for all file names in the blob store.

Regards,
Owais


--
Matthew Sykes
matthew.sykes(a)gmail.com


Downloading Buildpack Bits from Cloud Foundry

Owais Mohamed
 

Hello,

Is there any way to download the bits of an uploaded buildpack from a Cloud Foundry installation? Due to an internal mess-up, we are not able to have a one-to-one mapping from buildpack version to code version in SCM.

I do see packaged files related to buildpacks in the Cloud Foundry blobstore but can't make out which file belongs to which buildpack because GUIDs are used for all file names in the blob store.

Regards,
Owais
