
Re: Firehose vs Pivotal Ops metrics

Qing Gong
 

Erik,

That's very helpful, thank you!

Qing


Re: Warden: Failed retrieving quota for uid=20002: Block device doesn't exist.

CF Runtime
 

Hi R M,

We think that this message may have been lost so we are resending it.

Are you able to deploy other applications? This error seems to stem from
mounted-disk problems. Could you check whether the disks are properly mounted on
your DEA?
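As a sketch (assuming shell access to the DEA VM; repquota and uid 20002 come from the quoted log below, and user quotas must be enabled on the mount), the mount and quota state can be checked like this:

```shell
# Warden reads quotas from the filesystem backing /var, so the backing
# block device must still exist.
DEV=$(df -P /var | awk 'NR==2 {print $1}')
echo "Filesystem backing /var: $DEV"
[ -b "$DEV" ] || echo "WARNING: $DEV is not a block device; quota lookups will fail"
# Warden shells out to repquota (see the quoted log); this reproduces
# the same lookup the staging step performs.
sudo repquota -u /var || echo "WARNING: repquota failed, matching the staging error"
```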

Thanks
Joseph & Dan
OSS Release Integration Team

On Fri, Aug 14, 2015 at 10:11 AM, R M <rishi.investigate(a)gmail.com> wrote:

Sorry for the repost (I emailed this question to
cf-dev(a)lists.cloudfoundry.org but not sure if it was sent)...

I am getting this error while trying to deploy a test app. It fails
during staging with this exception:

/=================================================/
2015-08-13 17:33:16.542443 Warden::Container::Linux pid=13619 tid=885a
fid=d6ee container/base.rb/dispatch:300
handle=18t6vhrf6d0,request={"bind_mounts"=>["#<Warden::Protocol::CreateRequest::BindMount:0x0002ab0a5e79c0>",
"#<Warden::Protocol::CreateRequest::BindMount:0x0002ab0a5ebca0>",
"#<Warden::Protocol::CreateRequest::BindMount:0x0002ab0a5e9860>"],
"rootfs"=>"/var/vcap/packages/rootfs_cflinuxfs2"},response={"handle"=>"18t6vhrf6d0"}
DEBUG -- create (took 9.700584)
2015-08-13 17:33:16.543524 Warden::Container::Linux pid=13619 tid=885a
fid=0ccd container/base.rb/write_snapshot:334 handle=18t6vhrf6d0 DEBUG --
Wrote snapshot in 0.000068
2015-08-13 17:33:16.543599 Warden::Container::Linux pid=13619 tid=885a
fid=0ccd container/base.rb/dispatch:300
handle=18t6vhrf6d0,request={"handle"=>"18t6vhrf6d0",
"limit_in_shares"=>512},response={"limit_in_shares"=>512} DEBUG --
limit_cpu (took 0.000289)
2015-08-13 17:33:16.553165 Warden::Container::Linux pid=13619 tid=885a
fid=f1ec container/spawn.rb/set_deferred_success:135 stdout=,stderr=Failed
retrieving quota for uid=20002: Block device doesn't exist.
WARN -- Exited with status 1 (0.008s):
[["/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/closefds/closefds",
"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/closefds/closefds"],
"/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/repquota/repquota",
"/var", "20002"]

/=================================================/

Any tips to debug this further would be greatly appreciated.

Thanks.


Re: CF NATS Event\Topic List

CF Runtime
 

Hi Owais,

We think this message may have been lost so we are sending it again.

We compiled one back in December; I can't think of anything that would have
changed since then.

https://docs.google.com/document/d/1PHXRc0QAXYPbh88Nu5o60wlsuIddEJhAxiAmiq7NgKk/edit?usp=sharing

Are you writing a component that gets BOSH-deployed alongside Cloud
Foundry?

Joseph and Dan
OSS Release Integration Team

On Thu, Aug 13, 2015 at 2:48 PM, Owais Mohamed <equa.monde(a)gmail.com> wrote:

Has anyone compiled an entire list of events or topics that are being
published and subscribed to on CF NATS?

This would be helpful for people writing components that need to be
triggered when certain events take place.
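As a stopgap, the subject set can also be discovered empirically by subscribing to the NATS wildcard subject; a sketch using the nats-sub tool from the Ruby nats gem (host and credentials are placeholders taken from your deployment manifest):

```shell
# Subscribe to every subject (">" is the NATS wildcard) and log whatever
# the CF components publish. nats-sub ships with the Ruby nats gem.
if command -v nats-sub >/dev/null; then
  nats-sub -s 'nats://nats:NATS_PASSWORD@10.0.0.10:4222' '>'
else
  echo "nats-sub not installed (gem install nats)"
fi
```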


Re: cannot replace the default buildpacks. Is it expected?

CF Runtime
 

When the cloud controller instances come up, they install any buildpacks
configured in the manifest. If you want a buildpack NOT to be replaced
automatically (because you've updated it yourself), you'll need to update
the 'locked' attribute of the buildpack.

http://apidocs.cloudfoundry.org/215/buildpacks/lock_or_unlock_a_buildpack.html

You will then need to unlock it before you are able to update it again.
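For example, to lock the php_buildpack so the installer jobs leave it alone (a sketch using the cf CLI's curl subcommand against the v2 endpoint linked above; the grep-based GUID extraction is illustrative only):

```shell
# Requires the cf CLI and a logged-in admin user.
command -v cf >/dev/null || { echo "cf CLI not found"; exit 0; }
# Look up the buildpack's GUID by name, then set its 'locked' attribute.
GUID=$(cf curl '/v2/buildpacks?q=name:php_buildpack' \
  | grep -o '"guid": "[^"]*"' | head -1 | cut -d'"' -f4)
cf curl "/v2/buildpacks/$GUID" -X PUT -d '{"locked": true}'
# To update the buildpack later, unlock it first:
# cf curl "/v2/buildpacks/$GUID" -X PUT -d '{"locked": false}'
```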

Joseph & Dies
CF OSS Release Integration Team

On Tue, Aug 25, 2015 at 10:25 AM, Guangcai Wang <guangcai.wang(a)gmail.com>
wrote:

steps:
1. replace php_buildpack
2. stop all components by "nova stop"
3. start all components by "nova start"
4. check php_buildpack

I expected it to be the version from step 1; however, step 4 showed the
default php_buildpack.


On Tue, Aug 25, 2015 at 10:10 PM, Daniel Mikusa <dmikusa(a)pivotal.io>
wrote:

What do you mean by "after restarting cf"? How are you doing that?
There's a lot of components, which one or ones are you restarting? What
steps are you running?

Dan


On Tue, Aug 25, 2015 at 3:37 AM, Guangcai Wang <guangcai.wang(a)gmail.com>
wrote:

Hi all,

We tried to update one of the default buildpacks to a new version. However,
after restarting cf, the old version comes back (see below). It seems all
the default buildpacks are re-installed with the default versions from the
release. Is this expected behavior?

Default buildpacks (cf 212)
buildpack          position  enabled  locked  filename
java_buildpack     1         true     false   java-buildpack-v3.0.zip
ruby_buildpack     2         true     false   ruby_buildpack-cached-v1.4.2.zip
nodejs_buildpack   3         true     false   nodejs_buildpack-cached-v1.3.4.zip
go_buildpack       4         true     false   go_buildpack-cached-v1.3.1.zip
python_buildpack   5         true     false   python_buildpack-cached-v1.3.5.zip
php_buildpack      6         true     false   php_buildpack-cached-v3.2.2.zip

Updated php_buildpack:
buildpack          position  enabled  locked  filename
java_buildpack     1         true     false   java-buildpack-v3.0.zip
ruby_buildpack     2         true     false   ruby_buildpack-cached-v1.4.2.zip
nodejs_buildpack   3         true     false   nodejs_buildpack-cached-v1.3.4.zip
go_buildpack       4         true     false   go_buildpack-cached-v1.3.1.zip
python_buildpack   5         true     false   python_buildpack-cached-v1.3.5.zip
php_buildpack      6         true     false   php_buildpack-cached-v4.0.0.zip

However, after restarting all cf VMs, I got the default php_buildpack
(php_buildpack-cached-v3.2.2.zip), not the updated version
(php_buildpack-cached-v4.0.0.zip):
buildpack          position  enabled  locked  filename
java_buildpack     1         true     false   java-buildpack-v3.0.zip
ruby_buildpack     2         true     false   ruby_buildpack-cached-v1.4.2.zip
nodejs_buildpack   3         true     false   nodejs_buildpack-cached-v1.3.4.zip
go_buildpack       4         true     false   go_buildpack-cached-v1.3.1.zip
python_buildpack   5         true     false   python_buildpack-cached-v1.3.5.zip
php_buildpack      6         true     false   php_buildpack-cached-v3.2.2.zip


I checked cloud_controller_ng.log. It shows that all the default
buildpacks (defined in the manifest) are installed every time cf is
restarted:

{"timestamp":1440476666.1351752,"message":"Installing buildpack
java_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70279906579220,"fiber_id":70279930973800,"process_id":1990,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476667.8460865,"message":"Installing buildpack
ruby_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70279906579220,"fiber_id":70279930973800,"process_id":1990,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476670.4481,"message":"Installing buildpack
nodejs_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70155356951340,"fiber_id":70155381353720,"process_id":1949,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476675.2146842,"message":"Installing buildpack
go_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70155356951340,"fiber_id":70155381353720,"process_id":1949,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476688.722021,"message":"Installing buildpack
python_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70279906579220,"fiber_id":70279930973800,"process_id":1990,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476696.635555,"message":"Installing buildpack
php_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70155356951340,"fiber_id":70155381353720,"process_id":1949,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}


Re: cannot replace the default buildpacks. Is it expected?

iamflying
 

steps:
1. replace php_buildpack
2. stop all components by "nova stop"
3. start all components by "nova start"
4. check php_buildpack

I expected it to be the version from step 1; however, step 4 showed the
default php_buildpack.

On Tue, Aug 25, 2015 at 10:10 PM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:

What do you mean by "after restarting cf"? How are you doing that?
There's a lot of components, which one or ones are you restarting? What
steps are you running?

Dan


On Tue, Aug 25, 2015 at 3:37 AM, Guangcai Wang <guangcai.wang(a)gmail.com>
wrote:

Hi all,

We tried to update one of the default buildpacks to a new version. However,
after restarting cf, the old version comes back (see below). It seems all
the default buildpacks are re-installed with the default versions from the
release. Is this expected behavior?

Default buildpacks (cf 212)
buildpack          position  enabled  locked  filename
java_buildpack     1         true     false   java-buildpack-v3.0.zip
ruby_buildpack     2         true     false   ruby_buildpack-cached-v1.4.2.zip
nodejs_buildpack   3         true     false   nodejs_buildpack-cached-v1.3.4.zip
go_buildpack       4         true     false   go_buildpack-cached-v1.3.1.zip
python_buildpack   5         true     false   python_buildpack-cached-v1.3.5.zip
php_buildpack      6         true     false   php_buildpack-cached-v3.2.2.zip

Updated php_buildpack:
buildpack          position  enabled  locked  filename
java_buildpack     1         true     false   java-buildpack-v3.0.zip
ruby_buildpack     2         true     false   ruby_buildpack-cached-v1.4.2.zip
nodejs_buildpack   3         true     false   nodejs_buildpack-cached-v1.3.4.zip
go_buildpack       4         true     false   go_buildpack-cached-v1.3.1.zip
python_buildpack   5         true     false   python_buildpack-cached-v1.3.5.zip
php_buildpack      6         true     false   php_buildpack-cached-v4.0.0.zip

However, after restarting all cf VMs, I got the default php_buildpack
(php_buildpack-cached-v3.2.2.zip), not the updated version
(php_buildpack-cached-v4.0.0.zip):
buildpack          position  enabled  locked  filename
java_buildpack     1         true     false   java-buildpack-v3.0.zip
ruby_buildpack     2         true     false   ruby_buildpack-cached-v1.4.2.zip
nodejs_buildpack   3         true     false   nodejs_buildpack-cached-v1.3.4.zip
go_buildpack       4         true     false   go_buildpack-cached-v1.3.1.zip
python_buildpack   5         true     false   python_buildpack-cached-v1.3.5.zip
php_buildpack      6         true     false   php_buildpack-cached-v3.2.2.zip


I checked cloud_controller_ng.log. It shows that all the default
buildpacks (defined in the manifest) are installed every time cf is
restarted:

{"timestamp":1440476666.1351752,"message":"Installing buildpack
java_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70279906579220,"fiber_id":70279930973800,"process_id":1990,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476667.8460865,"message":"Installing buildpack
ruby_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70279906579220,"fiber_id":70279930973800,"process_id":1990,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476670.4481,"message":"Installing buildpack
nodejs_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70155356951340,"fiber_id":70155381353720,"process_id":1949,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476675.2146842,"message":"Installing buildpack
go_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70155356951340,"fiber_id":70155381353720,"process_id":1949,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476688.722021,"message":"Installing buildpack
python_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70279906579220,"fiber_id":70279930973800,"process_id":1990,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476696.635555,"message":"Installing buildpack
php_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70155356951340,"fiber_id":70155381353720,"process_id":1949,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}


Cloud Foundry Eclipse Setup

Deepak Arn <arn.deepak1@...>
 

Hello,

I am trying to set up a Cloud Foundry environment in Eclipse, so that I can
create Java projects and deploy them directly to Cloud Foundry (the
server). But due to mismatches between the different plugin versions, it
always throws an error. As of now, I am using Eclipse Kepler, JDK 1.8, the
Cloud Foundry tools, and Spring Tool Suite.
Could you please share the workflow for deploying Java applications
using these plugins?

Thanks,
Deepak


Re: CF integration with logger and monitoring tools

Swatz bosh
 

I am getting an error while running the datadog-firehose-nozzle:

https://github.com/cloudfoundry-incubator/datadog-firehose-nozzle

go run main.go -config config/datadog-firehose-nozzle.json

-----------ERROR-------------
2015/08/25 12:09:50 Starting DataDog Firehose Nozzle...
2015/08/25 12:09:50 Error while reading from the firehose: Unauthorized error: You are not authorized. Error: Invalid authorization
2015/08/25 12:09:50 Closing connection with traffic controller due to Unauthorized error: You are not authorized. Error: Invalid authorization
2015/08/25 12:09:50 Posting 3 metrics
2015/08/25 12:09:50 DataDog Firehose Nozzle shutting down...
----------------------------------------

The uaac command below shows the client's scope, authorities, etc.:

uaac client get datadog-firehose-nozzle

scope: doppler.firehose oauth.approvals openid
client_id: datadog-firehose-nozzle
resource_ids: none
authorized_grant_types: authorization_code client_credentials refresh_token
autoapprove:
authorities: doppler.firehose oauth.login
lastmodified: 1440494682000


I added this client using:

uaac client add datadog-firehose-nozzle --scope openid,oauth.approvals,doppler.firehose --authorized_grant_types authorization_code,client_credentials,refresh_token --authorities oauth.login,doppler.firehose

I have used the UAA admin client username/password in my nozzle config file, and have also tried the UAA login client username/password, but I get the same error.

Do I need to restart the uaa job after adding the datadog-nozzle client?
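On the last question: adding a client via uaac should take effect without restarting the uaa job. More likely, the nozzle needs to authenticate as the nozzle client itself (with a client secret set via --secret on uaac client add), rather than with an admin or login user's credentials. A sketch of the relevant config (field names assumed from the example config shipped in the nozzle repo; verify against config/datadog-firehose-nozzle.json in your checkout; all values are placeholders):

```json
{
  "UAAURL": "https://uaa.YOUR-SYSTEM-DOMAIN",
  "Client": "datadog-firehose-nozzle",
  "ClientSecret": "SECRET-SET-WITH-uaac-client-add--secret",
  "TrafficControllerURL": "wss://doppler.YOUR-SYSTEM-DOMAIN:443",
  "DataDogURL": "https://app.datadoghq.com/api/v1/series",
  "DataDogAPIKey": "YOUR-DATADOG-API-KEY"
}
```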


Re: cannot replace the default buildpacks. Is it expected?

Daniel Mikusa
 

What do you mean by "after restarting cf"? How are you doing that?
There's a lot of components, which one or ones are you restarting? What
steps are you running?

Dan


On Tue, Aug 25, 2015 at 3:37 AM, Guangcai Wang <guangcai.wang(a)gmail.com>
wrote:

Hi all,

We tried to update one of the default buildpacks to a new version. However,
after restarting cf, the old version comes back (see below). It seems all
the default buildpacks are re-installed with the default versions from the
release. Is this expected behavior?

Default buildpacks (cf 212)
buildpack          position  enabled  locked  filename
java_buildpack     1         true     false   java-buildpack-v3.0.zip
ruby_buildpack     2         true     false   ruby_buildpack-cached-v1.4.2.zip
nodejs_buildpack   3         true     false   nodejs_buildpack-cached-v1.3.4.zip
go_buildpack       4         true     false   go_buildpack-cached-v1.3.1.zip
python_buildpack   5         true     false   python_buildpack-cached-v1.3.5.zip
php_buildpack      6         true     false   php_buildpack-cached-v3.2.2.zip

Updated php_buildpack:
buildpack          position  enabled  locked  filename
java_buildpack     1         true     false   java-buildpack-v3.0.zip
ruby_buildpack     2         true     false   ruby_buildpack-cached-v1.4.2.zip
nodejs_buildpack   3         true     false   nodejs_buildpack-cached-v1.3.4.zip
go_buildpack       4         true     false   go_buildpack-cached-v1.3.1.zip
python_buildpack   5         true     false   python_buildpack-cached-v1.3.5.zip
php_buildpack      6         true     false   php_buildpack-cached-v4.0.0.zip

However, after restarting all cf VMs, I got the default php_buildpack
(php_buildpack-cached-v3.2.2.zip), not the updated version
(php_buildpack-cached-v4.0.0.zip):
buildpack          position  enabled  locked  filename
java_buildpack     1         true     false   java-buildpack-v3.0.zip
ruby_buildpack     2         true     false   ruby_buildpack-cached-v1.4.2.zip
nodejs_buildpack   3         true     false   nodejs_buildpack-cached-v1.3.4.zip
go_buildpack       4         true     false   go_buildpack-cached-v1.3.1.zip
python_buildpack   5         true     false   python_buildpack-cached-v1.3.5.zip
php_buildpack      6         true     false   php_buildpack-cached-v3.2.2.zip


I checked cloud_controller_ng.log. It shows that all the default
buildpacks (defined in the manifest) are installed every time cf is
restarted:

{"timestamp":1440476666.1351752,"message":"Installing buildpack
java_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70279906579220,"fiber_id":70279930973800,"process_id":1990,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476667.8460865,"message":"Installing buildpack
ruby_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70279906579220,"fiber_id":70279930973800,"process_id":1990,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476670.4481,"message":"Installing buildpack
nodejs_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70155356951340,"fiber_id":70155381353720,"process_id":1949,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476675.2146842,"message":"Installing buildpack
go_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70155356951340,"fiber_id":70155381353720,"process_id":1949,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476688.722021,"message":"Installing buildpack
python_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70279906579220,"fiber_id":70279930973800,"process_id":1990,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476696.635555,"message":"Installing buildpack
php_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70155356951340,"fiber_id":70155381353720,"process_id":1949,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}


cannot replace the default buildpacks. Is it expected?

iamflying
 

Hi all,

We tried to update one of the default buildpacks to a new version. However,
after restarting cf, the old version comes back (see below). It seems all
the default buildpacks are re-installed with the default versions from the
release. Is this expected behavior?

Default buildpacks (cf 212)
buildpack          position  enabled  locked  filename
java_buildpack     1         true     false   java-buildpack-v3.0.zip
ruby_buildpack     2         true     false   ruby_buildpack-cached-v1.4.2.zip
nodejs_buildpack   3         true     false   nodejs_buildpack-cached-v1.3.4.zip
go_buildpack       4         true     false   go_buildpack-cached-v1.3.1.zip
python_buildpack   5         true     false   python_buildpack-cached-v1.3.5.zip
php_buildpack      6         true     false   php_buildpack-cached-v3.2.2.zip

Updated php_buildpack:
buildpack          position  enabled  locked  filename
java_buildpack     1         true     false   java-buildpack-v3.0.zip
ruby_buildpack     2         true     false   ruby_buildpack-cached-v1.4.2.zip
nodejs_buildpack   3         true     false   nodejs_buildpack-cached-v1.3.4.zip
go_buildpack       4         true     false   go_buildpack-cached-v1.3.1.zip
python_buildpack   5         true     false   python_buildpack-cached-v1.3.5.zip
php_buildpack      6         true     false   php_buildpack-cached-v4.0.0.zip

However, after restarting all cf VMs, I got the default php_buildpack
(php_buildpack-cached-v3.2.2.zip), not the updated version
(php_buildpack-cached-v4.0.0.zip):
buildpack          position  enabled  locked  filename
java_buildpack     1         true     false   java-buildpack-v3.0.zip
ruby_buildpack     2         true     false   ruby_buildpack-cached-v1.4.2.zip
nodejs_buildpack   3         true     false   nodejs_buildpack-cached-v1.3.4.zip
go_buildpack       4         true     false   go_buildpack-cached-v1.3.1.zip
python_buildpack   5         true     false   python_buildpack-cached-v1.3.5.zip
php_buildpack      6         true     false   php_buildpack-cached-v3.2.2.zip


I checked cloud_controller_ng.log. It shows that all the default
buildpacks (defined in the manifest) are installed every time cf is
restarted:

{"timestamp":1440476666.1351752,"message":"Installing buildpack
java_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70279906579220,"fiber_id":70279930973800,"process_id":1990,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476667.8460865,"message":"Installing buildpack
ruby_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70279906579220,"fiber_id":70279930973800,"process_id":1990,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476670.4481,"message":"Installing buildpack
nodejs_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70155356951340,"fiber_id":70155381353720,"process_id":1949,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476675.2146842,"message":"Installing buildpack
go_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70155356951340,"fiber_id":70155381353720,"process_id":1949,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476688.722021,"message":"Installing buildpack
python_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70279906579220,"fiber_id":70279930973800,"process_id":1990,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}
{"timestamp":1440476696.635555,"message":"Installing buildpack
php_buildpack","log_level":"info","source":"cc.background","data":{},"thread_id":70155356951340,"fiber_id":70155381353720,"process_id":1949,"file":"/var/vcap/data/packages/cloud_controller_ng/5d255d62301e734cfabb032a882d67f172318971.1-80d3ab4a46e7622a789a3ebc915713a0714ea06a/cloud_controller_ng/app/jobs/runtime/buildpack_installer.rb","lineno":15,"method":"perform"}


Re: Firehose vs Pivotal Ops metrics

Erik Jasiak
 

Hi Qing,

First, Ops Metrics is a closed-source addition to Pivotal Cloud Foundry.
The metrics exposed by Ops Metrics today come from the collector [1], and
tomorrow they will come from both the collector and the firehose. All other
questions about Ops Metrics should be directed to Pivotal support.

Metrics from the collector in open-source CF are available in OpenTSDB,
Graphite, Datadog, or CloudWatch formats.

The loggregator team has known for some time that CF would need to phase
out the collector. With the addition of the loggregator firehose, CF
components could broadcast metrics via another route. Several pre-Diego
components still rely on the collector today (and in many cases, now
also transmit metrics out the firehose as well). To maintain backward
compatibility, the loggregator team has been busily adding a nozzle to
feed firehose data back into the collector [2]. Overall, the goal is that
both metrics resources have parity where possible, but we will phase out
future metrics going to the collector over time.

Future CF components (Diego and after) will broadcast metrics via the
firehose only, with metrics made available in the format of your choice
via nozzles.

Our metrics documentation is ongoing. Because metrics travel out the
loggregator or collector system - but originate from other CF components
- keeping track of what metrics are in the system is a challenge. We
have some basic public info we've gathered [3], and are working to wrap
this in proper documentation now.

Finally, we've been working to document all this, primarily in the
readmes of our repositories. It's been clear for some time that this is
not sufficient, and we're working on alternate ways to get the info out
into the community's hands, while also working to produce core CF
documentation.

Hope this helps; stay tuned as we get more info out there.
Erik Jasiak
PM - Loggregator

[1] https://github.com/cloudfoundry/collector
[2] https://github.com/cloudfoundry-incubator/varz-firehose-nozzle
[3]
https://docs.google.com/spreadsheets/d/176yIaJChXEmvm-CjybmwopdRGQfDGrSzo3J_Mx8mMnk


Qing Gong wrote:


I set up the open-source CF and got the metrics streamed out of the
firehose. I also read the documentation about Pivotal Ops Metrics here:

http://docs.pivotal.io/pivotalcf/customizing/use-metrics.html

What is the relationship between the two sets of metrics? The Pivotal
one uses JMX and has a completely different set of metrics. Why do we
have metrics in two places?

Also, are there any documents about what metrics are streamed from the
firehose? I could not find any document on this other than seeing the
actual data streamed from the firehose.

Thanks!


Re: Fail to stage application when scale the DEA

James Bayer
 

i'm glad you found it! i'll make sure that amit gupta sees this report as
he is currently PM for the MEGA team that is responsible for the DEAs.
thanks for sharing the results.

On Mon, Aug 24, 2015 at 10:06 PM, Layne Peng <layne.peng(a)emc.com> wrote:

Finally, I found the problem, though it may not be the one I described
above. It seems a little bug exists.

In my deployment, some buildpacks are installed by default, but, perhaps
due to a network issue, one buildpack download did not complete. That made
dea_next's startup check find a SHA mismatch, so it kept trying to
re-download the buildpacks. I had to delete all buildpacks in CF, clean the
folder /var/vcap/data/dea_next/tmp/* on each DEA, and restart dea_next.
--
Thank you,

James Bayer


Re: Fail to stage application when scale the DEA

Layne Peng
 

Finally, I found the problem, though it may not be the one I described above. It seems a little bug exists.

In my deployment, some buildpacks are installed by default, but, perhaps due to a network issue, one buildpack download did not complete. That made dea_next's startup check find a SHA mismatch, so it kept trying to re-download the buildpacks. I had to delete all buildpacks in CF, clean the folder /var/vcap/data/dea_next/tmp/* on each DEA, and restart dea_next.
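The recovery steps above can be sketched as follows (commands are assumptions, not verified against this deployment; the path comes from the message, and the guards let the sketch be dry-run anywhere):

```shell
# 1. Delete the corrupted buildpacks from the Cloud Controller.
if command -v cf >/dev/null; then
  cf delete-buildpack php_buildpack -f   # repeat per affected buildpack
fi
# 2. On each DEA VM, clear the partially downloaded buildpack cache.
rm -rf /var/vcap/data/dea_next/tmp/* 2>/dev/null || true
# 3. Restart the DEA job so it re-downloads clean copies.
if command -v monit >/dev/null; then
  sudo monit restart dea_next
fi
```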


Re: Fail to stage application when scale the DEA

Layne Peng
 

Yes, it worked before.


Re: Fail to stage application when scale the DEA

Gwenn Etourneau
 

Hi,
You never changed the password, right? It was working before?

On Tue, Aug 25, 2015 at 1:44 AM, Layne Peng <layne.peng(a)emc.com> wrote:

I found that no new apps can be created now, caused by this issue. Any clue
for it?


Re: CF UAA Refresh Token

Piotr Przybylski <piotrp@...>
 

Is a refresh token always returned, for all grant types? It seems to be
the case for the authorization_code grant type, but I don't think one is
returned for the client_credentials grant.

Piotr



From: aaron_huber <aaron.m.huber(a)intel.com>
To: cf-dev(a)lists.cloudfoundry.org
Date: 08/24/2015 10:44 AM
Subject: [cf-dev] Re: Re: CF UAA Refresh Token





Not sure I understand that. When you get a token you also automatically
get
a refresh token - are you saying the refresh token given isn't valid and we
have to generate a new refresh token as an admin user? To clarify, all
we're trying to do is renew the token when it expires so the user doesn't
have to log in again.

Aaron



--
View this message in context:
http://cf-dev.70369.x6.nabble.com/cf-dev-CF-UAA-Refresh-Token-tp1338p1340.html

Sent from the CF Dev mailing list archive at Nabble.com.
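For background: per OAuth2, a refresh token is optional, and UAA typically issues one for the authorization_code and password grants but not for client_credentials (a client can simply request a fresh access token). Renewing the token is a standard call against UAA's /oauth/token endpoint; a sketch with placeholder values (the `cf` client with an empty secret is the conventional CLI client, but verify for your deployment):

```shell
# Exchange a refresh token for a new access token (refresh_token grant).
# Placeholders: the UAA domain, and $REFRESH_TOKEN from the original
# token response.
curl -s 'https://uaa.YOUR-SYSTEM-DOMAIN/oauth/token' \
  -u 'cf:' \
  -d 'grant_type=refresh_token' \
  -d "refresh_token=$REFRESH_TOKEN"
```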


CF Summit Berlin and Shanghai 2015 Dates Announced and Call for talks!

Chip Childers <cchilders@...>
 

Hi all,

We've recently announced CF Summit events for both Berlin and Shanghai, and
we'd love your participation!

Berlin will be on November 2nd and 3rd, 2015:
http://berlin2015.cfsummit.com/
The CFP for Berlin will close on September 11th.

Shanghai will be on December 2nd and 3rd, 2015:
http://shanghai2015.cfsummit.com/
The CFP for Shanghai will close on September 25th.

Both events are specifically seeking talks that fall into one (or more) of
the following topic areas:

*User Stories*

We want to hear how you (or your customers) are using Cloud Foundry. What
results are you seeing? What challenges have you overcome that will make
deploying Cloud Foundry easier for others?

This track will give attendees an inside look at Cloud Foundry from the
perspective of those who have been there and done that... or are going there
and doing that.

*Operating Cloud Foundry*

Operators: We want to hear how you are deploying Cloud Foundry in your own
environments. What has it enabled you to do that you couldn’t before? How
has it changed your company’s culture? These are the talks that have a
healthy mix of developer and operations content.

*How Developers Can Take Advantage of Cloud Foundry*

Developers: This is the track for those of you on the front lines of
deployment. We’re looking for talks that go deep on devops. Talks that tell
stories of people and tools coming together. Continuous integration,
continuous deployment and continuous innovation are the key themes of this
track.


Looking forward to seeing everyone at these events!

-chip

Chip Childers | VP Technology | Cloud Foundry Foundation


Re: Buildpack dependency on rootfs

Jack Cai
 

Hi Mike

Thanks for addressing my questions again. I agree that it's important to
make both the rootfs and the buildpacks backward-compatible with each
other. People often use the master branch of the buildpacks to push
applications on non-current levels of Cloud Foundry. The strategy you
described will help in such scenarios.

Jack

On Mon, Aug 24, 2015 at 5:30 PM, Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Hi Jack,

Thanks for asking these questions. Not knowing what specific changes
you're referring to, I'll try to address some different aspects of the
topic.

First, just to set expectations, `lucid64` is no longer supported, and so
new buildpacks do not officially support it. This was pretty well announced
and discussed on the old vcap-dev mailing list, as well as the current
cf-dev mailing list. We do have tools and artifacts to generate buildpack
binaries for unsupported rootfses, though -- notably
https://github.com/cloudfoundry/binary-builder. Please let me know if you
need assistance in using any of the buildpacks tooling to do nonstandard
things, I'm happy to work with you on it.

At some point, it's *possible* that a buildpack will be released that
won't work on an older version of cflinuxfs2. Just as an example, we're
thinking about removing the vendored version of jq from the go-buildpack,
and relying on the version of jq put into the rootfs in
https://github.com/cloudfoundry/stacks/commit/8770b5185656938c943ee9bb2a5892097d055264.
However, we'd only do something like that after asking for comments and
communicating clearly on the mailing list as well as in the release notes.

Generally speaking, the goal is to provide a rootfs that's
backwards-compatible with buildpacks (e.g., we don't remove packages over
time, we only add or update packages); and to provide buildpacks that are
backwards-compatible with rootfses. It's obviously possible that we'll
break this contract, but we'd only do so after a comment period and lots of
notice.

Have I answered your questions?

It might be worth noting that we only recently started to track the rootfs
as a versioned artifact; the GitHub release history[2] discusses the
changes in each release. Hopefully this will increase transparency; please
let me know if you have other ideas for communicating more clearly.

[2]: https://github.com/cloudfoundry/stacks/releases

Cheers,
-mike


On Mon, Aug 24, 2015 at 5:06 PM, Jack Cai <greensight(a)gmail.com> wrote:

I notice that there are a few changes going into the rootfs [1]. Are
there corresponding changes in the buildpack which would make them unable
to work properly in older levels of the rootfs (both cflinuxfs2 and
lucid64)?

I guess the larger question is: are we going to have hard dependencies
from the buildpacks on the underlying stack levels?

[1] https://github.com/cloudfoundry/stacks/commits/master


Re: Buildpack dependency on rootfs

Mike Dalessio
 

Hi Jack,

Thanks for asking these questions. Not knowing what specific changes you're
referring to, I'll try to address some different aspects of the topic.

First, just to set expectations, `lucid64` is no longer supported, and so
new buildpacks do not officially support it. This was pretty well announced
and discussed on the old vcap-dev mailing list, as well as the current
cf-dev mailing list. We do have tools and artifacts to generate buildpack
binaries for unsupported rootfses, though -- notably
https://github.com/cloudfoundry/binary-builder. Please let me know if you
need assistance in using any of the buildpacks tooling to do nonstandard
things, I'm happy to work with you on it.

At some point, it's *possible* that a buildpack will be released that won't
work on an older version of cflinuxfs2. Just as an example, we're thinking
about removing the vendored version of jq from the go-buildpack, and
relying on the version of jq put into the rootfs in
https://github.com/cloudfoundry/stacks/commit/8770b5185656938c943ee9bb2a5892097d055264.
However, we'd only do something like that after asking for comments and
communicating clearly on the mailing list as well as in the release notes.

Generally speaking, the goal is to provide a rootfs that's
backwards-compatible with buildpacks (e.g., we don't remove packages over
time, we only add or update packages); and to provide buildpacks that are
backwards-compatible with rootfses. It's obviously possible that we'll
break this contract, but we'd only do so after a comment period and lots of
notice.

Have I answered your questions?

It might be worth noting that we only recently started to track the rootfs
as a versioned artifact; the GitHub release history[2] discusses the
changes in each release. Hopefully this will increase transparency; please
let me know if you have other ideas for communicating more clearly.

[2]: https://github.com/cloudfoundry/stacks/releases

Cheers,
-mike

On Mon, Aug 24, 2015 at 5:06 PM, Jack Cai <greensight(a)gmail.com> wrote:

I notice that there are a few changes going into the rootfs [1]. Are there
corresponding changes in the buildpack which would make them unable to work
properly in older levels of the rootfs (both cflinuxfs2 and lucid64)?

I guess the larger question is: are we going to have hard dependencies
from the buildpacks on the underlying stack levels?

[1] https://github.com/cloudfoundry/stacks/commits/master


Re: CF UAA Refresh Token

Keagan Mendoza
 

That did the trick, thanks Filip.

Using Basic auth with "cf" as the client id and an empty password is the
solution.
Keagan
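
For anyone following along, the refresh grant Keagan is describing can be
sketched as below. This is only an illustration, not an official UAA client:
the UAA URL and token values are placeholders, and it simply builds the
standard OAuth2 refresh_token request against UAA's /oauth/token endpoint,
authenticating with "cf" as the client id and an empty secret.

```python
import base64
import urllib.parse
import urllib.request


def build_refresh_request(uaa_url, refresh_token, client_id="cf", client_secret=""):
    """Build an OAuth2 refresh-token grant request (RFC 6749, section 6).

    The "cf" client with an empty secret is the client the cf CLI uses;
    uaa_url and refresh_token are supplied by the caller.
    """
    # HTTP Basic credentials: base64("client_id:client_secret")
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    body = urllib.parse.urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
    }).encode()
    return urllib.request.Request(
        f"{uaa_url}/oauth/token",
        data=body,
        headers={
            "Authorization": f"Basic {creds}",
            "Content-Type": "application/x-www-form-urlencoded",
            "Accept": "application/json",
        },
    )


if __name__ == "__main__":
    # uaa.example.com and the token string are placeholders.
    req = build_refresh_request("https://uaa.example.com", "eyJhbGciOi...")
    # urllib.request.urlopen(req) would return a JSON body containing a new
    # access_token (and typically a new refresh_token as well).
```

Sending this request before the access token expires means the user never has
to log in again, which is what Aaron was after.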


Buildpack dependency on rootfs

Jack Cai
 

I notice that there are a few changes going into the rootfs [1]. Are there
corresponding changes in the buildpack which would make them unable to work
properly in older levels of the rootfs (both cflinuxfs2 and lucid64)?

I guess the larger question is: are we going to have hard dependencies
from the buildpacks on the underlying stack levels?

[1] https://github.com/cloudfoundry/stacks/commits/master
