
Re: no more stdout in app files since upgrade to 214

James Bayer
 

here are some things that should help us troubleshoot:

does "cf logs APPNAME --recent" show anything different?
how did you create your deployment manifest?
how many availability zones do you have in your deployment?
how many traffic controllers and doppler instances do you have?
is the dea_logging_agent co-located with the DEAs, and are your "runner"
VMs configured with jobs something like this [0]?

with our installations, we typically use the acceptance tests (CATS) [1] to
cover platform functionality. there is also a set of acceptance tests just
for loggregator [2].

[0]
https://github.com/cloudfoundry/cf-release/blob/master/example_manifests/minimal-aws.yml#L229-L239
[1] https://github.com/cloudfoundry/cf-acceptance-tests
[2]
https://github.com/cloudfoundry/loggregator/tree/develop/bosh/jobs/loggregator-acceptance-tests

On Mon, Aug 17, 2015 at 1:11 AM, ramonskie <ramon.makkelie(a)klm.com> wrote:

okay so no problem there
the only thing now is that there are no streaming logs with the cf logs
command from my APP, only RTR
any ideas there?



--
View this message in context:
http://cf-dev.70369.x6.nabble.com/no-more-stdout-in-app-files-since-upgrade-to-214-tp1197p1217.html
Sent from the CF Dev mailing list archive at Nabble.com.
--
Thank you,

James Bayer


Re: Web proxy support in buildpacks

JT Archie <jarchie@...>
 

Jack,

For cached buildpacks, setting an HTTP proxy would not be useful. The
dependencies are bundled with the buildpack and are loaded from the local
file system, not over HTTP.

Most of the buildpacks use curl to download dependencies from an HTTP
server. You should be able to set the environment variables HTTP_PROXY or
HTTPS_PROXY for curl to use the proxy server.
<http://curl.haxx.se/libcurl/c/CURLOPT_PROXY.html> If this works for you,
it would be great to hear your feedback.
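As a concrete sketch (the app name and proxy address here are placeholders, not from this thread):

```
cf set-env my-app HTTP_PROXY http://proxy.example.com:8080
cf set-env my-app HTTPS_PROXY http://proxy.example.com:8080
cf restage my-app
```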

Kind Regards,

JT

On Mon, Aug 17, 2015 at 9:26 AM, Jack Cai <greensight(a)gmail.com> wrote:

Currently I see that the Java buildpack and the PHP buildpack explicitly
mention in their docs that they can run behind a Web proxy, by setting the
HTTP_PROXY and HTTPS_PROXY environment variables. And I suppose this is
supported in both the cached version and the uncached one, and for both
the old lucid64 stack and the new cflinuxfs2 stack (which has a different
Ruby version). Do other buildpacks support the same? E.g. Node.js, Python,
Ruby, Go, etc.

Thanks in advance!

Jack


Web proxy support in buildpacks

Jack Cai
 

Currently I see that the Java buildpack and the PHP buildpack explicitly
mention in their docs that they can run behind a Web proxy, by setting the
HTTP_PROXY and HTTPS_PROXY environment variables. And I suppose this is
supported in both the cached version and the uncached one, and for both
the old lucid64 stack and the new cflinuxfs2 stack (which has a different
Ruby version). Do other buildpacks support the same? E.g. Node.js, Python,
Ruby, Go, etc.

Thanks in advance!

Jack


no more APP logs when tailing the app since the upgrade from 207 to 214

ramonskie
 

since the upgrade from 207 to 214 i noticed 2 things:
1) no more stdout and stderr in the logs/ dir of the app/container
someone pointed out that this was removed in https://github.com/cloudfoundry/dea_ng/commit/930d3236b155da8660175198f4a1e4f18bf3cb6d

2) no more APP logs shown when tailing the app
the only thing i see are the RTR logs
i checked all the job specs/templates of loggregator, doppler and metron_agent
but i can't find anything


Re: CF integration with logger and monitoring tools

Daniel Mikusa
 

I think you're talking about two separate things here:

If you create a user-provided service and bind it to your application [1],
the Java buildpack should install the agent and configure it to monitor
your application. That's the key here: it will monitor that *one*
application. This works very similarly to running your app locally with
the Wily agent installed. You can then repeat this process for any number
of Java apps running on CF.
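As a sketch, that per-app flow with the cf CLI looks something like the
following (the service name, app name, and credential fields here are
illustrative; check the buildpack docs for exactly what the agent expects):

```
cf create-user-provided-service introscope -p '{"host-name":"em.example.com","port":"8081"}'
cf bind-service my-java-app introscope
cf restage my-java-app
```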

When you're talking about the collector and the firehose, you're talking
about ways to pull logs and metrics from the entire CF system, not just a
single app. This is unrelated to what is supported through the Java
buildpack. I have no idea whether Introscope is capable of doing this, or
whether it would make sense to use it for this type of data gathering.

Dan

[1] -
https://github.com/cloudfoundry/java-buildpack/blob/master/docs/framework-introscope_agent.md

On Mon, Aug 17, 2015 at 7:40 AM, Swatz bosh <swatzron(a)gmail.com> wrote:

Thanks Gwenn.

I found a link about integrating AppDynamics (agent-based monitoring, like
Wily) with PCF, where it's mentioned to create a user-defined service and
bind it to the application, so that the AppDynamics agent will detect the
service in 'VCAP_SERVICES' and start sending metrics/logs to the
AppDynamics collector.

http://blog.pivotal.io/pivotal-cloud-foundry/products/getting-started-with-pivotal-cloud-foundry-and-appdynamics

In that post they are using the 'collector' and not a firehose nozzle?
So does that mean I should not be using the java_buildpack Introscope agent
https://github.com/cloudfoundry/java-buildpack/blob/master/docs/framework-introscope_agent.md
where Introscope needs a VCAP_SERVICES service binding with the substring
'introscope' for all my java apps? I think it's the old 'collector'
approach, like AppDynamics?
And I should not be creating a user-defined service for Introscope and
binding it to my java apps?
Should I instead rely on the DEA and metron to provide java application
metrics and configure my firehose nozzle to point to the Introscope
Enterprise Manager to collect all app metrics, is that correct? If yes,
then are we saying the firehose metrics will be automatically understood
by the Introscope Enterprise Manager?

Thanks for all your response.


Re: CF integration with logger and monitoring tools

Swatz bosh
 

Thanks Gwenn.

I found a link about integrating AppDynamics (agent-based monitoring, like Wily) with PCF, where it's mentioned to create a user-defined service and bind it to the application, so that the AppDynamics agent will detect the service in 'VCAP_SERVICES' and start sending metrics/logs to the AppDynamics collector.
http://blog.pivotal.io/pivotal-cloud-foundry/products/getting-started-with-pivotal-cloud-foundry-and-appdynamics

In that post they are using the 'collector' and not a firehose nozzle?
So does that mean I should not be using the java_buildpack Introscope agent https://github.com/cloudfoundry/java-buildpack/blob/master/docs/framework-introscope_agent.md
where Introscope needs a VCAP_SERVICES service binding with the substring 'introscope' for all my java apps? I think it's the old 'collector' approach, like AppDynamics?
And I should not be creating a user-defined service for Introscope and binding it to my java apps?
Should I instead rely on the DEA and metron to provide java application metrics and configure my firehose nozzle to point to the Introscope Enterprise Manager to collect all app metrics, is that correct? If yes, then are we saying the firehose metrics will be automatically understood by the Introscope Enterprise Manager?

Thanks for all your response.


Re: Hard-coded domain name in diego etcd job

Gwenn Etourneau
 

You should not change it; this domain is used only with Consul as DNS.
Many components rely on it, UAA and so on.

https://github.com/cloudfoundry/cf-release/blob/90d730a2d13d9e065a7f348e7fd31a1522074d02/jobs/consul_agent/templates/config.json.erb

Do you have some logs ?

On Mon, Aug 17, 2015 at 7:41 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com> wrote:

Hi,



I am trying to deploy diego 0.1402.0 into a vSphere server to work with CF
210. However the deployment failed when creating job ‘etcd’ with the
following error.



Error: cannot sync with the cluster using endpoints
https://database-z1-0.etcd.service.cf.internal:4001



I tried to change the domain name to my own domain name in the diego yml
file. But it didn’t work. I found the domain name was hard-coded in
etcd_bosh_utils.sh.




https://github.com/cloudfoundry-incubator/diego-release/blob/develop/jobs/etcd/templates/etcd_bosh_utils.sh.erb



Could anyone tell me how to work around it?



Thanks,

Maggie


Hard-coded domain name in diego etcd job

MaggieMeng
 

Hi,

I am trying to deploy diego 0.1402.0 into a vSphere server to work with CF 210. However the deployment failed when creating job 'etcd' with the following error.

Error: cannot sync with the cluster using endpoints https://database-z1-0.etcd.service.cf.internal:4001

I tried to change the domain name to my own domain name in the diego yml file. But it didn't work. I found the domain name was hard-coded in etcd_bosh_utils.sh.

https://github.com/cloudfoundry-incubator/diego-release/blob/develop/jobs/etcd/templates/etcd_bosh_utils.sh.erb

Could anyone tell me how to work around it?

Thanks,
Maggie


Re: CF integration with logger and monitoring tools

Gwenn Etourneau
 

I think the easy way is to use the log system provided by CF, so:
dea_logging_agent -> metron -> doppler -> firehose consumer/forwarder -> Wily

It would be strange if the Wily agent or server didn't provide a
specification of their format, or if no forwarder existed.

I found some open-source work around Wily, so maybe you can find something
useful there.

https://github.com/nickman/wiex

On Mon, Aug 17, 2015 at 5:42 PM, Swati Goyal <swatzron(a)gmail.com> wrote:

Thanks James for your reply.

So you are recommending firehose nozzles for sending logs and metrics to
3rd-party application monitoring and logging tools.
For a product like Wily Introscope (now known as APM), which takes an
agent-based monitoring approach (and I think these agents right now only
support Java/.NET technologies), the agents send their metrics to the Wily
Enterprise Manager. I have found that CF has released Java buildpack
support for this Introscope agent (
https://github.com/cloudfoundry/java-buildpack/blob/master/lib/java_buildpack/framework/introscope_agent.rb
).
So how would the communication work with such an Introscope agent? Will it
still need a nozzle? I.e., the Introscope agent (assuming a Java
application) sends metrics/logs to the CF metron agent, which in turn
sends them to the CF doppler, and then a firehose nozzle consumes
logs/metrics from doppler and sends them to the Enterprise Manager? Is
this the flow, or do we have to follow some other approach for such an
agent-based monitoring tool?




On Mon, Aug 17, 2015 at 1:12 PM, James Bayer <jbayer(a)pivotal.io> wrote:

i wrote back last week, but it looks like it was swallowed and never
posted?

---------- Forwarded message ----------
From: James Bayer <jbayer(a)pivotal.io>
Date: Fri, Aug 14, 2015 at 6:33 AM
Subject: Re: [cf-dev] Re: CF integration with logger and monitoring tools
To: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>


cf is moving toward the loggregator firehose nozzle approach for sending
stuff in cf to other systems.

most logging products and services support syslog. here is a syslog
nozzle for the firehose that should work well with splunk and similar:
https://github.com/cloudfoundry-community/firehose-to-syslog

if you're sending metrics, then you can look at something like the
datadog nozzle for integration with whatever metrics service you are using:
https://github.com/cloudfoundry-incubator/datadog-firehose-nozzle

the current system that is being replaced is called the collector [1],
which has plugins for various providers.

i hope that the CF ecosystem will start having nozzles for many different
metrics providers over time similar to the collector.

[1]
https://github.com/cloudfoundry/collector/tree/master/lib/collector/historian

On Thu, Aug 13, 2015 at 11:14 PM, Swatz bosh <swatzron(a)gmail.com> wrote:

Can you please help me with above query?


--
Thank you,

James Bayer





Warden: Failed retrieving quota for uid=20002: Block device doesn't exist.

R M
 

I am getting this error while trying to deploy a test app. It fails during staging with this exception:

/=================================================/
2015-08-13 17:33:16.542443 Warden::Container::Linux pid=13619 tid=885a fid=d6ee container/base.rb/dispatch:300 handle=18t6vhrf6d0,request={"bind_mounts"=>["#<Warden::Protocol::CreateRequest::BindMount:0x0002ab0a5e79c0>", "#<Warden::Protocol::CreateRequest::BindMount:0x0002ab0a5ebca0>", "#<Warden::Protocol::CreateRequest::BindMount:0x0002ab0a5e9860>"], "rootfs"=>"/var/vcap/packages/rootfs_cflinuxfs2"},response={"handle"=>"18t6vhrf6d0"} DEBUG -- create (took 9.700584)
2015-08-13 17:33:16.543524 Warden::Container::Linux pid=13619 tid=885a fid=0ccd container/base.rb/write_snapshot:334 handle=18t6vhrf6d0 DEBUG -- Wrote snapshot in 0.000068
2015-08-13 17:33:16.543599 Warden::Container::Linux pid=13619 tid=885a fid=0ccd container/base.rb/dispatch:300 handle=18t6vhrf6d0,request={"handle"=>"18t6vhrf6d0", "limit_in_shares"=>512},response={"limit_in_shares"=>512} DEBUG -- limit_cpu (took 0.000289)
2015-08-13 17:33:16.553165 Warden::Container::Linux pid=13619 tid=885a fid=f1ec container/spawn.rb/set_deferred_success:135 stdout=,stderr=Failed retrieving quota for uid=20002: Block device doesn't exist.
WARN -- Exited with status 1 (0.008s): [["/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/closefds/closefds", "/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/closefds/closefds"], "/var/vcap/data/packages/warden/724f030f2d6e90d02c2afbe90ed5fe1ce2de1667/warden/src/repquota/repquota", "/var", "20002"]

/=================================================/

Any tips to debug this further are greatly appreciated.

Thanks.


Re: CF integration with logger and monitoring tools

Swatz bosh
 

Thanks James for your reply.

So you are recommending firehose nozzles for sending logs and metrics to
3rd-party application monitoring and logging tools.
For a product like Wily Introscope (now known as APM), which takes an
agent-based monitoring approach (and I think these agents right now only
support Java/.NET technologies), the agents send their metrics to the Wily
Enterprise Manager. I have found that CF has released Java buildpack
support for this Introscope agent (
https://github.com/cloudfoundry/java-buildpack/blob/master/lib/java_buildpack/framework/introscope_agent.rb
).
So how would the communication work with such an Introscope agent? Will it
still need a nozzle? I.e., the Introscope agent (assuming a Java
application) sends metrics/logs to the CF metron agent, which in turn
sends them to the CF doppler, and then a firehose nozzle consumes
logs/metrics from doppler and sends them to the Enterprise Manager? Is
this the flow, or do we have to follow some other approach for such an
agent-based monitoring tool?

On Mon, Aug 17, 2015 at 1:12 PM, James Bayer <jbayer(a)pivotal.io> wrote:

i wrote back last week, but it looks like it was swallowed and never
posted?

---------- Forwarded message ----------
From: James Bayer <jbayer(a)pivotal.io>
Date: Fri, Aug 14, 2015 at 6:33 AM
Subject: Re: [cf-dev] Re: CF integration with logger and monitoring tools
To: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>


cf is moving toward the loggregator firehose nozzle approach for sending
stuff in cf to other systems.

most logging products and services support syslog. here is a syslog nozzle
for the firehose that should work well with splunk and similar:
https://github.com/cloudfoundry-community/firehose-to-syslog

if you're sending metrics, then you can look at something like the datadog
nozzle for integration with whatever metrics service you are using:
https://github.com/cloudfoundry-incubator/datadog-firehose-nozzle

the current system that is being replaced is called the collector [1],
which has plugins for various providers.

i hope that the CF ecosystem will start having nozzles for many different
metrics providers over time similar to the collector.

[1]
https://github.com/cloudfoundry/collector/tree/master/lib/collector/historian

On Thu, Aug 13, 2015 at 11:14 PM, Swatz bosh <swatzron(a)gmail.com> wrote:

Can you please help me with above query?


--
Thank you,

James Bayer





Re: UAA: How to set client_credentials token grant type to not expire

Paul Bakare
 

Thank you. I needed to update the client to make this work:

uaac client update useraccount --access_token_validity 315360000

On Fri, Jul 31, 2015 at 4:19 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:

Start a local server (./gradlew run --info)

In another console, the following commands

1. uaac target http://localhost:8080/uaa
2. uaac token client get admin -s adminsecret
3. uaac client add testclient --authorized_grant_types
client_credentials --access_token_validity 315360000 --authorities openid
-s testclientsecret
4. uaac token client get testclient -s testclientsecret
5. uaac token decode

The output from the last command is
jti: 7397c7c9-de08-4b33-bd6a-0d248fd983b1
sub: testclient
authorities: openid
scope: openid
client_id: testclient
cid: testclient
azp: testclient
grant_type: client_credentials
rev_sig: fbc56677
iat: 1438351964
exp: 1753711964
iss: http://localhost:8080/uaa/oauth/token
zid: uaa
aud: testclient openid

The exp time is 1753711964; that is seconds since Jan 1st, 1970, and
corresponds to July 28, 2025.
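You can sanity-check those claims from a shell (GNU date shown; on BSD/macOS use `date -u -r 1753711964` instead):

```shell
# exp - iat should equal the access_token_validity passed to uaac
echo $((1753711964 - 1438351964))   # prints 315360000
# convert the exp claim to a calendar date
date -u -d @1753711964
```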



On Fri, Jul 31, 2015 at 12:57 AM, Kayode Odeyemi <dreyemi(a)gmail.com>
wrote:

Filip,

Here's my client config:
useraccount
scope: clients.read oauth.approvals openid password.write tokens.read
tokens.write uaa.admin
resource_ids: none
authorized_grant_types: authorization_code client_credentials
password refresh_token
authorities: scim.read scim.userids uaa.admin uaa.resource
clients.read scim.write cloud_controller.write scim.me clients.secret
password.write clients.write openid cloud_controller.read oauth.approvals
access_token_validity: 315360000
autoapprove: true

Gotten from `uaac clients`

I really do not know what else I might be doing wrongly.

Does `test_Token_Expiry_Time()` also cover the client_credentials grant
type? I tried running the test with
`./gradlew test
-Dtest.single=org/cloudfoundry/identity/uaa/mock/token/TokenMvcMockTests`
and placed debuggers in order to view the generated expiration time.
Nothing was printed in the test results.


On Wed, Jul 29, 2015 at 6:11 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:

exp is expected to be 1753544877 when decoded. Unfortunately, this test
fails, as exp reads 1438228276

most likely your client does not have the access token validity set up
correctly. See the test case I posted, which validates my statements:

https://github.com/cloudfoundry/uaa/commit/f0c8ba99cf37855fec54b74c07ce19613c51d7e9#diff-f7a9f1a69eec2ce4278914f342d8a160R883


On Wed, Jul 29, 2015 at 9:57 AM, Kayode Odeyemi <dreyemi(a)gmail.com>
wrote:

Good. But my apologies. Assume:

creation time = 1438184877
access token validity (set by me) = 315360000

exp is expected to be 1753544877 when decoded. Unfortunately, this test
fails, as exp reads 1438228276

On Wed, Jul 29, 2015 at 5:43 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:

If I set the access_token_validity to 315569260, I'm expecting the
token when decoded to read exp: 315569260. If this is not, then is it
possible to set the token expiry time?

It's a little bit different.

access_token_validity is how long the token is valid for from the time
of creation. thus we can derive

exp (expiration time) = token creation time + access token validity

you don't get to set the expiration time, since that doesn't make
sense as the clock keeps ticking forward.

in your case, having access token validity be 10 years, achieves
exactly what you want
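With the numbers from earlier in this thread, that formula checks out:

```shell
creation=1438184877   # token creation time (iat)
validity=315360000    # access_token_validity set on the client (~10 years)
echo $((creation + validity))   # prints 1753544877, the expected exp
```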

Filip


On Wed, Jul 29, 2015 at 9:36 AM, Kayode Odeyemi <dreyemi(a)gmail.com>
wrote:

Thanks again Filip.

However, here's what I mean,

If I set the access_token_validity to 315569260, I'm expecting the
token when decoded to read exp: 315569260. If this is not, then is it
possible to set the token expiry time?

line 906 sets the value to 1438209609 when the token is decoded and
I believe that's what the check_token service also checks.
expirationTime*1000l occurs after the token has been decoded (whose exp
value is set to 1438209609)

Now the question is: why do you have to do expirationTime*1000l, since
the token when decoded originally sets this value to 1438209609
(without * 1000l)?

Unless I'm completely getting this wrong?

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev



Re: no more stdout in app files since upgrade to 214

ramonskie
 

okay so no problem there
the only thing now is that there are no streaming logs with the cf logs
command from my APP, only RTR
any ideas there?



--
View this message in context: http://cf-dev.70369.x6.nabble.com/no-more-stdout-in-app-files-since-upgrade-to-214-tp1197p1217.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: CF integration with logger and monitoring tools

James Bayer
 

i wrote back last week, but it looks like it was swallowed and never posted?

---------- Forwarded message ----------
From: James Bayer <jbayer(a)pivotal.io>
Date: Fri, Aug 14, 2015 at 6:33 AM
Subject: Re: [cf-dev] Re: CF integration with logger and monitoring tools
To: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>


cf is moving toward the loggregator firehose nozzle approach for sending
stuff in cf to other systems.

most logging products and services support syslog. here is a syslog nozzle
for the firehose that should work well with splunk and similar:
https://github.com/cloudfoundry-community/firehose-to-syslog

if you're sending metrics, then you can look at something like the datadog
nozzle for integration with whatever metrics service you are using:
https://github.com/cloudfoundry-incubator/datadog-firehose-nozzle

the current system that is being replaced is called the collector [1],
which has plugins for various providers.

i hope that the CF ecosystem will start having nozzles for many different
metrics providers over time similar to the collector.

[1]
https://github.com/cloudfoundry/collector/tree/master/lib/collector/historian

On Thu, Aug 13, 2015 at 11:14 PM, Swatz bosh <swatzron(a)gmail.com> wrote:

Can you please help me with above query?


--
Thank you,

James Bayer


Re: CF integration with logger and monitoring tools

Swatz bosh
 

Hi,

Can someone help me with my query, please?

Thanks


Re: Overcommit on Diego Cells

James Bayer
 

i know that onsi and eric have discussed this. i've heard that eric is
working on a reply.

On Tue, Aug 11, 2015 at 12:50 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Today my org manages our DEA resources using a heavy overcommit strategy.
Rather than being conservative and ensuring that none of our DEAs commit to
more than they can handle we have instead decided to overcommit to the
point where we basically turn off DEA resource management.

All our DEAs have the same amount of RAM and Disk and we closely monitor
these resources. When load gets beyond a threshold we deploy more DEAs.
We use Org quotas as ceilings to help stop an app from accidentally killing
everything.

So far this strategy has worked out great for us. It's allowed us to
provide much more friendly defaults for RAM and Disk and allowed us to get
more value out of our DEA dollar.

As we move into Diego we're attempting to implement the same strategy. We
want to be sure to do it correctly since we're less comfortable with Diego
at this point.

Diego doesn't have the friendly "overcommit" property DEAs do. Instead I
see "diego.executor.memory_capacity_mb" and
"diego.executor.disk_capacity_mb". Can I overcommit these values and get
the same behaviour I would get by overcommitting DEAs?

I'd also like some advice on what "diego.garden-linux.btrfs_store_size_mb"
is and how it might apply to my overcommit plans.
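In manifest terms, I assume that would look something like this (property
names taken from above; the values and the overcommit ratio are purely
illustrative):

```yaml
properties:
  diego:
    executor:
      memory_capacity_mb: 65536   # advertise 64 GB of RAM per cell
      disk_capacity_mb: 131072    # advertise 128 GB of disk per cell
```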

Thanks,
Mike
--
Thank you,

James Bayer


CF Release Acceptance Test Changes

Zachary Auerbach <zauerbach@...>
 

The CF Acceptance Tests have been modified so that users can configure them
to run with HTTPS or HTTP. By default the settings have been changed from
HTTP to HTTPS. If you need to run them in HTTP mode for development (like
bosh-lite) then you can set the `"use_http": true` property in the
integration json config. This property can also be set for the
acceptance-test errand in your CF manifest.
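For example, a bosh-lite integration config would include something like
this (other required fields omitted here):

```json
{
  "use_http": true
}
```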

Zak + Dan
CF OSS Integration
"Defender of the Universe"



Re: Notifications for service brokers

Juan Pablo Genovese
 

Vineet,

there are some proposals to add better notifications to CF in general and
the CC in particular, but for now you can poll the CC API to get those
events. See http://apidocs.cloudfoundry.org/214/

Thanks!

2015-08-14 18:31 GMT-03:00 Vineet Banga <vineetbanga1(a)gmail.com>:

Is there any notification pub/sub mechanism in Cloud Foundry for when
services are created/updated/deleted? We are exposing a few services in CF
using service brokers, and we would like some common actions to occur when
our services are created/deleted/updated.
--
Mis mejores deseos,
Best wishes,
Meilleurs vœux,

Juan Pablo
------------------------------------------------------
http://www.jpgenovese.com


Re: Script hangs when updating the cf-release

CF Runtime
 

I see some differences in the submodules you are getting vs what is
currently on cf-release dev or master branches. Which branch, tag, or SHA
are you trying to check out?

Joseph & Dies
OSS Release Integration Team

On Thu, Aug 13, 2015 at 12:13 PM, Qing Gong <qinggong(a)gmail.com> wrote:

When I ran the following command, I always got stuck when updating
etcd-incubator. Any idea? I tried on multiple machines and they all hang
at the same place. Is it possible that this is a problem on GitHub?

The console info is:
prompt> ./update

===> Uncommitted submodules changes will be clobbered <===


===> Unversioned changes will be clobbered <===

+ has_upstream
+ git rev-parse '@{u}'
+ git pull
Already up-to-date.
+ git submodule sync
+ git submodule foreach --recursive 'git submodule sync; git clean -d
--force --force'
+ git submodule update --init --recursive
Submodule 'shared' (
https://github.com/cloudfoundry/shared-release-packages.git) registered
for path 'shared'
Submodule 'src/cloud_controller_ng' (
https://github.com/cloudfoundry/cloud_controller_ng.git) registered for
path 'src/cloud_controller_ng'
Submodule 'src/collector' (https://github.com/cloudfoundry/collector.git)
registered for path 'src/collector'
Submodule 'src/dea_next' (https://github.com/cloudfoundry/dea_ng.git)
registered for path 'src/dea_next'
Submodule 'src/etcd-metrics-server' (
https://github.com/cloudfoundry-incubator/etcd-metrics-server.git)
registered for path 'src/etcd-metrics-server'
Submodule 'src/etcd-release' (
https://github.com/cloudfoundry-incubator/etcd-release.git) registered
for path 'src/etcd-release'
Submodule 'src/github.com/cloudfoundry-incubator/routing-api' (
https://github.com/cloudfoundry-incubator/routing-api.git) registered for
path 'src/github.com/cloudfoundry-incubator/routing-api'
Submodule 'src/github.com/cloudfoundry/cf-acceptance-tests' (
https://github.com/cloudfoundry/cf-acceptance-tests) registered for path
'src/github.com/cloudfoundry/cf-acceptance-tests'
Submodule 'src/github.com/cloudfoundry/gorouter' (
https://github.com/cloudfoundry/gorouter) registered for path 'src/
github.com/cloudfoundry/gorouter'
Submodule 'src/gnatsd' (https://github.com/apcera/gnatsd.git) registered
for path 'src/gnatsd'
Submodule 'src/hm9000' (https://github.com/cloudfoundry/hm-workspace.git)
registered for path 'src/hm9000'
Submodule 'src/loggregator' (https://github.com/cloudfoundry/loggregator)
registered for path 'src/loggregator'
Submodule 'src/smoke-tests' (
https://github.com/cloudfoundry/cf-smoke-tests) registered for path
'src/smoke-tests'
Submodule 'src/statsd-injector' (
https://github.com/cloudfoundry/statsd-injector.git) registered for path
'src/statsd-injector'
Submodule 'src/uaa' (https://github.com/cloudfoundry/uaa.git) registered
for path 'src/uaa'
Submodule 'src/warden' (https://github.com/cloudfoundry/warden.git)
registered for path 'src/warden'
Initialized empty Git repository in
/local/install/users/cfg/workspace/cf-release/shared/.git/
remote: Counting objects: 1560, done.
remote: Total 1560 (delta 0), reused 0 (delta 0), pack-reused 1560
Receiving objects: 100% (1560/1560), 380.81 KiB, done.
Resolving deltas: 100% (589/589), done.
Submodule path 'shared': checked out
'87112bae91127792ffbed7fc6c76ac7088708ace'
Initialized empty Git repository in
/local/install/users/cfg/workspace/cf-release/src/cloud_controller_ng/.git/
remote: Counting objects: 66064, done.
remote: Compressing objects: 100% (107/107), done.
remote: Total 66064 (delta 26), reused 0 (delta 0), pack-reused 65955
Receiving objects: 100% (66064/66064), 17.75 MiB | 4.32 MiB/s, done.
Resolving deltas: 100% (45227/45227), done.
Submodule path 'src/cloud_controller_ng': checked out
'ff442527f0bee06cb6ea6baa22175905a92fc718'
Initialized empty Git repository in
/local/install/users/cfg/workspace/cf-release/src/collector/.git/
remote: Counting objects: 2488, done.
remote: Total 2488 (delta 0), reused 0 (delta 0), pack-reused 2488
Receiving objects: 100% (2488/2488), 9.19 MiB | 5.23 MiB/s, done.
Resolving deltas: 100% (1411/1411), done.
Submodule path 'src/collector': checked out
'c38732735b36ed3ee9e44ee9bf91dce07bedd63b'
Initialized empty Git repository in
/local/install/users/cfg/workspace/cf-release/src/dea_next/.git/
remote: Counting objects: 12737, done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 12737 (delta 0), reused 0 (delta 0), pack-reused 12734
Receiving objects: 100% (12737/12737), 15.25 MiB | 4.26 MiB/s, done.
Resolving deltas: 100% (7014/7014), done.
Submodule path 'src/dea_next': checked out
'b74390b2472a6a929807040f4439a30ecb46e699'
Submodule 'go/src/github.com/cloudfoundry/gosteno' (
https://github.com/cloudfoundry/gosteno.git) registered for path 'go/src/
github.com/cloudfoundry/gosteno'
Submodule 'go/src/github.com/howeyc/fsnotify' (
https://github.com/howeyc/fsnotify.git) registered for path 'go/src/
github.com/howeyc/fsnotify'
Initialized empty Git repository in
/local/install/users/cfg/workspace/cf-release/src/dea_next/go/src/
github.com/cloudfoundry/gosteno/.git/
remote: Counting objects: 436, done.
remote: Total 436 (delta 0), reused 0 (delta 0), pack-reused 436
Receiving objects: 100% (436/436), 105.20 KiB, done.
Resolving deltas: 100% (222/222), done.
Submodule path 'go/src/github.com/cloudfoundry/gosteno': checked out
'c6379cb7ef097850eec4dc61e1730bbb99a2a2a8'
Initialized empty Git repository in
/local/install/users/cfg/workspace/cf-release/src/dea_next/go/src/
github.com/howeyc/fsnotify/.git/
remote: Counting objects: 660, done.
remote: Total 660 (delta 0), reused 0 (delta 0), pack-reused 660
Receiving objects: 100% (660/660), 168.97 KiB, done.
Resolving deltas: 100% (382/382), done.
Submodule path 'go/src/github.com/howeyc/fsnotify': checked out
'08040c5a90632bd721465eb8ad74a8e61bd7bf95'
Initialized empty Git repository in
/local/install/users/cfg/workspace/cf-release/src/etcd-metrics-server/.git/
remote: Counting objects: 1658, done.
remote: Total 1658 (delta 0), reused 0 (delta 0), pack-reused 1658
Receiving objects: 100% (1658/1658), 591.04 KiB, done.
Resolving deltas: 100% (757/757), done.
Submodule path 'src/etcd-metrics-server': checked out
'90c444c7f93cacb998e45c46f1e06ecf4c8eb9c4'
Initialized empty Git repository in
/local/install/users/cfg/workspace/cf-release/src/etcd-release/.git/
remote: Counting objects: 1546, done.
remote: Total 1546 (delta 0), reused 0 (delta 0), pack-reused 1546
Receiving objects: 100% (1546/1546), 1.50 MiB | 2.22 MiB/s, done.
Resolving deltas: 100% (396/396), done.
Submodule path 'src/etcd-release': checked out
'6722dd34e92e760c6e34134224cd3323215c7817'
Submodule 'src/etcd' (https://github.com/coreos/etcd.git) registered for
path 'src/etcd'
Submodule 'src/github.com/cloudfoundry-incubator/cf-test-helpers' (
http://github.com/cloudfoundry-incubator/cf-test-helpers.git) registered
for path 'src/github.com/cloudfoundry-incubator/cf-test-helpers'
Submodule 'src/github.com/coreos/go-etcd' (
http://github.com/coreos/go-etcd.git) registered for path 'src/
github.com/coreos/go-etcd'
Submodule 'src/github.com/nu7hatch/gouuid' (
http://github.com/nu7hatch/gouuid.git) registered for path 'src/
github.com/nu7hatch/gouuid'
Submodule 'src/github.com/onsi/ginkgo' (http://github.com/onsi/ginkgo.git)
registered for path 'src/github.com/onsi/ginkgo'
Submodule 'src/github.com/onsi/gomega' (http://github.com/onsi/gomega.git)
registered for path 'src/github.com/onsi/gomega'
Submodule 'src/github.com/ugorji/go' (http://github.com/ugorji/go.git)
registered for path 'src/github.com/ugorji/go'
Initialized empty Git repository in
/local/install/users/cfg/workspace/cf-release/src/etcd-release/src/etcd/.git/
remote: Counting objects: 29815, done.
remote: Compressing objects: 100% (31/31), done.
remote: Total 29815 (delta 6), reused 0 (delta 0), pack-reused 29783
Receiving objects: 100% (29815/29815), 17.07 MiB | 6.59 MiB/s, done.
Resolving deltas: 100% (18339/18339), done.
Submodule path 'src/etcd': checked out
'6335fdc595ff03d27007db04e5b545189b9647c6'
Initialized empty Git repository in
/local/install/users/cfg/workspace/cf-release/src/etcd-release/src/
github.com/cloudfoundry-incubator/cf-test-helpers/.git/
(hang!)


Re: Recommended way/place to configure uaa for CF runtime

Madhura Bhave
 

Hi Tom,

The recommended way to update uaa config in cf-release is to update the
manifest file that was used for deploying CF and then doing a bosh deploy
again. This will update the uaa.yml with the properties that are configured
in the manifest. The properties required for LDAP configuration can be
found here:
https://github.com/cloudfoundry/cf-release/blob/master/jobs/uaa/spec#L169-L231
.
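As a rough sketch, the manifest addition looks something like the
following (the property names should be checked against the spec linked
above; all values here are placeholders):

```yaml
properties:
  uaa:
    ldap:
      enabled: true
      profile_type: search-and-bind
      url: "ldaps://ad.example.com:636"
      userDN: "cn=svc-uaa,ou=service,dc=example,dc=com"
      userPassword: "secret"
      searchBase: "dc=example,dc=com"
      searchFilter: "sAMAccountName={0}"
```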

Thanks,
Madhura

On Thu, Aug 13, 2015 at 4:27 AM, Tom Sherrod <tom.sherrod(a)gmail.com> wrote:

I've deployed cf-release 214.
I now wish to configure uaa to use LDAP (AD) for authentication.
I found documentation on how uaa works and on uaa.yml.
How does configuring/updating uaa.yml fit into the cf-release deploy
work flow?
I deployed with remote/download instead of create and upload. Do I edit
directly on the server?
Is editing uaa.yml and then creating and uploading a release the
recommended way?

Any additional pointers appreciated.

Regarding uaa functionality: is there any additional information on LDAP
groups and org permissions?
https://groups.google.com/a/cloudfoundry.org/forum/#!topic/vcap-dev/X1OLss4zumQ

Thanks,
Tom
