Trouble enabling diego ssh in cf-release:222 diego:0.1437

Mike Youngstrom <youngm@...>
 

I'm working on upgrading to the latest cf-release+diego and I'm having trouble
getting ssh working.

When attempting to ssh with the latest cli I get the error:

"Authorization server did not redirect with one time code"

The relevant config is:

ssh_proxy.uaa_token_url=https://{uaa server}/oauth/token

uaa.clients.ssh-proxy:
  authorized-grant-types: authorization_code
  autoapprove: true
  override: true
  redirect-uri: /login
  scope: openid,cloud_controller.read,cloud_controller.write
  secret: secret

When tracing the CLI I see a call to "POST /oauth/token" and a 200. It
appears that the CLI is expecting a redirect and not a 200.

Is "oauth/token" the correct uaa_token_url endpoint? Any idea why UAA
wouldn't be sending a redirect response from /oauth/token when the plugin
is expecting it?

Mike


Re: cloud_controller_ng performance degrades slowly over time

Dieu Cao <dcao@...>
 

You might try moving the nameserver entry for the consul_agent in
/etc/resolv.conf on the cloud controller to the end to see if that helps.
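Roughly (a sketch; assuming the consul_agent registers itself as 127.0.0.1, and
10.0.0.2 standing in for your upstream resolver), that means changing /etc/resolv.conf from:

nameserver 127.0.0.1
nameserver 10.0.0.2

to:

nameserver 10.0.0.2
nameserver 127.0.0.1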

-Dieu

On Wed, Oct 28, 2015 at 12:55 PM, Matt Cholick <cholick(a)gmail.com> wrote:

Looks like you're right and we're experiencing the same issue as you, Amit.
We're suffering slow DNS lookups. The code is spending all of its
time here:
/var/vcap/packages/ruby-2.1.6/lib/ruby/2.1.0/net/http.rb.initialize :879

I've experimented some with the environment and, after narrowing things
down to DNS, here's a minimal example demonstrating the problem:

require "net/http"
require "uri"

# uri = URI.parse("http://uaa.example.com/info")
uri = URI.parse("https://www.google.com")

i = 0
while true do
  beginning_time = Time.now
  response = Net::HTTP.get_response(uri)

  end_time = Time.now
  i += 1
  puts "#{"%04d" % i} Response: [#{response.code}], Elapsed: #{((end_time - beginning_time)*1000).round} ms"
end


I see the issue hitting both UAA and just hitting Google. At some point,
requests start taking 5 seconds longer, which I assume is a timeout. One run:

0349 Response: [200], Elapsed: 157 ms
0350 Response: [200], Elapsed: 169 ms
0351 Response: [200], Elapsed: 148 ms
0352 Response: [200], Elapsed: 151 ms
0353 Response: [200], Elapsed: 151 ms
0354 Response: [200], Elapsed: 152 ms
0355 Response: [200], Elapsed: 153 ms
0356 Response: [200], Elapsed: 6166 ms
0357 Response: [200], Elapsed: 5156 ms
0358 Response: [200], Elapsed: 5158 ms
0359 Response: [200], Elapsed: 5156 ms
0360 Response: [200], Elapsed: 5156 ms
0361 Response: [200], Elapsed: 5160 ms
0362 Response: [200], Elapsed: 5172 ms
0363 Response: [200], Elapsed: 5157 ms
0364 Response: [200], Elapsed: 5165 ms
0365 Response: [200], Elapsed: 5157 ms
0366 Response: [200], Elapsed: 5155 ms
0367 Response: [200], Elapsed: 5157 ms

Other runs are the same. How many requests it takes before things time out
varies considerably (one run started in the 10s and another took 20k
requests), but it always happens. After that, lookups take an additional 5
seconds and never recover to their initial speed. This is why restarting the
cloud controller fixes the issue (temporarily).

The really slow cli calls (in the 1+min range) are simply due to the
amount of paging that fetching data for a large org requires, as that 5
seconds is multiplied out over several calls. Every user is feeling this
delay; it's just that it only becomes unworkable when pulling the large
datasets from UAA.

I was not able to reproduce the timeouts using a script calling "dig" against
localhost, only from inside Ruby code.
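A way to isolate the resolver from the HTTP stack (just a sketch, not something
I've run here; Resolv::DNS with no arguments reads /etc/resolv.conf, so it follows
the same nameserver order):

require "resolv"

resolver = Resolv::DNS.new
i = 0
while true do
  beginning_time = Time.now
  resolver.getaddress("www.google.com")
  i += 1
  puts "#{"%04d" % i} Lookup elapsed: #{((Time.now - beginning_time)*1000).round} ms"
end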

To reiterate our setup: we're running 212 without a consul server, just
the agents. I also reproduced this problem on a completely different 217
install in a different datacenter. That setup also didn't have an actual
consul server, just the agent. I don't see anything in the release notes
past 217 indicating that this is fixed.

Anyone have thoughts? This is definitely creating some real headaches for
user management in our larger orgs. Amit: is there a bug we can follow?

-Matt


On Fri, Oct 9, 2015 at 10:52 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

You may not be running any consul servers, but you may have a consul
agent colocated on your CC VM and running there.

On Thu, Oct 8, 2015 at 5:59 PM, Matt Cholick <cholick(a)gmail.com> wrote:

Zack & Swetha,
Thanks for the suggestion, will gather netstat info there next time.

Amit,
The 1:20 delay is due to paging. The total call length for each page is
closer to 10s. I just included those two calls, with the paging done by the cf
command line, to demonstrate the dramatic difference after a restart. Delays
disappear after a restart. We're not running consul yet, so it wouldn't be
that.

-Matt



On Thu, Oct 8, 2015 at 10:03 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

We've seen issues on some environments where requests to cc that
involve cc making a request to uaa or hm9k have a 5s delay while the local
consul agent fails to resolve the DNS for uaa/hm9k, before moving on to a
different resolver.

The expected behavior, observed in almost all environments, is that the
DNS request to the consul agent fails fast and the resolver moves on to the
next nameserver; we haven't figured out why a couple of envs exhibit different
behavior. The impact is a 5 or 10s delay (5 or 10, not 5 to 10). It doesn't
explain your 1:20 delay though. Are you always seeing delays that long?

Amit


On Thursday, October 8, 2015, Zach Robinson <zrobinson(a)pivotal.io>
wrote:

Hey Matt,

I'm trying to think of other things that would affect only the
endpoints that interact with UAA and would be fixed after a CC restart.
I'm wondering if it's possible there are a large number of connections
being kept-alive, or stuck in a wait state or something. Could you take a
look at the netstat information on the CC and UAA next time this happens?

-Zach and Swetha


Re: cloud_controller_ng performance degrades slowly over time

Matt Cholick
 

Looks like you're right and we're experiencing the same issue as you, Amit.
We're suffering slow DNS lookups. The code is spending all of its
time here:
/var/vcap/packages/ruby-2.1.6/lib/ruby/2.1.0/net/http.rb.initialize :879

I've experimented some with the environment and, after narrowing things
down to DNS, here's a minimal example demonstrating the problem:

require "net/http"
require "uri"

# uri = URI.parse("http://uaa.example.com/info")
uri = URI.parse("https://www.google.com")

i = 0
while true do
  beginning_time = Time.now
  response = Net::HTTP.get_response(uri)

  end_time = Time.now
  i += 1
  puts "#{"%04d" % i} Response: [#{response.code}], Elapsed: #{((end_time - beginning_time)*1000).round} ms"
end


I see the issue hitting both UAA and just hitting Google. At some point,
requests start taking 5 seconds longer, which I assume is a timeout. One run:

0349 Response: [200], Elapsed: 157 ms
0350 Response: [200], Elapsed: 169 ms
0351 Response: [200], Elapsed: 148 ms
0352 Response: [200], Elapsed: 151 ms
0353 Response: [200], Elapsed: 151 ms
0354 Response: [200], Elapsed: 152 ms
0355 Response: [200], Elapsed: 153 ms
0356 Response: [200], Elapsed: 6166 ms
0357 Response: [200], Elapsed: 5156 ms
0358 Response: [200], Elapsed: 5158 ms
0359 Response: [200], Elapsed: 5156 ms
0360 Response: [200], Elapsed: 5156 ms
0361 Response: [200], Elapsed: 5160 ms
0362 Response: [200], Elapsed: 5172 ms
0363 Response: [200], Elapsed: 5157 ms
0364 Response: [200], Elapsed: 5165 ms
0365 Response: [200], Elapsed: 5157 ms
0366 Response: [200], Elapsed: 5155 ms
0367 Response: [200], Elapsed: 5157 ms

Other runs are the same. How many requests it takes before things time out
varies considerably (one run started in the 10s and another took 20k
requests), but it always happens. After that, lookups take an additional 5
seconds and never recover to their initial speed. This is why restarting the
cloud controller fixes the issue (temporarily).

The really slow cli calls (in the 1+min range) are simply due to the amount
of paging that fetching data for a large org requires, as that 5 seconds is
multiplied out over several calls. Every user is feeling this delay; it's
just that it only becomes unworkable when pulling the large datasets from UAA.

I was not able to reproduce the timeouts using a script calling "dig" against
localhost, only from inside Ruby code.

To reiterate our setup: we're running 212 without a consul server, just
the agents. I also reproduced this problem on a completely different 217
install in a different datacenter. That setup also didn't have an actual
consul server, just the agent. I don't see anything in the release notes
past 217 indicating that this is fixed.

Anyone have thoughts? This is definitely creating some real headaches for
user management in our larger orgs. Amit: is there a bug we can follow?

-Matt

On Fri, Oct 9, 2015 at 10:52 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

You may not be running any consul servers, but you may have a consul agent
colocated on your CC VM and running there.

On Thu, Oct 8, 2015 at 5:59 PM, Matt Cholick <cholick(a)gmail.com> wrote:

Zack & Swetha,
Thanks for the suggestion, will gather netstat info there next time.

Amit,
The 1:20 delay is due to paging. The total call length for each page is
closer to 10s. I just included those two calls, with the paging done by the cf
command line, to demonstrate the dramatic difference after a restart. Delays
disappear after a restart. We're not running consul yet, so it wouldn't be
that.

-Matt



On Thu, Oct 8, 2015 at 10:03 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

We've seen issues on some environments where requests to cc that involve
cc making a request to uaa or hm9k have a 5s delay while the local consul
agent fails to resolve the DNS for uaa/hm9k, before moving on to a
different resolver.

The expected behavior, observed in almost all environments, is that the
DNS request to the consul agent fails fast and the resolver moves on to the
next nameserver; we haven't figured out why a couple of envs exhibit different
behavior. The impact is a 5 or 10s delay (5 or 10, not 5 to 10). It doesn't
explain your 1:20 delay though. Are you always seeing delays that long?

Amit


On Thursday, October 8, 2015, Zach Robinson <zrobinson(a)pivotal.io>
wrote:

Hey Matt,

I'm trying to think of other things that would affect only the
endpoints that interact with UAA and would be fixed after a CC restart.
I'm wondering if it's possible there are a large number of connections
being kept-alive, or stuck in a wait state or something. Could you take a
look at the netstat information on the CC and UAA next time this happens?

-Zach and Swetha


Re: Ability to move a space between orgs

Mike Youngstrom <youngm@...>
 

You're right. I can see why the ability to move a space is not in any
nearish term plans.

We'll probably look at creating a clone/copy solution and consider it a
long term investment.

Thanks,
Mike

On Tue, Oct 27, 2015 at 6:32 PM, Dieu Cao <dcao(a)pivotal.io> wrote:

Hi Mike,

Yes, moving gets stickier very quickly and precisely what one team wants
to have preserved across an org may differ depending on use case.
Service bindings, environment variables, routes, membership etc are tied
to particular spaces and orgs.
There are many implications to "moving" service instances, apps, etc to be
considered.

Have you thought about "cloning" a space?
I could imagine a plugin that could clone apps (names, bits, environment
variables) from one space to another.
Perhaps even moving routes.
Even creation of service instances and binding to similarly named apps
could be reasoned over.

-Dieu
CF CAPI PM


On Tue, Oct 27, 2015 at 9:23 AM, Mike Youngstrom <youngm(a)gmail.com> wrote:

We occasionally need to move spaces between orgs when our business
reorganizes. It would be great if we could atomically move spaces between
orgs.

It seems not difficult but when you look deeper things get stickier:
* Private Domains are owned by orgs
* Service access may be different between orgs
* New Organization scoped brokers may cause issues.

Thoughts on supporting moving a space between orgs? Could perhaps error
out if an issue like the ones above is detected?

Mike


Error to make a Request to update password in UAA

Juan Antonio BreƱa Moral <bren at juanantonio.info...>
 

Hi,

Using the UAA API, it is possible to create users without a password. Later, if you need to update the password, what is the right request to use? The current documentation is not very clear:

https://github.com/cloudfoundry/uaa/blob/master/docs/UAA-APIs.rst#create-a-user-post-users

The document has a link to a section on updating the password, but it was removed:
http://www.simplecloud.info/specs/draft-scim-api-01.html#change-password

Using the documentation, the request throws an Error:

uaa_options = {
    "schemas": ["urn:scim:schemas:core:1.0"],
    "password": "abc123456",
    "oldPassword": "oldpassword"
}

return CloudFoundryUsersUAA.updatePassword(token_type, access_token, uaa_guid, uaa_options);

UsersUAA.prototype.updatePassword = function (token_type, access_token, uaa_guid, uaa_options) {
    "use strict";

    var url = this.UAA_API_URL + "/Users/" + uaa_guid + "/password";
    var options = {
        method: 'PUT',
        url: url,
        headers: {
            Accept: 'application/json',
            Authorization: token_type + ' ' + access_token
        },
        json: uaa_options
    };

    return this.REST.request(options, "200", false);
};

Error:

Error: the string "<html><head><title>Apache Tomcat/7.0.55 - Error report</
title><style><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-
color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:whi
te;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-s
erif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tah
oma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,
Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Ar
ial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}A.
name {color : black;}HR {color : #525D76;}--></style> </head><body><h1>HTTP Stat
us 400 - </h1><HR size=\"1\" noshade=\"noshade\"><p><b>type</b> Status report</p
<p><b>message</b> <u></u></p><p><b>description</b> <u>The request sent by the c
lient was syntactically incorrect.</u></p><HR size=\"1\" noshade=\"noshade\"><h3
Apache Tomcat/7.0.55</h3></body></html>" was thrown, throw an Error :)
Note: it is possible to create a user in UAA with a password in the first operation, but the documentation is not clear on this point.

var uaa_options = {
    "schemas": ["urn:scim:schemas:core:1.0"],
    "userName": username,
    "emails": [
        {
            "value": "demo(a)example.com",
            "type": "work"
        }
    ],
    "password": "123456"
};

Usage with CF CLI: cf login -a https://apiMY_IP.xip.io -u userXXX -p 123456 --skip-ssl-validation

Any help to update passwords?

Juan Antonio


Presentation of BOSH on OpenStack Tokyo Summit 2015

Hua ZZ Zhang <zhuadl@...>
 

Hi CF users and developers,

Today at the OpenStack Tokyo Summit 2015, we presented BOSH to the OpenStack community and shared the results of the survey we did a couple of days ago. We also recorded a video demonstrating how to use BOSH to deploy Cloud Foundry on OpenStack, and shared our experiences with the audience. You can find the document and video links below. Any comments are welcome! Thanks!

Presentation: https://goo.gl/QVUAZn

Best regards,

-Edward, Dr. Max, Tom (BOSH contributors)


Re: Logs, Timestamps

Bharath
 

Hi Daniel,


The log formats basically come from a logging library called steno:
https://github.com/cloudfoundry/steno

It was mentioned that it can be configured to emit logs in a human-readable
format. I never did that, but I think you can have a look into it.
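In the meantime, a quick way to convert a single timestamp by hand (just a
sketch; the timestamp value below is only an example):

require "time"
puts Time.at(1446076834).utc.iso8601   # => "2015-10-29T00:00:34Z"

which at least avoids round-tripping through an online converter.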

You can also look at the firehose and nozzles projects; I think those are
also useful:

http://docs.cloudfoundry.org/loggregator/architecture.html#firehose

http://docs.cloudfoundry.org/loggregator/architecture.html#nozzles

Tutorial on nozzles:
http://docs.cloudfoundry.org/loggregator/nozzle-tutorial.html


regards
Bharath

On Wed, Oct 28, 2015 at 2:06 PM, Daniel Jones <
daniel.jones(a)engineerbetter.com> wrote:

Hi all,

Why are logs like those of the CloudController dated by machine-readable
timestamps? Are there any tools to parse CF logs in-situ?

I find it a constant source of frustration trying to debug an issue
reported by a human with a human-parseable date, putting that into a Unix
timestamp converter, searching through logs for that timestamp, then
scrolling up and down having to occasionally copy/paste a timestamp back
into a converter to see if I've gone too far away from my rough target time.

Computers are rather good at converting data formats - my brain, not so
much. Wouldn't it make more sense to have the logs human-readable by
default, and if an automated system needs to ingest those logs, let *it* do
the parsing?

Regards,

Daniel Jones
EngineerBetter.com


Logs, Timestamps

Daniel Jones
 

Hi all,

Why are logs like those of the CloudController dated by machine-readable
timestamps? Are there any tools to parse CF logs in-situ?

I find it a constant source of frustration trying to debug an issue
reported by a human with a human-parseable date, putting that into a Unix
timestamp converter, searching through logs for that timestamp, then
scrolling up and down having to occasionally copy/paste a timestamp back
into a converter to see if I've gone too far away from my rough target time.

Computers are rather good at converting data formats - my brain, not so
much. Wouldn't it make more sense to have the logs human-readable by
default, and if an automated system needs to ingest those logs, let *it* do
the parsing?

Regards,

Daniel Jones
EngineerBetter.com


Re: Ability to move a space between orgs

Dieu Cao <dcao@...>
 

Hi Mike,

Yes, moving gets stickier very quickly and precisely what one team wants to
have preserved across an org may differ depending on use case.
Service bindings, environment variables, routes, membership etc are tied to
particular spaces and orgs.
There are many implications to "moving" service instances, apps, etc to be
considered.

Have you thought about "cloning" a space?
I could imagine a plugin that could clone apps (names, bits, environment
variables) from one space to another.
Perhaps even moving routes.
Even creation of service instances and binding to similarly named apps
could be reasoned over.

-Dieu
CF CAPI PM

On Tue, Oct 27, 2015 at 9:23 AM, Mike Youngstrom <youngm(a)gmail.com> wrote:

We occasionally need to move spaces between orgs when our business
reorganizes. It would be great if we could atomically move spaces between
orgs.

It seems not difficult but when you look deeper things get stickier:
* Private Domains are owned by orgs
* Service access may be different between orgs
* New Organization scoped brokers may cause issues.

Thoughts on supporting moving a space between orgs? Could perhaps error
out if an issue like the ones above is detected?

Mike


Re: cf": error=2, No such file or directory and error=2

Amit Kumar Gupta
 

Hi Varsha,

Can you do the following:

0. show the output of running "which cf" in the terminal
1. show the output of running "echo $PATH" in the terminal
2. restart Eclipse
3. Add the line Matthew suggested to dump the PATH that Eclipse knows
about, above the line where you execute the command
4. Re-run the execution from Eclipse, and show the output from the PATH
dump.

Thanks,
Amit

On Wed, Oct 21, 2015 at 11:36 PM, Varsha Nagraj <n.varsha(a)gmail.com> wrote:

Hello Matthew,

Can you please let me know how I add this to my PATH? Previously I would
run the same commands on a Windows system from Eclipse. I have not set any
PATH env variable on Windows, as far as I remember.


Re: doppler issue which fails to emit logs with syslog protocol on CFv212

Warren Fernandes
 

Hey Masumi,

We set up a similar environment with CF v212 and went through a few scenarios. We took a look at the problem and we feel there are a couple of odd things.

"Error when polling cloud controller: Remote server error: Unauthorized" is caused when syslog_drain_binder gets a 401 response from CC which means your credentials could be invalid.

"AppStoreWatcher: Got error while waiting for ETCD events: store request timed out" is caused when we took ETCD down and noticed a flood of these errors in the doppler.stdout.log. In this case (ETCD being down) you should also see in syslog_drain_binder.stdout.log the following error message: "Error when staying leader: store request timed out". And yes other components like trafficcontroller and metron agents should also error out.

How many ETCDs were running at the time of these errors? Was there a deployment going on at the time? As Dies Koper pointed out (lamb slack channel), in CF 212 the username for the syslog drain endpoint in the syslog_drain_binder is hardcoded to bulk_api and only the password is configurable. So maybe there was a password mismatch? Check the timestamps of the events to see if they are related.

Thanks.
Warren & Nino


Re: Droplets and Stacks

Guillaume Berche
 

Thanks Dieu for your response and for considering these use-cases!

When updating a system buildpack, only the "updated_at" field changes and
no other versioning info is available. I therefore understand that the
"updated_at" field plays the role of the "sequence number" for updates and
that an explicit integer sequence number is not currently being considered.
While the updated_at field is generic to all CC entities, which makes it
appealing for filtering, I feel that dates are heavier and more error-prone
for querying than integer sequence numbers.

Besides, as buildpack updates have security implications, wouldn't it make
sense to expose "buildpack events" when they are created/updated/deleted,
including the regular actor/actee information (just like other system-wide
entities such as service brokers or service plans)?

Last, wouldn't it make sense to have the buildpack staging lifecycle
invoke the buildpack "detect" script and record its output even when the -b
option is specified on the "cf push" command? This way the detailed internal
versioning information the buildpack returns on STDOUT would be available and
displayed with the "cf app" command regardless of whether the -b option was
used to constrain the choice to a specific buildpack.

ps: Dieu, Mike, I'm not sure whether buildpack lifecycle features now fall
under the CAPI team or the buildpacks team (which tracker should I watch for
stories related to this discussion?).

Thanks again,

Guillaume.


Initial buildpack uploading:
$ cf create-buildpack gbe-static-bp-test ./binary-bp1.zip 20
$ CF_TRACE=true cf buildpacks
[...]
{
    "metadata": {
        "guid": "0b173bea-6b1b-4ec5-b1de-e8911c3d7cc4",
        "url": "/v2/buildpacks/0b173bea-6b1b-4ec5-b1de-e8911c3d7cc4",
        "created_at": "2015-10-27T17:46:49Z",
        "updated_at": "2015-10-27T17:46:50Z"
    },
    "entity": {
        "name": "gbe-static-bp-test",
        "position": 13,
        "enabled": true,
        "locked": false,
        "filename": "binary-bp1.zip"
    }
}

Buildpack update:

$ cf update-buildpack gbe-static-bp-test -p ./binary-bp2.zip -i 20
$ CF_TRACE=true cf buildpacks
[...]

{
    "metadata": {
        "guid": "0b173bea-6b1b-4ec5-b1de-e8911c3d7cc4",
        "url": "/v2/buildpacks/0b173bea-6b1b-4ec5-b1de-e8911c3d7cc4",
        "created_at": "2015-10-27T17:46:49Z",
        "updated_at": "2015-10-27T17:48:28Z"
    },
    "entity": {
        "name": "gbe-static-bp-test",
        "position": 13,
        "enabled": true,
        "locked": false,
        "filename": "binary-bp2.zip"
    }
}

On Tue, Oct 27, 2015 at 10:13 AM, Dieu Cao <dcao(a)pivotal.io> wrote:

Hi Guillaume,

We could consider exposing on a droplet [1] the additional metadata that
you've mentioned, such as the buildpack guid for a system buildpack or the
git sha given a buildpack url.

I think it would make sense to consider adding an additional filter to
/v3/droplets to be able to query by creation date. Similarly, allowing
filters based on buildpack information sounds useful as well.

-Dieu



[1]
http://apidocs.cloudfoundry.org/222/droplets_(experimental)/get_a_droplet.html
[2]
http://apidocs.cloudfoundry.org/222/droplets_(experimental)/list_all_droplets.html


On Sun, Oct 25, 2015 at 2:40 PM, Guillaume Berche <bercheg(a)gmail.com>
wrote:

Thanks Mike for the additional details. Long due PR on buildpacks-docs at
[1]

Dieu, can you please comment on the suggestion above for more fine-grained
buildpack versioning support?

Thanks,

Guillaume.

[1] https://github.com/cloudfoundry/docs-buildpacks/pull/39

On Thu, Aug 6, 2015 at 8:02 PM, Mike Dalessio <mdalessio(a)pivotal.io>
wrote:





The /v2/buildpacks endpoint (used by the "cf buildpacks" command)
displays the last update date for a buildpack, e.g.

{
    "metadata": {
        "guid": "e000b78c-c898-419e-843c-2fd64175527e",
        "url": "/v2/buildpacks/e000b78c-c898-419e-843c-2fd64175527e",
        "created_at": "2014-04-08T22:05:34Z",
        "updated_at": "2015-07-08T23:26:42Z"
    },
    "entity": {
        "name": "java_buildpack",
        "position": 3,
        "enabled": true,
        "locked": false,
        "filename": "java-buildpack-v3.1.zip"
    }
}

Wouldn't it make sense to have the CC increment a version number on
each update so that it becomes easier to query than relying only on date
comparisons?

While it's great to have buildpacks themselves provide detailed
versioning info for their code and their most important
dependencies/remote artifacts, I feel the cf platform should provide a bit
more support to help identify the versions of buildpacks used by apps, such as:
- refine the app summary endpoint [g2]:
  - for system buildpacks: include the buildpack guid (in addition to
    the buildpack name) so as to allow correlation to the /v2/buildpacks endpoint
  - for custom buildpacks (url): record and display the git commit hash
    for a buildpack url
- refine the app listing endpoints [g4] or v3 [g5] to:
  - support querying apps per system buildpack id
  - support querying apps by "package_updated_at" dates, or better, by a
    version number as suggested above

I'm wondering whether the CAPI team working on API V3 is planning some
work in this area, and could comment on the suggestions above.
I'll let Dieu respond to these suggestions, as she's the CAPI PM.


Re: When will dea be replaced by diego?

Amit Kumar Gupta
 

That's strange, just followed it, worked for me. You can search the
mailing list for the subject "Cloud Foundry DEA to Diego switch - when?"

On Tue, Oct 27, 2015 at 1:35 AM, Aleksey Zalesov <
aleksey.zalesov(a)altoros.com> wrote:

Hi Amit,

the link is broken.

Alex Zalesov


Ability to move a space between orgs

Mike Youngstrom <youngm@...>
 

We occasionally need to move spaces between orgs when our business
reorganizes. It would be great if we could atomically move spaces between
orgs.

It seems not difficult but when you look deeper things get stickier:
* Private Domains are owned by orgs
* Service access may be different between orgs
* New Organization scoped brokers may cause issues.

Thoughts on supporting moving a space between orgs? Could perhaps error
out if an issue like the ones above is detected?

Mike


Re: how does hm9000 actually determine application health?

Jesse T. Alford
 

If the app doesn't have a bound route, it's not health checked on DEAs. If
it fails without actually exiting, the system won't notice.

(On Diego, the health check has to be explicitly disabled.)
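For a worker app that shouldn't be port-checked, that would be something like
(MY_APP_NAME is a placeholder):

$ cf set-health-check MY_APP_NAME none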

On Tue, Oct 27, 2015, 8:12 AM Eric Poelke <epoelke(a)gmail.com> wrote:

Thanks guys. So I thought I had come across the bit about it being healthy
if it's listening on $PORT, but that got me thinking about the small worker
process I just deployed. Since it does not listen on a port, is it just
assumed to be "ok"?


[abacus] Port numbers we use in local test environments

Jean-Sebastien Delfino
 

Hi all,

In preparation for our final 0.0.2 rc, I've started to look into setting up
local test environments with multiple instances of each Abacus app. I found
that we don't have a documented list of the port numbers we use for testing
each app locally outside of CF, and that the configuration of these ports
is distributed in multiple files and scripts, making it a bit difficult to
manage.

So, I'm planning to clean that up sometime today, hoping to make it easier
to configure test environments with multiple instances. As a heads up, I'm
probably going to reassign some of the test ports we use, but will keep the
ports for usage submission and reporting (our public APIs) unchanged to not
break the folks that may be testing with them.

I'm also going to add that list of default test ports to a doc/ports.md
file.

HTH

- Jean-Sebastien


Re: how does hm9000 actually determine application health?

Eric Poelke
 

Thanks guys. So I thought I had come across the bit about it being healthy if it's listening on $PORT, but that got me thinking about the small worker process I just deployed. Since it does not listen on a port, is it just assumed to be "ok"?


Re: How to get a new UAA guid by REST

Juan Antonio BreƱa Moral <bren at juanantonio.info...>
 


Re: Diego and Maven support

Daniel Mikusa
 

Krzysztof,

Passed some of this information along to engineering and they've got a
story to investigate this more.

https://www.pivotaltracker.com/story/show/106702544

From what I saw with some additional tests, this 503 error is not going to
prevent your app from deploying. It happens after the app has been
deployed, when the plugin is trying to monitor the progress of your app
while it stages and starts. The 503 will make Maven fail, but if you
ignore that and wait a bit you should see that the app stages and starts
successfully.

If it doesn't start, you could be hitting a different issue or there could
be some other problem with the app. You could try troubleshooting that by
using `cf` to push the app, or by running `cf logs` in one terminal and
`mvn cf:push` from a second, which would let you see the full staging and
start-up logs. Perhaps something in there would indicate the problem.
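For example (a sketch; MY_APP_NAME stands in for the app name configured in
your pom.xml):

$ cf logs MY_APP_NAME    # terminal 1: streams the staging and start-up logs
$ mvn cf:push            # terminal 2: pushes via the Maven plugin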

Dan

On Mon, Oct 26, 2015 at 10:23 AM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:

Krzysztof,

Thanks for the follow up. I pushed a test app and I'm seeing the same
"503 Service Unavailable" from Maven. Is that what you're seeing?

Dan

On Sun, Oct 25, 2015 at 4:32 PM, Krzysztof Wilk <chris.m.wilk(a)gmail.com>
wrote:

Dan,

It seems to me that there is more than one cause of this problem.

I have done the following:
1. upgraded cf CLI from 6.12 to 6.13
2. cf set-health-check MY_APP_NAME none
3. set <healthCheckTimeout>180</healthCheckTimeout> (pom.xml)
4. removed env variables in order to allow Maven to set them again

Still no luck.

However, I have read that there is intensive development of the Maven client
plugin 2.x happening now. I will give it a try as soon as the first release is
available.

Best,
Krzysztof


Multiple ldap backend in UAA

Jakub Witkowski
 

I would like to create a configuration that works with more than one LDAP backend for user authentication.
I've read some of the UAA Java code, but it's not clear to me whether the configuration described below is possible.

My users are split over two AD domain controllers. The desired configuration has only one UAA endpoint.
I don't want to set up two UAA servers or use a multitenant configuration in UAA.
The desired configuration is one UAA server with a MariaDB database as the primary profile and multiple LDAP backends used only for authentication.
MariaDB would hold all the groups and there shouldn't be any LDAP mappings between LDAP and UAA (but if it were possible, it would be quite nice to create some kind of hybrid).

best regards
j.witkowski