
Re: How should I debug a blobstore error?

Eyal Shalev
 

It seems to have generated two of them even though I am not using 2 zones.
Also, I see port 8080 mentioned somewhere in there; as mentioned before, port 8080 is only opened internally in the security group (between the CF nodes). Should it also be opened up for the client? (What are the ports that the client needs to function? [I have identified ports 80 and 443].)
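For reference: as the transcripts later in this digest show, an external client only needs the HAProxy-facing ports 80, 443, 4443 and 2222; 8080 can stay internal, since the router proxies uaa on it. A hedged sketch with the OpenStack CLI, where the security group name cf-public is hypothetical:

    # Sketch, assuming python-openstackclient; "cf-public" is a made-up group name.
    for port in 80 443 4443 2222; do
      openstack security group rule create --protocol tcp --dst-port "$port" cf-public
    done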

Here is the config:

- instances: 1
  name: uaa_z1
  networks:
  - name: cf1
  properties:
    consul:
      agent:
        services:
          uaa: {}
    metron_agent:
      zone: z1
    route_registrar:
      routes:
      - health_check:
          name: uaa-healthcheck
          script_path: /var/vcap/jobs/uaa/bin/health_check
        name: uaa
        port: 8080
        registration_interval: 4s
        tags:
          component: uaa
        uris:
        - uaa.10.60.18.186.xip.io
        - '*.uaa.10.60.18.186.xip.io'
        - login.10.60.18.186.xip.io
        - '*.login.10.60.18.186.xip.io'
    uaa:
      proxy:
        servers:
        - 192.168.10.69
  resource_pool: medium_z1
  templates:
  - name: uaa
    release: cf
  - name: metron_agent
    release: cf
  - name: consul_agent
    release: cf
  - name: route_registrar
    release: cf
  - name: statsd-injector
    release: cf
  update: {}
- instances: 0
  name: uaa_z2
  networks:
  - name: cf2
  properties:
    consul:
      agent:
        services:
          uaa: {}
    metron_agent:
      zone: z2
    route_registrar:
      routes:
      - health_check:
          name: uaa-healthcheck
          script_path: /var/vcap/jobs/uaa/bin/health_check
        name: uaa
        port: 8080
        registration_interval: 4s
        tags:
          component: uaa
        uris:
        - uaa.10.60.18.186.xip.io
        - '*.uaa.10.60.18.186.xip.io'
        - login.10.60.18.186.xip.io
        - '*.login.10.60.18.186.xip.io'
    uaa:
      proxy:
        servers:
        - 192.168.10.69
  resource_pool: medium_z2
  templates:
  - name: uaa
    release: cf
  - name: metron_agent
    release: cf
  - name: consul_agent
    release: cf
  - name: route_registrar
    release: cf
  - name: statsd-injector
    release: cf
  update: {}
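As an aside, the xip.io names in the uris above rely on wildcard DNS: any host under <IP>.xip.io resolves to that IP, which is why the wildcard routes work without any DNS setup. For example:

    dig +short uaa.10.60.18.186.xip.io        # -> 10.60.18.186
    dig +short foo.login.10.60.18.186.xip.io  # -> 10.60.18.186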


Re: How should I debug a blobstore error?

Ronak Banka
 

Eyal ,

In your final manifest, can you check what the properties under
route_registrar are for the uaa job?

https://github.com/cloudfoundry/cf-release/blob/master/templates/cf.yml#L194
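A hedged way to pull and inspect the final manifest with the v1 bosh CLI (deployment name 'ENVIRONMENT' as shown in the bosh vms output below):

    bosh download manifest ENVIRONMENT /tmp/cf-deployment.yml
    grep -n -A 20 'route_registrar' /tmp/cf-deployment.yml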

On Tue, Jun 28, 2016 at 6:53 AM, Eyal Shalev <eshalev(a)cisco.com> wrote:

That works, but now I cannot connect the cf client.
I am getting a 404.
It does not explicitly say so in the docs, so I assume that the API
endpoint is https://api.domain_for_haproxy_node; is this correct?

My client is not accessing cf from within the security groups (an
openstack limitation in the deployment that I use). As such I only opened
ports 80, 443, 4443 & 2222 in the firewall. [Internally all tcp traffic is
enabled.]

These are the commands that I ran (see the 404):

bosh vms
RSA 1024 bit CA certificates are loaded due to old openssl compatibility
Acting as user 'admin' on 'my-bosh'
Deployment 'ENVIRONMENT'

Director task 33

Task 33 done


+---------------------------------------------------------------------------+---------+-----+-----------+---------------+
| VM                                                                        | State   | AZ  | VM Type   | IPs           |
+---------------------------------------------------------------------------+---------+-----+-----------+---------------+
| api_worker_z1/0 (e9f91b0e-ad01-4053-975f-47715023b4cb)                    | running | n/a | small_z1  | 192.168.10.56 |
| api_z1/0 (34bf56c5-5bcc-496c-859d-c56a917a8901)                           | running | n/a | large_z1  | 192.168.10.54 |
| blobstore_z1/0 (4f12e375-1003-4a66-ac8b-a5eb5571f920)                     | running | n/a | medium_z1 | 192.168.10.52 |
| clock_global/0 (f099a159-9ae2-4d92-b88b-d0d55fdd5f3e)                     | running | n/a | medium_z1 | 192.168.10.55 |
| consul_z1/0 (ff08d8b8-fbba-474c-9640-a03577acf586)                        | running | n/a | small_z1  | 192.168.10.76 |
| doppler_z1/0 (437a1ab7-b6b8-4ae2-be0f-cd75b62b8228)                       | running | n/a | medium_z1 | 192.168.10.59 |
| etcd_z1/0 (a2527fc7-3e3e-489c-8ea0-cd3a443f1c7d)                          | running | n/a | medium_z1 | 192.168.10.72 |
| ha_proxy_z1/0 (e4fd4fdd-8d5e-4e85-90e5-6774f277c4a8)                      | running | n/a | router_z1 | 192.168.10.64 |
|                                                                           |         |     |           | 10.60.18.186  |
| hm9000_z1/0 (14d70eac-2687-4961-99f7-3f3f8f4e55c8)                        | running | n/a | medium_z1 | 192.168.10.57 |
| loggregator_trafficcontroller_z1/0 (ea59e739-15f9-4149-8d1a-cca3b1fbfb55) | running | n/a | small_z1  | 192.168.10.60 |
| nats_z1/0 (7a31a162-e5a3-4b29-82f8-fe76897d587d)                          | running | n/a | medium_z1 | 192.168.10.66 |
| postgres_z1/0 (8ed03c6f-8ea5-403a-bbb5-f1bc091b96b4)                      | running | n/a | medium_z1 | 192.168.10.68 |
| router_z1/0 (9749bd15-48f3-4b7d-a82e-d0aac34554fe)                        | running | n/a | router_z1 | 192.168.10.69 |
| runner_z1/0 (54e20fba-3185-45d2-9f3b-8da00de495f5)                        | running | n/a | runner_z1 | 192.168.10.58 |
| stats_z1/0 (9a107f21-7eb3-4df8-ac7b-13bd1d709e1f)                         | running | n/a | small_z1  | 192.168.10.51 |
| uaa_z1/0 (9b58319d-451a-4726-a4bf-e9431a467f47)                           | running | n/a | medium_z1 | 192.168.10.53 |
+---------------------------------------------------------------------------+---------+-----+-----------+---------------+

VMs total: 16


cf api api.10.60.18.186.xip.io --skip-ssl-validation
Setting api endpoint to api.10.60.18.186.xip.io...
OK


API endpoint: https://api.10.60.18.186.xip.io (API version: 2.56.0)
Not logged in. Use 'cf login' to log in.



cf -v login --skip-ssl-validation
API endpoint: https://api.10.60.18.186.xip.io

REQUEST: [2016-06-27T21:36:51Z]
GET /v2/info HTTP/1.1
Host: api.10.60.18.186.xip.io
Accept: application/json
Content-Type: application/json
User-Agent: go-cli 6.19.0+b29b4e0 / linux



RESPONSE: [2016-06-27T21:36:51Z]
HTTP/1.1 200 OK
Content-Length: 580
Content-Type: application/json;charset=utf-8
Date: Mon, 27 Jun 2016 21:36:57 GMT
Server: nginx
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: 9170d9a4-3dce-45aa-7576-377a6d9c2940
X-Vcap-Request-Id:
9170d9a4-3dce-45aa-7576-377a6d9c2940::a4533964-ae04-4aa1-93ef-4626f4336187

{"name":"","build":"","support":"http://support.cloudfoundry.com
","version":0,"description":"","authorization_endpoint":"
http://login.sysdomain.10.60.18.186.xip.io","token_endpoint":"
https://uaa.10.60.18.186.xip.io
","min_cli_version":null,"min_recommended_cli_version":null,"api_version":"2.56.0","app_ssh_endpoint":"
ssh.sysdomain.10.60.18.186.xip.io:2222
","app_ssh_host_key_fingerprint":null,"app_ssh_oauth_client":"ssh-proxy","logging_endpoint":"wss://
loggregator.sysdomain.10.60.18.186.xip.io:4443
","doppler_logging_endpoint":"wss://
doppler.sysdomain.10.60.18.186.xip.io:4443"}

REQUEST: [2016-06-27T21:36:52Z]
GET /login HTTP/1.1
Host: login.sysdomain.10.60.18.186.xip.io
Accept: application/json
Content-Type: application/json
User-Agent: go-cli 6.19.0+b29b4e0 / linux



RESPONSE: [2016-06-27T21:36:52Z]
HTTP/1.1 404 Not Found
Content-Length: 87
Content-Type: text/plain; charset=utf-8
Date: Mon, 27 Jun 2016 21:36:57 GMT
X-Cf-Routererror: unknown_route
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: 4419650f-6a06-4b9d-5475-0f2790934fd5

404 Not Found: Requested route ('login.sysdomain.10.60.18.186.xip.io') does not exist.



API endpoint: https://api.10.60.18.186.xip.io (API version: 2.56.0)
Not logged in. Use 'cf login' to log in.
FAILED
Server error, status code: 404, error code: , message:


Emitting service instance logs to Doppler

Dr Nic Williams <drnicwilliams@...>
 

Has anyone implemented (and has some sample code / an OSS project for) a service broker that emits logs/events back into Doppler for each service binding's app?
Nic


Spring OAuth not retrieving scopes from UAA

Bryan Perino
 

Hello All,

Brand new to Cloud Foundry. I have hooked up a Spring Cloud Application to a UAA server and gotten it to authenticate properly. However, I noticed that none of the scopes that I defined in uaa.yml for the user are showing up in the resource server backend.

Here is a link to the debugging session of what I can see: http://imgur.com/6wTYpQD
Here is the code I am debugging:

@RequestMapping("/")
public Message home(OAuth2Authentication principal) {
    System.out.println(principal.getName());
    return new Message("Hello World");
}
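A quick way to see what actually reached the resource server is to print the token's scopes next to the mapped authorities; a sketch against the same Spring Security OAuth types used above (the /scopes mapping is a hypothetical debug endpoint). With userInfoUri, the authorities are derived from what /userinfo returns, so if getScope() is empty here, that lookup is the place to dig.

    @RequestMapping("/scopes") // hypothetical debug endpoint
    public Message scopes(OAuth2Authentication principal) {
        // Scopes carried on the OAuth2 request itself:
        principal.getOAuth2Request().getScope().forEach(System.out::println);
        // Authorities as mapped on the resource-server side:
        principal.getAuthorities().forEach(a -> System.out.println(a.getAuthority()));
        return new Message("see stdout");
    }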

The screenshot is the value of the 'principal' variable. I have set the Spring Security yml variables for the resource server like so:

security:
  oauth2:
    resource:
      userInfoUri: http://localhost:8080/uaa/userinfo

and here are the relevant parts of uaa.yml:

https://gist.github.com/bryantp/2bfc4538f36f28ba285fda84c59b89f8

Thanks for any help.


Re: How should I debug a blobstore error?

Eyal Shalev
 

That works, but now I cannot connect the cf client.
I am getting a 404.
It does not explicitly say so in the docs, so I assume that the API endpoint is
https://api.domain_for_haproxy_node; is this correct?

My client is not accessing cf from within the security groups (an openstack limitation in the deployment that I use). As such I only opened ports 80, 443, 4443 & 2222 in the firewall. [Internally all tcp traffic is enabled.]

These are the commands that I ran (see the 404):

bosh vms
RSA 1024 bit CA certificates are loaded due to old openssl compatibility
Acting as user 'admin' on 'my-bosh'
Deployment 'ENVIRONMENT'

Director task 33

Task 33 done

+---------------------------------------------------------------------------+---------+-----+-----------+---------------+
| VM | State | AZ | VM Type | IPs |
+---------------------------------------------------------------------------+---------+-----+-----------+---------------+
| api_worker_z1/0 (e9f91b0e-ad01-4053-975f-47715023b4cb) | running | n/a | small_z1 | 192.168.10.56 |
| api_z1/0 (34bf56c5-5bcc-496c-859d-c56a917a8901) | running | n/a | large_z1 | 192.168.10.54 |
| blobstore_z1/0 (4f12e375-1003-4a66-ac8b-a5eb5571f920) | running | n/a | medium_z1 | 192.168.10.52 |
| clock_global/0 (f099a159-9ae2-4d92-b88b-d0d55fdd5f3e) | running | n/a | medium_z1 | 192.168.10.55 |
| consul_z1/0 (ff08d8b8-fbba-474c-9640-a03577acf586) | running | n/a | small_z1 | 192.168.10.76 |
| doppler_z1/0 (437a1ab7-b6b8-4ae2-be0f-cd75b62b8228) | running | n/a | medium_z1 | 192.168.10.59 |
| etcd_z1/0 (a2527fc7-3e3e-489c-8ea0-cd3a443f1c7d) | running | n/a | medium_z1 | 192.168.10.72 |
| ha_proxy_z1/0 (e4fd4fdd-8d5e-4e85-90e5-6774f277c4a8) | running | n/a | router_z1 | 192.168.10.64 |
| | | | | 10.60.18.186 |
| hm9000_z1/0 (14d70eac-2687-4961-99f7-3f3f8f4e55c8) | running | n/a | medium_z1 | 192.168.10.57 |
| loggregator_trafficcontroller_z1/0 (ea59e739-15f9-4149-8d1a-cca3b1fbfb55) | running | n/a | small_z1 | 192.168.10.60 |
| nats_z1/0 (7a31a162-e5a3-4b29-82f8-fe76897d587d) | running | n/a | medium_z1 | 192.168.10.66 |
| postgres_z1/0 (8ed03c6f-8ea5-403a-bbb5-f1bc091b96b4) | running | n/a | medium_z1 | 192.168.10.68 |
| router_z1/0 (9749bd15-48f3-4b7d-a82e-d0aac34554fe) | running | n/a | router_z1 | 192.168.10.69 |
| runner_z1/0 (54e20fba-3185-45d2-9f3b-8da00de495f5) | running | n/a | runner_z1 | 192.168.10.58 |
| stats_z1/0 (9a107f21-7eb3-4df8-ac7b-13bd1d709e1f) | running | n/a | small_z1 | 192.168.10.51 |
| uaa_z1/0 (9b58319d-451a-4726-a4bf-e9431a467f47) | running | n/a | medium_z1 | 192.168.10.53 |
+---------------------------------------------------------------------------+---------+-----+-----------+---------------+

VMs total: 16


cf api api.10.60.18.186.xip.io --skip-ssl-validation
Setting api endpoint to api.10.60.18.186.xip.io...
OK


API endpoint: https://api.10.60.18.186.xip.io (API version: 2.56.0)
Not logged in. Use 'cf login' to log in.



cf -v login --skip-ssl-validation
API endpoint: https://api.10.60.18.186.xip.io

REQUEST: [2016-06-27T21:36:51Z]
GET /v2/info HTTP/1.1
Host: api.10.60.18.186.xip.io
Accept: application/json
Content-Type: application/json
User-Agent: go-cli 6.19.0+b29b4e0 / linux



RESPONSE: [2016-06-27T21:36:51Z]
HTTP/1.1 200 OK
Content-Length: 580
Content-Type: application/json;charset=utf-8
Date: Mon, 27 Jun 2016 21:36:57 GMT
Server: nginx
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: 9170d9a4-3dce-45aa-7576-377a6d9c2940
X-Vcap-Request-Id: 9170d9a4-3dce-45aa-7576-377a6d9c2940::a4533964-ae04-4aa1-93ef-4626f4336187

{"name":"","build":"","support":"http://support.cloudfoundry.com","version":0,"description":"","authorization_endpoint":"http://login.sysdomain.10.60.18.186.xip.io","token_endpoint":"https://uaa.10.60.18.186.xip.io","min_cli_version":null,"min_recommended_cli_version":null,"api_version":"2.56.0","app_ssh_endpoint":"ssh.sysdomain.10.60.18.186.xip.io:2222","app_ssh_host_key_fingerprint":null,"app_ssh_oauth_client":"ssh-proxy","logging_endpoint":"wss://loggregator.sysdomain.10.60.18.186.xip.io:4443","doppler_logging_endpoint":"wss://doppler.sysdomain.10.60.18.186.xip.io:4443"}

REQUEST: [2016-06-27T21:36:52Z]
GET /login HTTP/1.1
Host: login.sysdomain.10.60.18.186.xip.io
Accept: application/json
Content-Type: application/json
User-Agent: go-cli 6.19.0+b29b4e0 / linux



RESPONSE: [2016-06-27T21:36:52Z]
HTTP/1.1 404 Not Found
Content-Length: 87
Content-Type: text/plain; charset=utf-8
Date: Mon, 27 Jun 2016 21:36:57 GMT
X-Cf-Routererror: unknown_route
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: 4419650f-6a06-4b9d-5475-0f2790934fd5

404 Not Found: Requested route ('login.sysdomain.10.60.18.186.xip.io') does not exist.



API endpoint: https://api.10.60.18.186.xip.io (API version: 2.56.0)
Not logged in. Use 'cf login' to log in.
FAILED
Server error, status code: 404, error code: , message:
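Note the mismatch visible in the transcript: /v2/info advertises login.sysdomain.10.60.18.186.xip.io, while the uaa job's route_registrar (per the manifest posted earlier in this thread) only registers login.10.60.18.186.xip.io, which is why the router answers unknown_route. A hedged way to compare the two:

    # What the Cloud Controller advertises to the CLI:
    curl -sk https://api.10.60.18.186.xip.io/v2/info
    # The advertised login host (expect the router's 404 unknown_route):
    curl -sk https://login.sysdomain.10.60.18.186.xip.io/login
    # The host uaa actually registered (expect a UAA response):
    curl -sk https://login.10.60.18.186.xip.io/login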


Re: Consul Encryption in CF v234+

Amit Kumar Gupta
 

Hi Carsten,

That's a good question. We haven't built anything specifically to support
0-downtime for the DEAs, but we have some upcoming changes to make the etcd
used by etcd-metric-server, routing-api, all loggregator components, and
HM9k also switch to TLS. This would affect all the metron agents colocated
on all the VMs, and we're building out a component to support a 0-downtime
transition.

This work is currently in flight:
https://www.pivotaltracker.com/epic/show/2566951

You could apply this concept to consul:

* create a new secure (TLS) consul cluster
* replace the existing consul cluster (don't change the job name or IPs,
just what processes it runs) with an HTTP proxy that proxies requests to
the secure cluster
* roll out the new IPs and TLS credentials to all clients of the consul
cluster
* after that deploy is done, nothing should be talking to the HTTP proxy,
and you can simply delete that job.

Best,
Amit

On Fri, Jun 24, 2016 at 8:46 AM, Long Nguyen <long.nguyen11288(a)gmail.com>
wrote:



Hi there!

We found that if you monit stop all the consul nodes before upgrading and
adding ssl, the deployment should upgrade without any issues.
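A sketch of that procedure with the v1 bosh CLI, assuming a consul_z1 job as in the manifests elsewhere in this digest:

    bosh ssh consul_z1 0   # repeat for each consul node, then on the node:
    sudo /var/vcap/bosh/bin/monit stop consul_agent
    sudo /var/vcap/bosh/bin/monit summary   # confirm it is stopped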

Thanks,
Long

On June 23, 2016 at 11:56:04 AM, Hiort, Carsten, Springer DE (
carsten.hiort(a)springer.com) wrote:

Hi,

CF v234 enforces the use of SSL for Consul. We are currently wondering if
there is an intended upgrade path.
When you switch to SSL and the Consul cluster gets upgraded, all machines
that are not yet upgraded will be blind with respect to service discovery/
DNS through Consul. This particularly affects the DEAs, as they are not able
to figure out where to get the droplets from when staging, causing a 500
when running cf push. I already tried deploying the certs on v231 with
require_ssl=false, but setting require_ssl to true or upgrading to v234+
still results in this situation.
Any thoughts highly appreciated!


Thanks,

Carsten

---

Carsten Hiort
Platform Engineer
Platform Engineering

SpringerNature
Abraham-Lincoln-Str. 46, 65189 Wiesbaden, Germany
T +49 611 7878665
M +49 175 2965802

carsten.hiort(a)springernature.com
www.springernature.com



Re: How should I debug a blobstore error?

Amit Kumar Gupta
 

You can replace it in the stub and rerun generate.

On Mon, Jun 27, 2016 at 11:10 AM, Eyal Shalev <eshalev(a)cisco.com> wrote:

Can I replace it in the manifest stub and rerun generate? Or do I need to
replace it in the generated manifest?
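For reference, a sketch of that flow with cf-release's generation script (the openstack argument matches the deployment discussed in this thread):

    vi cf-stub.yml   # fix the value in the stub
    ./scripts/generate_deployment_manifest openstack cf-stub.yml > cf-deployment.yml
    bosh deployment cf-deployment.yml
    bosh deploy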


Re: How should I debug a blobstore error?

Eyal Shalev
 

Can I replace it in the manifest stub and rerun generate? Or do I need to replace it in the generated manifest?


Re: How should I debug a blobstore error?

Amit Kumar Gupta
 

Please try replacing all occurrences of "SYSTEM_DOMAIN" in your manifest
with "sys.10.60.18.186.xip.io" and all instances of "APP_DOMAIN" with
"apps.10.60.18.186.xip.io".
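A hedged one-liner for that replacement, applied to the stub before regenerating (or to the generated manifest directly):

    sed -i 's/SYSTEM_DOMAIN/sys.10.60.18.186.xip.io/g; s/APP_DOMAIN/apps.10.60.18.186.xip.io/g' cf-stub.yml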

On Mon, Jun 27, 2016 at 8:18 AM, Eyal Shalev <eshalev(a)cisco.com> wrote:

Following up on my previously posted config,
I found the following message in
/var/vcap/sys/log/cloud_controller_ng/cloud_controller_ng.log (the error
log was empty):

The problem is that I don't understand how "APPDOMAIN" violates the rules in
the error message:

{"timestamp":1467040322.8311825,"message":"Encountered error: Error for
shared domain name APPDOMAIN: name can contain multiple subdomains, each
having only alphanumeric characters and hyphens of up to 63 characters, see
RFC
1035.\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/sequel-4.29.0/lib/sequel/model/base.rb:1543:in
`save'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/models/runtime/shared_domain.rb:35:in
`block in
find_or_create'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/sequel-4.29.0/lib/sequel/database/transactions.rb:134:in
`_transaction'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/sequel-4.29.0/lib/sequel/database/transactions.rb:108:in
`block in
transaction'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/sequel-4.29.0/lib/sequel/database/connecting.rb:249:in
`block in
synchron
ize'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/sequel-4.29.0/lib/sequel/connection_pool/threaded.rb:103:in
`hold'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/sequel-4.29.0/lib/sequel/database/connecting.rb:249:in
`synchronize'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/sequel-4.29.0/lib/sequel/database/transactions.rb:97:in
`transaction'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/models/runtime/shared_domain.rb:27:in
`find_or_create'\n/var/vcap/data/packages/cloud_controller_ng/f87cb49aa2cd87792cb9c2211a79e1542502be4d.1-08d000452ce2b287c00720587f20dc62976a73b6/cloud_controller_ng/lib/cloud_controller/seeds.rb:57:in
`block in
create_seed_domains'\n/var/vcap/data/packages/cloud_controller_ng/f87cb49aa2cd87792cb9c2211a79e1542502be4d.1-08d000452ce2b287c00720587f20dc62976a73b6/cloud_controller_ng/lib/cloud_controller/see
ds.rb:56
:in `
each'\n/var/vcap/data/packages/cloud_controller_ng/f87cb49aa2cd87792cb9c2211a79e1542502be4d.1-08d000452ce2b287c00720587f20dc62976a73b6/cloud_controller_ng/lib/cloud_controller/seeds.rb:56:in
`create_seed_domains'\n/var/vcap/data/packages/cloud_controller_ng/f87cb49aa2cd87792cb9c2211a79e1542502be4d.1-08d000452ce2b287c00720587f20dc62976a73b6/cloud_controller_ng/lib/cloud_controller/seeds.rb:9:in
`write_seed_data'\n/var/vcap/data/packages/cloud_controller_ng/f87cb49aa2cd87792cb9c2211a79e1542502be4d.1-08d000452ce2b287c00720587f20dc62976a73b6/cloud_controller_ng/lib/cloud_controller/runner.rb:88:in
`block in
run!'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/eventmachine-1.0.9.1/lib/eventmachine.rb:193:in
`run_machine'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/eventmachine-1.0.9.1/lib/eventmachine.rb:193:in
`run'\n/var/vcap/data/packages/cloud_controller_ng/f87cb49aa2cd87792cb9c2211a79e154
2502be4d
.1-08d000452ce2b287c00720587f20dc62976a73b6/cloud_controller_ng/lib/cloud_controller/runner.rb:82:in
`run!'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/bin/cloud_controller:8:in
`<main>'","log_level":"error","source":"cc.runner","data":{},"thread_id":47219093041420,"fiber_id":47219133477120,"process_id":27911,"file":"/var/vcap/data/packages/cloud_controller_ng/f87cb49aa2cd87792cb9c2211a79e1542502be4d.1-08d000452ce2b287c00720587f20dc62976a73b6/cloud_controller_ng/lib/cloud_controller/runner.rb","lineno":102,"method":"rescue
in block in run!"}


Re: How should I debug a blobstore error?

Eyal Shalev
 

Following up on my previously posted config,
I found the following message in /var/vcap/sys/log/cloud_controller_ng/cloud_controller_ng.log (the error log was empty):

The problem is that I don't understand how "APPDOMAIN" violates the rules in the error message:

{"timestamp":1467040322.8311825,"message":"Encountered error: Error for shared domain name APPDOMAIN: name can contain multiple subdomains, each having only alphanumeric characters and hyphens of up to 63 characters, see RFC 1035.\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/sequel-4.29.0/lib/sequel/model/base.rb:1543:in `save'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/models/runtime/shared_domain.rb:35:in `block in find_or_create'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/sequel-4.29.0/lib/sequel/database/transactions.rb:134:in `_transaction'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/sequel-4.29.0/lib/sequel/database/transactions.rb:108:in `block in transaction'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/sequel-4.29.0/lib/sequel/database/connecting.rb:249:in `block in synchron
ize'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/sequel-4.29.0/lib/sequel/connection_pool/threaded.rb:103:in `hold'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/sequel-4.29.0/lib/sequel/database/connecting.rb:249:in `synchronize'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/sequel-4.29.0/lib/sequel/database/transactions.rb:97:in `transaction'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/models/runtime/shared_domain.rb:27:in `find_or_create'\n/var/vcap/data/packages/cloud_controller_ng/f87cb49aa2cd87792cb9c2211a79e1542502be4d.1-08d000452ce2b287c00720587f20dc62976a73b6/cloud_controller_ng/lib/cloud_controller/seeds.rb:57:in `block in create_seed_domains'\n/var/vcap/data/packages/cloud_controller_ng/f87cb49aa2cd87792cb9c2211a79e1542502be4d.1-08d000452ce2b287c00720587f20dc62976a73b6/cloud_controller_ng/lib/cloud_controller/seeds.rb:56
:in `
each'\n/var/vcap/data/packages/cloud_controller_ng/f87cb49aa2cd87792cb9c2211a79e1542502be4d.1-08d000452ce2b287c00720587f20dc62976a73b6/cloud_controller_ng/lib/cloud_controller/seeds.rb:56:in `create_seed_domains'\n/var/vcap/data/packages/cloud_controller_ng/f87cb49aa2cd87792cb9c2211a79e1542502be4d.1-08d000452ce2b287c00720587f20dc62976a73b6/cloud_controller_ng/lib/cloud_controller/seeds.rb:9:in `write_seed_data'\n/var/vcap/data/packages/cloud_controller_ng/f87cb49aa2cd87792cb9c2211a79e1542502be4d.1-08d000452ce2b287c00720587f20dc62976a73b6/cloud_controller_ng/lib/cloud_controller/runner.rb:88:in `block in run!'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/eventmachine-1.0.9.1/lib/eventmachine.rb:193:in `run_machine'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.3.0/gems/eventmachine-1.0.9.1/lib/eventmachine.rb:193:in `run'\n/var/vcap/data/packages/cloud_controller_ng/f87cb49aa2cd87792cb9c2211a79e1542502be4d
.1-08d000452ce2b287c00720587f20dc62976a73b6/cloud_controller_ng/lib/cloud_controller/runner.rb:82:in `run!'\n/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/bin/cloud_controller:8:in `<main>'","log_level":"error","source":"cc.runner","data":{},"thread_id":47219093041420,"fiber_id":47219133477120,"process_id":27911,"file":"/var/vcap/data/packages/cloud_controller_ng/f87cb49aa2cd87792cb9c2211a79e1542502be4d.1-08d000452ce2b287c00720587f20dc62976a73b6/cloud_controller_ng/lib/cloud_controller/runner.rb","lineno":102,"method":"rescue in block in run!"}


Re: How should I debug a blobstore error?

Eyal Shalev
 

Hello Ronak,
I have used XIP.
It does not seem to have helped; I got much the same result.
The API node does not complete the job.

Looking at the vcap logs, things look similar to the above:
cloud_controller_ng_ctl reports that the blobstore is accessible, but the job is still stuck in an endless restart loop.

Looking at the generated cf-deployment.yml as you requested, I find the following lines (I have obfuscated the public IP):
properties:
  acceptance_tests: null
  app_domains:
  - APPDOMAIN
  ...
  packages:
    app_package_directory_key: 10.60.18.186.xip.io-cc-packages
    blobstore_type: webdav
  ...
  public_endpoint: http://blobstore.10.60.18.186.xip.io



Also, in my original cf-stub I configured the properties as such:
properties:
  domain: 10.60.18.186.xip.io
  system_domain: SYSDOMAIN
  system_domain_organization: EYALDOMAIN
  app_domains:
  - APPDOMAIN

So unless I was using it improperly, adding the xip domain did not seem to help.
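Per Amit's reply above, the placeholders themselves are the problem: SYSDOMAIN, EYALDOMAIN and APPDOMAIN are never substituted, and APPDOMAIN is what trips the RFC 1035 check at seeding time. A sketch of the stub with real values filled in (the organization name is illustrative):

    properties:
      domain: 10.60.18.186.xip.io
      system_domain: sys.10.60.18.186.xip.io
      system_domain_organization: my-org
      app_domains:
      - apps.10.60.18.186.xip.io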


Re: Moving Diego Repositories

Will Pragnell <wpragnell@...>
 

Ignore me, I can't read... looks like this is using that already. Nice!

On 27 June 2016 at 13:56, Will Pragnell <wpragnell(a)pivotal.io> wrote:

Have we abandoned plans for the import path service Eric proposed back in
April? If not, aren't we just going to have to update all our imports again
once that rolls out?

On 25 June 2016 at 18:36, Kris Hicks <khicks(a)pivotal.io> wrote:

I've found when doing this the easiest thing to do is to use sed to
remove existing imports followed by goimports -w on the same files.

KH


On Sunday, June 26, 2016, Amit Gupta <agupta(a)pivotal.io> wrote:

Congrats on making it official!

I assume you folks are going to build some scripts or some "go fmt -r"
to find/change all import paths that need updating. If you find anything
useful that other teams might be able to leverage, sharing would be much
appreciated.

Cheers,
Amit

On Fri, Jun 24, 2016 at 4:20 PM, Onsi Fakhouri <ofakhouri(a)pivotal.io>
wrote:

That stuff has always technically been foundation code and should have
moved out of pivotal-golang into cloudfoundry-incubator long ago. Back
when we made the org the rules about how orgs are structured weren't quite
so clear.

Onsi

On Fri, Jun 24, 2016 at 4:18 PM, Alex Suraci <asuraci(a)pivotal.io>
wrote:

Why are we moving the pivotal-golang repos? I thought the point of
that was to be able to create those cheaply for super-generic packages that
we just need to have somewhere, in cases where the repo incubation/
foundation lifecycle crap gets in the way. Are we now able to create repos
willy-nilly under the cloudfoundry org?

On Sat, Jun 25, 2016, 6:13 AM James Myers <jmyers(a)pivotal.io> wrote:

Hi all,

We are currently in the process of moving all of the Diego
repositories from `cloudfoundry-incubator` to the `cloudfoundry`
organization.

Specifically we are moving the following repositories to the
`cloudfoundry` organization:

- auction
- auctioneer
- bbs
- benchmark-bbs
- buildpack_app_lifecycle
- cacheddownloader
- cf-debug-server
- cf-lager
- cf_http
- consuladapter
- converger
- diego-ssh
- diego-upgrade-stability-tests
- docker_app_lifecycle
- executor
- file-server
- healthcheck
- inigo
- locket
- rep
- route-emitter
- runtime-schema
- vizzini
- diego-cf-compatibility
- diego-ci-pools
- diego-ci
- diego-design-notes
- diego-dev-notes
- diego-dockerfiles
- diego-perf-release
- diego-release
- diego-stress-tests

We are also moving the following from `pivotal-golang` to
`cloudfoundry` as well:

- archiver
- bytefmt
- clock
- eventhub
- lager
- localip
- operationq

We are also renaming the following, and will be updating their
package names accordingly:

- benchmark-bbs -> benchmarkbbs
- buildpack_app_lifecycle -> buildpackapplifecycle
- cf-debug-server -> debugserver
- cf-lager -> cflager
- cf_http -> cfhttp
- docker_app_lifecycle -> dockerapplifecycle
- file-server -> fileserver
- runtime-schema -> runtimeschema

You might be asking yourself, what does this mean for me?

Generally it means the following:

- If you are importing any of the above repos in your golang code,
you should change it from `github.com/cloudfoundry-incubator/REPO_NAME`
to `code.cloudfoundry.org/REPO_NAME`.

- Update your golang code when you update your dependencies to
reference the new package names marked above.

- If you are consuming the Diego bosh release from bosh.io, you will
need to update the location to
http://bosh.io/releases/github.com/cloudfoundry/diego-release.


Other than that, github redirects should handle most of the issues
for you.

As a side note we are also moving the following deprecated
repositories to the `cloudfoundry-attic`:

- diego-acceptance-tests
- diego-smoke-tests
- receptor
- runtime-metrics-server

Let us know if you have any questions.

Best,

Jim + Andrew, CF Diego Team


Re: Moving Diego Repositories

Will Pragnell <wpragnell@...>
 

Have we abandoned plans for the import path service Eric proposed back in
April? If not, aren't we just going to have to update all our imports again
once that rolls out?

On 25 June 2016 at 18:36, Kris Hicks <khicks(a)pivotal.io> wrote:

I've found when doing this the easiest thing to do is to use sed to remove
existing imports followed by goimports -w on the same files.

KH


On Sunday, June 26, 2016, Amit Gupta <agupta(a)pivotal.io> wrote:

Congrats on making it official!

I assume you folks are going to build some scripts or some "go fmt -r" to
find/change all import paths that need updating. If you find anything
useful that other teams might be able to leverage, sharing would be much
appreciated.

Cheers,
Amit

On Fri, Jun 24, 2016 at 4:20 PM, Onsi Fakhouri <ofakhouri(a)pivotal.io>
wrote:

That stuff has always technically been foundation code and should have
moved out of pivotal-golang into cloudfoundry-incubator long ago. Back
when we made the org the rules about how orgs are structured weren't quite
so clear.

Onsi

On Fri, Jun 24, 2016 at 4:18 PM, Alex Suraci <asuraci(a)pivotal.io> wrote:

Why are we moving the pivotal-golang repos? I thought the point of that
was to be able to create those cheaply for super-generic packages that we
just need to have somewhere, in cases where the repo incubation/foundation
lifecycle crap gets in the way. Are we now able to create repos willy-nilly
under the cloudfoundry org?

On Sat, Jun 25, 2016, 6:13 AM James Myers <jmyers(a)pivotal.io> wrote:

Hi all,

We are currently in the process of moving all of the Diego
repositories from `cloudfoundry-incubator` to the `cloudfoundry`
organization.

Specifically we are moving the following repositories to the
`cloudfoundry` organization:

- auction
- auctioneer
- bbs
- benchmark-bbs
- buildpack_app_lifecycle
- cacheddownloader
- cf-debug-server
- cf-lager
- cf_http
- consuladapter
- converger
- diego-ssh
- diego-upgrade-stability-tests
- docker_app_lifecycle
- executor
- file-server
- healthcheck
- inigo
- locket
- rep
- route-emitter
- runtime-schema
- vizzini
- diego-cf-compatibility
- diego-ci-pools
- diego-ci
- diego-design-notes
- diego-dev-notes
- diego-dockerfiles
- diego-perf-release
- diego-release
- diego-stress-tests

We are also moving the following from `pivotal-golang` to
`cloudfoundry` as well:

- archiver
- bytefmt
- clock
- eventhub
- lager
- localip
- operationq

We are also renaming the following, and will be updating their package
names accordingly:

- benchmark-bbs -> benchmarkbbs
- buildpack_app_lifecycle -> buildpackapplifecycle
- cf-debug-server -> debugserver
- cf-lager -> cflager
- cf_http -> cfhttp
- docker_app_lifecycle -> dockerapplifecycle
- file-server -> fileserver
- runtime-schema -> runtimeschema

You might be asking yourself, what does this mean for me?

Generally it means the following:

- If you are importing any of the above repos in your golang code, you
should change it from `github.com/cloudfoundry-incubator/REPO_NAME` to
`code.cloudfoundry.org/REPO_NAME`.

- Update your golang code when you update your dependencies to
reference the new package names marked above.

- If you are consuming the Diego bosh release from bosh.io, you will
need to update the location to
http://bosh.io/releases/github.com/cloudfoundry/diego-release.


Other than that, github redirects should handle most of the issues for
you.
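A minimal illustration of the import-path change, using lager (one of the moved repos; its API is unchanged by the move):

    package main

    // Before the move this import was "github.com/pivotal-golang/lager".
    import "code.cloudfoundry.org/lager"

    func main() {
        logger := lager.NewLogger("example")
        logger.Info("imports-updated") // same API, new canonical path
    }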

As a side note we are also moving the following deprecated
repositories to the `cloudfoundry-attic`:

- diego-acceptance-tests
- diego-smoke-tests
- receptor
- runtime-metrics-server

Let us know if you have any questions.

Best,

Jim + Andrew, CF Diego Team


Re: Retrieve __VCAP__ID from instance_ID

Daniel Mikusa
 

Need more info. What command(s) are you running? What's the full
output from the command(s)? What is the output of `cf target`? Is Diego
supported/available on that target?

Dan

On Sun, Jun 26, 2016 at 8:59 AM, Vinod A <vin.app(a)gmail.com> wrote:

I tried enabling Diego for the PHP app, and I am not able to start the
app after enabling it.

I get the below errors:

Server error, status code: 500, error code: 10001, message: An unknown
error occurred.

Server error, status code: 500, error code: 170011, message: Stager error:
getaddrinfo: Name or service not known -----> on development


Re: Retrieve __VCAP__ID from instance_ID

Vinod A
 

I tried enabling Diego for the PHP app, and I am not able to start the app after enabling it.

I get the below errors:

Server error, status code: 500, error code: 10001, message: An unknown error occurred.

Server error, status code: 500, error code: 170011, message: Stager error: getaddrinfo: Name or service not known -----> on development


Re: How should I debug a blobstore error?

Tom Sherrod <tom.sherrod@...>
 

I got past the api_z1 failure by adding:
- name: consul_agent
  release: cf

to the api_z1 section.

I'm now confused by having to remove the other lines. I will need to test
this out.

Tom

On Fri, Jun 24, 2016 at 4:39 PM, Eyal Shalev <eshalev(a)cisco.com> wrote:

Hello Amit,
I have removed the lines that you have marked.

Now I am getting a different error...
Process 'consul_agent' running
Process 'cloud_controller_ng' Connection failed
Process 'cloud_controller_worker_local_1' not monitored
Process 'cloud_controller_worker_local_2' not monitored
Process 'nginx_cc' initializing
Process 'cloud_controller_migration' running
Process 'metron_agent' running
Process 'statsd-injector' running
Process 'route_registrar' running
System 'system_localhost' running


The blob store is available, but still the process fails:
[2016-06-24 20:13:17+0000] ------------ STARTING
cloud_controller_worker_ctl at Fri Jun 24 20:13:17 UTC 2016 --------------
[2016-06-24 20:13:17+0000] Removing stale pidfile
[2016-06-24 20:13:17+0000] Checking for blobstore availability
[2016-06-24 20:13:17+0000] Blobstore is available
[2016-06-24 20:13:18+0000] Buildpacks installation failed


and also:
[2016-06-24 20:33:16+0000] ------------ STARTING cloud_controller_ng_ctl
at Fri Jun 24 20:33:16 UTC 2016 --------------
[2016-06-24 20:33:16+0000] Checking for blobstore availability
[2016-06-24 20:33:16+0000] Blobstore is available
[2016-06-24 20:33:38+0000] Killing
/var/vcap/sys/run/cloud_controller_ng/cloud_controller_ng.pid: 28368
[2016-06-24 20:33:38+0000] Stopped
[2016-06-24 20:33:39+0000] ------------ STARTING cloud_controller_ng_ctl
at Fri Jun 24 20:33:39 UTC 2016 --------------
[2016-06-24 20:33:39+0000] Checking for blobstore availability
[2016-06-24 20:33:39+0000] Blobstore is available
[2016-06-24 20:34:02+0000] Killing
/var/vcap/sys/run/cloud_controller_ng/cloud_controller_ng.pid: 28818
[2016-06-24 20:34:03+0000] Stopped

Which brings me to another question:
Do you have a stable old release of CF for openstack? I don't mind
downgrading if the new releases are unstable. If that is not possible, can
you post a valid cf-stub.yml that does not need any manual removal of
invalid lines? (That way I have a reference for what a tried and tested stub
should look like.)

Thanks a lot for your help,
Eyal


Re: Retrieve __VCAP__ID from instance_ID

James Bayer
 

You can tell when you're on DEAs by comparing against the notes that Dan
referred to here [1]. For example, if you have the env variable VCAP_APP_PORT,
then you're likely in a DEA container.

[1]
https://github.com/cloudfoundry/diego-design-notes/blob/master/migrating-to-diego.md#cf-specific-environment-variables
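A minimal check from inside the container, per the note above (VCAP_APP_PORT is DEA-only; Diego cells set the CF_INSTANCE_* variables instead):

    if [ -n "$VCAP_APP_PORT" ]; then
      echo "likely a DEA container"
    else
      echo "likely Diego"
    fi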

On Sat, Jun 25, 2016 at 6:21 AM, Vinod A <vin.app(a)gmail.com> wrote:

I installed the sample app and tried it on Bluemix, and I don't see
CF_INSTANCE_GUID or INSTANCE_GUID in the Environment section. How do I know if
it's Diego or DEA?

Thanks,
Vinod
--
Thank you,

James Bayer


Re: Moving Diego Repositories

Kris Hicks <khicks@...>
 

I've found when doing this the easiest thing to do is to use sed to remove
existing imports followed by goimports -w on the same files.
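A hedged sketch of that sed + goimports flow (goimports is from golang.org/x/tools):

    find . -name '*.go' -exec sed -i \
      's|github.com/cloudfoundry-incubator|code.cloudfoundry.org|g' {} +
    goimports -w .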

KH

On Sunday, June 26, 2016, Amit Gupta <agupta(a)pivotal.io> wrote:

Congrats on making it official!

I assume you folks are going to build some scripts or some "go fmt -r" to
find/change all import paths that need updating. If you find anything
useful that other teams might be able to leverage, sharing would be much
appreciated.

Cheers,
Amit

On Fri, Jun 24, 2016 at 4:20 PM, Onsi Fakhouri <ofakhouri(a)pivotal.io> wrote:

That stuff has always technically been foundation code and should have
moved out of pivotal-golang into cloudfoundry-incubator long ago. Back
when we made the org the rules about how orgs are structured weren't quite
so clear.

Onsi

On Fri, Jun 24, 2016 at 4:18 PM, Alex Suraci <asuraci(a)pivotal.io> wrote:

Why are we moving the pivotal-golang repos? I thought the point of that
was to be able to create those cheaply for super-generic packages that we
just need to have somewhere, in cases where the repo incubation/foundation
lifecycle crap gets in the way. Are we now able to create repos willy-nilly
under the cloudfoundry org?

On Sat, Jun 25, 2016, 6:13 AM James Myers <jmyers(a)pivotal.io> wrote:

Hi all,

We are currently in the process of moving all of the Diego repositories
from `cloudfoundry-incubator` to the `cloudfoundry` organization.

Specifically we are moving the following repositories to the
`cloudfoundry` organization:

- auction
- auctioneer
- bbs
- benchmark-bbs
- buildpack_app_lifecycle
- cacheddownloader
- cf-debug-server
- cf-lager
- cf_http
- consuladapter
- converger
- diego-ssh
- diego-upgrade-stability-tests
- docker_app_lifecycle
- executor
- file-server
- healthcheck
- inigo
- locket
- rep
- route-emitter
- runtime-schema
- vizzini
- diego-cf-compatibility
- diego-ci-pools
- diego-ci
- diego-design-notes
- diego-dev-notes
- diego-dockerfiles
- diego-perf-release
- diego-release
- diego-stress-tests

We are also moving the following from `pivotal-golang` to
`cloudfoundry` as well:

- archiver
- bytefmt
- clock
- eventhub
- lager
- localip
- operationq

We are also renaming the following, and will be updating their package
names accordingly:

- benchmark-bbs -> benchmarkbbs
- buildpack_app_lifecycle -> buildpackapplifecycle
- cf-debug-server -> debugserver
- cf-lager -> cflager
- cf_http -> cfhttp
- docker_app_lifecycle -> dockerapplifecycle
- file-server -> fileserver
- runtime-schema -> runtimeschema

You might be asking yourself, what does this mean for me?

Generally it means the following:

- If you are importing any of the above repos in your golang code, you
should change it from `github.com/cloudfoundry-incubator/REPO_NAME` to
`code.cloudfoundry.org/REPO_NAME`.

- Update your golang code when you update your dependencies to
reference the new package names marked above.

- If you are consuming the Diego bosh release from bosh.io, you will
need to update the location to
http://bosh.io/releases/github.com/cloudfoundry/diego-release.


Other than that, github redirects should handle most of the issues for
you.

As a side note we are also moving the following deprecated repositories
to the `cloudfoundry-attic`:

- diego-acceptance-tests
- diego-smoke-tests
- receptor
- runtime-metrics-server

Let us know if you have any questions.

Best,

Jim + Andrew, CF Diego Team


Re: Moving Diego Repositories

Amit Kumar Gupta
 

Congrats on making it official!

I assume you folks are going to build some scripts or some "go fmt -r" to
find/change all import paths that need updating. If you find anything
useful that other teams might be able to leverage, sharing would be much
appreciated.

Cheers,
Amit

On Fri, Jun 24, 2016 at 4:20 PM, Onsi Fakhouri <ofakhouri(a)pivotal.io> wrote:

That stuff has always technically been foundation code and should have
moved out of pivotal-golang into cloudfoundry-incubator long ago. Back
when we made the org the rules about how orgs are structured weren't quite
so clear.

Onsi

On Fri, Jun 24, 2016 at 4:18 PM, Alex Suraci <asuraci(a)pivotal.io> wrote:

Why are we moving the pivotal-golang repos? I thought the point of that
was to be able to create those cheaply for super-generic packages that we
just need to have somewhere, in cases where the repo incubation/foundation
lifecycle crap gets in the way. Are we now able to create repos willy-nilly
under the cloudfoundry org?

On Sat, Jun 25, 2016, 6:13 AM James Myers <jmyers(a)pivotal.io> wrote:

Hi all,

We are currently in the process of moving all of the Diego repositories
from `cloudfoundry-incubator` to the `cloudfoundry` organization.

Specifically we are moving the following repositories to the
`cloudfoundry` organization:

- auction
- auctioneer
- bbs
- benchmark-bbs
- buildpack_app_lifecycle
- cacheddownloader
- cf-debug-server
- cf-lager
- cf_http
- consuladapter
- converger
- diego-ssh
- diego-upgrade-stability-tests
- docker_app_lifecycle
- executor
- file-server
- healthcheck
- inigo
- locket
- rep
- route-emitter
- runtime-schema
- vizzini
- diego-cf-compatibility
- diego-ci-pools
- diego-ci
- diego-design-notes
- diego-dev-notes
- diego-dockerfiles
- diego-perf-release
- diego-release
- diego-stress-tests

We are also moving the following from `pivotal-golang` to `cloudfoundry`
as well:

- archiver
- bytefmt
- clock
- eventhub
- lager
- localip
- operationq

We are also renaming the following, and will be updating their package
names accordingly:

- benchmark-bbs -> benchmarkbbs
- buildpack_app_lifecycle -> buildpackapplifecycle
- cf-debug-server -> debugserver
- cf-lager -> cflager
- cf_http -> cfhttp
- docker_app_lifecycle -> dockerapplifecycle
- file-server -> fileserver
- runtime-schema -> runtimeschema

You might be asking yourself, what does this mean for me?

Generally it means the following:

- If you are importing any of the above repos in your golang code, you
should change it from `github.com/cloudfoundry-incubator/REPO_NAME` to
`code.cloudfoundry.org/REPO_NAME`.

- Update your golang code when you update your dependencies to reference
the new package names marked above.

- If you are consuming the Diego bosh release from bosh.io, you will
need to update the location to
http://bosh.io/releases/github.com/cloudfoundry/diego-release.


Other than that, github redirects should handle most of the issues for
you.

As a side note we are also moving the following deprecated repositories
to the `cloudfoundry-attic`:

- diego-acceptance-tests
- diego-smoke-tests
- receptor
- runtime-metrics-server

Let us know if you have any questions.

Best,

Jim + Andrew, CF Diego Team


Re: Creating new user UAA

sridhar vennela
 

Excellent! Good luck.