
Re: Loggregator roadmap for CF Community input

Jim CF Campbell
 

Well I *thought* I had put in a universal link. Here is one
<https://docs.google.com/spreadsheets/d/1QOCUIlTkhGzVwfRji7Q14vczqkBbFGkiDWrJSKdRLRg/edit?usp=sharing>
.

I've also responded to all your requests for sharing. Looking forward to
your feedback!

Jim

On Mon, Dec 14, 2015 at 6:08 AM, Voelz, Marco <marco.voelz(a)sap.com> wrote:

Same here, seems like the document is not publicly readable?

Warm regards
Marco




On 14/12/15 06:06, "Noburou TANIGUCHI" <dev(a)nota.m001.jp> wrote:

I'm asked to sign in to a Google account to read the Roadmap.
Is this intentional behavior?

Thanks in advance.


Jim Campbell wrote
Hi cf-dev,

Over the past two months, I've been gathering customer input about CF
OSS logging. I've created a first draft of a Loggregator Roadmap
<https://docs.google.com/spreadsheets/d/1QOCUIlTkhGzVwfRji7Q14vczqkBbFGkiDWrJSKdRLRg/edit?usp=sharing>.
I'm looking for feedback from the folks on this list. You can comment on
the doc and/or put your feedback in this thread.

Thanks!

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963





-----
I'm not a ...
noburou taniguchi
--
View this message in context:
http://cf-dev.70369.x6.nabble.com/cf-dev-Loggregator-roadmap-for-CF-Community-input-tp3016p3080.html
Sent from the CF Dev mailing list archive at Nabble.com.
--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963


Re: Loggregator roadmap for CF Community input

Marco Voelz
 

Same here, seems like the document is not publicly readable?

Warm regards
Marco

On 14/12/15 06:06, "Noburou TANIGUCHI" <dev(a)nota.m001.jp> wrote:

I'm asked to sign in to a Google account to read the Roadmap.
Is this intentional behavior?

Thanks in advance.


Jim Campbell wrote
Hi cf-dev,

Over the past two months, I've been gathering customer input about CF
OSS logging. I've created a first draft of a Loggregator Roadmap
<https://docs.google.com/spreadsheets/d/1QOCUIlTkhGzVwfRji7Q14vczqkBbFGkiDWrJSKdRLRg/edit?usp=sharing>.
I'm looking for feedback from the folks on this list. You can comment on
the doc and/or put your feedback in this thread.

Thanks!

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963




-----
I'm not a ...
noburou taniguchi
--
View this message in context: http://cf-dev.70369.x6.nabble.com/cf-dev-Loggregator-roadmap-for-CF-Community-input-tp3016p3080.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: Diego docker app launch issue with Diego's v0.1443.0

Anuj Jain <anuj17280@...>
 

Hi Eric – Thanks for trying to help me resolve my issues – please check my
comments inline:

On Mon, Dec 14, 2015 at 12:02 AM, Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Anuj,

Thanks for the info, and sorry to hear you've run into some difficulties.
It sounds like Cloud Controller is getting a 503 error from the
nsync-listener service on the CC-Bridge. That most likely means it's
encountering some sort of error in communicating with Diego's BBS API. You
mentioned that you had some problems with the database jobs when upgrading
as well. Does BOSH now report that all the VMs in the Diego deployment are
running correctly?

=> All VMs under CF, Diego, and Diego docker cache show as running
=> The database issue was in the manifest; it got resolved once I changed
the log_level value from debug2 to debug


One next step to try would be to tail logs from the nsync-listener
processes on the CC-Bridge VMs with `tail -f
/var/vcap/sys/log/nsync/nsync_listener.stdout.log`, and from the BBS
processes on the database VMs with `tail -f
/var/vcap/sys/log/bbs/bbs.stdout.log`, then try restarting your app that
targets Diego, and see if there are any errors in the logs. It may also
help to filter the logs to contain only the ones with your app guid, which
you can get from the CF CLI via `cf app APP_NAME --guid`.

=> I could not see any logs for my app on the CC-Bridge - I checked the
stager and nsync job logs
=> I checked the CC host and ran 'netstat -anp | grep consul' a couple of
times, and found that sometimes it shows an established connection with one
consul server and sometimes not - below is the sample output:

# netstat -anp | grep consul
tcp    0  0  10.5.139.156:8301   0.0.0.0:*          LISTEN       4559/consul
tcp    0  0  127.0.0.1:8400      0.0.0.0:*          LISTEN       4559/consul
tcp    0  0  127.0.0.1:8500      0.0.0.0:*          LISTEN       4559/consul
tcp    0  0  127.0.0.1:53        0.0.0.0:*          LISTEN       4559/consul
udp    0  0  127.0.0.1:53        0.0.0.0:*                       4559/consul
udp    0  0  10.5.139.156:8301   0.0.0.0:*                       4559/consul
root(a)07c40eae-6a8c-4fb1-996d-e638637b5caa:/var/vcap/bosh_ssh/bosh_eerpetbrz# netstat -anp | grep consul
tcp    0  0  10.5.139.156:8301   0.0.0.0:*          LISTEN       4559/consul
tcp    0  0  127.0.0.1:8400      0.0.0.0:*          LISTEN       4559/consul
tcp    0  0  127.0.0.1:8500      0.0.0.0:*          LISTEN       4559/consul
tcp    0  0  127.0.0.1:53        0.0.0.0:*          LISTEN       4559/consul
tcp    0  0  10.5.139.156:60316  10.5.139.140:8300  ESTABLISHED  4559/consul
udp    0  0  127.0.0.1:53        0.0.0.0:*                       4559/consul
udp    0  0  10.5.139.156:8301   0.0.0.0:*                       4559/consul

=> After that I also checked the consul agent logs on the CC, which show
EventMemberFailed and EventMemberJoin messages

========================================================================================
logs:
2015/12/14 09:32:43 [INFO] serf: EventMemberFailed: api-z1-0
10.5.139.156
2015/12/14 09:32:44 [INFO] serf: EventMemberJoin: docker-cache-0
10.5.139.252
2015/12/14 09:32:44 [INFO] serf: EventMemberJoin: api-z1-0 10.5.139.156
2015/12/14 09:32:49 [INFO] serf: EventMemberJoin: ha-proxy-z1-0
10.5.103.103
2015/12/14 09:33:32 [INFO] memberlist: Suspect ha-proxy-z1-1 has
failed, no acks received
2015/12/14 09:33:40 [INFO] serf: EventMemberFailed: ha-proxy-z1-1
10.5.103.104
2015/12/14 09:33:40 [INFO] serf: EventMemberFailed: database-z1-2
10.5.139.194
2015/12/14 09:33:40 [INFO] memberlist: Marking ha-proxy-z1-0 as failed,
suspect timeout reached
2015/12/14 09:33:40 [INFO] serf: EventMemberFailed: ha-proxy-z1-0
10.5.103.103
2015/12/14 09:33:41 [INFO] serf: EventMemberJoin: database-z1-2
10.5.139.194
2015/12/14 09:33:42 [INFO] serf: EventMemberFailed: cell-z1-3
10.5.139.199
2015/12/14 09:33:43 [INFO] serf: EventMemberJoin: cell-z1-3 10.5.139.199
2015/12/14 09:33:43 [INFO] serf: EventMemberFailed: api-worker-z1-0
10.5.139.159
2015/12/14 09:33:44 [INFO] serf: EventMemberJoin: ha-proxy-z1-1
10.5.103.104
2015/12/14 09:33:44 [INFO] memberlist: Marking uaa-z1-1 as failed,
suspect timeout reached
2015/12/14 09:33:44 [INFO] serf: EventMemberFailed: uaa-z1-1
10.5.139.155
2015/12/14 09:33:46 [INFO] serf: EventMemberFailed: cell-z1-1
10.5.139.197
2015/12/14 09:33:46 [INFO] serf: EventMemberJoin: uaa-z1-1 10.5.139.155
2015/12/14 09:33:47 [INFO] memberlist: Marking cc-bridge-z1-0 as
failed, suspect timeout reached
2015/12/14 09:33:47 [INFO] serf: EventMemberFailed: cc-bridge-z1-0
10.5.139.200
2015/12/14 09:33:49 [INFO] serf: EventMemberFailed: database-z1-1
10.5.139.193
2015/12/14 09:33:58 [INFO] serf: EventMemberJoin: api-worker-z1-0
10.5.139.159
2015/12/14 09:33:58 [INFO] serf: EventMemberJoin: database-z1-1
10.5.139.193
2015/12/14 09:33:59 [INFO] serf: EventMemberJoin: cell-z1-1 10.5.139.197
2015/12/14 09:33:59 [INFO] serf: EventMemberJoin: cc-bridge-z1-0
10.5.139.200
2015/12/14 09:34:01 [INFO] serf: EventMemberFailed: database-z1-1
10.5.139.193
2015/12/14 09:34:07 [INFO] serf: EventMemberJoin: database-z1-1
10.5.139.193
2015/12/14 09:34:09 [INFO] memberlist: Marking cell-z1-1 as failed,
suspect timeout reached
2015/12/14 09:34:09 [INFO] serf: EventMemberFailed: cell-z1-1
10.5.139.197
2015/12/14 09:34:20 [INFO] serf: EventMemberJoin: ha-proxy-z1-0
10.5.103.103
2015/12/14 09:34:28 [INFO] serf: EventMemberJoin: cell-z1-1 10.5.139.197
2015/12/14 09:34:38 [INFO] serf: EventMemberFailed: ha-proxy-z1-0
10.5.103.103
2015/12/14 09:34:42 [INFO] serf: EventMemberFailed: ha-proxy-z1-1
10.5.103.104
2015/12/14 09:34:44 [INFO] memberlist: Marking api-z1-0 as failed,
suspect timeout reached
2015/12/14 09:34:44 [INFO] serf: EventMemberFailed: api-z1-0
10.5.139.156
2015/12/14 09:34:48 [INFO] serf: EventMemberJoin: ha-proxy-z1-0
10.5.103.103
2015/12/14 09:34:48 [INFO] memberlist: Marking api-worker-z1-0 as
failed, suspect timeout reached
2015/12/14 09:34:48 [INFO] serf: EventMemberFailed: api-worker-z1-0
10.5.139.159
2015/12/14 09:34:49 [INFO] serf: EventMemberJoin: api-z1-0 10.5.139.156
2015/12/14 09:34:52 [INFO] serf: EventMemberJoin: ha-proxy-z1-1
10.5.103.104
2015/12/14 09:34:58 [INFO] serf: EventMemberJoin: api-worker-z1-0
10.5.139.159
================================================================================



Also, are you able to run a buildpack-based app on the Diego backend, or
do you get the same error as with this Docker-based app?

=> No, I am also not able to run a buildpack-based app on the Diego backend -
I verified that by running enable-diego on one of the apps and then trying to
start it - I got the same 500 error.


Best,
Eric

On Thu, Dec 10, 2015 at 6:45 AM, Anuj Jain <anuj17280(a)gmail.com> wrote:

Hi,

I deployed the latest CF v226 with Diego v0.1443.0 - I was able to
successfully upgrade both deployments and verified that CF is working as
expected. I am currently seeing a problem with Diego while trying to deploy
any docker app - I am getting *'Server error, status code: 500, error code:
170016, message: Runner error: stop app failed: 503'* - below you can
see the CF_TRACE output of the last few lines.

I also noticed that while trying to upgrade to Diego v0.1443.0 it gave
me an error while upgrading the database job - the fix I applied was to
change debug2 to debug in the Diego manifest file (path: properties =>
consul => log_level: debug)
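
For reference, that path in the Diego manifest corresponds to a section like
this (a minimal sketch of just the relevant property):

  properties:
    consul:
      log_level: debug   # was debug2; debug resolved the database job update error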


RESPONSE: [2015-12-10T09:35:07-05:00]
HTTP/1.1 500 Internal Server Error
Content-Length: 110
Content-Type: application/json;charset=utf-8
Date: Thu, 10 Dec 2015 14:35:07 GMT
Server: nginx
X-Cf-Requestid: 8328f518-4847-41ec-5836-507d4bb054bb
X-Content-Type-Options: nosniff
X-Vcap-Request-Id:
324d0fc0-2146-48f0-6265-755efb556e23::5c869046-8803-4dac-a620-8ca701f5bd22

{
"code": 170016,
"description": "Runner error: stop app failed: 503",
"error_code": "CF-RunnerError"
}

FAILED
Server error, status code: 500, error code: 170016, message: Runner
error: stop app failed: 503
FAILED
Server error, status code: 500, error code: 170016, message: Runner
error: stop app failed: 503
FAILED
Error: Error executing cli core command
Starting app testing89 in org PAAS / space dev as admin...

FAILED

Server error, status code: 500, error code: 170016, message: Runner
error: stop app failed: 503


- Anuj


Re: Loggregator roadmap for CF Community input

Noburou TANIGUCHI
 

I'm asked to sign in to a Google account to read the Roadmap.
Is this intentional behavior?

Thanks in advance.


Jim Campbell wrote
Hi cf-dev,

Over the past two months, I've been gathering customer input about CF
OSS logging. I've created a first draft of a Loggregator Roadmap
<https://docs.google.com/spreadsheets/d/1QOCUIlTkhGzVwfRji7Q14vczqkBbFGkiDWrJSKdRLRg/edit?usp=sharing>.
I'm looking for feedback from the folks on this list. You can comment on
the doc and/or put your feedback in this thread.

Thanks!

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963




-----
I'm not a ...
noburou taniguchi
--
View this message in context: http://cf-dev.70369.x6.nabble.com/cf-dev-Loggregator-roadmap-for-CF-Community-input-tp3016p3080.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: about consul_agent's cert

Gwenn Etourneau
 

Please read the documentation
http://docs.cloudfoundry.org/deploying/common/consul-security.html
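
For orientation, once the values described in that document are generated,
the filled-in block looks roughly like this (a sketch with placeholder PEM
bodies, not real keys or certificates):

  consul:
    encrypt_keys:
    - "RANDOM_BASE64_GOSSIP_KEY"
    ca_cert: |
      -----BEGIN CERTIFICATE-----
      ... CA certificate PEM ...
      -----END CERTIFICATE-----
    server_cert: |
      -----BEGIN CERTIFICATE-----
      ... server certificate PEM ...
      -----END CERTIFICATE-----
    server_key: |
      -----BEGIN RSA PRIVATE KEY-----
      ... server private key PEM ...
      -----END RSA PRIVATE KEY-----
    agent_cert: |
      -----BEGIN CERTIFICATE-----
      ... agent certificate PEM ...
      -----END CERTIFICATE-----
    agent_key: |
      -----BEGIN RSA PRIVATE KEY-----
      ... agent private key PEM ...
      -----END RSA PRIVATE KEY-----

The "Failed to parse any CA certificates" error quoted below is consistent
with ca_cert still containing a literal placeholder rather than a PEM block.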

On Mon, Dec 14, 2015 at 11:35 AM, 于长江 <yuchangjiang(a)cmss.chinamobile.com>
wrote:

hi,
when I deploy cf-release, the consul_agent job fails to start. I found this
error log on the VM:

==> Starting Consul agent...

==> Error starting agent: Failed to start Consul server: Failed to parse
any CA certificates

--------------------------------------------

then I found that the configuration in CF's manifest file is not correct,
like this:


consul:
  encrypt_keys:
  - CONSUL_ENCRYPT_KEY
  ca_cert: CONSUL_CA_CERT
  server_cert: CONSUL_SERVER_CERT
  server_key: CONSUL_SERVER_KEY
  agent_cert: CONSUL_AGENT_CERT
  agent_key: CONSUL_AGENT_KEY


I have no idea how to complete these fields. Can someone give me an
example? Thanks~

------------------------------
于长江
15101057694


Re: Import large dataset to Postgres instance in CF

Noburou TANIGUCHI
 

I'm afraid I don't understand your situation, but can't you use a
User-Provided Service [1]?

If you can build a PostgreSQL instance where you can access it with psql and
where a CF app instance can access, you can use it as a User-Provided
Service.

- Pros:
-- you can use it as an ordinary PostgreSQL instance

- Cons:
-- you have to manage it on your own
-- you may have to ask your CF administrator to open Application Security
Groups [2].

If you aren't allowed to create a User-Provided Service, please forget this
post.

[1] https://docs.cloudfoundry.org/devguide/services/user-provided.html
[2] https://docs.cloudfoundry.org/adminguide/app-sec-groups.html
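
For illustration, wiring up such an external PostgreSQL as a user-provided
service looks roughly like this (hypothetical host and credentials):

  # create the user-provided service with the connection details
  cf create-user-provided-service ext-pg -p '{"uri":"postgres://dbuser:secret@pg.example.com:5432/mydb"}'
  # bind it to the app and restage so the credentials show up in VCAP_SERVICES
  cf bind-service myapp ext-pg
  cf restage myapp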



Siva Balan wrote
Thanks. We are not running Diego, so writing an app seems to be the most
viable option.

On Fri, Dec 11, 2015 at 3:44 AM, Matthew Sykes <matthew.sykes@...> wrote:

Regarding Nic's ssh comment, if you're running Diego, I'd recommend using
the port forwarding feature instead of copying the data. It was actually
one of the scenarios that drove the implementation of that feature.

Once the port forwarding is set up, you should be able to target the
local endpoint with your database tools and have everything forwarded over
the tunnel to the database.
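
A rough sketch of that flow, assuming a Diego-backed app named myapp, a cf
CLI new enough to support ssh port forwarding (-L/-N), and placeholder
database host/credentials taken from the bound service:

  # open a local tunnel: local port 5432 forwards to the database through the app container
  cf ssh myapp -N -L 5432:pg.internal.example.com:5432 &
  # point local tools at the tunnel endpoint
  psql -h 127.0.0.1 -p 5432 -U dbuser mydb < large_dataset.sql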

On Thu, Dec 10, 2015 at 12:35 AM, Nicholas Calugar <ncalugar@...> wrote:

Hi Siva,

1. If you run the PostgreSQL server yourself, you likely want to temporarily open the
firewall to load data or get on a jump box of some sort that can
access the
database. It's not really a CF issue at this point, it's a general
issue of
seeding a database out-of-band from the application server.
2. If the above isn't an option and your CF is running Diego, you
could use SSH to get onto an app container after SCPing the data to
that
container.
3. The only other option I can think of is writing a simple app that
you can push to CF to do the import.

Hope that helps,

Nick

On Wed, Dec 9, 2015 at 3:08 PM Siva Balan <mailsiva@...> wrote:

Hi Nick,
Your Option 1 (using the psql CLI) is not possible since there is a firewall
that only allows connections from CF apps to the postgres DB. Tools like the
psql CLI that run outside of CF have no access to the postgres DB.
I just wanted to get some thoughts from this community since I presume
many would have faced a similar circumstance of importing large sets of
data to their DB which is behind a firewall and accessible only through
CF
apps.

Thanks
Siva

On Wed, Dec 9, 2015 at 2:27 PM, Nicholas Calugar <ncalugar@...> wrote:

Hi Siva,

You'll have to tell us more about how your PostgreSQL and CF were
deployed, but you might be able to connect to it from your local
machine
using the psql CLI and the credentials for one of your bound apps.
This
takes CF out of the equation other than the service binding providing
the
credentials.

If this doesn't work, there are a number of things that could be in the
way, e.g. a firewall that only allows connections from CF, or the PostgreSQL
server is on a different subnet. You can then try using some machine as a
jump box that will allow access to the PostgreSQL server.

Nick

On Wed, Dec 9, 2015 at 9:40 AM Siva Balan <mailsiva@...> wrote:

Hello,
Below is my requirement:
I have a postgres instance deployed on our corporate CF deployment. I
have created a service instance of this postgres and bound my app to
it.
Now I need to import a very large dataset (millions of records) into this
postgres instance.
As a CF user, I do not have access to any ports on CF other than 80
and 443, so I am not able to use any of the native postgresql tools to
import the data. I can view and run simple SQL commands on this
postgres
instance using the phppgadmin app that is also bound to my postgres
service
instance.
Now, what is the best way for me to import this large dataset to my
postgres service instance?
All thoughts and suggestions welcome.

Thanks
Siva Balan

--
http://www.twitter.com/sivabalans

--
http://www.twitter.com/sivabalans

--
Matthew Sykes
matthew.sykes@


--
http://www.twitter.com/sivabalans




-----
I'm not a ...
noburou taniguchi
--
View this message in context: http://cf-dev.70369.x6.nabble.com/cf-dev-Import-large-dataset-to-Postgres-instance-in-CF-tp3017p3079.html
Sent from the CF Dev mailing list archive at Nabble.com.


about consul_agent's cert

于长江 <yuchangjiang at cmss.chinamobile.com...>
 

hi,
when I deploy cf-release, the consul_agent job fails to start. I found this error log on the VM:


==> Starting Consul agent...
==> Error starting agent: Failed to start Consul server: Failed to parse any CA certificates
--------------------------------------------
then I found that the configuration in CF's manifest file is not correct, like this:


consul:
  encrypt_keys:
  - CONSUL_ENCRYPT_KEY
  ca_cert: CONSUL_CA_CERT
  server_cert: CONSUL_SERVER_CERT
  server_key: CONSUL_SERVER_KEY
  agent_cert: CONSUL_AGENT_CERT
  agent_key: CONSUL_AGENT_KEY


I have no idea how to complete these fields. Can someone give me an example? Thanks~


于长江
15101057694


Re: Diego: docker registry v2 only?

Eric Malm <emalm@...>
 

Hi, Tom,

That's correct, Diego final release versions 0.1438.0 and later currently
support staging Docker images only from v2 Docker registry APIs.
Garden-Linux final release 0.327.0 and later also support pulling and
running images only from v2 registries.

Thanks,
Eric

On Fri, Dec 11, 2015 at 9:07 PM, Tom Sherrod <tom.sherrod(a)gmail.com> wrote:

Hi,

I wish to confirm: Diego (v0.1441.0) only pulls from a v2 registry now and
is not backward compatible with v1?
Which Diego version made that switch?

Thanks,
Tom


Re: cf start of diego enabled app results in status code: 500 -- where to look for logs?

Eric Malm <emalm@...>
 

Hi, Tom,

Thanks for the update. Sounds like we'll have to make sure that the
consul_agent job template is present on all the CC VM types in all the
cf-release manifest generation templates and examples, so that it's
difficult to run into this type of error. I've added
https://www.pivotaltracker.com/story/show/110032764 for the CF Runtime
Diego team to review and update those templates where necessary, in
coordination with the CF Release Integration team.

I think the CF Release Integration team has also been trying to improve the
manifest and stub examples at
https://github.com/cloudfoundry/cf-release/tree/master/spec/fixtures and
elsewhere in cf-release to make them more realistic and functional, with
the ultimate goal of having them produce a correctly functioning deployment
after all the placeholder entries are filled in appropriately.

Thanks again,
Eric

On Fri, Dec 11, 2015 at 8:56 PM, Tom Sherrod <tom.sherrod(a)gmail.com> wrote:

Hi Eric,

Thank you. These questions sent me in the right direction.

On the CC VM:
To be clear, this VM is labeled api_z1, which is where I got the log from,
and it runs cloud_controller_ng. No consul was running; it isn't even
installed. I reviewed the manifest: consul didn't exist in the api_z1
definition. I added routing-api and consul_agent (the two differences from
the default api_z definition) to the api_z1 block.
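
For reference, that change amounts to adding the missing templates to the
api_z1 job definition, roughly like this (a sketch of the v1-style manifest
structure; releases and the other existing templates are elided):

  - name: api_z1
    templates:
    - name: cloud_controller_ng
    - name: routing-api
    - name: consul_agent
    ...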

The manifest I'm using was generated from Amit's
https://gist.github.com/Amit-PivotalLabs/04dd9addca704f86b3b7 . While it was
for 223, I hoped it would work for 225 also.

I've successfully deployed an app, enabled it for Diego and run it there,
and deployed a container.

Thanks,
Tom

On Fri, Dec 11, 2015 at 4:55 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Tom,

Thanks for the logs. From the getaddrinfo error in the CC logs, it looks
like CC is unable to resolve the nsync-listener service via Consul DNS on
the expected nsync.service.cf.internal domain. So it could be a problem on
that CC VM, or on the CC-Bridge VM(s), or with the consul server cluster
itself. Here are some things to check next:

On that CC VM:
- Does monit report that the consul_agent job is running?
- Does running `netstat -anp | grep consul` show that the consul process
is listening on port 53, and that it has a connection established to port
8300 on one of the consul server VMs?
- Are there warnings or errors that look like they could be relevant in
`/var/vcap/sys/log/consul_agent/consul_agent.stdout.log` or
`/var/vcap/sys/log/consul_agent/consul_agent.stderr.log`?
- What's the output of `nslookup nsync.service.cf.internal` or `dig
nsync.service.cf.internal`?
- Does running `/var/vcap/packages/consul/bin/consul members` list a
cc-bridge agent node for each CC-Bridge VM in your deployment, and a consul
server node for each consul server in your deployment?

On each CC-Bridge VM (you may have only one):
- Does monit report that the nsync_listener and consul_agent jobs are
both running?
- Does running `netstat -anp | grep nsync` report that the nsync-listener
process is listening on port 8787?
- Does running `netstat -anp | grep consul` show that the consul process
is listening on port 53, and that it has a connection established to port
8300 on one of the consul server VMs?
- Does /var/vcap/jobs/consul_agent/config/service-nsync.json exist and
contain JSON like
`{"service":{"name":"nsync","check":{"script":"/var/vcap/jobs/nsync/bin/dns_health_check","interval":"3s"},"tags":["cc-bridge-z1-0"]}}`?
- Are there warnings or errors that look like they could be relevant in
`/var/vcap/sys/log/consul_agent/consul_agent.stdout.log` or
`/var/vcap/sys/log/consul_agent/consul_agent.stderr.log`?

On the consul server VMs:
- Does monit report that the consul_agent job is running?
- Are there warnings or errors that look like they could be relevant in
`/var/vcap/sys/log/consul_agent/consul_agent.stdout.log` or
`/var/vcap/sys/log/consul_agent/consul_agent.stderr.log`?
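
As a compact way to run the checks above on the CC and CC-Bridge VMs
(a sketch; paths assume the standard BOSH job layout used by these releases):

  monit summary | grep -E 'consul_agent|nsync_listener'
  netstat -anp | grep -E 'consul|nsync'
  nslookup nsync.service.cf.internal
  /var/vcap/packages/consul/bin/consul members
  cat /var/vcap/jobs/consul_agent/config/service-nsync.json   # CC-Bridge VMs only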

Thanks,
Eric


On Thu, Dec 10, 2015 at 5:01 AM, Tom Sherrod <tom.sherrod(a)gmail.com>
wrote:

Hi Eric,

Thanks for the pointers.

`bosh vms` -- all running

Only 1 api vm running. cloud_controller_ng.log is almost constantly
being updated.

Below is the 500 error capture:


{"timestamp":1449752019.6870825,"message":"desire.app.request","log_level":"info","source":"cc.nsync.listener.client","data":{"request_guid":"e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76","process_guid":"9f528159-1a7b-4876-92c9-34d040e9824d-29fd370c-04fd-4481-b432-39431460a963"},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/nsync_client.rb","lineno":15,"method":"desire_app"}

{"timestamp":1449752019.6899576,"message":"Cannot communicate with diego
- tried to send
start","log_level":"error","source":"cc.diego.runner","data":{"request_guid":"e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76"},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb","lineno":43,"method":"rescue
in with_logging"}

{"timestamp":1449752019.6909509,"message":"Request failed: 500:
{\"code\"=>10001, \"description\"=>\"getaddrinfo: Name or service not
known\", \"error_code\"=>\"CF-CannotCommunicateWithDiegoError\",
\"backtrace\"=>[\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb:44:in
`rescue in with_logging'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb:40:in
`with_logging'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb:19:in
`start'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/app_observer.rb:63:in
`react_to_state_change'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/app_observer.rb:31:in
`updated'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/models/runtime/app.rb:574:in
`after_commit'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/model/base.rb:1920:in
`block in _save'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`block in remove_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`remove_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:156:in
`_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:108:in
`block in transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/connecting.rb:250:in
`block in synchronize'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/connection_pool/threaded.rb:98:in
`hold'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/connecting.rb:250:in
`synchronize'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:97:in
`transaction'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/app/controllers/base/model_controller.rb:66:in
`update'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/app/controllers/base/base_controller.rb:78:in
`dispatch'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/rest_controller/routes.rb:16:in
`block in define_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1609:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1609:in
`block in compile!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in
`[]'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in
`block (3 levels) in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:993:in
`route_eval'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in
`block (2 levels) in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1014:in
`block in process_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1012:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1012:in
`process_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:972:in
`block in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:971:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:971:in
`route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1084:in
`block in dispatch!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`block in invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1081:in
`dispatch!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:906:in
`block in call!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`block in invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:906:in
`call!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:894:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/xss_header.rb:18:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/path_traversal.rb:16:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/json_csrf.rb:18:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/frame_options.rb:31:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/nulllogger.rb:9:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/head.rb:13:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:181:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:2021:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/urlmap.rb:66:in
`block in call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/urlmap.rb:50:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/urlmap.rb:50:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_logs.rb:21:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/vcap_request_id.rb:14:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:47:in
`call_app'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:12:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_metrics.rb:12:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/builder.rb:153:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/thin-1.6.3/lib/thin/connection.rb:86:in
`block in pre_process'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/thin-1.6.3/lib/thin/connection.rb:84:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/thin-1.6.3/lib/thin/connection.rb:84:in
`pre_process'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/eventmachine-1.0.8/lib/eventmachine.rb:1062:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/eventmachine-1.0.8/lib/eventmachine.rb:1062:in
`block in
spawn_threadpool'\"]}","log_level":"error","source":"cc.api","data":{"request_guid":"e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76"},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/sinatra/vcap.rb","lineno":53,"method":"block
in registered"}

{"timestamp":1449752019.691719,"message":"Completed 500 vcap-request-id:
e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76","log_level":"info","source":"cc.api","data":{},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_logs.rb","lineno":23,"method":"call"}

On Wed, Dec 9, 2015 at 5:53 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Tom,

It may be that Cloud Controller is unable to resolve the
consul-provided DNS entries for the CC-Bridge components, as that '10001
Unknown Error' 500 response sounds like this bug in the Diego tracker:
https://www.pivotaltracker.com/story/show/104066600. That 500 response
should be reflected as some sort of error in the CC log file, located by
default in /var/vcap/sys/log/cloud_controller_ng/cloud_controller_ng.log on
your CC VMs. It may even be helpful to follow that log in real-time with
`tail -f` while you try starting the Diego-targeted app via the CLI. To be
sure you capture it, you should tail that log file on each CC in your
deployment. In any case, a stack trace associated to that error would
likely help us identify what to check next.

Also, does `bosh vms` report any failing VMs in either the CF or the
Diego deployments?

Best,
Eric

On Wed, Dec 9, 2015 at 2:27 PM, Tom Sherrod <tom.sherrod(a)gmail.com>
wrote:

I'm giving CF 225 and diego 0.1441.0 a run.
CF 225 is up and app deployed.
Stop app. cf enable-diego app. Start app:
FAILED
Server error, status code: 500, error code: 10001, message: An unknown
error occurred.
FAILED
Server error, status code: 500, error code: 10001, message: An unknown
error occurred.

CF_TRACE ends with:
REQUEST: [2015-12-09T17:17:37-05:00]
PUT
/v2/apps/02c68ddd-0596-4aab-8c05-a8f538d06712?async=true&inline-relations-depth=1
HTTP/1.1
Host: api.dev.foo.com
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: go-cli 6.14.0+2654a47 / darwin

{"state":"STARTED"}

RESPONSE: [2015-12-09T17:17:37-05:00]
HTTP/1.1 500 Internal Server Error
Content-Length: 99
Content-Type: application/json;charset=utf-8
Date: Wed, 09 Dec 2015 22:17:36 GMT
Server: nginx
X-Cf-Requestid: 6edf0ac8-384f-4db3-576a-6744b7eb4b8c
X-Content-Type-Options: nosniff
X-Vcap-Request-Id:
860d73f9-9415-478f-6c60-13e2e5ddde8c::80a4a687-7f2d-44c5-9b09-4e3c9fa07b68

{
"error_code": "UnknownError",
"description": "An unknown error occurred.",
"code": 10001
}


Where next to look for the broken piece?


Re: Diego docker app launch issue with Diego's v0.1443.0

Eric Malm <emalm@...>
 

Hi, Anuj,

Thanks for the info, and sorry to hear you've run into some difficulties.
It sounds like Cloud Controller is getting a 503 error from the
nsync-listener service on the CC-Bridge. That most likely means it's
encountering some sort of error in communicating with Diego's BBS API. You
mentioned that you had some problems with the database jobs when upgrading
as well. Does BOSH now report that all the VMs in the Diego deployment are
running correctly?

One next step to try would be to tail logs from the nsync-listener
processes on the CC-Bridge VMs with `tail -f
/var/vcap/sys/log/nsync/nsync_listener.stdout.log`, and from the BBS
processes on the database VMs with `tail -f
/var/vcap/sys/log/bbs/bbs.stdout.log`, then try restarting your app that
targets Diego, and see if there are any errors in the logs. It may also
help to filter the logs to contain only the ones with your app guid, which
you can get from the CF CLI via `cf app APP_NAME --guid`.
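
For example (a sketch, using the placeholder app name from above):

  # from a workstation with the CF CLI, grab the app guid
  cf app APP_NAME --guid

  # on a CC-Bridge VM, follow the nsync-listener log filtered to that guid
  tail -f /var/vcap/sys/log/nsync/nsync_listener.stdout.log | grep <app-guid>

and similarly for /var/vcap/sys/log/bbs/bbs.stdout.log on the database VMs.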

Also, are you able to run a buildpack-based app on the Diego backend, or do
you get the same error as with this Docker-based app?

Best,
Eric

On Thu, Dec 10, 2015 at 6:45 AM, Anuj Jain <anuj17280(a)gmail.com> wrote:

Hi,

I deployed the latest CF v226 with Diego v0.1443.0 - I was able to
successfully upgrade both deployments and verified that CF is working as
expected. I am currently seeing a problem with Diego while trying to deploy
any docker app - I am getting *'Server error, status code: 500, error code:
170016, message: Runner error: stop app failed: 503'* - below you can see
the CF_TRACE output of the last few lines.

I also noticed that while trying to upgrade to Diego v0.1443.0 it gave me
an error while upgrading the database job - the fix I applied was to change
debug2 to debug in the Diego manifest file (path: properties =>
consul => log_level: debug)


RESPONSE: [2015-12-10T09:35:07-05:00]
HTTP/1.1 500 Internal Server Error
Content-Length: 110
Content-Type: application/json;charset=utf-8
Date: Thu, 10 Dec 2015 14:35:07 GMT
Server: nginx
X-Cf-Requestid: 8328f518-4847-41ec-5836-507d4bb054bb
X-Content-Type-Options: nosniff
X-Vcap-Request-Id:
324d0fc0-2146-48f0-6265-755efb556e23::5c869046-8803-4dac-a620-8ca701f5bd22

{
"code": 170016,
"description": "Runner error: stop app failed: 503",
"error_code": "CF-RunnerError"
}

FAILED
Server error, status code: 500, error code: 170016, message: Runner error:
stop app failed: 503
FAILED
Server error, status code: 500, error code: 170016, message: Runner error:
stop app failed: 503
FAILED
Error: Error executing cli core command
Starting app testing89 in org PAAS / space dev as admin...

FAILED

Server error, status code: 500, error code: 170016, message: Runner error:
stop app failed: 503


- Anuj


Diego: docker registry v2 only?

Tom Sherrod <tom.sherrod@...>
 

Hi,

I wish to confirm: Diego (v0.1441.0) only pulls from a v2 registry now and is not backward compatible with v1?
Which Diego version made that switch?

Thanks,
Tom


Re: cf start of diego enabled app results in status code: 500 -- where to look for logs?

Tom Sherrod <tom.sherrod@...>
 

Hi Eric,

Thank you. These questions sent me in the right direction.

On the CC VM:
To be clear, this VM is labeled api_z1, which is where I got the log from,
and it runs cloud_controller_ng. No consul was running; it isn't even
installed. I reviewed the manifest: consul didn't exist in the api_z1
definition. I added routing-api and consul_agent (the two differences from
the default api_z definition) to the api_z1 block.

The manifest I'm using was generated from Amit's
https://gist.github.com/Amit-PivotalLabs/04dd9addca704f86b3b7 . While it was
for 223, I hoped it would work for 225 also.

I've successfully deployed an app, enabled it for Diego and run it there,
and deployed a container.

Thanks,
Tom

On Fri, Dec 11, 2015 at 4:55 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Tom,

Thanks for the logs. From the getaddrinfo error in the CC logs, it looks
like CC is unable to resolve the nsync-listener service via Consul DNS on
the expected nsync.service.cf.internal domain. So it could be a problem on
that CC VM, or on the CC-Bridge VM(s), or with the consul server cluster
itself. Here are some things to check next:

On that CC VM:
- Does monit report that the consul_agent job is running?
- Does running `netstat -anp | grep consul` show that the consul process
is listening on port 53, and that it has a connection established to port
8300 on one of the consul server VMs?
- Are there warnings or errors that look like they could be relevant in
`/var/vcap/sys/log/consul_agent/consul_agent.stdout.log` or
`/var/vcap/sys/log/consul_agent/consul_agent.stderr.log`?
- What's the output of `nslookup nsync.service.cf.internal` or `dig
nsync.service.cf.internal`?
- Does running `/var/vcap/packages/consul/bin/consul members` list a
cc-bridge agent node for each CC-Bridge VM in your deployment, and a consul
server node for each consul server in your deployment?

On each CC-Bridge VM (you may have only one):
- Does monit report that the nsync_listener and consul_agent jobs are both
running?
- Does running `netstat -anp | grep nsync` report that the nsync-listener
process is listening on port 8787?
- Does running `netstat -anp | grep consul` show that the consul process
is listening on port 53, and that it has a connection established to port
8300 on one of the consul server VMs?
- Does /var/vcap/jobs/consul_agent/config/service-nsync.json exist and
contain JSON like
`{"service":{"name":"nsync","check":{"script":"/var/vcap/jobs/nsync/bin/dns_health_check","interval":"3s"},"tags":["cc-bridge-z1-0"]}}`?
- Are there warnings or errors that look like they could be relevant in
`/var/vcap/sys/log/consul_agent/consul_agent.stdout.log` or
`/var/vcap/sys/log/consul_agent/consul_agent.stderr.log`?

On the consul server VMs:
- Does monit report that the consul_agent job is running?
- Are there warnings or errors that look like they could be relevant in
`/var/vcap/sys/log/consul_agent/consul_agent.stdout.log` or
`/var/vcap/sys/log/consul_agent/consul_agent.stderr.log`?

Thanks,
Eric


On Thu, Dec 10, 2015 at 5:01 AM, Tom Sherrod <tom.sherrod(a)gmail.com>
wrote:

Hi Eric,

Thanks for the pointers.

`bosh vms` -- all running

Only 1 api vm running. cloud_controller_ng.log is almost constantly being
updated.

Below is the 500 error capture:


{"timestamp":1449752019.6870825,"message":"desire.app.request","log_level":"info","source":"cc.nsync.listener.client","data":{"request_guid":"e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76","process_guid":"9f528159-1a7b-4876-92c9-34d040e9824d-29fd370c-04fd-4481-b432-39431460a963"},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/nsync_client.rb","lineno":15,"method":"desire_app"}

{"timestamp":1449752019.6899576,"message":"Cannot communicate with diego
- tried to send
start","log_level":"error","source":"cc.diego.runner","data":{"request_guid":"e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76"},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb","lineno":43,"method":"rescue
in with_logging"}

{"timestamp":1449752019.6909509,"message":"Request failed: 500:
{\"code\"=>10001, \"description\"=>\"getaddrinfo: Name or service not
known\", \"error_code\"=>\"CF-CannotCommunicateWithDiegoError\",
\"backtrace\"=>[\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb:44:in
`rescue in with_logging'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb:40:in
`with_logging'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb:19:in
`start'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/app_observer.rb:63:in
`react_to_state_change'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/app_observer.rb:31:in
`updated'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/models/runtime/app.rb:574:in
`after_commit'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/model/base.rb:1920:in
`block in _save'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`block in remove_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`remove_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:156:in
`_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:108:in
`block in transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/connecting.rb:250:in
`block in synchronize'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/connection_pool/threaded.rb:98:in
`hold'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/connecting.rb:250:in
`synchronize'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:97:in
`transaction'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/app/controllers/base/model_controller.rb:66:in
`update'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/app/controllers/base/base_controller.rb:78:in
`dispatch'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/rest_controller/routes.rb:16:in
`block in define_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1609:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1609:in
`block in compile!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in
`[]'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in
`block (3 levels) in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:993:in
`route_eval'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in
`block (2 levels) in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1014:in
`block in process_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1012:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1012:in
`process_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:972:in
`block in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:971:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:971:in
`route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1084:in
`block in dispatch!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`block in invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1081:in
`dispatch!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:906:in
`block in call!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`block in invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:906:in
`call!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:894:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/xss_header.rb:18:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/path_traversal.rb:16:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/json_csrf.rb:18:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/frame_options.rb:31:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/nulllogger.rb:9:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/head.rb:13:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:181:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:2021:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/urlmap.rb:66:in
`block in call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/urlmap.rb:50:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/urlmap.rb:50:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_logs.rb:21:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/vcap_request_id.rb:14:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:47:in
`call_app'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:12:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_metrics.rb:12:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/builder.rb:153:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/thin-1.6.3/lib/thin/connection.rb:86:in
`block in pre_process'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/thin-1.6.3/lib/thin/connection.rb:84:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/thin-1.6.3/lib/thin/connection.rb:84:in
`pre_process'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/eventmachine-1.0.8/lib/eventmachine.rb:1062:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/eventmachine-1.0.8/lib/eventmachine.rb:1062:in
`block in
spawn_threadpool'\"]}","log_level":"error","source":"cc.api","data":{"request_guid":"e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76"},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/sinatra/vcap.rb","lineno":53,"method":"block
in registered"}

{"timestamp":1449752019.691719,"message":"Completed 500 vcap-request-id:
e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76","log_level":"info","source":"cc.api","data":{},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_logs.rb","lineno":23,"method":"call"}

On Wed, Dec 9, 2015 at 5:53 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Tom,

It may be that Cloud Controller is unable to resolve the consul-provided
DNS entries for the CC-Bridge components, as that '10001 Unknown Error' 500
response sounds like this bug in the Diego tracker:
https://www.pivotaltracker.com/story/show/104066600. That 500 response
should be reflected as some sort of error in the CC log file, located by
default in /var/vcap/sys/log/cloud_controller_ng/cloud_controller_ng.log on
your CC VMs. It may even be helpful to follow that log in real-time with
`tail -f` while you try starting the Diego-targeted app via the CLI. To be
sure you capture it, you should tail that log file on each CC in your
deployment. In any case, a stack trace associated to that error would
likely help us identify what to check next.

Also, does `bosh vms` report any failing VMs in either the CF or the
Diego deployments?

Best,
Eric

On Wed, Dec 9, 2015 at 2:27 PM, Tom Sherrod <tom.sherrod(a)gmail.com>
wrote:

I'm giving CF 225 and diego 0.1441.0 a run.
CF 225 is up and app deployed.
Stop app. cf enable-diego app. Start app:
FAILED
Server error, status code: 500, error code: 10001, message: An unknown
error occurred.
FAILED
Server error, status code: 500, error code: 10001, message: An unknown
error occurred.

CF_TRACE ends with:
REQUEST: [2015-12-09T17:17:37-05:00]
PUT
/v2/apps/02c68ddd-0596-4aab-8c05-a8f538d06712?async=true&inline-relations-depth=1
HTTP/1.1
Host: api.dev.foo.com
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: go-cli 6.14.0+2654a47 / darwin

{"state":"STARTED"}

RESPONSE: [2015-12-09T17:17:37-05:00]
HTTP/1.1 500 Internal Server Error
Content-Length: 99
Content-Type: application/json;charset=utf-8
Date: Wed, 09 Dec 2015 22:17:36 GMT
Server: nginx
X-Cf-Requestid: 6edf0ac8-384f-4db3-576a-6744b7eb4b8c
X-Content-Type-Options: nosniff
X-Vcap-Request-Id:
860d73f9-9415-478f-6c60-13e2e5ddde8c::80a4a687-7f2d-44c5-9b09-4e3c9fa07b68

{
"error_code": "UnknownError",
"description": "An unknown error occurred.",
"code": 10001
}


Where next to look for the broken piece?


[abacus] Accommodating for plans in a resource config

Benjamin Cheng
 

Abacus will want to support plans in its resource config (as mentioned in issue #153 https://github.com/cloudfoundry-incubator/cf-abacus/issues/153)

Starting with a basic approach, there would be a plans property (an array) added to the top level of a resource config. The current metrics and measures properties would be moved under that plans property, which allows them to be scoped to a plan. Despite moving metrics and measures under plans, there will still be a need for a common set of measures/metrics for plans to fall back on. This comes into play in the report, for example, when summary/charge functions run on usage aggregated across all plans.

In terms of the common section, there's a choice between leaving measures/metrics at the top level as the common/default or putting them under a different property name.

I think there are a couple of things to consider here:
- Defaulting a plan to the common section if it has no formula defined. This may require the plan to point to the common section, or logic that automatically falls back to the common section (and subsequently to the absolute resource config defaults that are already in place).
- If no plan id is passed (for example on some of the charge/summary calls), those calls would need to go to this common section.
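
To make the shape concrete, here is a minimal, purely illustrative sketch of such a resource config (property names and values are assumptions for discussion, not a finalized schema):

  {
    "resource_id": "object-storage",
    "measures": [{ "name": "storage", "unit": "BYTE" }],
    "metrics": [{ "name": "storage", "unit": "GIGABYTE" }],
    "plans": [
      {
        "plan_id": "basic",
        "measures": [{ "name": "storage", "unit": "BYTE" }],
        "metrics": [{ "name": "storage", "unit": "GIGABYTE" }]
      },
      { "plan_id": "premium" }
    ]
  }

Here the top-level measures/metrics play the role of the common/default section, and a plan like "premium" that defines no formulas of its own would fall back to it.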

Thoughts/Concerns/Suggestions?


Re: cf start of diego enabled app results in status code: 500 -- where to look for logs?

Eric Malm <emalm@...>
 

Hi, Tom,

Thanks for the logs. From the getaddrinfo error in the CC logs, it looks
like CC is unable to resolve the nsync-listener service via Consul DNS on
the expected nsync.service.cf.internal domain. So it could be a problem on
that CC VM, or on the CC-Bridge VM(s), or with the consul server cluster
itself. Here are some things to check next:

On that CC VM:
- Does monit report that the consul_agent job is running?
- Does running `netstat -anp | grep consul` show that the consul process is
listening on port 53, and that it has a connection established to port 8300
on one of the consul server VMs?
- Are there warnings or errors that look like they could be relevant in
`/var/vcap/sys/log/consul_agent/consul_agent.stdout.log` or
`/var/vcap/sys/log/consul_agent/consul_agent.stderr.log`?
- What's the output of `nslookup nsync.service.cf.internal` or `dig
nsync.service.cf.internal`?
- Does running `/var/vcap/packages/consul/bin/consul members` list a
cc-bridge agent node for each CC-Bridge VM in your deployment, and a consul
server node for each consul server in your deployment?

On each CC-Bridge VM (you may have only one):
- Does monit report that the nsync_listener and consul_agent jobs are both
running?
- Does running `netstat -anp | grep nsync` report that the nsync-listener
process is listening on port 8787?
- Does running `netstat -anp | grep consul` show that the consul process is
listening on port 53, and that it has a connection established to port 8300
on one of the consul server VMs?
- Does /var/vcap/jobs/consul_agent/config/service-nsync.json exist and
contain JSON like
`{"service":{"name":"nsync","check":{"script":"/var/vcap/jobs/nsync/bin/dns_health_check","interval":"3s"},"tags":["cc-bridge-z1-0"]}}`?
- Are there warnings or errors that look like they could be relevant in
`/var/vcap/sys/log/consul_agent/consul_agent.stdout.log` or
`/var/vcap/sys/log/consul_agent/consul_agent.stderr.log`?

On the consul server VMs:
- Does monit report that the consul_agent job is running?
- Are there warnings or errors that look like they could be relevant in
`/var/vcap/sys/log/consul_agent/consul_agent.stdout.log` or
`/var/vcap/sys/log/consul_agent/consul_agent.stderr.log`?
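
For convenience, the checks above can be run in one pass on each of those VMs with something like the following (a sketch; it just strings together the commands already listed, using the standard BOSH paths):

  # run as root on the CC, CC-Bridge, and consul server VMs
  /var/vcap/bosh/bin/monit summary | grep consul_agent
  netstat -anp | grep consul | grep -E ':53|:8300'
  nslookup nsync.service.cf.internal
  /var/vcap/packages/consul/bin/consul members
  grep -iE 'error|warn' /var/vcap/sys/log/consul_agent/consul_agent.std*.log | tail -20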

Thanks,
Eric

On Thu, Dec 10, 2015 at 5:01 AM, Tom Sherrod <tom.sherrod(a)gmail.com> wrote:

Hi Eric,

Thanks for the pointers.

`bosh vms` -- all running

Only 1 api vm running. cloud_controller_ng.log is almost constantly being
updated.

Below is the 500 error capture:


{"timestamp":1449752019.6870825,"message":"desire.app.request","log_level":"info","source":"cc.nsync.listener.client","data":{"request_guid":"e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76","process_guid":"9f528159-1a7b-4876-92c9-34d040e9824d-29fd370c-04fd-4481-b432-39431460a963"},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/nsync_client.rb","lineno":15,"method":"desire_app"}

{"timestamp":1449752019.6899576,"message":"Cannot communicate with diego -
tried to send
start","log_level":"error","source":"cc.diego.runner","data":{"request_guid":"e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76"},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb","lineno":43,"method":"rescue
in with_logging"}

{"timestamp":1449752019.6909509,"message":"Request failed: 500:
{\"code\"=>10001, \"description\"=>\"getaddrinfo: Name or service not
known\", \"error_code\"=>\"CF-CannotCommunicateWithDiegoError\",
\"backtrace\"=>[\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb:44:in
`rescue in with_logging'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb:40:in
`with_logging'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb:19:in
`start'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/app_observer.rb:63:in
`react_to_state_change'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/app_observer.rb:31:in
`updated'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/models/runtime/app.rb:574:in
`after_commit'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/model/base.rb:1920:in
`block in _save'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`block in remove_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`remove_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:156:in
`_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:108:in
`block in transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/connecting.rb:250:in
`block in synchronize'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/connection_pool/threaded.rb:98:in
`hold'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/connecting.rb:250:in
`synchronize'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:97:in
`transaction'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/app/controllers/base/model_controller.rb:66:in
`update'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/app/controllers/base/base_controller.rb:78:in
`dispatch'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/rest_controller/routes.rb:16:in
`block in define_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1609:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1609:in
`block in compile!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in
`[]'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in
`block (3 levels) in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:993:in
`route_eval'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in
`block (2 levels) in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1014:in
`block in process_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1012:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1012:in
`process_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:972:in
`block in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:971:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:971:in
`route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1084:in
`block in dispatch!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`block in invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1081:in
`dispatch!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:906:in
`block in call!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`block in invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:906:in
`call!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:894:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/xss_header.rb:18:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/path_traversal.rb:16:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/json_csrf.rb:18:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/frame_options.rb:31:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/nulllogger.rb:9:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/head.rb:13:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:181:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:2021:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/urlmap.rb:66:in
`block in call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/urlmap.rb:50:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/urlmap.rb:50:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_logs.rb:21:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/vcap_request_id.rb:14:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:47:in
`call_app'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:12:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_metrics.rb:12:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/builder.rb:153:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/thin-1.6.3/lib/thin/connection.rb:86:in
`block in pre_process'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/thin-1.6.3/lib/thin/connection.rb:84:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/thin-1.6.3/lib/thin/connection.rb:84:in
`pre_process'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/eventmachine-1.0.8/lib/eventmachine.rb:1062:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/eventmachine-1.0.8/lib/eventmachine.rb:1062:in
`block in
spawn_threadpool'\"]}","log_level":"error","source":"cc.api","data":{"request_guid":"e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76"},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/sinatra/vcap.rb","lineno":53,"method":"block
in registered"}

{"timestamp":1449752019.691719,"message":"Completed 500 vcap-request-id:
e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76","log_level":"info","source":"cc.api","data":{},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_logs.rb","lineno":23,"method":"call"}

On Wed, Dec 9, 2015 at 5:53 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Tom,

It may be that Cloud Controller is unable to resolve the consul-provided
DNS entries for the CC-Bridge components, as that '10001 Unknown Error' 500
response sounds like this bug in the Diego tracker:
https://www.pivotaltracker.com/story/show/104066600. That 500 response
should be reflected as some sort of error in the CC log file, located by
default in /var/vcap/sys/log/cloud_controller_ng/cloud_controller_ng.log on
your CC VMs. It may even be helpful to follow that log in real-time with
`tail -f` while you try starting the Diego-targeted app via the CLI. To be
sure you capture it, you should tail that log file on each CC in your
deployment. In any case, a stack trace associated to that error would
likely help us identify what to check next.

Also, does `bosh vms` report any failing VMs in either the CF or the
Diego deployments?

Best,
Eric

On Wed, Dec 9, 2015 at 2:27 PM, Tom Sherrod <tom.sherrod(a)gmail.com>
wrote:

I'm giving CF 225 and diego 0.1441.0 a run.
CF 225 is up and app deployed.
Stop app. cf enable-diego app. Start app:
FAILED
Server error, status code: 500, error code: 10001, message: An unknown
error occurred.
FAILED
Server error, status code: 500, error code: 10001, message: An unknown
error occurred.

CF_TRACE ends with:
REQUEST: [2015-12-09T17:17:37-05:00]
PUT
/v2/apps/02c68ddd-0596-4aab-8c05-a8f538d06712?async=true&inline-relations-depth=1
HTTP/1.1
Host: api.dev.foo.com
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: go-cli 6.14.0+2654a47 / darwin

{"state":"STARTED"}

RESPONSE: [2015-12-09T17:17:37-05:00]
HTTP/1.1 500 Internal Server Error
Content-Length: 99
Content-Type: application/json;charset=utf-8
Date: Wed, 09 Dec 2015 22:17:36 GMT
Server: nginx
X-Cf-Requestid: 6edf0ac8-384f-4db3-576a-6744b7eb4b8c
X-Content-Type-Options: nosniff
X-Vcap-Request-Id:
860d73f9-9415-478f-6c60-13e2e5ddde8c::80a4a687-7f2d-44c5-9b09-4e3c9fa07b68

{
"error_code": "UnknownError",
"description": "An unknown error occurred.",
"code": 10001
}


Where next to look for the broken piece?


Re: Import large dataset to Postgres instance in CF

Siva Balan <mailsiva@...>
 

Thanks. We are not running Diego, so writing an app seems to be the most
viable option.

On Fri, Dec 11, 2015 at 3:44 AM, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

Regarding Nic's ssh comment, if you're running Diego, I'd recommend using
the port forwarding feature instead of copying the data. It was actually
one of the scenarios that drove the implementation of that feature.

Once the port forwarding is set up, you should be able to target the
local endpoint with your database tools and have everything forwarded over
the tunnel to the database.

On Thu, Dec 10, 2015 at 12:35 AM, Nicholas Calugar <ncalugar(a)pivotal.io>
wrote:

Hi Siva,

1. If you run the PostgreSQL server yourself, you likely want to temporarily open the
firewall to load data or get on a jump box of some sort that can access the
database. It's not really a CF issue at this point, it's a general issue of
seeding a database out-of-band from the application server.
2. If the above isn't an option and your CF is running Diego, you
could use SSH to get onto an app container after SCPing the data to that
container.
3. The only other option I can think of is writing a simple app that
you can push to CF to do the import.

Hope that helps,

Nick

On Wed, Dec 9, 2015 at 3:08 PM Siva Balan <mailsiva(a)gmail.com> wrote:

Hi Nick,
Your Option 1 (using the psql CLI) is not possible since there is a firewall
that only allows connection from CF apps to postgres DB. Apps like psql CLI
that are outside of CF have no access to the postgres DB.
I just wanted to get some thoughts from this community since I presume
many would have faced a similar circumstance of importing large sets of
data to their DB which is behind a firewall and accessible only through CF
apps.

Thanks
Siva

On Wed, Dec 9, 2015 at 2:27 PM, Nicholas Calugar <ncalugar(a)pivotal.io>
wrote:

Hi Siva,

You'll have to tell us more about how your PostgreSQL and CF was
deployed, but you might be able to connect to it from your local machine
using the psql CLI and the credentials for one of your bound apps. This
takes CF out of the equation other than the service binding providing the
credentials.

If this doesn't work, there are a number of things that could be in the
way, i.e. firewall that only allows connection from CF or the PostgreSQL
server is on a different subnet. You can then try using some machine as a
jump box that will allow access to the PostgreSQL.

Nick

On Wed, Dec 9, 2015 at 9:40 AM Siva Balan <mailsiva(a)gmail.com> wrote:

Hello,
Below is my requirement:
I have a postgres instance deployed on our corporate CF deployment. I
have created a service instance of this postgres and bound my app to it.
Now I need to import a very large dataset(millions of records) into this
postgres instance.
As a CF user, I do not have access to any ports on CF other than 80
and 443, so I am not able to use any of the native postgresql tools to
import the data. I can view and run simple SQL commands on this postgres
instance using the phppgadmin app that is also bound to my postgres service
instance.
Now, what is the best way for me to import this large dataset to my
postgres service instance?
All thoughts and suggestions welcome.

Thanks
Siva Balan

--
http://www.twitter.com/sivabalans

--
http://www.twitter.com/sivabalans

--
Matthew Sykes
matthew.sykes(a)gmail.com


--
http://www.twitter.com/sivabalans


uaa: typ claim on generated jwt's

tony kerz
 

hi,

I'm trying to have a third party validate a UAA JWT (https://getkong.org/plugins/jwt/),
and it is rejecting the UAA token because it doesn't have the "typ" claim on it.

I know "typ" is optional, but does anyone know a way to get UAA to include it in the generated tokens?

best,
tony.


Re: Import large dataset to Postgres instance in CF

Matthew Sykes <matthew.sykes@...>
 

Regarding Nic's ssh comment, if you're running Diego, I'd recommend using
the port forwarding feature instead of copying the data. It was actually
one of the scenarios that drove the implementation of that feature.

Once the port forwarding is set up, you should be able to target the local
endpoint with your database tools and have everything forwarded over the
tunnel to the database.
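
For example (a sketch; the app name, local port, and database host/credentials are placeholders):

  # open a local tunnel through an app container to the bound database
  cf ssh my-app -L 65432:my-db-host.internal:5432

  # in another terminal, point the native tools at the local end of the tunnel
  psql "postgres://dbuser:dbpass@localhost:65432/mydb" < large-dataset.sql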

On Thu, Dec 10, 2015 at 12:35 AM, Nicholas Calugar <ncalugar(a)pivotal.io>
wrote:

Hi Siva,

1. If you run the PostgreSQL server yourself, you likely want to temporarily open the
firewall to load data or get on a jump box of some sort that can access the
database. It's not really a CF issue at this point, it's a general issue of
seeding a database out-of-band from the application server.
2. If the above isn't an option and your CF is running Diego, you
could use SSH to get onto an app container after SCPing the data to that
container.
3. The only other option I can think of is writing a simple app that
you can push to CF to do the import.

Hope that helps,

Nick

On Wed, Dec 9, 2015 at 3:08 PM Siva Balan <mailsiva(a)gmail.com> wrote:

Hi Nick,
Your Option 1 (using the psql CLI) is not possible since there is a firewall
that only allows connection from CF apps to postgres DB. Apps like psql CLI
that are outside of CF have no access to the postgres DB.
I just wanted to get some thoughts from this community since I presume
many would have faced a similar circumstance of importing large sets of
data to their DB which is behind a firewall and accessible only through CF
apps.

Thanks
Siva

On Wed, Dec 9, 2015 at 2:27 PM, Nicholas Calugar <ncalugar(a)pivotal.io>
wrote:

Hi Siva,

You'll have to tell us more about how your PostgreSQL and CF was
deployed, but you might be able to connect to it from your local machine
using the psql CLI and the credentials for one of your bound apps. This
takes CF out of the equation other than the service binding providing the
credentials.

If this doesn't work, there are a number of things that could be in the
way, i.e. firewall that only allows connection from CF or the PostgreSQL
server is on a different subnet. You can then try using some machine as a
jump box that will allow access to the PostgreSQL.

Nick

On Wed, Dec 9, 2015 at 9:40 AM Siva Balan <mailsiva(a)gmail.com> wrote:

Hello,
Below is my requirement:
I have a postgres instance deployed on our corporate CF deployment. I
have created a service instance of this postgres and bound my app to it.
Now I need to import a very large dataset(millions of records) into this
postgres instance.
As a CF user, I do not have access to any ports on CF other than 80 and
443, so I am not able to use any of the native postgresql tools to
import the data. I can view and run simple SQL commands on this postgres
instance using the phppgadmin app that is also bound to my postgres service
instance.
Now, what is the best way for me to import this large dataset to my
postgres service instance?
All thoughts and suggestions welcome.

Thanks
Siva Balan

--
http://www.twitter.com/sivabalans

--
http://www.twitter.com/sivabalans

--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: Bosh version and stemcell for 225

Amit Kumar Gupta
 

Hey Mike,

I'm discussing with the PWS teams if there's a good way to announce that
info.

Best,
Amit

On Mon, Dec 7, 2015 at 10:17 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Thanks Amit,

In the past we've typically used the bosh version deployed to PWS as an
indication of a bosh version that has gone through some real use. I
understand the desire to not publish "recommended" bosh versions along with
release versions, but it would be nice to know which bosh versions are
deployed to PWS, similar to how we know when a cf-release has been
deployed to PWS.

What team manages bosh deploys to PWS? Should I be requesting this
information from them instead?

Thanks,
Mike

On Mon, Dec 7, 2015 at 8:18 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Added:

* BOSH Release Version: bosh/223
* BOSH Stemcell Version(s): bosh-aws-xen-hvm-ubuntu-trusty-go_agent/3104

Note, as we are decoupling our OSS release process from Pivotal's release
process, a couple things will change going forward:

1. We will provide (soft) recommendations for stemcells on all core
supported IaaS: AWS, vSphere, OpenStack, and BOSH-Lite

2. We will not provide BOSH Release Version recommendations. It's
exceedingly rare that the BOSH release version matters, existing
deployments can almost surely continue to use their existing BOSH, and new
deployments can almost surely pick up the latest BOSH. In the medium term,
we will begin to leverage upcoming features in BOSH which may change the
structure of the job specs in the various releases, at which point we will
make clear mention of it in the release notes, but we will not publish
recommended BOSH versions on an ongoing basis.

Best,
Amit, OSS Release Integration PM

On Mon, Dec 7, 2015 at 11:07 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

We are preparing to release 225 and noticed the release notes don't list
a bosh and stemcell version. Does anyone have that info?

Mike


Re: persistence for apps?

Michael Maximilien
 

Excellent. Please make sure to comment if you have any feedback. We want to address it all by year end (BTW, thanks Amit for your comments).

Best,

Max

Sent from Mailbox

On Fri, Dec 11, 2015 at 3:56 AM, Matthias Ender <Matthias.Ender(a)sas.com>
wrote:

yes, that one would hit the spot!
From: Amit Gupta [mailto:agupta(a)pivotal.io]
Sent: Thursday, December 10, 2015 2:29 PM
To: Discussions about Cloud Foundry projects and the system overall. <cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Re: Re: Re: Re: persistence for apps?
Importance: High
Matthias,
Have you seen Dr. Max's proposal for apps with persistence: https://docs.google.com/document/d/1A1PVnwB7wdzrWq2ZTjNrDFULlmyTUSsOuWeih8kdUtw/edit#heading=h.vfuwctflv5u2
It looks like exactly what you're talking about.
Johannes is correct, for now you can't do anything like mount volumes in the container. Any sort of persistence has to be externalized to a service you connect to over the network. Depending on the type of data and how you interact with it, a document store or object store would be the way to go, but you could in principle use a relational database, key value store, etc. Swift will give you S3 and OpenStack compatibility, so given that you're going to need a new implementation anyways, Swift might be a good choice.
Best,
Amit
On Thu, Dec 10, 2015 at 8:14 AM, Johannes Hiemer <jvhiemer(a)gmail.com> wrote:
You're welcome, Matthias. :-)
Swift should be an easy way to go if you know the S3 API quite well.
On 10.12.2015, at 16:53, Matthias Ender <Matthias.Ender(a)sas.com> wrote:
Thanks, Johannes.
We actually have an implementation that uses S3, but we also want to be able to support OpenStack, on-premise. Rather than re-implementing against Swift, NFS would be an easier path from the app development side.
But if there is no path on the cf side, we’ll have to rethink.
From: Johannes Hiemer [mailto:jvhiemer(a)gmail.com]
Sent: Thursday, December 10, 2015 10:21 AM
To: Discussions about Cloud Foundry projects and the system overall. <cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Re: persistence for apps?
Hi Matthias,
the assumption you have is wrong. There are two issues with your suggestion:
1) you don't have any control on the CF (client) side over NFS in Warden containers, and as far as I know that won't change with Diego either
2) you should stick with solutions like Swift or S3 for sharing data, which is the recommended approach for cloud-native applications
What kind of data are you going to share between the apps?
Kind regards
Johannes Hiemer
On 10.12.2015, at 16:15, Matthias Ender <Matthias.Ender(a)sas.com> wrote:
We are looking at solutions to persist and share directory-type information among a couple of apps within our application stack.
NFS comes to mind.
How would one go about that? A manifest modification to mount the nfs share on the runners, I assume. How would the apps then get access? A volume mount on the Warden container? But where to specify that?
Or am I thinking about this the wrong way?
thanks for any suggestions,
Matthias
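
As a concrete version of the externalize-to-a-service suggestion above (a sketch; the service offering, plan, and app names are placeholders for whatever your marketplace actually offers):

  # service offering, plan, and app names below are placeholders
  cf create-service swift standard shared-objectstore
  cf bind-service my-app shared-objectstore
  cf restage my-app
  # the app then reads the store's endpoint and credentials from VCAP_SERVICES
  cf env my-app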


Re: [cf-env] [abacus] Changing how resources are organized

Jean-Sebastien Delfino
 

Thanks Piotr,

The main aggregation needed is at the resource type level; however, the
aggregation within a consumer by resource id is also something we would
like to access - for example to determine that an application used two
different versions of node.

OK so then that means a new aggregation level, not rocket science, but a
rather mechanical addition of a new aggregation level similar to the
existing ones to the aggregator, reporting, tests, demos, schemas and API
doc. I'm out on vacation tomorrow Friday but tomorrow's IPM could be a good
opportunity to get the team to point that story's work with Max -- and that
way I won't be able to influence the point'ing :).

Instead of introducing resource type, the alternative approach could be
to augment the consumer id with the resource id

Not sure how that would work given that a consumer can use/consume multiple
(service) resources, and this 'resource type' aggregation should work for
all types of resources (not just runtime buildpack resources).
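
A minimal illustration of that point (the shape and names here are purely illustrative, not the actual Abacus report schema): a single consumer reporting usage for two different resources, which is why the resource id can't simply be folded into the consumer id:

  {
    "consumer_id": "app:2ce9...",
    "resources": [
      { "resource_type_id": "node", "resource_id": "node-v0.12" },
      { "resource_type_id": "object-storage", "resource_id": "object-storage" }
    ]
  }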

- Jean-Sebastien

On Thu, Dec 10, 2015 at 12:57 PM, Piotr Przybylski <piotrp(a)us.ibm.com>
wrote:

The main aggregation needed is at the resource type level; however, the
aggregation within a consumer by resource id is also something we would
like to access - for example to determine that an application used two
different versions of node. Instead of introducing resource type, the
alternative approach could be to augment the consumer id with the resource
id.

Piotr

-----Jean-Sebastien Delfino <jsdelfino(a)gmail.com> wrote: -----
To: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>
From: Jean-Sebastien Delfino <jsdelfino(a)gmail.com>
Date: 12/09/2015 11:51AM
Subject: [cf-dev] Re: Re: Re: [cf-env] [abacus] Changing how resources are
organized


It depends if you still want usage aggregation at both the resource_id and
resource_type_id levels (more changes as that'll add another aggregation
level to the reports) or if you only need aggregation at the
resource_type_id level (and are effectively treating that resource_type_id
as a 'more convenient' resource_id).

What aggregation levels do you need, both, or just aggregation at that
resource_type_id level?

- Jean-Sebastien

On Mon, Dec 7, 2015 at 3:19 PM, dmangin <dmangin(a)us.ibm.com> wrote:

Yes, this is related to github issue 38.

https://github.com/cloudfoundry-incubator/cf-abacus/issues/38

We want to group the buildpacks by a resource_type_id and bind resource
definitions to the resource_type_id rather than to the resource_id.
However,
when we make this change, how will it affect how abacus does all of
its calculations? The only change that I can think of is for abacus to use
the resource_type_id rather than the resource_id when creating the
reports.





--
View this message in context:
http://cf-dev.70369.x6.nabble.com/cf-dev-abacus-Changing-how-resources-are-organized-tp2971p2991.html
Sent from the CF Dev mailing list archive at Nabble.com.