Re: Bosh version and stemcell for 225

Amit Kumar Gupta
 

Hey Mike,

I'm discussing with the PWS teams if there's a good way to announce that
info.

Best,
Amit

On Mon, Dec 7, 2015 at 10:17 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Thanks Amit,

In the past we've typically used the BOSH version deployed to PWS as an
indication of a BOSH version that has gone through some real use. I
understand the desire not to publish "recommended" BOSH versions along with
release versions. But it would be nice to know which BOSH versions are
deployed to PWS, similar to how we know when a cf-release has been
deployed to PWS.

What team manages bosh deploys to PWS? Should I be requesting this
information from them instead?

Thanks,
Mike

On Mon, Dec 7, 2015 at 8:18 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Added:

* BOSH Release Version: bosh/223
* BOSH Stemcell Version(s): bosh-aws-xen-hvm-ubuntu-trusty-go_agent/3104

Note, as we are decoupling our OSS release process from Pivotal's release
process, a couple things will change going forward:

1. We will provide (soft) recommendations for stemcells on all core
supported IaaSes: AWS, vSphere, OpenStack, and BOSH-Lite

2. We will not provide BOSH Release Version recommendations. It's
exceedingly rare that the BOSH release version matters: existing
deployments can almost surely continue to use their existing BOSH, and new
deployments can almost surely pick up the latest BOSH. In the medium term,
we will begin to leverage upcoming features in BOSH that may change the
structure of the job specs in the various releases, at which point we will
make clear mention of it in the release notes, but we will not publish
recommended BOSH versions on an ongoing basis.

Best,
Amit, OSS Release Integration PM

On Mon, Dec 7, 2015 at 11:07 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

We are preparing to release 225 and noticed the release notes don't list
a bosh and stemcell version. Does anyone have that info?

Mike


Re: persistence for apps?

Michael Maximilien
 

Excellent. Please make sure to comment, if you have any. We want to address all by YE (BTW, thanks Amit for your comments).

Best,

Max

Sent from Mailbox

On Fri, Dec 11, 2015 at 3:56 AM, Matthias Ender <Matthias.Ender(a)sas.com>
wrote:

yes, that one would hit the spot!
From: Amit Gupta [mailto:agupta(a)pivotal.io]
Sent: Thursday, December 10, 2015 2:29 PM
To: Discussions about Cloud Foundry projects and the system overall. <cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Re: Re: Re: Re: persistence for apps?
Importance: High
Matthias,
Have you seen Dr. Max's proposal for apps with persistence: https://docs.google.com/document/d/1A1PVnwB7wdzrWq2ZTjNrDFULlmyTUSsOuWeih8kdUtw/edit#heading=h.vfuwctflv5u2
It looks like exactly what you're talking about.
Johannes is correct: for now you can't do anything like mount volumes in the container. Any sort of persistence has to be externalized to a service you connect to over the network. Depending on the type of data and how you interact with it, a document store or object store would be the way to go, but you could in principle use a relational database, key-value store, etc. Swift will give you S3 and OpenStack compatibility, so given that you're going to need a new implementation anyway, Swift might be a good choice.
Best,
Amit
On Thu, Dec 10, 2015 at 8:14 AM, Johannes Hiemer <jvhiemer(a)gmail.com<mailto:jvhiemer(a)gmail.com>> wrote:
You're welcome, Matthias. :-)
Swift should be an easy way to go if you know the S3 API quite well.
On 10.12.2015, at 16:53, Matthias Ender <Matthias.Ender(a)sas.com<mailto:Matthias.Ender(a)sas.com>> wrote:
Thanks, Johannes.
We actually have an implementation that uses S3, but we also want to be able to support OpenStack, on-premise. Rather than re-implementing in Swift, NFS would be an easier path from the app development side.
But if there is no path on the cf side, we’ll have to rethink.
From: Johannes Hiemer [mailto:jvhiemer(a)gmail.com]
Sent: Thursday, December 10, 2015 10:21 AM
To: Discussions about Cloud Foundry projects and the system overall. <cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>>
Subject: [cf-dev] Re: persistence for apps?
Hi Matthias,
the assumption you have is wrong. There are two issues regarding your suggestion:
1) you don't have any control on the CF (client) side over NFS in Warden containers. As far as I know, this won't be possible with Diego either
2) you should stick with solutions like Swift or S3 for sharing data, which is the recommended way for cloud-native applications
What kind of data are you going to share between the apps?
Kind regards
Johannes Hiemer
On 10.12.2015, at 16:15, Matthias Ender <Matthias.Ender(a)sas.com<mailto:Matthias.Ender(a)sas.com>> wrote:
We are looking at solutions to persist and share directory-type information among a couple of apps within our application stack.
NFS comes to mind.
How would one go about that? A manifest modification to mount the nfs share on the runners, I assume. How would the apps then get access? A volume mount on the Warden container? But where to specify that?
Or am I thinking about this the wrong way?
thanks for any suggestions,
Matthias


Re: [cf-env] [abacus] Changing how resources are organized

Jean-Sebastien Delfino
 

Thanks Piotr,

The main aggregation needed is at the resource type; however, the
aggregation within consumer by the resource id is also something we would
like to access - for example, to determine that an application used two
different versions of Node.

OK, so that means a new aggregation level. Not rocket science, but a
rather mechanical addition of a new aggregation level, similar to the
existing ones, to the aggregator, reporting, tests, demos, schemas, and API
doc. I'm out on vacation tomorrow (Friday), but tomorrow's IPM could be a
good opportunity to get the team to point that story's work with Max -- and
that way I won't be able to influence the pointing :).

Instead of introducing resource type, the alternative approach could be
to augment the consumer id with the resource id

Not sure how that would work given that a consumer can use/consume multiple
(service) resources, and this 'resource type' aggregation should work for
all types of resources (not just runtime buildpack resources).
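
As a concrete sketch (a hypothetical report shape, not the actual Abacus schema; all field names here are assumptions), that extra level would nest the existing per-resource aggregation one level deeper, which is what would let a report show that an app used two different versions of Node:

# Hypothetical report shape, NOT the actual Abacus schema.
report = {
  'organization_id' => 'org-guid',
  'resource_types'  => [
    { 'resource_type_id' => 'node-buildpack',
      'aggregated_usage' => [{ 'metric' => 'memory', 'quantity' => 1024 }],
      'resources' => [
        { 'resource_id' => 'node-0.10',
          'aggregated_usage' => [{ 'metric' => 'memory', 'quantity' => 512 }] },
        { 'resource_id' => 'node-0.12',
          'aggregated_usage' => [{ 'metric' => 'memory', 'quantity' => 512 }] }
      ] }
  ]
}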

- Jean-Sebastien

On Thu, Dec 10, 2015 at 12:57 PM, Piotr Przybylski <piotrp(a)us.ibm.com>
wrote:

The main aggregation needed is at the resource type; however, the
aggregation within consumer by the resource id is also something we would
like to access - for example, to determine that an application used two
different versions of Node. Instead of introducing resource type, the
alternative approach could be to augment the consumer id with the resource
id.

Piotr

-----Jean-Sebastien Delfino <jsdelfino(a)gmail.com> wrote: -----
To: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>
From: Jean-Sebastien Delfino <jsdelfino(a)gmail.com>
Date: 12/09/2015 11:51AM
Subject: [cf-dev] Re: Re: Re: [cf-env] [abacus] Changing how resources are
organized


It depends on whether you still want usage aggregation at both the
resource_id and resource_type_id levels (more changes, as that'll add
another aggregation level to the reports) or whether you only need
aggregation at the resource_type_id level (and are effectively treating
that resource_type_id as a 'more convenient' resource_id).

What aggregation levels do you need, both, or just aggregation at that
resource_type_id level?

- Jean-Sebastien

On Mon, Dec 7, 2015 at 3:19 PM, dmangin <dmangin(a)us.ibm.com> wrote:

Yes, this is related to github issue 38.

https://github.com/cloudfoundry-incubator/cf-abacus/issues/38

We want to group the buildpacks by a resource_type_id and bind resource
definitions to the resource_type_id rather than to the resource_id.
However, when we make this change, how will this affect how abacus does
all of the calculations? The only change that I can think of is for abacus
to use the resource_type_id rather than the resource_id when creating the
reports.







Re: persistence for apps?

Gwenn Etourneau
 

Agree with the Swift solution. Swift is S3 compatible (
https://wiki.openstack.org/wiki/Swift/APIFeatureComparison) and you can
use the EC2 API to get Amazon-like credentials (Access Key, Secret Key).


Unless you are doing something exotic, Swift should be the way to go without
any change in your code.
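
As a rough sketch in Ruby (the Swift endpoint and environment variable names below are assumptions; the options are from the aws-sdk v2 gem), pointing an S3 client at Swift's S3-compatible API could look like:

require 'aws-sdk'  # aws-sdk v2 gem

# EC2-style keys issued by OpenStack's EC2 credentials extension; the env
# var names and Swift endpoint here are hypothetical.
s3 = Aws::S3::Client.new(
  access_key_id:     ENV['EC2_ACCESS_KEY'],
  secret_access_key: ENV['EC2_SECRET_KEY'],
  endpoint:          'https://swift.example.com',
  region:            'us-east-1',  # required by the SDK, ignored by Swift
  force_path_style:  true          # Swift usually expects path-style URLs
)

s3.put_object(bucket: 'shared-data', key: 'reports/latest.csv', body: 'a,b,c')
puts s3.get_object(bucket: 'shared-data', key: 'reports/latest.csv').body.read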

On Fri, Dec 11, 2015 at 4:56 AM, Matthias Ender <Matthias.Ender(a)sas.com>
wrote:

yes, that one would hit the spot!



*From:* Amit Gupta [mailto:agupta(a)pivotal.io]
*Sent:* Thursday, December 10, 2015 2:29 PM
*To:* Discussions about Cloud Foundry projects and the system overall. <
cf-dev(a)lists.cloudfoundry.org>
*Subject:* [cf-dev] Re: Re: Re: Re: persistence for apps?
*Importance:* High



Matthias,



Have you seen Dr. Max's proposal for apps with persistence:
https://docs.google.com/document/d/1A1PVnwB7wdzrWq2ZTjNrDFULlmyTUSsOuWeih8kdUtw/edit#heading=h.vfuwctflv5u2



It looks like exactly what you're talking about.



Johannes is correct: for now you can't do anything like mount volumes in
the container. Any sort of persistence has to be externalized to a service
you connect to over the network. Depending on the type of data and how you
interact with it, a document store or object store would be the way to go,
but you could in principle use a relational database, key value store,
etc. Swift will give you S3 and OpenStack compatibility, so given that
you're going to need a new implementation anyway, Swift might be a good
choice.



Best,

Amit



On Thu, Dec 10, 2015 at 8:14 AM, Johannes Hiemer <jvhiemer(a)gmail.com>
wrote:

You're welcome, Matthias. :-)



Swift should be an easy way to go if you know the S3 API quite well.






On 10.12.2015, at 16:53, Matthias Ender <Matthias.Ender(a)sas.com> wrote:

Thanks, Johannes.



We actually have an implementation that uses S3, but we also want to be
able to support OpenStack, on-premise. Rather than re-implementing in
Swift, NFS would be an easier path from the app development side.



But if there is no path on the cf side, we’ll have to rethink.



*From:* Johannes Hiemer [mailto:jvhiemer(a)gmail.com <jvhiemer(a)gmail.com>]
*Sent:* Thursday, December 10, 2015 10:21 AM
*To:* Discussions about Cloud Foundry projects and the system overall. <
cf-dev(a)lists.cloudfoundry.org>
*Subject:* [cf-dev] Re: persistence for apps?



Hi Matthias,

the assumption you have is wrong. There are two issues regarding your
suggestion:



1) you don't have any control on the CF (client) side over NFS in Warden
containers. As far as I know, this won't be possible with Diego either

2) you should stick with solutions like Swift or S3 for sharing data,
which is the recommended way for cloud-native applications



What kind of data are you going to share between the apps?

Kind regards



Johannes Hiemer






On 10.12.2015, at 16:15, Matthias Ender <Matthias.Ender(a)sas.com> wrote:

We are looking at solutions to persist and share directory-type
information among a couple of apps within our application stack.

NFS comes to mind.

How would one go about that? A manifest modification to mount the nfs
share on the runners, I assume. How would the apps then get access? A
volume mount on the Warden container? But where to specify that?



Or am I thinking about this the wrong way?



thanks for any suggestions,

Matthias





Re: [cf-env] [abacus] Changing how resources are organized

Piotr Przybylski <piotrp@...>
 

The main aggregation needed is at the resource type; however, the aggregation within consumer by the resource id is also something we would like to access - for example, to determine that an application used two different versions of Node. Instead of introducing resource type, the alternative approach could be to augment the consumer id with the resource id.

Piotr

-----Jean-Sebastien Delfino <jsdelfino@...> wrote: -----
To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev@...>
From: Jean-Sebastien Delfino <jsdelfino@...>
Date: 12/09/2015 11:51AM
Subject: [cf-dev] Re: Re: Re: [cf-env] [abacus] Changing how resources are organized

It depends on whether you still want usage aggregation at both the resource_id and resource_type_id levels (more changes, as that'll add another aggregation level to the reports) or whether you only need aggregation at the resource_type_id level (and are effectively treating that resource_type_id as a 'more convenient' resource_id).

What aggregation levels do you need, both, or just aggregation at that resource_type_id level?

- Jean-Sebastien


On Mon, Dec 7, 2015 at 3:19 PM, dmangin <dmangin@...> wrote:
Yes, this is related to github issue 38.

https://github.com/cloudfoundry-incubator/cf-abacus/issues/38

We want to group the buildpacks by a resource_type_id and bind resource
definitions to the resource_type_id rather than to the resource_id. However,
when we make this change, how will this affect how abacus does all of
the calculations? The only change that I can think of is for abacus to use
the resource_type_id rather than the resource_id when creating the reports.








Re: persistence for apps?

Matthias Ender <Matthias.Ender@...>
 

yes, that one would hit the spot!

From: Amit Gupta [mailto:agupta(a)pivotal.io]
Sent: Thursday, December 10, 2015 2:29 PM
To: Discussions about Cloud Foundry projects and the system overall. <cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Re: Re: Re: Re: persistence for apps?
Importance: High

Matthias,

Have you seen Dr. Max's proposal for apps with persistence: https://docs.google.com/document/d/1A1PVnwB7wdzrWq2ZTjNrDFULlmyTUSsOuWeih8kdUtw/edit#heading=h.vfuwctflv5u2

It looks like exactly what you're talking about.

Johannes is correct: for now you can't do anything like mount volumes in the container. Any sort of persistence has to be externalized to a service you connect to over the network. Depending on the type of data and how you interact with it, a document store or object store would be the way to go, but you could in principle use a relational database, key-value store, etc. Swift will give you S3 and OpenStack compatibility, so given that you're going to need a new implementation anyway, Swift might be a good choice.

Best,
Amit

On Thu, Dec 10, 2015 at 8:14 AM, Johannes Hiemer <jvhiemer(a)gmail.com<mailto:jvhiemer(a)gmail.com>> wrote:
You're welcome, Matthias. :-)

Swift should be an easy way to go if you know the S3 API quite well.



On 10.12.2015, at 16:53, Matthias Ender <Matthias.Ender(a)sas.com<mailto:Matthias.Ender(a)sas.com>> wrote:
Thanks, Johannes.

We actually have an implementation that uses S3, but we also want to be able to support OpenStack, on-premise. Rather than re-implementing in Swift, NFS would be an easier path from the app development side.

But if there is no path on the cf side, we’ll have to rethink.

From: Johannes Hiemer [mailto:jvhiemer(a)gmail.com]
Sent: Thursday, December 10, 2015 10:21 AM
To: Discussions about Cloud Foundry projects and the system overall. <cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>>
Subject: [cf-dev] Re: persistence for apps?

Hi Matthias,
the assumption you have is wrong. There are two issues regarding your suggestion:

1) you don't have any control on the CF (client) side over NFS in Warden containers. As far as I know, this won't be possible with Diego either
2) you should stick with solutions like Swift or S3 for sharing data, which is the recommended way for cloud-native applications

What kind of data are you going to share between the apps?

Kind regards

Johannes Hiemer



On 10.12.2015, at 16:15, Matthias Ender <Matthias.Ender(a)sas.com<mailto:Matthias.Ender(a)sas.com>> wrote:
We are looking at solutions to persist and share directory-type information among a couple of apps within our application stack.
NFS comes to mind.
How would one go about that? A manifest modification to mount the nfs share on the runners, I assume. How would the apps then get access? A volume mount on the Warden container? But where to specify that?

Or am I thinking about this the wrong way?

thanks for any suggestions,
Matthias


Re: persistence for apps?

Amit Kumar Gupta
 

Matthias,

Have you seen Dr. Max's proposal for apps with persistence:
https://docs.google.com/document/d/1A1PVnwB7wdzrWq2ZTjNrDFULlmyTUSsOuWeih8kdUtw/edit#heading=h.vfuwctflv5u2

It looks like exactly what you're talking about.

Johannes is correct: for now you can't do anything like mount volumes in
the container. Any sort of persistence has to be externalized to a service
you connect to over the network. Depending on the type of data and how you
interact with it, a document store or object store would be the way to go,
but you could in principle use a relational database, key value store,
etc. Swift will give you S3 and OpenStack compatibility, so given that
you're going to need a new implementation anyway, Swift might be a good
choice.
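
For example, here is a minimal Ruby sketch of how an app would discover such an externalized store (the service instance name and credential keys are assumptions; the actual keys depend on the service broker):

require 'json'

# Find the credentials of a bound object-store service in VCAP_SERVICES.
# 'my-object-store' and the credential keys are hypothetical names.
services = JSON.parse(ENV['VCAP_SERVICES'] || '{}')
instance = services.values.flatten.find { |s| s['name'] == 'my-object-store' }
creds    = instance && instance['credentials']

# creds would typically carry the endpoint and keys an S3/Swift client needs.
puts creds && creds['host']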

Best,
Amit

On Thu, Dec 10, 2015 at 8:14 AM, Johannes Hiemer <jvhiemer(a)gmail.com> wrote:

You're welcome, Matthias. :-)

Swift should be an easy way to go if you know the S3 API quite well.



On 10.12.2015, at 16:53, Matthias Ender <Matthias.Ender(a)sas.com> wrote:

Thanks, Johannes.



We actually have an implementation that uses S3, but we also want to be
able to support OpenStack, on-premise. Rather than re-implementing in
Swift, NFS would be an easier path from the app development side.



But if there is no path on the cf side, we’ll have to rethink.



*From:* Johannes Hiemer [mailto:jvhiemer(a)gmail.com <jvhiemer(a)gmail.com>]
*Sent:* Thursday, December 10, 2015 10:21 AM
*To:* Discussions about Cloud Foundry projects and the system overall. <
cf-dev(a)lists.cloudfoundry.org>
*Subject:* [cf-dev] Re: persistence for apps?



Hi Matthias,

the assumption you have is wrong. There are two issues regarding your
suggestion:



1) you don't have any control on the CF (client) side over NFS in Warden
containers. As far as I know, this won't be possible with Diego either

2) you should stick with solutions like Swift or S3 for sharing data,
which is the recommended way for cloud-native applications



What kind of data are you going to share between the apps?

Kind regards



Johannes Hiemer






On 10.12.2015, at 16:15, Matthias Ender <Matthias.Ender(a)sas.com> wrote:

We are looking at solutions to persist and share directory-type
information among a couple of apps within our application stack.

NFS comes to mind.

How would one go about that? A manifest modification to mount the nfs
share on the runners, I assume. How would the apps then get access? A
volume mount on the Warden container? But where to specify that?



Or am I thinking about this the wrong way?



thanks for any suggestions,

Matthias




Re: Cloud Foundry Org/Space Metadata Synchronization

Chaskin Saroff <chaskin.saroff@...>
 

Hey Dieu,

I appreciate the response. It sounds like keeping this data synchronized
in real time is just not practical at the moment.

Thanks,
Chaskin

On Thu, Dec 10, 2015 at 10:29 AM Dieu Cao <dcao(a)pivotal.io> wrote:

Hi Chaskin,

There is not a webhook or similar functionality currently available in CF
to hook into changes of user roles or deletions of orgs and spaces.
I have heard interest in such functionality in the past, but improvements
in this area are not currently prioritized for the near term.

There was an effort to come up with a proposal for notifications based on
events in the past but it has not moved forward.

It's possible that, as an operator, you could set up a log drain for Cloud
Controller and trigger something based on logs, but this is imprecise and
the logs are subject to change.

-Dieu
CF CAPI PM



On Wed, Dec 9, 2015 at 6:51 PM, Chaskin Saroff <chaskin.saroff(a)gmail.com>
wrote:

As a project requirement, I'm attempting to add some user preferences
to an org/space. The requirement includes basic CRUD operations for
these preferences, with each preference being **user level**. This means
that each user gets their own preferences for each org that they are a part
of. For example, a user Bob in orgA should be able to set his first orgA
preference to true and his second orgA preference to "banana".

At the moment, my architecture has these preferences stored in a couchdb
database outside of cloud foundry. This approach works up to the point
where a user is removed from an org that they have set preferences in. The
same issue arises when an org is deleted and these same preference ideas
and issues can be extended to spaces as well.

My question is: is there any way to keep my preferences database up to
date with the data living in CF (via webhooks, etc.)? Alternatively, is
there some other method of storing these preferences that will mitigate
these synchronization issues?

Hopefully this makes sense, but please ask for clarity if something isn't
clicking.

The very best regards,
Chaskin


Re: Import large dataset to Postgres instance in CF

Guillaume Berche
 

Hi Siva,

We've been working at Orange on a solution which dumps an existing db to
an S3-compatible endpoint and then reimports from the S3 bucket into a db
instance (see the mailing list announcement in [1] and the specs in [2]).
The implementation at [3] is still at an early stage and currently lacks
documentation beyond the specs. We'd be happy to get feedback from the
community. While this does not directly address your issue, it might
provide ideas:

a) Within the corp network, manually export the data set (e.g. a pg dump)
and upload it to S3 using S3 CLIs (e.g. the riakcs service). Then ssh into
one of your CF instances, download the dump from S3, and stream it into a
pg client to import it into a CF-reachable instance (so as to avoid
hitting the ephemeral FS limit).

b) If this process is recurrent and needs automation, then the
service-db-dumper could potentially help.

I'll think about extending the service db dumper to accept a remote S3
bucket as the source of a dump (currently it accepts a db URL to perform a
dump from, and soon a service instance name/guid)

If this service-db-dumper improvement were available, you could
instantiate a service-db-dumper within your private CF instance, then
instantiate a dump service instance from the S3 bucket where you would
have uploaded the dump, and then use the service-db-dumper to
restore/import this dump into your pg instance accessible within CF.
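
As a very rough Ruby sketch of step a) run from inside a CF container (the service name, credential keys, and dump URL are hypothetical, and splitting on ";\n" is naive -- it breaks on functions and COPY blocks -- so a real import would usually go through a proper pg client):

require 'json'
require 'open-uri'
require 'pg'

# Look up the bound postgres credentials in VCAP_SERVICES (keys assumed).
creds = JSON.parse(ENV['VCAP_SERVICES']).values.flatten
            .find { |s| s['name'] == 'my-postgres' }['credentials']

conn = PG.connect(host: creds['host'], port: creds['port'],
                  dbname: creds['database'], user: creds['username'],
                  password: creds['password'])

# Stream the dump (e.g. from a presigned S3 URL) statement by statement,
# so the file never lands on the ephemeral filesystem.
open('https://s3.example.com/bucket/dump.sql') do |io|
  io.each_line(";\n") { |stmt| conn.exec(stmt) unless stmt.strip.empty? }
end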

Hope this helps,

Guillaume.

[1]
http://cf-dev.70369.x6.nabble.com/cf-dev-Data-services-import-export-tp1717.html
[2]
https://docs.google.com/document/d/1Y5vwWjvaUIwHI76XU63cAS8xEOJvN69-cNoCQRqLPqU/edit
[3] https://github.com/Orange-OpenSource/service-db-dumper

On Thu, Dec 10, 2015 at 6:35 AM, Nicholas Calugar <ncalugar(a)pivotal.io>
wrote:

Hi Siva,

1. If you run the PostgreSQL server, you likely want to temporarily open the
firewall to load data, or get on a jump box of some sort that can access the
database. It's not really a CF issue at this point; it's a general issue of
seeding a database out-of-band from the application server.
2. If the above isn't an option and your CF is running Diego, you
could use SSH to get onto an app container after SCPing the data to that
container.
3. The only other option I can think of is writing a simple app that
you can push to CF to do the import.

Hope that helps,

Nick

On Wed, Dec 9, 2015 at 3:08 PM Siva Balan <mailsiva(a)gmail.com> wrote:

Hi Nick,
Your Option 1 (using the psql CLI) is not possible, since there is a firewall
that only allows connections from CF apps to the postgres DB. Apps like the
psql CLI that run outside of CF have no access to the postgres DB.
I just wanted to get some thoughts from this community, since I presume
many have faced a similar circumstance: importing large sets of
data into a DB which is behind a firewall and accessible only through CF
apps.

Thanks
Siva

On Wed, Dec 9, 2015 at 2:27 PM, Nicholas Calugar <ncalugar(a)pivotal.io>
wrote:

Hi Siva,

You'll have to tell us more about how your PostgreSQL and CF were
deployed, but you might be able to connect to it from your local machine
using the psql CLI and the credentials for one of your bound apps. This
takes CF out of the equation, other than the service binding providing the
credentials.

If this doesn't work, there are a number of things that could be in the
way, e.g. a firewall that only allows connections from CF, or the PostgreSQL
server being on a different subnet. You can then try using some machine as a
jump box that will allow access to the PostgreSQL server.

Nick

On Wed, Dec 9, 2015 at 9:40 AM Siva Balan <mailsiva(a)gmail.com> wrote:

Hello,
Below is my requirement:
I have a postgres instance deployed on our corporate CF deployment. I
have created a service instance of this postgres and bound my app to it.
Now I need to import a very large dataset(millions of records) into this
postgres instance.
As a CF user, I do not have access to any ports on CF other than 80 and
443. So I am not be able to use any of the native postgresql tools to
import the data. I can view and run simple SQL commands on this postgres
instance using the phppgadmin app that is also bound to my postgres service
instance.
Now, what is the best way for me to import this large dataset to my
postgres service instance?
All thoughts and suggestions welcome.

Thanks
Siva Balan

--
http://www.twitter.com/sivabalans



Re: Cloud Foundry Org/Space Metadata Synchronization

Dieu Cao <dcao@...>
 

Hi Chaskin,

There is not a webhook or similar functionality currently available in CF
to hook into changes of user roles or deletions of orgs and spaces.
I have heard interest in such functionality in the past, but improvements
in this area are not currently prioritized for the near term.

There was an effort to come up with a proposal for notifications based on
events in the past but it has not moved forward.

It's possible that, as an operator, you could set up a log drain for Cloud
Controller and trigger something based on logs, but this is imprecise and
the logs are subject to change.
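
Another imprecise workaround along the same lines is to periodically poll Cloud Controller's /v2/events for audit events and prune stored preferences accordingly. A minimal Ruby sketch (the event type name and token handling here are assumptions; check your CC's events documentation):

require 'net/http'
require 'json'

# Poll for space deletion audit events; the event type and token source
# below are assumptions, not guaranteed behavior.
uri = URI('https://api.example.com/v2/events?q=' +
          URI.encode_www_form_component('type:audit.space.delete-request'))
req = Net::HTTP::Get.new(uri)
req['Authorization'] = ENV['CF_OAUTH_TOKEN']  # e.g. output of `cf oauth-token`

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |h| h.request(req) }
JSON.parse(res.body)['resources'].each do |event|
  space_guid = event['entity']['space_guid']
  # prune any preferences stored for space_guid here
end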

-Dieu
CF CAPI PM



On Wed, Dec 9, 2015 at 6:51 PM, Chaskin Saroff <chaskin.saroff(a)gmail.com>
wrote:

As a project requirement, I'm attempting to add some user preferences
to an org/space. The requirement includes basic CRUD operations for
these preferences, with each preference being **user level**. This means
that each user gets their own preferences for each org that they are a part
of. For example, a user Bob in orgA should be able to set his first orgA
preference to true and his second orgA preference to "banana".

At the moment, my architecture has these preferences stored in a couchdb
database outside of cloud foundry. This approach works up to the point
where a user is removed from an org that they have set preferences in. The
same issue arises when an org is deleted and these same preference ideas
and issues can be extended to spaces as well.

My question is: is there any way to keep my preferences database up to
date with the data living in CF (via webhooks, etc.)? Alternatively, is
there some other method of storing these preferences that will mitigate
these synchronization issues?

Hopefully this makes sense, but please ask for clarity if something isn't
clicking.

The very best regards,
Chaskin


Re: Quotas in CF

Dieu Cao <dcao@...>
 

Hi Rajesh,

I don't believe there are any correlations between the memory quota and the
implications for routes or services.

I think this varies widely based on what your app is.
For example, a Node app and a Java app have very different memory
footprints, and memory use can also vary widely depending on what the app
is responsible for, the code base, etc.
As for routes, that varies again by organization, based on policy, etc.
Service instances also vary widely by implementation, since they could
simply be credentials to an existing resource or they could be new
deployments of varying sizes, etc.

-Dieu
CF CAPI PM

On Wed, Dec 9, 2015 at 2:58 PM, Rajesh Jain <rajain(a)pivotal.io> wrote:

In CF, at the org level, you have quotas for:
1. Memory
2. Routes
3. Services

Two questions on quotas for routes and services:

Is there any correlation or best practice for assigning quotas for routes
and services based on the memory quota for an org?
For example, a 16 GB quota can have 8 app instances of 2 GB per instance,
and assuming 2 routes per app, you could assign (though not scientifically)
a quota of 16 routes?
What about the service quota? And what is the memory footprint of a service
instance against the org quota?

Thanks, Rajesh


Re: App Container IP Address assignment on vSphere

Daya Shetty <daya.shetty@...>
 

Hi Eric,

Thanks for the detailed explanation! Makes perfect sense as the network pool was calculated to be 10.254.0.0/22 and not 10.254.0.0/24.

Regards
Daya


Bits Service Proposal

Simon D Moser
 

Hi everybody,

We have been putting together a proposal for a "bits service" - basically
a service that will take bits (packages, droplets, etc.) and provide
upload/download capabilities for those. It's functionality that lives in
the CC today, but the general idea is to externalise it into a separate
service that is reusable by both the Cloud Controller and Diego, and
potentially other consumers. Read more in the document at
https://docs.google.com/document/d/1kIjBuJJ0ZiJRPzMJW8dtce26jhAHbK7KotY9416YMEI/edit#
!
Please join the discussion over the next few weeks - we'd like to start
working on that early in the new year.

Mit freundlichen Grüßen / Kind regards

Simon Moser

Senior Technical Staff Member / IBM Master Inventor
Bluemix Application Platform Lead Architect
Dept. C727, IBM Research & Development Boeblingen

-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland
Schoenaicher Str. 220
71032 Boeblingen
Phone: +49-7031-16-4304
Fax: +49-7031-16-4890
E-Mail: smoser(a)de.ibm.com
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland Research & Development GmbH / Vorsitzender des
Aufsichtsrats: Martina Koederitz
Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht
Stuttgart, HRB 243294

**
Great minds discuss ideas; average minds discuss events; small minds
discuss people.
Eleanor Roosevelt


Re: persistence for apps?

Johannes Hiemer <jvhiemer@...>
 

You're welcome, Matthias. :-)

Swift should be an easy way to go if you know the S3 API quite well.

On 10.12.2015, at 16:53, Matthias Ender <Matthias.Ender(a)sas.com> wrote:

Thanks, Johannes.

We actually have an implementation that uses S3, but we also want to be able to support OpenStack, on-premise. Rather than re-implementing in Swift, NFS would be an easier path from the app development side.

But if there is no path on the cf side, we’ll have to rethink.

From: Johannes Hiemer [mailto:jvhiemer(a)gmail.com]
Sent: Thursday, December 10, 2015 10:21 AM
To: Discussions about Cloud Foundry projects and the system overall. <cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Re: persistence for apps?

Hi Matthias,
the assumption you have is wrong. There are two issues regarding your suggestion:

1) you don't have any control on the CF (client) side over NFS in Warden containers. As far as I know, this won't be possible with Diego either
2) you should stick with solutions like Swift or S3 for sharing data, which is the recommended way for cloud-native applications

What kind of data are you going to share between the apps?

Kind regards

Johannes Hiemer



On 10.12.2015, at 16:15, Matthias Ender <Matthias.Ender(a)sas.com> wrote:

We are looking at solutions to persist and share directory-type information among a couple of apps within our application stack.
NFS comes to mind.
How would one go about that? A manifest modification to mount the nfs share on the runners, I assume. How would the apps then get access? A volume mount on the Warden container? But where to specify that?

Or am I thinking about this the wrong way?

thanks for any suggestions,
Matthias


Re: persistence for apps?

Matthias Ender <Matthias.Ender@...>
 

Thanks, Johannes.

We actually have an implementation that uses S3, but we also want to be able to support OpenStack, on-premise. Rather than re-implementing in Swift, NFS would be an easier path from the app development side.

But if there is no path on the cf side, we’ll have to rethink.

From: Johannes Hiemer [mailto:jvhiemer(a)gmail.com]
Sent: Thursday, December 10, 2015 10:21 AM
To: Discussions about Cloud Foundry projects and the system overall. <cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Re: persistence for apps?

Hi Matthias,
the assumption you have is wrong. There are two issues regarding your suggestion:

1) you don't have any control on the CF (client) side over NFS in Warden containers. As far as I know, this won't be possible with Diego either
2) you should stick with solutions like Swift or S3 for sharing data, which is the recommended way for cloud-native applications

What kind of data are you going to share between the apps?

Kind regards

Johannes Hiemer



On 10.12.2015, at 16:15, Matthias Ender <Matthias.Ender(a)sas.com<mailto:Matthias.Ender(a)sas.com>> wrote:
We are looking at solutions to persist and share directory-type information among a couple of apps within our application stack.
NFS comes to mind.
How would one go about that? A manifest modification to mount the nfs share on the runners, I assume. How would the apps then get access? A volume mount on the Warden container? But where to specify that?

Or am I thinking about this the wrong way?

thanks for any suggestions,
Matthias


Re: persistence for apps?

Johannes Hiemer <jvhiemer@...>
 

Hi Matthias,
the assumption you have is wrong. There are two issues regarding your suggestion:

1) you don't have any control on the CF (client) side over NFS in Warden containers. As far as I know, this won't be possible with Diego either
2) you should stick with solutions like Swift or S3 for sharing data, which is the recommended way for cloud-native applications

What kind of data are you going to share between the apps?

Kind regards

Johannes Hiemer

On 10.12.2015, at 16:15, Matthias Ender <Matthias.Ender(a)sas.com> wrote:

We are looking at solutions to persist and share directory-type information among a couple of apps within our application stack.
NFS comes to mind.
How would one go about that? A manifest modification to mount the nfs share on the runners, I assume. How would the apps then get access? A volume mount on the Warden container? But where to specify that?

Or am I thinking about this the wrong way?

thanks for any suggestions,
Matthias


persistence for apps?

Matthias Ender <Matthias.Ender@...>
 

We are looking at solutions to persist and share directory-type information among a couple of apps within our application stack.
NFS comes to mind.
How would one go about that? A manifest modification to mount the nfs share on the runners, I assume. How would the apps then get access? A volume mount on the Warden container? But where to specify that?

Or am I thinking about this the wrong way?

thanks for any suggestions,
Matthias


Diego docker app launch issue with Diego's v0.1443.0

Anuj Jain <anuj17280@...>
 

Hi,

I deployed the latest CF v226 with Diego v0.1443.0. I was able to
successfully upgrade both deployments and verified that CF is working as
expected. I am currently seeing a problem with Diego while trying to deploy
any Docker app - I am getting *'Server error, status code: 500, error code:
170016, message: Runner error: stop app failed: 503'*. Below you can see
the last few lines of the CF_TRACE output.

I also noticed that while trying to upgrade to Diego v0.1443.0, I got an
error while upgrading the database job. The fix I applied was to change
debug2 to debug in the Diego manifest (path: properties => consul =>
log_level), as shown below.
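
For reference, the manifest fragment in question would look something like this (a sketch; surrounding keys omitted):

# Diego manifest fragment (sketch): the reported fix was changing the
# consul log level from debug2 to debug before upgrading the database job.
properties:
  consul:
    log_level: debug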


RESPONSE: [2015-12-10T09:35:07-05:00]
HTTP/1.1 500 Internal Server Error
Content-Length: 110
Content-Type: application/json;charset=utf-8
Date: Thu, 10 Dec 2015 14:35:07 GMT
Server: nginx
X-Cf-Requestid: 8328f518-4847-41ec-5836-507d4bb054bb
X-Content-Type-Options: nosniff
X-Vcap-Request-Id:
324d0fc0-2146-48f0-6265-755efb556e23::5c869046-8803-4dac-a620-8ca701f5bd22

{
"code": 170016,
"description": "Runner error: stop app failed: 503",
"error_code": "CF-RunnerError"
}

FAILED
Server error, status code: 500, error code: 170016, message: Runner error:
stop app failed: 503
FAILED
Server error, status code: 500, error code: 170016, message: Runner error:
stop app failed: 503
FAILED
Error: Error executing cli core command
Starting app testing89 in org PAAS / space dev as admin...

FAILED

Server error, status code: 500, error code: 170016, message: Runner error:
stop app failed: 503


- Anuj


Re: cf start of diego enabled app results in status code: 500 -- where to look for logs?

Tom Sherrod <tom.sherrod@...>
 

Hi Eric,

Thanks for the pointers.

`bosh vms` -- all running

Only 1 api vm running. cloud_controller_ng.log is almost constantly being
updated.

Below is the 500 error capture:

{"timestamp":1449752019.6870825,"message":"desire.app.request","log_level":"info","source":"cc.nsync.listener.client","data":{"request_guid":"e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76","process_guid":"9f528159-1a7b-4876-92c9-34d040e9824d-29fd370c-04fd-4481-b432-39431460a963"},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/nsync_client.rb","lineno":15,"method":"desire_app"}

{"timestamp":1449752019.6899576,"message":"Cannot communicate with diego -
tried to send
start","log_level":"error","source":"cc.diego.runner","data":{"request_guid":"e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76"},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb","lineno":43,"method":"rescue
in with_logging"}

{"timestamp":1449752019.6909509,"message":"Request failed: 500:
{\"code\"=>10001, \"description\"=>\"getaddrinfo: Name or service not
known\", \"error_code\"=>\"CF-CannotCommunicateWithDiegoError\",
\"backtrace\"=>[\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb:44:in
`rescue in with_logging'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb:40:in
`with_logging'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb:19:in
`start'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/app_observer.rb:63:in
`react_to_state_change'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/app_observer.rb:31:in
`updated'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/models/runtime/app.rb:574:in
`after_commit'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/model/base.rb:1920:in
`block in _save'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`block in remove_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`remove_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:156:in
`_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:108:in
`block in transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/connecting.rb:250:in
`block in synchronize'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/connection_pool/threaded.rb:98:in
`hold'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/connecting.rb:250:in
`synchronize'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:97:in
`transaction'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/app/controllers/base/model_controller.rb:66:in
`update'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/app/controllers/base/base_controller.rb:78:in
`dispatch'\",
\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/rest_controller/routes.rb:16:in
`block in define_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1609:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1609:in
`block in compile!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in
`[]'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in
`block (3 levels) in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:993:in
`route_eval'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in
`block (2 levels) in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1014:in
`block in process_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1012:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1012:in
`process_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:972:in
`block in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:971:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:971:in
`route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1084:in
`block in dispatch!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`block in invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1081:in
`dispatch!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:906:in
`block in call!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`block in invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in
`invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:906:in
`call!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:894:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/xss_header.rb:18:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/path_traversal.rb:16:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/json_csrf.rb:18:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/frame_options.rb:31:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/nulllogger.rb:9:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/head.rb:13:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:181:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:2021:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/urlmap.rb:66:in
`block in call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/urlmap.rb:50:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/urlmap.rb:50:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_logs.rb:21:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/vcap_request_id.rb:14:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:47:in
`call_app'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:12:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_metrics.rb:12:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/builder.rb:153:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/thin-1.6.3/lib/thin/connection.rb:86:in
`block in pre_process'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/thin-1.6.3/lib/thin/connection.rb:84:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/thin-1.6.3/lib/thin/connection.rb:84:in
`pre_process'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/eventmachine-1.0.8/lib/eventmachine.rb:1062:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/eventmachine-1.0.8/lib/eventmachine.rb:1062:in
`block in
spawn_threadpool'\"]}","log_level":"error","source":"cc.api","data":{"request_guid":"e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76"},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/sinatra/vcap.rb","lineno":53,"method":"block
in registered"}

{"timestamp":1449752019.691719,"message":"Completed 500 vcap-request-id:
e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76","log_level":"info","source":"cc.api","data":{},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_logs.rb","lineno":23,"method":"call"}

On Wed, Dec 9, 2015 at 5:53 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Tom,

It may be that Cloud Controller is unable to resolve the consul-provided
DNS entries for the CC-Bridge components, as that '10001 Unknown Error' 500
response sounds like this bug in the Diego tracker:
https://www.pivotaltracker.com/story/show/104066600. That 500 response
should be reflected as some sort of error in the CC log file, located by
default in /var/vcap/sys/log/cloud_controller_ng/cloud_controller_ng.log on
your CC VMs. It may even be helpful to follow that log in real-time with
`tail -f` while you try starting the Diego-targeted app via the CLI. To be
sure you capture it, you should tail that log file on each CC in your
deployment. In any case, a stack trace associated to that error would
likely help us identify what to check next.

Also, does `bosh vms` report any failing VMs in either the CF or the Diego
deployments?

Best,
Eric

On Wed, Dec 9, 2015 at 2:27 PM, Tom Sherrod <tom.sherrod(a)gmail.com> wrote:

I'm giving CF 225 and diego 0.1441.0 a run.
CF 225 is up and app deployed.
Stop app. cf enable-diego app. Start app:
FAILED
Server error, status code: 500, error code: 10001, message: An unknown
error occurred.
FAILED
Server error, status code: 500, error code: 10001, message: An unknown
error occurred.

CF_TRACE ends with:
REQUEST: [2015-12-09T17:17:37-05:00]
PUT
/v2/apps/02c68ddd-0596-4aab-8c05-a8f538d06712?async=true&inline-relations-depth=1
HTTP/1.1
Host: api.dev.foo.com
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: go-cli 6.14.0+2654a47 / darwin

{"state":"STARTED"}

RESPONSE: [2015-12-09T17:17:37-05:00]
HTTP/1.1 500 Internal Server Error
Content-Length: 99
Content-Type: application/json;charset=utf-8
Date: Wed, 09 Dec 2015 22:17:36 GMT
Server: nginx
X-Cf-Requestid: 6edf0ac8-384f-4db3-576a-6744b7eb4b8c
X-Content-Type-Options: nosniff
X-Vcap-Request-Id:
860d73f9-9415-478f-6c60-13e2e5ddde8c::80a4a687-7f2d-44c5-9b09-4e3c9fa07b68

{
"error_code": "UnknownError",
"description": "An unknown error occurred.",
"code": 10001
}


Where next to look for the broken piece?


Re: App Container IP Address assignment on vSphere

Eric Malm <emalm@...>
 

Hi, Daya,

Based on
https://github.com/cloudfoundry/warden/blob/master/warden/lib/warden/config.rb#L207-L216,
the warden server uses the values of the network.pool_start_address and
network.pool_size properties from the rendered warden.yml config file to
construct a value for the pool_network property. Warden allocates a /30
subnet for each container, to have room for both the host-side and
container-side IP addresses in the veth pair, as well as the broadcast
address on the subnet. With the default values of 10.254.0.0 for the pool
start address and 256 (= 2^8) for the pool size, warden then calculates the
pool network to be 10.254.0.0/22. This /22 subnet includes the 10.254.2.x
and 10.254.3.x addresses you have observed on your DEAs.

In any case, these 10.254.x.y IP addresses are used only internally on each
DEA or Diego cell VM, so there's no conflict between these IP addresses on
other VMs that run warden/garden containers. If you examine the 'nat' table
in the iptables config, you'll see that for each container, warden creates
a NAT rule that directs inbound traffic from a particular port on the host
VM's eth0 interface to that same port on the container's host-side veth
interface (the one with offset 2 in the container's /30 subnet). The DEA
then provides this port as the value of the $PORT environment variable, so
the CF app process running in the container can listen on that port for its
web traffic.
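
The arithmetic is easy to check; here is a small Ruby sketch reproducing the calculation described above:

require 'ipaddr'

# Each container gets a /30 (4 addresses), so a pool_size of 256 containers
# needs 256 * 4 = 1024 addresses, i.e. a /22 (32 - log2(1024) = 22).
pool_size = 256
prefix    = 32 - Math.log2(pool_size * 4).to_i
pool      = IPAddr.new("10.254.0.0/#{prefix}")

puts prefix                         # => 22
puts pool.include?('10.254.3.255')  # => true: the 10.254.2.x/3.x addresses fit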

Thanks,
Eric

On Wed, Dec 9, 2015 at 11:25 PM, Will Pragnell <wpragnell(a)pivotal.io> wrote:

Ah, sorry, my bad! I assumed Garden for some reason.

On 9 December 2015 at 21:15, Daya Shetty <daya.shetty(a)bnymellon.com>
wrote:

Will,

We are using warden containers in our deployment and I was referring to
the attributes defined in

./cf-release/jobs/dea_next/templates/warden.yml.erb

network:
pool_start_address: 10.254.0.0
pool_size: 256

and in ./cf-release/src/warden/warden/lib/warden/config.rb

def self.network_defaults
{
"pool_network" => "10.254.0.0/24",
"deny_networks" => [],
"allow_networks" => [],
"allow_host_access" => false,
"mtu" => 1500,
}
end

def self.network_schema
::Membrane::SchemaParser.parse do
{
# Preferred way to specify networks to pool
optional("pool_network") => String,

# Present for Backwards compatibility
optional("pool_start_address") => String,
optional("pool_size") => Integer,
optional("release_delay") => Integer,
optional("mtu") => Integer,

"deny_networks" => [String],
"allow_networks" => [String],
optional("allow_host_access") => bool,
}
end
end

Thanks
Daya