Re: Bosh version and stemcell for 225
Amit Kumar Gupta
Hey Mike,
I'm discussing with the PWS teams if there's a good way to announce that info.
Best,
Amit

On Mon, Dec 7, 2015 at 10:17 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:
Thanks Amit,
Re: persistence for apps?
Michael Maximilien
Excellent. Please make sure to comment if you have any. We want to address all comments by YE (BTW, thanks Amit for your comments).
Best,
Max

— Sent from Mailbox

On Fri, Dec 11, 2015 at 3:56 AM, Matthias Ender <Matthias.Ender(a)sas.com> wrote:
yes, that one would hit the spot!
Re: [cf-env] [abacus] Changing how resources are organized
Jean-Sebastien Delfino
Thanks Piotr,
> The main aggregation needed is at the resource type, however the aggregation within consumer by the resource id is also something we would like to access - for example to determine that an application used two different versions of node.

OK, so then that means a new aggregation level. Not rocket science, but a rather mechanical addition of a new aggregation level, similar to the existing ones, to the aggregator, reporting, tests, demos, schemas and API doc. I'm out on vacation tomorrow Friday, but tomorrow's IPM could be a good opportunity to get the team to point that story's work with Max -- and that way I won't be able to influence the pointing :).

> Instead of introducing resource type, the alternative approach could be to augment the consumer id with the resource id

Not sure how that would work, given that a consumer can use/consume multiple (service) resources, and this 'resource type' aggregation should work for all types of resources (not just runtime buildpack resources).

- Jean-Sebastien

On Thu, Dec 10, 2015 at 12:57 PM, Piotr Przybylski <piotrp(a)us.ibm.com> wrote:
The main aggregation needed is at the resource type, however the
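For illustration, here is a rough sketch of what a consumer-level usage report with that extra resource-type aggregation level might look like. The structure and field names below are assumptions for the discussion, not the actual Abacus report schema:

# Hypothetical report shape -- not the real Abacus schema. It adds a
# resource_type level between the consumer and the existing per-resource-id
# aggregation, so usage rolls up per type while staying visible per id
# (e.g. an app that used two different versions of node).
consumer_report = {
    "consumer_id": "app:example-app-guid",
    "resource_types": [
        {
            "resource_type_id": "node",  # the new aggregation level
            "aggregated_usage": [{"metric": "memory", "quantity": 1024}],
            "resources": [
                {"resource_id": "node-v0.10",
                 "aggregated_usage": [{"metric": "memory", "quantity": 512}]},
                {"resource_id": "node-v0.12",
                 "aggregated_usage": [{"metric": "memory", "quantity": 512}]},
            ],
        },
    ],
}

# A report consumer can then read either level:
for rt in consumer_report["resource_types"]:
    print(rt["resource_type_id"], [r["resource_id"] for r in rt["resources"]])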
Re: persistence for apps?
Gwenn Etourneau
Agree with the Swift solution. Swift is S3-compatible (
https://wiki.openstack.org/wiki/Swift/APIFeatureComparison) and you can use the EC2-style credential API to get Amazon-like credentials (Access Key, Secret Key). Unless you are doing something exotic, Swift should be the way to go without any change in your code.

On Fri, Dec 11, 2015 at 4:56 AM, Matthias Ender <Matthias.Ender(a)sas.com> wrote:
yes, that one would hit the spot!
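A minimal sketch of what "without any change in your code" looks like in practice, assuming boto3 and placeholder endpoint/credentials (the Swift S3 middleware has to be enabled on the OpenStack side):

import boto3

# Placeholder endpoint and credentials -- in practice they come from your
# OpenStack deployment's Swift/EC2 credential API.
s3 = boto3.client(
    "s3",
    endpoint_url="https://swift.example.com",  # Swift's S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The same calls an app would make against real S3 work unchanged:
s3.put_object(Bucket="my-bucket", Key="hello.txt", Body=b"hello from swift")
print(s3.get_object(Bucket="my-bucket", Key="hello.txt")["Body"].read())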
Re: [cf-env] [abacus] Changing how resources are organized
Piotr Przybylski <piotrp@...>
The main aggregation needed is at the resource type, however the aggregation within consumer by the resource id is also something we would like to access - for example to determine that an application used two different versions of node. Instead of introducing resource type, the alternative approach could be to augment the consumer id with the resource id.

Piotr

-----Jean-Sebastien Delfino <jsdelfino@...> wrote: -----
To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev@...>
From: Jean-Sebastien Delfino <jsdelfino@...>
Date: 12/09/2015 11:51AM
Subject: [cf-dev] Re: Re: Re: [cf-env] [abacus] Changing how resources are organized

It depends if you still want usage aggregation at both the resource_id and resource_type_id levels (more changes, as that'll add another aggregation level to the reports) or if you only need aggregation at the resource_type_id level (and are effectively treating that resource_type_id as a 'more convenient' resource_id).

What aggregation levels do you need - both, or just aggregation at that resource_type_id level?

- Jean-Sebastien

On Mon, Dec 7, 2015 at 3:19 PM, dmangin <dmangin@...> wrote:
Yes, this is related to github issue 38.
Re: persistence for apps?
Matthias Ender <Matthias.Ender@...>
yes, that one would hit the spot!
From: Amit Gupta [mailto:agupta(a)pivotal.io]
Sent: Thursday, December 10, 2015 2:29 PM
To: Discussions about Cloud Foundry projects and the system overall. <cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Re: Re: Re: Re: persistence for apps?
Importance: High

Matthias,

Have you seen Dr. Max's proposal for apps with persistence: https://docs.google.com/document/d/1A1PVnwB7wdzrWq2ZTjNrDFULlmyTUSsOuWeih8kdUtw/edit#heading=h.vfuwctflv5u2 It looks like exactly what you're talking about.

Johannes is correct: for now you can't do anything like mount volumes in the container. Any sort of persistence has to be externalized to a service you connect to over the network. Depending on the type of data and how you interact with it, a document store or object store would be the way to go, but you could in principle use a relational database, key-value store, etc. Swift will give you S3 and OpenStack compatibility, so given that you're going to need a new implementation anyway, Swift might be a good choice.

Best,
Amit

On Thu, Dec 10, 2015 at 8:14 AM, Johannes Hiemer <jvhiemer(a)gmail.com> wrote:
My pleasure, Matthias. :-) Swift should be an easy way to go if you know the S3 API quite well.

On 10.12.2015, at 16:53, Matthias Ender <Matthias.Ender(a)sas.com> wrote:
Thanks, Johannes. We actually have an implementation that uses S3, but want to also be able to support OpenStack, on-premise. Rather than re-implementing in Swift, NFS would be an easier path from the app development side. But if there is no path on the CF side, we'll have to rethink.

From: Johannes Hiemer [mailto:jvhiemer(a)gmail.com]
Sent: Thursday, December 10, 2015 10:21 AM
To: Discussions about Cloud Foundry projects and the system overall. <cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Re: persistence for apps?

Hi Matthias,
the assumption you have is wrong. There are two issues regarding your suggestion:
1) you don't have any control on the CF side (client) over NFS in Warden containers. As far as I know this won't be the case for Diego either.
2) you should stick with solutions like Swift or S3 for sharing data, which is the recommended way for cloud-native applications.
What kind of data are you going to share between the apps?

Kind regards
Johannes Hiemer

On 10.12.2015, at 16:15, Matthias Ender <Matthias.Ender(a)sas.com> wrote:
We are looking at solutions to persist and share directory-type information among a couple of apps within our application stack. NFS comes to mind. How would one go about that? A manifest modification to mount the NFS share on the runners, I assume. How would the apps then get access? A volume mount on the Warden container? But where to specify that? Or am I thinking about this the wrong way?
thanks for any suggestions,
Matthias
Re: persistence for apps?
Amit Kumar Gupta
Matthias,
Have you seen Dr. Max's proposal for apps with persistence: https://docs.google.com/document/d/1A1PVnwB7wdzrWq2ZTjNrDFULlmyTUSsOuWeih8kdUtw/edit#heading=h.vfuwctflv5u2 It looks like exactly what you're talking about.

Johannes is correct: for now you can't do anything like mount volumes in the container. Any sort of persistence has to be externalized to a service you connect to over the network. Depending on the type of data and how you interact with it, a document store or object store would be the way to go, but you could in principle use a relational database, key-value store, etc. Swift will give you S3 and OpenStack compatibility, so given that you're going to need a new implementation anyway, Swift might be a good choice.

Best,
Amit

On Thu, Dec 10, 2015 at 8:14 AM, Johannes Hiemer <jvhiemer(a)gmail.com> wrote:
My pleasure, Matthias. :-)
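On the point above about externalizing persistence to a service reached over the network: in CF, a bound service's credentials arrive in the app's VCAP_SERVICES environment variable. A minimal sketch of reading them, where the service label "my-object-store" and the credential field names are hypothetical:

import json
import os

# VCAP_SERVICES maps service labels to lists of bound instances.
# "my-object-store" and the credential field names are illustrative only;
# the actual keys depend on the service broker.
services = json.loads(os.environ["VCAP_SERVICES"])
creds = services["my-object-store"][0]["credentials"]
print(creds["access_key_id"], creds["secret_access_key"])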
Re: Cloud Foundry Org/Space Metadata Synchronization
Chaskin Saroff <chaskin.saroff@...>
Hey Dieu,
I appreciate the response. It sounds like keeping this data synchronized in real time is just not practical at the moment.
Thanks,
Chaskin

On Thu, Dec 10, 2015 at 10:29 AM Dieu Cao <dcao(a)pivotal.io> wrote:
Hi Chaskin,
Re: Import large dataset to Postgres instance in CF
Hi Siva,
We've been working at Orange on a solution which dumps an existing db to an S3-compatible endpoint and then reimports from the S3 bucket into a db instance (see the mailing list announcement in [1] and specs in [2]). The implementation at [3] is still at an early stage and currently lacks documentation beyond the specs. We'd be happy to get feedback from the community.

While this does not directly address your issue, it might provide ideas:

a) within the corp network, manually take the data set (e.g. a pg dump) and upload it to S3 using S3 CLIs (e.g. the riakcs service). Then ssh into one of your CF instances, download the dump from S3 and stream it into a pg client to import it into a CF-reachable instance (so as to avoid hitting the ephemeral FS limit)

b) if this process is recurrent and needs automation, then the service-db-dumper could potentially help. I'll think about extending the service db dumper to accept a remote S3 bucket as the source of a dump (currently it accepts a db URL to perform a dump from, and soon a service instance name/guid). If this service-db-dumper improvement were available, you could instantiate a service-db-dumper within your private CF instance, then instantiate a dump service instance from the S3 bucket where you would have uploaded the dump, then use the service-db-dumper to restore/import this dump into your pg instance accessible within CF.

Hope this helps,

Guillaume.

[1] http://cf-dev.70369.x6.nabble.com/cf-dev-Data-services-import-export-tp1717.html
[2] https://docs.google.com/document/d/1Y5vwWjvaUIwHI76XU63cAS8xEOJvN69-cNoCQRqLPqU/edit
[3] https://github.com/Orange-OpenSource/service-db-dumper

On Thu, Dec 10, 2015 at 6:35 AM, Nicholas Calugar <ncalugar(a)pivotal.io> wrote:
Hi Siva,
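A minimal sketch of option a), assuming a plain-SQL pg dump, a psql binary on the path, and placeholder bucket, key, endpoint and connection details; the dump is streamed from S3 straight into psql so it never touches the ephemeral filesystem:

import subprocess
import boto3

# Hypothetical names -- adjust to your environment.
BUCKET, KEY = "db-dumps", "mydb.sql"
TARGET_DB_URL = "postgres://user:secret@pg.example.com:5432/mydb"

s3 = boto3.client("s3", endpoint_url="https://s3.example.com")

# Pipe the dump from S3 directly into psql's stdin.
psql = subprocess.Popen(["psql", TARGET_DB_URL], stdin=subprocess.PIPE)
body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"]
for chunk in iter(lambda: body.read(1 << 20), b""):  # 1 MiB at a time
    psql.stdin.write(chunk)
psql.stdin.close()
psql.wait()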
Re: Cloud Foundry Org/Space Metadata Synchronization
Dieu Cao <dcao@...>
Hi Chaskin,
There is not a webhook or similar functionality currently available in CF to hook into changes of user roles or deletions of orgs and spaces. I have heard interest in such functionality in the past, but improvements in this area are not currently prioritized for the near term. There was an effort to come up with a proposal for notifications based on events in the past, but it has not moved forward.

It's possible you could, as an operator, set up a log drain for the cloud controller and trigger something based on the logs, but this is imprecise and the logs are subject to change.

-Dieu
CF CAPI PM

On Wed, Dec 9, 2015 at 6:51 PM, Chaskin Saroff <chaskin.saroff(a)gmail.com> wrote:
As a project requirement, I'm attempting to extend some user preferences
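To make that log-drain idea concrete, a rough sketch of the kind of listener an operator might run; the port and the event-type substrings being matched are assumptions, and as noted above the log format is not a stable contract:

import socketserver

# Substrings to watch for in cloud controller logs -- illustrative guesses;
# the log format can change between CF releases.
WATCHED = ("audit.organization.delete-request", "audit.space.delete-request")

class DrainHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for raw in self.rfile:  # one log line per iteration
            line = raw.decode("utf-8", errors="replace")
            if any(marker in line for marker in WATCHED):
                # A real integration would enqueue a re-sync job against
                # the CC API here instead of just printing.
                print("possible org/space deletion:", line.strip())

if __name__ == "__main__":
    # Port 5140 is an arbitrary choice for the drain endpoint.
    with socketserver.TCPServer(("0.0.0.0", 5140), DrainHandler) as server:
        server.serve_forever()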
Re: Quotas in CF
Dieu Cao <dcao@...>
Hi Rajesh,
I don't believe there are any correlations between memory quota and implications for routes or services. I think this varies widely based on what your app is. For example, a Node app and a Java app have very different memory footprints, which can also vary widely depending on what the app is responsible for, the code base, etc. As for routes, that varies again by organization, based on policy, etc. Service instances also vary widely by implementation, since they could simply be credentials to an existing resource or they could be new deployments of varying sizes, etc.

-Dieu
CF CAPI PM

On Wed, Dec 9, 2015 at 2:58 PM, Rajesh Jain <rajain(a)pivotal.io> wrote:
In cf at an org level you have quotas for
Re: App Container IP Address assignment on vSphere
Daya Shetty <daya.shetty@...>
Hi Eric,
Thanks for the detailed explanation! Makes perfect sense, as the network pool was calculated to be 10.254.0.0/22 and not 10.254.0.0/24.
Regards,
Daya
Bits Service Proposal
Simon D Moser
Hi everybody,
We have been putting together a proposal for a "bits service" - basically a service that will take bits (packages, droplets, etc.) and provide upload/download capabilities for them. This functionality lives in the CC today, but the general idea is to externalise it into a separate service that is reusable from both the cloud controller as well as Diego and potentially other consumers.

Read more in the document at https://docs.google.com/document/d/1kIjBuJJ0ZiJRPzMJW8dtce26jhAHbK7KotY9416YMEI/edit# ! Please join the discussion over the next few weeks - we'd like to start working on that early in the new year.

Mit freundlichen Grüßen / Kind regards

Simon Moser

Senior Technical Staff Member / IBM Master Inventor
Bluemix Application Platform Lead Architect
Dept. C727, IBM Research & Development Boeblingen

-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland
Schoenaicher Str. 220
71032 Boeblingen
Phone: +49-7031-16-4304
Fax: +49-7031-16-4890
E-Mail: smoser(a)de.ibm.com
-------------------------------------------------------------------------------------------------------------------------------------------
IBM Deutschland Research & Development GmbH / Vorsitzender des Aufsichtsrats: Martina Koederitz
Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, HRB 243294

** Great minds discuss ideas; average minds discuss events; small minds discuss people. Eleanor Roosevelt
Re: persistence for apps?
Johannes Hiemer <jvhiemer@...>
My pleasure, Matthias. :-)
Swift should be an easy way to go if you know the S3 API quite well.

On 10.12.2015, at 16:53, Matthias Ender <Matthias.Ender(a)sas.com> wrote:
Re: persistence for apps?
Matthias Ender <Matthias.Ender@...>
Thanks, Johannes.
We actually have an implementation that uses S3, but want to also be able to support OpenStack, on-premise. Rather than re-implementing in Swift, NFS would be an easier path from the app development side. But if there is no path on the CF side, we'll have to rethink.

From: Johannes Hiemer [mailto:jvhiemer(a)gmail.com]
Sent: Thursday, December 10, 2015 10:21 AM
To: Discussions about Cloud Foundry projects and the system overall. <cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Re: persistence for apps?

Hi Matthias,
the assumption you have is wrong. There are two issues regarding your suggestion:
1) you don't have any control on the CF side (client) over NFS in Warden containers. As far as I know this won't be the case for Diego either.
2) you should stick with solutions like Swift or S3 for sharing data, which is the recommended way for cloud-native applications.
What kind of data are you going to share between the apps?

Kind regards
Johannes Hiemer

On 10.12.2015, at 16:15, Matthias Ender <Matthias.Ender(a)sas.com> wrote:
We are looking at solutions to persist and share directory-type information among a couple of apps within our application stack. NFS comes to mind. How would one go about that? A manifest modification to mount the NFS share on the runners, I assume. How would the apps then get access? A volume mount on the Warden container? But where to specify that? Or am I thinking about this the wrong way?
thanks for any suggestions,
Matthias
Re: persistence for apps?
Johannes Hiemer <jvhiemer@...>
Hi Mathias,
the assumption you have is wrong. There are two issues regarding your suggestion:
1) you don't have any control on the CF side (client) over NFS in Warden containers. As far as I know this won't be the case for Diego either.
2) you should stick with solutions like Swift or S3 for sharing data, which is the recommended way for cloud-native applications.
What kind of data are you going to share between the apps?

Kind regards
Johannes Hiemer

On 10.12.2015, at 16:15, Matthias Ender <Matthias.Ender(a)sas.com> wrote:
persistence for apps?
Matthias Ender <Matthias.Ender@...>
We are looking at solutions to persist and share directory-type information among a couple of apps within our application stack.
NFS comes to mind. How would one go about that? A manifest modification to mount the NFS share on the runners, I assume. How would the apps then get access? A volume mount on the Warden container? But where to specify that? Or am I thinking about this the wrong way?
thanks for any suggestions,
Matthias
Diego docker app launch issue with Diego's v0.1443.0
Anuj Jain <anuj17280@...>
Hi,
I deployed the latest CF v226 with Diego v0.1443.0 - I was able to successfully upgrade both deployments and verified that CF is working as expected. I am currently seeing a problem with Diego while trying to deploy any docker app - I am getting 'Server error, status code: 500, error code: 170016, message: Runner error: stop app failed: 503' - below you can see the last few lines of the CF_TRACE output.

I also noticed that while trying to upgrade to Diego v0.1443.0 it gave me an error while trying to upgrade the database job - the fix which I applied was to change debug2 to debug in the Diego manifest (path: properties => consul => log_level: debug).

RESPONSE: [2015-12-10T09:35:07-05:00]
HTTP/1.1 500 Internal Server Error
Content-Length: 110
Content-Type: application/json;charset=utf-8
Date: Thu, 10 Dec 2015 14:35:07 GMT
Server: nginx
X-Cf-Requestid: 8328f518-4847-41ec-5836-507d4bb054bb
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: 324d0fc0-2146-48f0-6265-755efb556e23::5c869046-8803-4dac-a620-8ca701f5bd22

{
  "code": 170016,
  "description": "Runner error: stop app failed: 503",
  "error_code": "CF-RunnerError"
}

FAILED
Server error, status code: 500, error code: 170016, message: Runner error: stop app failed: 503

FAILED
Server error, status code: 500, error code: 170016, message: Runner error: stop app failed: 503

FAILED
Error: Error executing cli core command

Starting app testing89 in org PAAS / space dev as admin...
FAILED
Server error, status code: 500, error code: 170016, message: Runner error: stop app failed: 503

- Anuj
Re: cf start of diego enabled app results in status code: 500 -- where to look for logs?
Tom Sherrod <tom.sherrod@...>
Hi Eric,
Thanks for the pointers. `bosh vms` -- all running. Only 1 api vm running. cloud_controller_ng.log is almost constantly being updated. Below is the 500 error capture:

{"timestamp":1449752019.6870825,"message":"desire.app.request","log_level":"info","source":"cc.nsync.listener.client","data":{"request_guid":"e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76","process_guid":"9f528159-1a7b-4876-92c9-34d040e9824d-29fd370c-04fd-4481-b432-39431460a963"},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/nsync_client.rb","lineno":15,"method":"desire_app"}

{"timestamp":1449752019.6899576,"message":"Cannot communicate with diego - tried to send start","log_level":"error","source":"cc.diego.runner","data":{"request_guid":"e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76"},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb","lineno":43,"method":"rescue in with_logging"}

{"timestamp":1449752019.6909509,"message":"Request failed: 500: {\"code\"=>10001, \"description\"=>\"getaddrinfo: Name or service not known\", \"error_code\"=>\"CF-CannotCommunicateWithDiegoError\", \"backtrace\"=>[\"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb:44:in `rescue in with_logging'\", \"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb:40:in `with_logging'\", \"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/diego/runner.rb:19:in `start'\", \"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/app_observer.rb:63:in `react_to_state_change'\", \"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/app_observer.rb:31:in `updated'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/models/runtime/app.rb:574:in `after_commit'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/model/base.rb:1920:in `block in _save'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in `block in remove_transaction'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in `each'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in `remove_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:156:in `_transaction'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:108:in `block in transaction'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/connecting.rb:250:in `block in synchronize'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/connection_pool/threaded.rb:98:in `hold'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/connecting.rb:250:in `synchronize'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:97:in `transaction'\", \"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/app/controllers/base/model_controller.rb:66:in `update'\", \"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/app/controllers/base/base_controller.rb:78:in `dispatch'\", \"/var/vcap/data/packages/cloud_controller_ng/80b067d32996057a4fc88fe1c553764671e7e8e8.1-1ea6a37e7b427cc868c0727a4bfec391725a79ab/cloud_controller_ng/lib/cloud_controller/rest_controller/routes.rb:16:in `block in define_route'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1609:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1609:in `block in compile!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in `[]'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in `block (3 levels) in route!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:993:in `route_eval'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:974:in `block (2 levels) in route!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1014:in `block in process_route'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1012:in `catch'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1012:in `process_route'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:972:in `block in route!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:971:in `each'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:971:in `route!'\", 
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1084:in `block in dispatch!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `block in invoke'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `catch'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `invoke'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1081:in `dispatch!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:906:in `block in call!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `block in invoke'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `catch'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:1066:in `invoke'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:906:in `call!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:894:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/xss_header.rb:18:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/path_traversal.rb:16:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/json_csrf.rb:18:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-protection-1.5.3/lib/rack/protection/frame_options.rb:31:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/nulllogger.rb:9:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/head.rb:13:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:181:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/sinatra-1.4.6/lib/sinatra/base.rb:2021:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/urlmap.rb:66:in `block in call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/urlmap.rb:50:in `each'\", 
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/urlmap.rb:50:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_logs.rb:21:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/vcap_request_id.rb:14:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:47:in `call_app'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/cors.rb:12:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_metrics.rb:12:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/rack-1.6.4/lib/rack/builder.rb:153:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/thin-1.6.3/lib/thin/connection.rb:86:in `block in pre_process'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/thin-1.6.3/lib/thin/connection.rb:84:in `catch'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/thin-1.6.3/lib/thin/connection.rb:84:in `pre_process'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/eventmachine-1.0.8/lib/eventmachine.rb:1062:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.2.0/gems/eventmachine-1.0.8/lib/eventmachine.rb:1062:in `block in spawn_threadpool'\"]}","log_level":"error","source":"cc.api","data":{"request_guid":"e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76"},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/sinatra/vcap.rb","lineno":53,"method":"block in registered"} {"timestamp":1449752019.691719,"message":"Completed 500 vcap-request-id: e046d57f-277a-4f81-452c-3b6fb892432e::53557eba-6212-4727-9b43-ebbb03426b76","log_level":"info","source":"cc.api","data":{},"thread_id":69975079694320,"fiber_id":69975079888800,"process_id":6524,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/middleware/request_logs.rb","lineno":23,"method":"call"} On Wed, Dec 9, 2015 at 5:53 PM, Eric Malm <emalm(a)pivotal.io> wrote:
Hi, Tom,
Re: App Container IP Address assignment on vSphere
Eric Malm <emalm@...>
Hi, Daya,
Based on https://github.com/cloudfoundry/warden/blob/master/warden/lib/warden/config.rb#L207-L216, the warden server uses the values of the network.pool_start_address and network.pool_size properties from the rendered warden.yml config file to construct a value for the pool_network property. Warden allocates a /30 subnet for each container, to have room for both the host-side and container-side IP addresses in the veth pair, as well as the broadcast address on the subnet. With the default values of 10.254.0.0 for the pool start address and 256 (= 2^8) for the pool size, warden then calculates the pool network to be 10.254.0.0/22. This /22 subnet includes the 10.254.2.x and 10.254.3.x addresses you have observed on your DEAs.

In any case, these 10.254.x.y IP addresses are used only internally on each DEA or Diego cell VM, so there's no conflict between these IP addresses on other VMs that run warden/garden containers. If you examine the 'nat' table in the iptables config, you'll see that for each container, warden creates a NAT rule that directs inbound traffic from a particular port on the host VM's eth0 interface to that same port on the container's host-side veth interface (the one with offset 2 in the container's /30 subnet). The DEA then provides this port as the value of the $PORT environment variable, so the CF app process running in the container can listen on that port for its web traffic.

Thanks,
Eric

On Wed, Dec 9, 2015 at 11:25 PM, Will Pragnell <wpragnell(a)pivotal.io> wrote:
Ah, sorry, my bad! I assumed Garden for some reason.
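A quick way to sanity-check that arithmetic - a standalone sketch using Python's ipaddress module, not anything Warden itself runs:

import ipaddress

# Warden defaults: pool start 10.254.0.0, pool size 256 containers.
# Each container gets a /30: network address, host-side veth,
# container-side veth, and broadcast -- 4 addresses in total.
pool_start = "10.254.0.0"
pool_size = 256

total_addresses = pool_size * 4                    # 1024 addresses
prefix = 32 - (total_addresses.bit_length() - 1)   # log2(1024) = 10 -> /22
pool = ipaddress.IPv4Network(f"{pool_start}/{prefix}")
print(pool)                                        # 10.254.0.0/22

# The /22 spans 10.254.0.x through 10.254.3.x, matching the addresses
# observed on the DEAs:
subnets = list(pool.subnets(new_prefix=30))
print(subnets[0], subnets[-1])                     # 10.254.0.0/30 10.254.3.252/30
print(list(subnets[0]))                            # the 4 addresses of one container's /30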