Re: Announcing Experimental support for Asynchronous Service Operations
Chip Childers <cchilders@...>
Awesome news! Long time coming, and it opens up a whole world of additional capabilities for users.
Nice work everyone!

On Jun 4, 2015, at 9:00 AM, Shannon Coen <scoen(a)pivotal.io> wrote:
Announcing Experimental support for Asynchronous Service Operations
Shannon Coen
On behalf of the Services API team, including Dojo participants from IBM
and SAP, I'm pleased to announce experimental availability and published documentation for this much-anticipated feature. As of cf-release v208 and CLI v6.11.1, Cloud Foundry supports an enhanced service broker integration for long-running provisioning, update, and delete operations. This significantly broadens the supported use cases for Cloud Foundry Marketplace Services, and I can't wait to hear what creative things the ecosystem does with it. Provision VMs, orchestrate clusters, install software, move data... yes, your broker can even open support tickets to have those things done manually!

This feature is currently considered experimental, as we'd like you all to review our docs, try out the feature, and give us feedback. We're very interested to hear about any confusion in the docs or the UX, and any sticky issues you encounter in implementation. Our goal is for our docs to enable a painless, intuitive (can we hope for joyful?) implementation experience.

We have not bumped the broker API yet for this feature. You'll notice that our documentation for the feature is separate from the stable API docs at this point. Once we're confident in the design (we're relying on your feedback!), we'll bump the broker API version, move the docs for asynchronous operations into the stable docs, AND implement support for asynchronous bind/create-key and unbind/delete-key.

Documentation:
- http://docs.cloudfoundry.org/services/asynchronous-operations.html
- http://docs.cloudfoundry.org/services/api.html

Example broker for AWS (contributed by IBM):
- http://docs.cloudfoundry.org/services/examples.html
- https://github.com/cloudfoundry-samples/go_service_broker

Demo of the feature presented at CF Summit 2015:
- https://youtu.be/Ij5KSKrAq9Q

tl;dr Cloud Foundry expects broker responses within 60 seconds. Now a broker can return an immediate response indicating that a provision, update, or delete operation is in progress. Cloud Foundry then returns a similar response to the client and begins polling the broker for the status of the operation. Users, via API clients, can discover the status of the operation ("in progress", "succeeded", or "failed"), and brokers can provide user-facing messages in response to each poll, which are exposed to users (e.g. "VMs provisioned, installing software, 30% complete").

Thank you,

Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.
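For readers who want a feel for the polling half of the contract before diving into the docs, below is a minimal sketch, in Go (the language of the IBM example broker linked above), of a broker endpoint reporting operation status. The route and JSON field names are illustrative assumptions; the three states and the user-facing description are the ones named in the tl;dr, so consult the asynchronous-operations documentation for the exact experimental contract.

package main

import (
	"encoding/json"
	"net/http"
)

// lastOperation carries the three states named above: "in progress",
// "succeeded", or "failed", plus an optional user-facing progress message.
// Field names and the route below are assumptions for illustration only.
type lastOperation struct {
	State       string `json:"state"`
	Description string `json:"description,omitempty"`
}

func main() {
	http.HandleFunc("/v2/service_instances/", func(w http.ResponseWriter, r *http.Request) {
		// A real broker would look up the long-running job for the
		// instance GUID in the path and report its actual state.
		op := lastOperation{
			State:       "in progress",
			Description: "VMs provisioned, installing software, 30% complete",
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(op)
	})
	http.ListenAndServe(":8080", nil)
}

Cloud Foundry keeps polling until the broker reports "succeeded" or "failed", which is what frees the broker from the 60-second response window.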
Re: Release Notes for v210
CF Runtime
Guillaume,
We run the pipelines using the Docker image built from cf-release/pipeline-image/Dockerfile, which checks out the spiff repo and builds it, so it should be 1.0.6 since that seems to be where master is currently.

Which SHA do you have checked out for cf-release/src/loggregator? Do you see:

metron_agent:
  deployment: (( meta.environment ))

at the bottom of cf-release/templates/cf-lamb.yml?

Joseph Palermo
CF Runtime Team

On Wed, Jun 3, 2015 at 1:17 PM, Guillaume Berche <bercheg(a)gmail.com> wrote:

Joseph,
Re: Backwards-incompatible NOAA library change
Long Nguyen
Thanks for letting us know!
Long

On 3 Jun 2015 18:28, "Erik Jasiak" <ejasiak(a)pivotal.io> wrote:

Hi all,
Backwards-incompatible NOAA library change
Erik Jasiak <ejasiak@...>
Hi all,
On Thursday, June 4th at ~noon MDT, the Loggregator NOAA [1] library will introduce a backwards-incompatible change, after feedback from other teams and the community. The NOAA library is used for consuming Cloud Foundry Loggregator data, including the firehose. Any update via "go get" will pull down these changes and break compilation.

Details on the change: NOAA is changing how it closes socket connections on requests. Previously, the Close() function in consumer.go [2] did not behave as expected: a client was required to close stopChan separately, and calling Close() on a noaa.Consumer that was not stopped or in a retry loop would do nothing. Now calling Close() stops the consumer, and none of the APIs take a stopChan. This is a much cleaner design that is also more in line with client expectations.

Other info: The Go language maintainers have taken the position that a repository should "never make backwards incompatible changes" [3]. We recognize that while Go may have taken this position for anyone using "go get", it makes iterating on a community API difficult. We will explore with the Cloud Foundry and Go communities how to better handle major API changes in the future.

Many thanks,
Erik Jasiak
PM - Loggregator, Logging Analytics Metrics Boulder

[1] https://github.com/cloudfoundry/noaa
[2] https://github.com/cloudfoundry/noaa/blob/master/consumer.go#L55
[3] http://golang.org/doc/faq#get_version
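To make the shape of the change concrete without pinning down NOAA's exact signatures (take those from the repo at [1]), here is a self-contained Go sketch of the new shutdown style: cancellation lives behind a Close() method owned by the consumer, instead of a stopChan managed by the caller. This is an illustration of the pattern, not NOAA's actual code.

package main

import (
	"fmt"
	"sync"
	"time"
)

// consumer mimics the post-change design: cancellation lives behind a
// Close() method instead of a caller-managed stopChan.
type consumer struct {
	stop chan struct{}
	once sync.Once
}

func newConsumer() *consumer {
	return &consumer{stop: make(chan struct{})}
}

// Close stops the consumer regardless of its current state; unlike the old
// API, callers no longer close a separate stopChan themselves.
func (c *consumer) Close() {
	c.once.Do(func() { close(c.stop) })
}

// tail streams fake log lines until Close is called.
func (c *consumer) tail(out chan<- string) {
	defer close(out)
	for i := 0; ; i++ {
		select {
		case <-c.stop:
			return
		case <-time.After(10 * time.Millisecond):
			out <- fmt.Sprintf("log line %d", i)
		}
	}
}

func main() {
	c := newConsumer()
	out := make(chan string)
	go c.tail(out)

	go func() {
		time.Sleep(50 * time.Millisecond)
		c.Close() // replaces `close(stopChan)` from the pre-change pattern
	}()

	for line := range out {
		fmt.Println(line)
	}
}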
Re: Syslog Drain to Logstash Problems
John Tuley <jtuley@...>
Steve,
Until recently (cf-release v198), binding a syslog service required restarting the app. If you're post-v198, it *should* Just Work.

However, one of the things that could be in your way is network security: in order to forward logs to your drain, your loggregator servers must be able to access that server. This is the most common cause we see of systems failing to forward to syslog drains.

Please let us know if you have more questions.

– John Tuley

On Wed, Jun 3, 2015 at 12:37 PM, Steve Wall <steve.wall(a)primetimesoftware.com> wrote:

Hello,
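To test that reachability directly, the nc one-liner from Steve's original message works; a rough Go equivalent, run from a loggregator VM, might look like the sketch below. The drain address is the same placeholder used in the original message; substitute your real Logstash host and port.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Placeholder address, as in the original message; substitute the
	// real Logstash host and port for your drain.
	const drain = "xx.xx.xx.xx:5000"

	conn, err := net.DialTimeout("udp", drain, 5*time.Second)
	if err != nil {
		fmt.Println("cannot reach drain:", err)
		return
	}
	defer conn.Close()

	// UDP has no handshake, so a successful dial proves little by itself;
	// send a probe line and check whether it shows up in Kibana.
	if _, err := conn.Write([]byte("logging from remote\n")); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Println("probe sent; check Kibana for the test line")
}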
Re: Release Notes for v210
Joseph,
I just checked, and I indeed still reproduce the issue against the cf-release v210 branch with the submodules properly updated (including loggregator). What other info could be useful to diagnose the root cause and the environment difference with the CF Runtime pipeline? Is the pipeline indeed using the latest released spiff version (1.0.6 [8])?

Guillaume.

[8] https://github.com/cloudfoundry-incubator/spiff/releases/tag/v1.0.6

On Wed, Jun 3, 2015 at 9:46 PM, Guillaume Berche <bercheg(a)gmail.com> wrote:

Hi Joseph,
Re: Release Notes for v210
Hi Joseph,
Thanks for your prompt response and the details about the infrastructures currently covered by the runtime pipelines. Great to hear the nfs template will be merged soon, thanks!

I'm indeed using generate_deployment_manifest from cf-release, and was still experiencing the issue described in [5] until I patched both cf-release/templates/cf-lamb.yml (which happens to belong to the loggregator repo) and cf-jobs.yml as in [2]. I'll double-check tomorrow whether I was caught by a transient lack of "git submodule update", which could explain the problem on my side. If this is the case, then I'm sorry for the noise and the extra associated work.

Regards,

Guillaume.

[2] https://github.com/cloudfoundry/cf-release/pull/696
[5] https://github.com/cloudfoundry/cf-release/issues/690
[7] https://github.com/cloudfoundry/bosh-lite/issues/265

On Wed, Jun 3, 2015 at 7:50 PM, CF Runtime <cfruntime(a)gmail.com> wrote:

Hi Guillaume,
Re: UAA : Is anyone utilizing the Password Score Feature
Winkler, Steve (GE Digital) <steve.winkler@...>
+1
From: Nicholas Calugar <ncalugar(a)pivotal.io>
Reply-To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org>
Date: Wednesday, June 3, 2015 at 12:20 PM
To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org>
Subject: Re: [cf-dev] UAA : Is anyone utilizing the Password Score Feature

Hi Sree,
Re: UAA : Is anyone utilizing the Password Score Feature
Josh Ghiloni
In that vein, it would be nice to be able to specify which characters constitute “special” and to have a list of disallowed characters.
Josh Ghiloni
Senior Consultant
303.932.2202 o | 303.590.5427 m | 303.565.2794 f
jghiloni(a)ecsteam.com
ECS Team
Technology Solutions Delivered
ECSTeam.com

On Jun 3, 2015, at 13:20, Nicholas Calugar <ncalugar(a)pivotal.io> wrote:
Hi Sree,
Re: UAA : Is anyone utilizing the Password Score Feature
Nicholas Calugar
Hi Sree,
Not sure if this is possible, but maybe instead of requireAtLeastOneSpecialCharacter boolean, you could do minSpecialCharacters int (0-n)? This would allow more rigorous password policies.

Nick

—
Nicholas Calugar

On Wed, Jun 3, 2015 at 12:00 PM, Sree Tummidi <stummidi(a)pivotal.io> wrote:

Hi All,
UAA : Is anyone utilizing the Password Score Feature
Sree Tummidi
Hi All,
The UAA team is in the process of implementing the Password Policy feature <https://www.pivotaltracker.com/story/show/82182984> for users stored in UAA. The following properties around password strength will be exposed in the YML configuration:

#passwordPolicy:
#  minLength: 8
#  requireAtLeastOneSpecialCharacter: true
#  requireAtLeastOneUppercaseCharacter: true
#  requireAtLeastOneLowercaseCharacter: true
#  requireAtLeastOneDigit: true

The Password Policy feature is being implemented to support multi-tenant UAA. Each Tenant/Identity Zone will get its own password policy. The password policy for the default zone will be configurable via YML.

UAA currently supports the *zxcvbn <https://blogs.dropbox.com/tech/2012/04/zxcvbn-realistic-password-strength-estimation/>* style password score, which is currently exposed via the following properties in the YML configuration file. There is an end point <https://github.com/cloudfoundry/uaa/blob/master/docs/UAA-APIs.rst#query-the-strength-of-a-password-post-password-score> for querying the score of a password.

password-policy:
  required-score: <int>

We would like to understand if this password score feature is being utilized at all. We don't plan on making this feature multi-tenant, and would like to drop it in favor of the new approach, which is much more granular and supports multi-tenancy.

Thanks,
Sree Tummidi
Sr. Product Manager
Identity - Pivotal Cloud Foundry
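As a concrete reading of those YML properties, here is a short Go sketch of what rule-based validation amounts to. It is an illustration of the proposed configuration, not UAA's actual implementation, and treating "special" as anything that is neither a letter nor a digit is an assumption.

package main

import (
	"fmt"
	"unicode"
)

// PasswordPolicy mirrors the YML properties above.
type PasswordPolicy struct {
	MinLength                           int
	RequireAtLeastOneSpecialCharacter   bool
	RequireAtLeastOneUppercaseCharacter bool
	RequireAtLeastOneLowercaseCharacter bool
	RequireAtLeastOneDigit              bool
}

// Validate applies each configured rule. "Special" here means any rune
// that is neither a letter nor a digit -- an assumption for illustration.
func (p PasswordPolicy) Validate(pw string) error {
	var upper, lower, digit, special bool
	for _, r := range pw {
		switch {
		case unicode.IsUpper(r):
			upper = true
		case unicode.IsLower(r):
			lower = true
		case unicode.IsDigit(r):
			digit = true
		default:
			special = true
		}
	}
	switch {
	case len([]rune(pw)) < p.MinLength:
		return fmt.Errorf("password must be at least %d characters", p.MinLength)
	case p.RequireAtLeastOneUppercaseCharacter && !upper:
		return fmt.Errorf("password needs an uppercase character")
	case p.RequireAtLeastOneLowercaseCharacter && !lower:
		return fmt.Errorf("password needs a lowercase character")
	case p.RequireAtLeastOneDigit && !digit:
		return fmt.Errorf("password needs a digit")
	case p.RequireAtLeastOneSpecialCharacter && !special:
		return fmt.Errorf("password needs a special character")
	}
	return nil
}

func main() {
	policy := PasswordPolicy{MinLength: 8, RequireAtLeastOneDigit: true}
	fmt.Println(policy.Validate("short1"))      // too short
	fmt.Println(policy.Validate("longenough1")) // <nil>
}

Nick's minSpecialCharacters suggestion upthread would replace the boolean with a counter, which is a one-line change to a sketch like this.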
Syslog Drain to Logstash Problems
Steve Wall <steve.wall@...>
Hello,
We are having problems draining log messages to Logstash. The drain is set up as a user-provided service:

cf cups logstash-drain -l syslog://xx.xx.xx.xx:5000

And then bound to the service:

cf bind-service myapp logstash-drain

But no log messages are coming through to Logstash. Or more specifically, we are using ELK and the messages aren't seen through Kibana. We were able to log into the DEA, and using netcat (nc), messages were successfully submitted to the ELK stack:

nc -w0 -u xx.xx.xx.xx 5000 <<< "logging from remote"

Any suggestions on how to debug this further?

-Steve
Re: Release Notes for v210
Eric Malm <emalm@...>
Hi, all,
Please be aware that the Diego team has recently identified a goroutine and memory leak in the Diego codebase for release 0.1247.0 that eventually affects the performance of Diego's receptor component. Further investigation has revealed that this leak was introduced in final release 0.1221.0 and fixed in 0.1259.0. Consequently, we do not recommend the use of Diego final releases from 0.1221.0 through 0.1258.0 in long-running environments.

If you do need to mitigate this issue in such an environment, issuing a 'monit restart' to each receptor process on the Diego 'access' VMs once it consumes a majority of available memory on the VM should suffice, and should have negligible impact on the performance and availability of the Diego backend, especially if more than one 'access' VM is present in the Diego deployment. The next final release of CF (namely, v211) will be accompanied by a Diego final release that does not exhibit this problem.

Additionally, the Diego team has identified and corrected the gaps in our testing pipeline and monitoring configuration that allowed this resource leak to slip through.

Thank you for your understanding, and please let me know if you have further questions about this matter.

Best,
Eric, CF Runtime Diego PM

On Tue, May 26, 2015 at 10:59 PM, Dieu Cao <dcao(a)pivotal.io> wrote:
The cf-release v210 was released on May 23rd, 2015
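The announcement doesn't describe the specific defect, but for readers unfamiliar with the failure mode, the sketch below shows the classic shape of a Go goroutine leak (a generic illustration, not Diego's actual code): a goroutine blocks forever on a channel send that no caller drains, so its stack and everything it references accumulate on each request.

package main

import (
	"fmt"
	"runtime"
	"time"
)

// leak fires off a worker whose result nobody ever receives. The goroutine
// blocks forever on the send, and everything it references stays live.
// Repeated once per request, this grows memory until the process is
// restarted -- the same symptom described for the receptor.
func leak() {
	ch := make(chan int) // unbuffered, and the caller never reads it
	go func() {
		ch <- expensiveWork() // blocks forever: no receiver
	}()
	// caller returns without draining ch
}

func expensiveWork() int {
	time.Sleep(time.Millisecond)
	return 42
}

func main() {
	for i := 0; i < 1000; i++ {
		leak()
	}
	time.Sleep(200 * time.Millisecond)
	// Roughly 1001 goroutines remain alive: main plus one per leak call.
	fmt.Println("goroutines still alive:", runtime.NumGoroutine())
}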
Re: Release Notes for v210
CF Runtime
Hi Guillaume,
The metron_agent.deployment default can be found in cf-release/templates/cf-lamb.yml, which should get merged automatically if using the generate_deployment_manifest script in cf-release.

We do currently have pipelines for all supported environments (AWS, vSphere, OpenStack, and BoshLite). Spiff templates are still the recommended way of deploying cf-release, and I would expect the nfs template change to be merged today, as it is near the top of our backlog.

Joseph Palermo
CF Runtime Team

On Wed, Jun 3, 2015 at 7:32 AM, Guillaume Berche <bercheg(a)gmail.com> wrote:

Hi,
Re: Staging error: no available stagers (status code: 400, error code: 170001)
Takeshi Morikawa
Please check the 'staging.advertise' NATS messages:
https://github.com/cloudfoundry/dea_ng#staging

Sample command:

bundle exec nats-sub -s nats://[nats.user]:[nats.password]@[nats_ipaddress]:[nats.port] 'staging.advertise'

I have one additional request: can you share your BOSH deployment manifest?
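If you'd rather check from a small program than via the Ruby nats-sub gem, a rough Go equivalent using the NATS Go client looks like the sketch below. The import path shown is the current one and may differ for older checkouts; the credentials and address are placeholders to be taken from your deployment manifest, as in the command above.

package main

import (
	"fmt"

	nats "github.com/nats-io/nats.go"
)

func main() {
	// Substitute nats.user, nats.password, address, and port from your
	// deployment manifest.
	nc, err := nats.Connect("nats://user:password@10.0.0.1:4222")
	if err != nil {
		panic(err)
	}
	defer nc.Close()

	// Healthy DEAs periodically advertise staging capacity on this
	// subject; prolonged silence is consistent with "no available stagers".
	if _, err := nc.Subscribe("staging.advertise", func(m *nats.Msg) {
		fmt.Printf("%s: %s\n", m.Subject, string(m.Data))
	}); err != nil {
		panic(err)
	}

	select {} // block forever, printing advertisements as they arrive
}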
Re: Release Notes for v210
Hi,
Thanks for the v210 announcement and the associated release notes.

It seems that the v209-announced introduction of a new mandatory metron_agent.deployment property did not make it into the default spiff templates [5]. Note I tried updating the v209 release note formatting to make this more explicit [6].

I'm wondering whether the Pivotal runtime/release team has a cf-release pipeline for the vSphere infrastructure (I'm suspecting the AWS-based pipelines were fine)? Is such a pipeline using the spiff templates in cf-release/templates [4], or has it moved to something else such as cf-boshworkspace [3]?

If the spiff templates in cf-release/templates are still the recommended way of deploying CF, is there a way to prioritize the merge of PRs for known issues in v211, such as [1] and [2], so as to avoid the need for the CF community to maintain its own fork of cf-release/templates?

Thanks in advance,

Guillaume.

[1] https://github.com/cloudfoundry/cf-release/pull/689
[2] https://github.com/cloudfoundry/cf-release/pull/696
[3] https://github.com/cloudfoundry-community/cf-boshworkspace
[4] https://github.com/cloudfoundry/cf-release/tree/master/templates
[5] https://github.com/cloudfoundry/cf-release/issues/690
[6] https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/v209/fdae17795c61691f96f90cc9fd7be90945252937

On Wed, May 27, 2015 at 7:59 AM, Dieu Cao <dcao(a)pivotal.io> wrote:

The cf-release v210 was released on May 23rd, 2015
Staging error: no available stagers (status code: 400, error code: 170001)
iamflying
Resend the question.
I deployed the cf into openstack successfully. However, I got a failure when I tried to push my first PHP example:

Starting app cf-php-demo in org system / space dev as admin...
FAILED
Server error, status code: 400, error code: 170001, message: Staging error: no available stagers

Below are my findings. The DEA and CC have enough resources (4G RAM, 20G disk). Any clues to debug? Attached debug.log. debug.log <http://cf-dev.70369.x6.nabble.com/file/n269/debug.log>

*My env:*
BOSH 1.2978.0
cf version 6.10.0-b78bf10-2015-02-11T22:26:40+00:00
cf release: 207
stemcell: bosh-openstack-kvm-ubuntu-trusty-go_agent | 2969

ubuntu(a)boshclivm:~/apps/cf-php-demo$ CF_TRACE=debug.log cf push
Using manifest file /home/ubuntu/apps/cf-php-demo/manifest.yml
Creating app cf-php-demo in org system / space dev as admin...
OK
Using route cf-php-demo.runmyapp.io
Binding cf-php-demo.runmyapp.io to cf-php-demo...
OK
Uploading cf-php-demo...
Uploading app files from: /home/ubuntu/apps/cf-php-demo
Uploading 231.9K, 13 files
Done uploading
OK
Starting app cf-php-demo in org system / space dev as admin...
FAILED
Server error, status code: 400, error code: 170001, message: Staging error: no available stagers

ubuntu(a)boshclivm:~/apps/cf-php-demo$ cf apps
Getting apps in org system / space dev as admin...
OK
name          requested state   instances   memory   disk   urls
cf-php-demo   started           0/1         128M     1G     cf-php-demo.cfapps.io

Additional error messages:

1. nginx.access.log
api.au.apaas.com - [03/Jun/2015:06:19:51 +0000] "PUT /v2/apps/7b417005-6716-4d5e-bec2-246a51b588c6?async=true&inline-relations-depth=1 HTTP/1.1" 400 435 "-" "go-cli 6.10.0-b78bf10 / linux" 137.172.74.86, 100.64.1.0, 100.64.1.5 vcap_request_id:ae51df28-8371-44f2-4d18-5e13ded3a467::6faf587a-aba8-4467-8e1e-75b018eff93b response_time:0.297

2.
cloud_controller_ng.log {"timestamp":1433313002.4425669,"message":"Request failed: 400: {\"code\"=>170001, \"description\"=>\"Staging error: no available stagers\", \"error_code\"=>\"CF-StagingError\", \"backtrace\"=>[\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/app_stager_task.rb:26:in `stage'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/stager.rb:45:in `stage_app'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/app_observer.rb:57:in `react_to_state_change'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/app_observer.rb:27:in `updated'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/models/runtime/app.rb:534:in `after_commit'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/model/base.rb:1920:in `block in _save'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in `block in remove_transaction'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in `each'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in `remove_transaction'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:156:in `_transaction'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:108:in `block in transaction'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/connecting.rb:250:in `block in synchronize'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/connection_pool/threaded.rb:98:in `hold'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/connecting.rb:250:in `synchronize'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:97:in `transaction'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/controllers/base/model_controller.rb:64:in `update'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/controllers/base/base_controller.rb:76:in `dispatch'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/rest_controller/routes.rb:16:in `block in define_route'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1602:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1602:in `block in compile!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `[]'\", 
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `block (3 levels) in route!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:985:in `route_eval'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `block (2 levels) in route!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1006:in `block in process_route'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1004:in `catch'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1004:in `process_route'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:964:in `block in route!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:963:in `each'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:963:in `route!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1076:in `block in dispatch!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `block in invoke'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `catch'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `invoke'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1073:in `dispatch!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:898:in `block in call!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `block in invoke'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `catch'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `invoke'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:898:in `call!'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:886:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/xss_header.rb:18:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/path_traversal.rb:16:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/json_csrf.rb:18:in `call'\", 
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/frame_options.rb:31:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/nulllogger.rb:9:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/head.rb:11:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:180:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:2014:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/builder.rb:138:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/urlmap.rb:65:in `block in call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/urlmap.rb:50:in `each'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/urlmap.rb:50:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/builder.rb:138:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/thin-1.6.3/lib/thin/connection.rb:86:in `block in pre_process'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/thin-1.6.3/lib/thin/connection.rb:84:in `catch'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/thin-1.6.3/lib/thin/connection.rb:84:in `pre_process'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:1037:in `call'\", \"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:1037:in `block in spawn_threadpool'\"]}","log_level":"info","source":"cc.api","data":{"request_guid":"d916023a-d364-492b-50bb-624e5862e455::2c136550-80cb-4d5b-8ab3-6c2dd397fc1a"},"thread_id":69941342398660,"fiber_id":69941329029340,"process_id":1816,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/sinatra/vcap.rb","lineno":51,"method":"block in registered"} |
Re: Scaling Services
Robert Moss
Regarding a) you could try having your services managed by Apache Brooklyn <http://brooklyn.incubator.apache.org/>; it can auto- or manually scale. I wrote a Service Broker <https://github.com/cloudfoundry-community/brooklyn-service-broker> and a CLI plugin <https://github.com/cloudfoundry-community/brooklyn-plugin> that let you talk to Brooklyn in a CF-native way.

Robert

--
Cloudsoft Corporation Limited, Registered in Scotland No: SC349230. Registered Office: 13 Dryden Place, Edinburgh, EH9 1RP

This e-mail message is confidential and for use by the addressee only. If the message is received by anyone other than the addressee, please return the message to the sender by replying to it and then delete the message from your computer. Internet e-mails are not necessarily secure. Cloudsoft Corporation Limited does not accept responsibility for changes made to this message after it was sent. Whilst all reasonable care has been taken to avoid the transmission of viruses, it is the responsibility of the recipient to ensure that the onward transmission, opening or use of this message and any attachments will not adversely affect its systems or data. No responsibility is accepted by Cloudsoft Corporation Limited in this regard and the recipient should carry out such virus and other checks as it considers appropriate.
Server error, status code: 400, error code: 170001, message: Staging error: no available stagers
iamflying
Hi all,
I got a failure when I deployed the cf into openstack. Below are my current findings. The DEA and CC have enough resources (4G RAM, 20G disk). Any clues to debug? Attached debug.log. debug.log <http://cf-dev.70369.x6.nabble.com/file/n269/debug.log>

BOSH 1.2978.0
cf version 6.10.0-b78bf10-2015-02-11T22:26:40+00:00
cf release: 207
stemcell: bosh-openstack-kvm-ubuntu-trusty-go_agent | 2969

The cf push transcript, nginx.access.log entry, and cloud_controller_ng.log backtrace are identical to those in the resent copy of this question above.