Re: Release Notes for v210
CF Runtime
Guillaume,
We run the pipelines using the Docker image built from
cf-release/pipeline-image/Dockerfile, which checks out the spiff repo and
builds it, so it should be 1.0.6 since that seems to be where master is
currently.
Which SHA do you have checked out for cf-release/src/loggregator?
Do you see:
metron_agent:
deployment: (( meta.environment ))
at the bottom of cf-release/templates/cf-lamb.yml?
Joseph Palermo
CF Runtime Team
On Wed, Jun 3, 2015 at 1:17 PM, Guillaume Berche <bercheg(a)gmail.com> wrote:
Joseph,
I just checked, and I indeed still reproduce the issue against the
cf-release v210 branch with the submodule properly updated (including
loggregator).
What other info could be useful to diagnose the root cause and the
environment difference with the CF runtime pipeline? Are the pipelines
indeed using the latest released spiff version (1.0.6 [8])?
Guillaume.
[8] https://github.com/cloudfoundry-incubator/spiff/releases/tag/v1.0.6
On Wed, Jun 3, 2015 at 9:46 PM, Guillaume Berche <bercheg(a)gmail.com> wrote:
Hi Joseph,
Thanks for your prompt response and the details on the current
infrastructures covered by the runtime pipelines. Great to hear the nfs
template will be merged soon, thanks!
I'm indeed using the generate_deployment_manifest from cf-release, and
was still experiencing the issue described in [5], until I patched both
cf-release/templates/cf-lamb.yml (which happens to belong to the
loggregator repo) and cf-jobs.yml as in [2].
I'll double-check tomorrow whether I could have been caught by a transient
lack of "git submodule update", which would explain the problem on my
side. If that is the case, then I'm sorry for the noise, and the extra
associated work.
Regards,
Guillaume.
[2] https://github.com/cloudfoundry/cf-release/pull/696
[5] https://github.com/cloudfoundry/cf-release/issues/690
[7] https://github.com/cloudfoundry/bosh-lite/issues/265
On Wed, Jun 3, 2015 at 7:50 PM, CF Runtime <cfruntime(a)gmail.com> wrote:
Hi Guillaume,
The metron_agent.deployment default can be found in
cf-release/templates/cf-lamb.yml, which should get merged automatically if
you are using the generate_deployment_manifest script in cf-release.
We do currently have pipelines for all supported environments (AWS,
vSphere, OpenStack, and BoshLite).
Spiff templates are still the recommended way of deploying cf-release,
and I would expect the nfs template change to be merged today as it is near
the top of our backlog.
Joseph Palermo
CF Runtime Team
On Wed, Jun 3, 2015 at 7:32 AM, Guillaume Berche <bercheg(a)gmail.com> wrote:
Hi,
Thanks for the v210 announcement and the associated release notes. It
seems that the v209-announced introduction of a new mandatory
metron_agent.deployment property did not make it into the default spiff
templates [5]. Note that I tried updating the v209 release note formatting
to make this more explicit [6].
I'm wondering whether the Pivotal runtime/release team has a cf-release
pipeline for the vSphere infrastructure (I suspect the AWS-based pipelines
were fine). Is such a pipeline using the spiff templates in
cf-release/templates [4], or has it moved to something else such as
cf-boshworkspace [3]?
If the spiff templates in cf-release/templates are still the
recommended way of deploying CF, is there a way to prioritize the merge of
PRs for known issues in v211, such as [1] and [2], so as to avoid the need
for the CF community to maintain its own fork of cf-release/templates?
Thanks in advance,
Guillaume.
[1] https://github.com/cloudfoundry/cf-release/pull/689
[2] https://github.com/cloudfoundry/cf-release/pull/696
[3] https://github.com/cloudfoundry-community/cf-boshworkspace
[4] https://github.com/cloudfoundry/cf-release/tree/master/templates
[5] https://github.com/cloudfoundry/cf-release/issues/690
[6] https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/v209/fdae17795c61691f96f90cc9fd7be90945252937
On Wed, May 27, 2015 at 7:59 AM, Dieu Cao <dcao(a)pivotal.io> wrote:
The cf-release v210 was released on May 23rd, 2015.
Runtime
- Addressed USN-2617-1 <http://www.ubuntu.com/usn/usn-2617-1/>
CVE-2015-3202
<http://people.canonical.com/~ubuntu-security/cve/2015/CVE-2015-3202.html> FUSE
vulnerabilities
- Removed fuse binaries from the lucid64 rootfs. Apps running on the
lucid64 stack that require fuse should switch to cflinuxfs2. details
<https://www.pivotaltracker.com/story/show/95186578>
- Updated fuse binaries on the cflinuxfs2 rootfs. details
<https://www.pivotaltracker.com/story/show/95177810>
- [Experimental] Work continues on support for Asynchronous
Service Instance Operations. details
<https://www.pivotaltracker.com/epic/show/1561148>
- Support for configurable max polling duration
- [Experimental] Work continues on /v3 and Application Process
Types details <https://www.pivotaltracker.com/epic/show/1334418>
- [Experimental] Work continues on Route API details
<https://www.pivotaltracker.com/epic/show/1590160>
- [Experimental] Work continues on Context Path Routes details
<https://www.pivotaltracker.com/epic/show/1808212>
- Work continues on support for Service Keys details
<https://www.pivotaltracker.com/epic/show/1743366>
- Upgrade etcd server to 2.0.1 details
<https://www.pivotaltracker.com/story/show/91070214>
- Should be run as 1 node (for small deployments) or 3 nodes
spread across zones (for HA)
- Also upgrades hm9k dependencies. LAMB client to be upgraded
in a subsequent release. Older client is compatible.
- cloudfoundry/cf-release #670
<https://github.com/cloudfoundry/cf-release/pull/670>: Be able to
specify timeouts for acceptance tests without defaults in the spec.
details <https://www.pivotaltracker.com/story/show/93914198>
- Fixed a bug where SSL-enabled routers were not draining properly.
details <https://www.pivotaltracker.com/story/show/94718480>
- cloudfoundry/cloud_controller_ng #378
<https://github.com/cloudfoundry/cf-release/pull/378>: current
usage against the org quota details
<https://www.pivotaltracker.com/story/show/94171010>
UAA
- Bumped to UAA 2.3.0 details
<https://github.com/cloudfoundry/uaa/releases/tag/2.3.0>
Used Configuration
- BOSH Version: 152
- Stemcell Version: 2889
- CC Api Version: 2.27.0
Commit summary
<http://htmlpreview.github.io/?https://github.com/cloudfoundry-community/cf-docs-contrib/blob/master/release_notes/cf-210-whats-in-the-deploy.html>
Compatible Diego Version
- final release 0.1247.0 commit
<https://github.com/cloudfoundry-incubator/diego-release/commit/a122a78eeb344bbfc90b7bcd0fa987d08ef1a5d1>
Manifest and Job Spec Changes
- properties.acceptance_tests.skip_regex added
- properties.app_ssh.host_key_fingerprint added
- properties.app_ssh.port defaults to 2222
- properties.uaa.newrelic added
- properties.login.logout.redirect.parameter.whitelist
On Sat, May 23, 2015 at 9:50 PM, James Bayer <jbayer(a)pivotal.io> wrote:
CVE-2015-3202 details:
http://lists.cloudfoundry.org/pipermail/cf-dev/2015-May/000194.html
CVE-2015-1834 details:
http://lists.cloudfoundry.org/pipermail/cf-dev/2015-May/000195.html
On Sat, May 23, 2015 at 9:41 PM, James Bayer <jbayer(a)pivotal.io> wrote:
Please note that this release addresses CVE-2015-3202 and CVE-2015-1834,
and we strongly recommend upgrading to this release. More details will be
forthcoming after the long United States holiday weekend.
https://github.com/cloudfoundry/cf-release/releases/tag/v210
https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/v210
--
Thank you,
James Bayer
Re: Backwards-incompatible NOAA library change
Long Nguyen
Thanks for letting us know!
Long
Backwards-incompatible NOAA library change
Erik Jasiak <ejasiak@...>
Hi all,
On Thursday, June 4th at ~noon MDT, the Loggregator NOAA [1] library will
introduce a backwards-incompatible change, after feedback from other teams
and the community. The NOAA library is used for consuming Cloud Foundry
Loggregator data, including the firehose. Any update via “go get” will
pull down these changes and will break compilation.
Details on the change:
NOAA is changing how it closes socket connections on requests. Previously,
the Close() function in consumer.go [2] did not behave as expected: a
client was required to close stopChan separately, and calling Close() on a
noaa.Consumer that was not stopped or was in a retry loop would do nothing.
Now calling Close() stops the consumer, and none of the APIs take a
stopChan. This is a much cleaner design that is also more in line with
client expectations.
Other info:
The Go language maintainers have taken the position that a repository
should “never make backwards incompatible changes” [3]. We recognize why
Go takes this position for anyone using “go get”, but it makes iterating
on a community API difficult. We will explore with the Cloud Foundry and
Go communities how to better handle major API changes in the future.
Many thanks,
Erik Jasiak
PM - Loggregator, Logging Analytics Metrics Boulder
[1] https://github.com/cloudfoundry/noaa
[2] https://github.com/cloudfoundry/noaa/blob/master/consumer.go#L55
[3] http://golang.org/doc/faq#get_version
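To make the change concrete, here is a minimal before/after sketch in Go.
The constructor and method signatures below are assumptions for
illustration (check the NOAA README for the exact API); only the removal
of stopChan and the new Close() behavior come from the announcement above.

package main

import (
	"fmt"

	// Assumed import paths for the NOAA client and its event types.
	"github.com/cloudfoundry/noaa"
	"github.com/cloudfoundry/noaa/events"
)

func main() {
	// Hypothetical construction; TLS config and proxy are omitted (nil).
	consumer := noaa.NewConsumer("wss://doppler.example.com:443", nil, nil)

	msgChan := make(chan *events.LogMessage)
	errChan := make(chan error)

	// Before the change: the tailing APIs took a stopChan, and Close() alone
	// did not stop a consumer that was not yet connected or was mid-retry:
	//   stopChan := make(chan struct{})
	//   go consumer.TailingLogs("app-guid", "oauth-token", msgChan, errChan, stopChan)
	//   close(stopChan) // had to be done in addition to Close()

	// After the change: no stopChan anywhere; Close() stops the consumer.
	go consumer.TailingLogs("app-guid", "oauth-token", msgChan, errChan)

	fmt.Println(string((<-msgChan).GetMessage()))
	if err := consumer.Close(); err != nil {
		fmt.Println("close error:", err)
	}
}

Either way, the practical upshot of the announcement stands: after a
"go get" update, any code that still passes a stopChan will no longer
compile.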
Re: Syslog Drain to Logstash Problems
John Tuley <jtuley@...>
Steve,
Until recently (cf-release v198), binding a syslog service required
restarting the app. If you're post-v198, it *should* Just Work.
However, one of the things that could be in your way is network security.
In order to forward logs to your drain, your loggregator servers must be
able to access that server. This is the most common cause we see of systems
failing to forward to syslog drains.
Please let us know if you have more questions.
– John Tuley
On Wed, Jun 3, 2015 at 12:37 PM, Steve Wall
<steve.wall(a)primetimesoftware.com> wrote:
Hello,
We are having problems draining log messages to Logstash. The drain is
setup as a user provided service.
cf cups logstash-drain -l syslog://xx.xx.xx.xx:5000
And then bound to the service.
cf bind-service myapp logstash-drain
But no log messages are coming through to Logstash. Or more specifically,
we are using ELK and the messages aren't seen through Kibana.
We were able to log into the DEA and, using netcat (nc), successfully
submit messages to the ELK stack.
nc -w0 -u xx.xx.xx.xx 5000 <<< "logging from remote"
Any suggestions on how to debug this further?
-Steve
Re: Release Notes for v210
Joseph,
I just checked, and I indeed still reproduce the issue against the
cf-release v210 branch with the submodule properly updated (including
loggregator).
What other info could be useful to diagnose the root cause and the
environment difference with the CF runtime pipeline? Are the pipelines
indeed using the latest released spiff version (1.0.6 [8])?
Guillaume.
[8] https://github.com/cloudfoundry-incubator/spiff/releases/tag/v1.0.6
Re: Release Notes for v210
Hi Joseph,
Thanks for your prompt response and the details on the current
infrastructures covered by the runtime pipelines. Great to hear the nfs
template will be merged soon, thanks!
I'm indeed using the generate_deployment_manifest from cf-release, and was
still experiencing the issue described in [5], until I patched both
cf-release/templates/cf-lamb.yml (which happens to belong to the
loggregator repo) and cf-jobs.yml as in [2].
I'll double-check tomorrow whether I could have been caught by a transient
lack of "git submodule update", which would explain the problem on my side.
If that is the case, then I'm sorry for the noise and the extra associated
work.
Regards,
Guillaume.
[2] https://github.com/cloudfoundry/cf-release/pull/696
[5] https://github.com/cloudfoundry/cf-release/issues/690
[7] https://github.com/cloudfoundry/bosh-lite/issues/265
Re: UAA : Is anyone utilizing the Password Score Feature
Winkler, Steve (GE Digital) <steve.winkler@...>
+1
From: Nicholas Calugar <ncalugar(a)pivotal.io>
Reply-To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org>
Date: Wednesday, June 3, 2015 at 12:20 PM
To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org>
Cc: CF Developers Mailing List <cf-dev(a)lists.cloudfoundry.org>
Subject: Re: [cf-dev] UAA : Is anyone utilizing the Password Score Feature
Hi Sree,
Not sure if this is possible, but maybe instead of requireAtLeastOneSpecialCharacter boolean, you could do minSpecialCharacters int (0-n)? This would allow more rigorous password policies.
Nick
—
Nicholas Calugar
Re: UAA : Is anyone utilizing the Password Score Feature
Josh Ghiloni
In that vein, it would be nice to be able to specify which characters constitute “special” and to have a list of disallowed characters.
Josh Ghiloni
Senior Consultant
303.932.2202 o | 303.590.5427 m | 303.565.2794 f
jghiloni(a)ecsteam.com
ECS Team
Technology Solutions Delivered
ECSTeam.com
On Jun 3, 2015, at 13:20, Nicholas Calugar <ncalugar(a)pivotal.io> wrote:
Hi Sree,
Not sure if this is possible, but maybe instead of requireAtLeastOneSpecialCharacter boolean, you could do minSpecialCharacters int (0-n)? This would allow more rigorous password policies.
Nick
—
Nicholas Calugar
Re: UAA : Is anyone utilizing the Password Score Feature
Nicholas Calugar
Hi Sree,
Not sure if this is possible, but maybe instead of requireAtLeastOneSpecialCharacter boolean, you could do minSpecialCharacters int (0-n)? This would allow more rigorous password policies.
Nick
—
Nicholas Calugar
UAA : Is anyone utilizing the Password Score Feature
Sree Tummidi
Hi All,
The UAA team is in the process of implementing a Password Policy feature
<https://www.pivotaltracker.com/story/show/82182984> for users stored in
UAA.
The following properties around password strength will be exposed in the
YML configuration.
#passwordPolicy:
# minLength: 8
# requireAtLeastOneSpecialCharacter: true
# requireAtLeastOneUppercaseCharacter: true
# requireAtLeastOneLowercaseCharacter: true
# requireAtLeastOneDigit: true
The Password Policy feature is being implemented to support multi-tenant
UAA. Each Tenant/Identity Zone will get its own password policy. The
password policy for the default zone will be configurable via YML.
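To make the proposed semantics concrete, here is a minimal hypothetical
sketch in Go of what evaluating such a policy amounts to. It is
illustrative only: UAA itself is written in Java, and the struct fields
simply mirror the YML keys above.

package main

import (
	"fmt"
	"unicode"
)

// PasswordPolicy mirrors the proposed YML keys; illustrative only.
type PasswordPolicy struct {
	MinLength                           int
	RequireAtLeastOneSpecialCharacter   bool
	RequireAtLeastOneUppercaseCharacter bool
	RequireAtLeastOneLowercaseCharacter bool
	RequireAtLeastOneDigit              bool
}

// Validate applies each enabled rule and reports the first violation.
func (p PasswordPolicy) Validate(password string) error {
	var upper, lower, digit, special bool
	for _, r := range password {
		switch {
		case unicode.IsUpper(r):
			upper = true
		case unicode.IsLower(r):
			lower = true
		case unicode.IsDigit(r):
			digit = true
		default:
			special = true // everything else counts as "special" here
		}
	}
	switch {
	case len([]rune(password)) < p.MinLength:
		return fmt.Errorf("password must be at least %d characters", p.MinLength)
	case p.RequireAtLeastOneUppercaseCharacter && !upper:
		return fmt.Errorf("password needs an uppercase character")
	case p.RequireAtLeastOneLowercaseCharacter && !lower:
		return fmt.Errorf("password needs a lowercase character")
	case p.RequireAtLeastOneDigit && !digit:
		return fmt.Errorf("password needs a digit")
	case p.RequireAtLeastOneSpecialCharacter && !special:
		return fmt.Errorf("password needs a special character")
	}
	return nil
}

func main() {
	policy := PasswordPolicy{
		MinLength:                           8,
		RequireAtLeastOneSpecialCharacter:   true,
		RequireAtLeastOneUppercaseCharacter: true,
		RequireAtLeastOneLowercaseCharacter: true,
		RequireAtLeastOneDigit:              true,
	}
	fmt.Println(policy.Validate("Sup3r$ecret")) // <nil>
}

Multi-tenancy then reduces to looking up a different PasswordPolicy value
per Identity Zone before calling Validate.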
UAA currently supports the zxcvbn
<https://blogs.dropbox.com/tech/2012/04/zxcvbn-realistic-password-strength-estimation/>
style password score. This is currently exposed via the following
properties in the YML configuration file. There is an endpoint
<https://github.com/cloudfoundry/uaa/blob/master/docs/UAA-APIs.rst#query-the-strength-of-a-password-post-password-score>
for querying the score of a given password.
password-policy:
required-score: <int>
We would like to understand if this password score feature is being
utilized at all. We don't plan on making this feature multi-tenant, and
would like to drop it in favor of the new approach, which is much more
granular and supports multi-tenancy.
Thanks,
Sree Tummidi
Sr. Product Manager
Identity - Pivotal Cloud Foundry
Syslog Drain to Logstash Problems
Steve Wall <steve.wall@...>
Hello,
We are having problems draining log messages to Logstash. The drain is
setup as a user provided service.
cf cups logstash-drain -l syslog://xx.xx.xx.xx:5000
And then bound to the service.
cf bind-service myapp logstash-drain
But no log messages are coming through to Logstash. Or more specifically,
we are using ELK and the messages aren't seen through Kibana.
We were able to log into the DEA and, using netcat (nc), successfully
submit messages to the ELK stack.
nc -w0 -u xx.xx.xx.xx 5000 <<< "logging from remote"
Any suggestions on how to debug this further?
-Steve
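Since the nc test above only exercises the path from inside the DEA, a
quick way to repeat the same UDP probe from any other box (for example, a
loggregator VM, per John Tuley's reply above) is a few lines of Go; the
address and message are placeholders mirroring the nc command:

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Placeholder address: substitute the Logstash host and UDP input port,
	// mirroring `nc -w0 -u xx.xx.xx.xx 5000`.
	conn, err := net.Dial("udp", "xx.xx.xx.xx:5000")
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial failed:", err)
		os.Exit(1)
	}
	defer conn.Close()

	// UDP is fire-and-forget: a successful write only proves the packet left
	// this host, not that Logstash received it, so check Kibana afterwards.
	if _, err := conn.Write([]byte("logging from remote\n")); err != nil {
		fmt.Fprintln(os.Stderr, "write failed:", err)
		os.Exit(1)
	}
}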
Re: Release Notes for v210
Eric Malm <emalm@...>
Hi, all,
Please be aware that the Diego team has recently identified a goroutine and
memory leak in the Diego codebase for release 0.1247.0 that eventually
affects the performance of Diego's receptor component. Further
investigation has revealed that this leak was introduced in final release
0.1221.0 and fixed in 0.1259.0. Consequently, we do not recommend the use
of Diego final releases from 0.1221.0 through 0.1258.0 in long-running
environments. If you do need to mitigate this issue in such an environment,
issuing a 'monit restart' to each receptor process on the Diego 'access'
VMs once that process consumes a majority of the available memory on the
VM should suffice, and should have negligible impact on the performance
and availability of the Diego backend, especially if more than one
'access' VM is present in the Diego deployment.
The next final release of CF (namely, v211) will be accompanied by a Diego
final release that does not exhibit this problem. Additionally, the Diego
team has identified and corrected the gaps in our testing pipeline and
monitoring configuration that allowed this resource leak to slip through.
Thank you for your understanding, and please let me know if you have
further questions about this matter.
Best,
Eric, CF Runtime Diego PM
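For readers unfamiliar with the failure mode: a goroutine leak typically
means goroutines blocked forever on a channel operation that will never
complete, each one pinning its stack and any memory it references until
the process restarts. A generic Go illustration of the pattern follows;
it is not Diego's actual code:

package main

import (
	"fmt"
	"runtime"
	"time"
)

// leakyHandle starts a worker and reads its result with a timeout. On
// timeout nobody ever receives from `results`, and because the channel is
// unbuffered the worker blocks on its send forever: one leaked goroutine
// (plus its retained memory) per timed-out request.
func leakyHandle() {
	results := make(chan string) // unbuffered: send blocks until received
	go func() {
		time.Sleep(10 * time.Millisecond) // simulate slow work
		results <- "done"                 // blocks forever once we time out
	}()
	select {
	case <-results:
	case <-time.After(time.Millisecond): // gives up before the worker finishes
	}
}

func main() {
	for i := 0; i < 1000; i++ {
		leakyHandle()
	}
	time.Sleep(100 * time.Millisecond)
	// Each timed-out call strands one goroutine; once the process is
	// running, a restart is the only way to reclaim them, which is why the
	// 'monit restart' mitigation above works.
	fmt.Println("goroutines still alive:", runtime.NumGoroutine())
}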
Re: Release Notes for v210
CF Runtime
Hi Guillaume,
The metron_agent.deployment default can be found in
cf-release/templates/cf-lamb.yml, which should get merged automatically if
you are using the generate_deployment_manifest script in cf-release.
We do currently have pipelines for all supported environments (AWS,
vSphere, OpenStack, and BoshLite).
Spiff templates are still the recommended way of deploying cf-release, and
I would expect the nfs template change to be merged today as it is near the
top of our backlog.
Joseph Palermo
CF Runtime Team
Re: Staging error: no available stagers (status code: 400, error code: 170001)
Takeshi Morikawa
Please check the 'staging.advertise' NATS messages:
https://github.com/cloudfoundry/dea_ng#staging
sample command:
bundle exec nats-sub -s
nats://[nats.user]:[nats.password]@[nats_ipaddress]:[nats.port]
'staging.advertise'
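As a rough illustration of what to expect (the payload fields below are
from memory of dea_ng advertisements and may differ in your version):
bundle exec nats-sub -s nats://nats:password@10.0.0.10:4222 'staging.advertise'
# Msg received on [staging.advertise]:
# {"id":"0-abcd1234","stacks":["lucid64","cflinuxfs2"],"available_memory":4096,"available_disk":16384}
# If no messages appear, the DEAs are not advertising staging capacity,
# which is consistent with a "no available stagers" error.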
I have one additional request:
Can you share your bosh deployment manifest?
Re: Release Notes for v210
Hi,
Thanks for the v210 announcement and the associated release note. It seems
that the v209-announced introduction of a new mandatory
metron_agent.deployment property did not make it into the default spiff
templates [5]. Note that I tried updating the v209 release note formatting
to make this more explicit [6].
I'm wondering whether the Pivotal runtime/release team has a cf-release
pipeline for the vSphere infrastructure (I suspect the AWS-based pipelines
were fine). Is such a pipeline using the spiff templates in
cf-release/templates [4], or has it moved to something else such as
cf-boshworkspace [3]?
If the spiff templates in cf-release/templates are still the recommended
way of deploying CF, is there a way to prioritize merging PRs for known
issues in v211, such as [1] and [2], to avoid the cf community having to
maintain its own fork of cf-release/templates?
Thanks in advance,
Guillaume.
[1] https://github.com/cloudfoundry/cf-release/pull/689
[2] https://github.com/cloudfoundry/cf-release/pull/696
[3] https://github.com/cloudfoundry-community/cf-boshworkspace
[4] https://github.com/cloudfoundry/cf-release/tree/master/templates
[5] https://github.com/cloudfoundry/cf-release/issues/690
[6] https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/v209/fdae17795c61691f96f90cc9fd7be90945252937
On Wed, May 27, 2015 at 7:59 AM, Dieu Cao <dcao(a)pivotal.io> wrote:
The cf-release v210 was released on May 23rd, 2015
Runtime
- Addressed USN-2617-1 <http://www.ubuntu.com/usn/usn-2617-1/>
CVE-2015-3202
<http://people.canonical.com/~ubuntu-security/cve/2015/CVE-2015-3202.html> FUSE
vulnerabilities
- Removed fuse binaries from the lucid64 rootfs. Apps running on the
lucid64 stack that require fuse should switch to cflinuxfs2. details
<https://www.pivotaltracker.com/story/show/95186578>
- Fuse binaries updated on the cflinuxfs2 rootfs. details
<https://www.pivotaltracker.com/story/show/95177810>
- [Experimental] Work continues on support for Asynchronous Service
Instance Operations. details
<https://www.pivotaltracker.com/epic/show/1561148>
- Support for configurable max polling duration
- [Experimental] Work continues on /v3 and Application Process Types
details <https://www.pivotaltracker.com/epic/show/1334418>
- [Experimental] Work continues on Route API details
<https://www.pivotaltracker.com/epic/show/1590160>
- [Experimental] Work continues on Context Path Routes details
<https://www.pivotaltracker.com/epic/show/1808212>
- Work continues on support for Service Keys details
<https://www.pivotaltracker.com/epic/show/1743366>
- Upgrade etcd server to 2.0.1 details
<https://www.pivotaltracker.com/story/show/91070214>
- Should be run as 1 node (for small deployments) or 3 nodes spread
across zones (for HA)
- Also upgrades hm9k dependencies. LAMB client to be upgraded in a
subsequent release. Older client is compatible.
- cloudfoundry/cf-release #670
<https://github.com/cloudfoundry/cf-release/pull/670>: Be able to
specify timeouts for acceptance tests without defaults in the spec.
details <https://www.pivotaltracker.com/story/show/93914198>
- Fix bug where ssl enabled routers were not draining properly details
<https://www.pivotaltracker.com/story/show/94718480>
- cloudfoundry/cloud_controller_ng #378
<https://github.com/cloudfoundry/cf-release/pull/378>: current usage
against the org quota details
<https://www.pivotaltracker.com/story/show/94171010>
UAA
- Bumped to UAA 2.3.0 details
<https://github.com/cloudfoundry/uaa/releases/tag/2.3.0>
Used Configuration
- BOSH Version: 152
- Stemcell Version: 2889
- CC Api Version: 2.27.0
Commit summary
<http://htmlpreview.github.io/?https://github.com/cloudfoundry-community/cf-docs-contrib/blob/master/release_notes/cf-210-whats-in-the-deploy.html>
Compatible Diego Version
- final release 0.1247.0 commit
<https://github.com/cloudfoundry-incubator/diego-release/commit/a122a78eeb344bbfc90b7bcd0fa987d08ef1a5d1>
Manifest and Job Spec Changes
- properties.acceptance_tests.skip_regex added
- properties.app_ssh.host_key_fingerprint added
- properties.app_ssh.port defaults to 2222
- properties.uaa.newrelic added
- properties.login.logout.redirect.parameter.whitelist
Staging error: no available stagers (status code: 400, error code: 170001)
iamflying
Resending the question.
I deployed CF into OpenStack successfully. However, I got a failure when I
tried to push my first PHP example.
Starting app cf-php-demo in org system / space dev as admin...
FAILED
Server error, status code: 400, error code: 170001, message: Staging error:
no available stagers
Below are my findings. The DEA and CC have enough resources (4G RAM, 20G
disk). Any clues for debugging? Attached: debug.log
<http://cf-dev.70369.x6.nabble.com/file/n269/debug.log>
My env:
BOSH 1.2978.0
cf version 6.10.0-b78bf10-2015-02-11T22:26:40+00:00
cf release: 207
stemcell: bosh-openstack-kvm-ubuntu-trusty-go_agent | 2969
ubuntu(a)boshclivm:~/apps/cf-php-demo$ CF_TRACE=debug.log cf push
Using manifest file /home/ubuntu/apps/cf-php-demo/manifest.yml
Creating app cf-php-demo in org system / space dev as admin...
OK
Using route cf-php-demo.runmyapp.io
Binding cf-php-demo.runmyapp.io to cf-php-demo...
OK
Uploading cf-php-demo...
Uploading app files from: /home/ubuntu/apps/cf-php-demo
Uploading 231.9K, 13 files
Done uploading
OK
Starting app cf-php-demo in org system / space dev as admin...
FAILED
Server error, status code: 400, error code: 170001, message: Staging error:
no available stagers
ubuntu(a)boshclivm:~/apps/cf-php-demo$ cf apps
Getting apps in org system / space dev as admin...
OK
name          requested state   instances   memory   disk   urls
cf-php-demo   started           0/1         128M     1G     cf-php-demo.cfapps.io
ubuntu(a)boshclivm:~/apps/cf-php-demo$
Additional error messages:
1. nginx.access.log
api.au.apaas.com - [03/Jun/2015:06:19:51 +0000] "PUT
/v2/apps/7b417005-6716-4d5e-bec2-246a51b588c6?async=true&inline-relations-depth=1
HTTP/1.1" 400 435 "-" "go-cli 6.10.0-b78bf10 / linux" 137.172.74.86,
100.64.1.0, 100.64.1.5
vcap_request_id:ae51df28-8371-44f2-4d18-5e13ded3a467::6faf587a-aba8-4467-8e1e-75b018eff93b
response_time:0.297
2. cloud_controller_ng.log
{"timestamp":1433313002.4425669,"message":"Request failed: 400:
{\"code\"=>170001, \"description\"=>\"Staging error: no available stagers\",
\"error_code\"=>\"CF-StagingError\",
\"backtrace\"=>[\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/app_stager_task.rb:26:in
`stage'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/stager.rb:45:in
`stage_app'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/app_observer.rb:57:in
`react_to_state_change'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/app_observer.rb:27:in
`updated'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/models/runtime/app.rb:534:in
`after_commit'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/model/base.rb:1920:in
`block in _save'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`block in remove_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`remove_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:156:in
`_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:108:in
`block in transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/connecting.rb:250:in
`block in synchronize'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/connection_pool/threaded.rb:98:in
`hold'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/connecting.rb:250:in
`synchronize'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:97:in
`transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/controllers/base/model_controller.rb:64:in
`update'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/controllers/base/base_controller.rb:76:in
`dispatch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/rest_controller/routes.rb:16:in
`block in define_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1602:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1602:in
`block in compile!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in
`[]'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in
`block (3 levels) in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:985:in
`route_eval'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in
`block (2 levels) in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1006:in
`block in process_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1004:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1004:in
`process_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:964:in
`block in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:963:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:963:in
`route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1076:in
`block in dispatch!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in
`block in invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in
`invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1073:in
`dispatch!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:898:in
`block in call!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in
`block in invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in
`invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:898:in
`call!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:886:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/xss_header.rb:18:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/path_traversal.rb:16:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/json_csrf.rb:18:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/frame_options.rb:31:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/nulllogger.rb:9:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/head.rb:11:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:180:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:2014:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/builder.rb:138:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/urlmap.rb:65:in
`block in call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/urlmap.rb:50:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/urlmap.rb:50:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/builder.rb:138:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/thin-1.6.3/lib/thin/connection.rb:86:in
`block in pre_process'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/thin-1.6.3/lib/thin/connection.rb:84:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/thin-1.6.3/lib/thin/connection.rb:84:in
`pre_process'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:1037:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:1037:in
`block in
spawn_threadpool'\"]}","log_level":"info","source":"cc.api","data":{"request_guid":"d916023a-d364-492b-50bb-624e5862e455::2c136550-80cb-4d5b-8ab3-6c2dd397fc1a"},"thread_id":69941342398660,"fiber_id":69941329029340,"process_id":1816,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/sinatra/vcap.rb","lineno":51,"method":"block
in registered"}
Re: Scaling Services
Robert Moss
Regarding (a), you could try having your services managed by Apache Brooklyn
<http://brooklyn.incubator.apache.org/>; it can scale automatically or
manually. I wrote a Service Broker
<https://github.com/cloudfoundry-community/brooklyn-service-broker> and a CLI
plugin <https://github.com/cloudfoundry-community/brooklyn-plugin> that
let you talk to Brooklyn in a CF-native way.
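For example, once a Brooklyn broker is running, it is registered with the
standard cf CLI flow; the URL, credentials, and service name below are
placeholders (see the broker's README for the real setup):
cf create-service-broker brooklyn admin secret https://brooklyn-broker.example.com
cf enable-service-access my-brooklyn-service
cf marketplace   # Brooklyn-managed services should now be listed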
Robert
--
Cloudsoft Corporation Limited, Registered in Scotland No: SC349230.
Registered Office: 13 Dryden Place, Edinburgh, EH9 1RP
This e-mail message is confidential and for use by the addressee only. If
the message is received by anyone other than the addressee, please return
the message to the sender by replying to it and then delete the message
from your computer. Internet e-mails are not necessarily secure. Cloudsoft
Corporation Limited does not accept responsibility for changes made to this
message after it was sent.
Whilst all reasonable care has been taken to avoid the transmission of
viruses, it is the responsibility of the recipient to ensure that the
onward transmission, opening or use of this message and any attachments
will not adversely affect its systems or data. No responsibility is
accepted by Cloudsoft Corporation Limited in this regard and the recipient
should carry out such virus and other checks as it considers appropriate.
Server error, status code: 400, error code: 170001, message: Staging error: no available stagers
iamflying
Hi all,
I got a failure after deploying CF into OpenStack. Below are my current
findings. The DEA and CC have enough resources (4G RAM, 20G disk). Any
clues for debugging? Attached: debug.log
<http://cf-dev.70369.x6.nabble.com/file/n269/debug.log>
BOSH 1.2978.0
cf version 6.10.0-b78bf10-2015-02-11T22:26:40+00:00
cf release: 207
stemcell: bosh-openstack-kvm-ubuntu-trusty-go_agent | 2969
ubuntu(a)boshclivm:~/apps/cf-php-demo$ CF_TRACE=debug.log cf push
Using manifest file /home/ubuntu/apps/cf-php-demo/manifest.yml
Creating app cf-php-demo in org system / space dev as admin...
OK
Using route cf-php-demo.runmyapp.io
Binding cf-php-demo.runmyapp.io to cf-php-demo...
OK
Uploading cf-php-demo...
Uploading app files from: /home/ubuntu/apps/cf-php-demo
Uploading 231.9K, 13 files
Done uploading
OK
Starting app cf-php-demo in org system / space dev as admin...
FAILED
Server error, status code: 400, error code: 170001, message: Staging error:
no available stagers
ubuntu(a)boshclivm:~/apps/cf-php-demo$ cf apps
Getting apps in org system / space dev as admin...
OK
name          requested state   instances   memory   disk   urls
cf-php-demo   started           0/1         128M     1G     cf-php-demo.cfapps.io
ubuntu(a)boshclivm:~/apps/cf-php-demo$
Additional error messages:
1. nginx.access.log
api.au.apaas.com - [03/Jun/2015:06:19:51 +0000] "PUT
/v2/apps/7b417005-6716-4d5e-bec2-246a51b588c6?async=true&inline-relations-depth=1
HTTP/1.1" 400 435 "-" "go-cli 6.10.0-b78bf10 / linux" 137.172.74.86,
100.64.1.0, 100.64.1.5
vcap_request_id:ae51df28-8371-44f2-4d18-5e13ded3a467::6faf587a-aba8-4467-8e1e-75b018eff93b
response_time:0.297
2. cloud_controller_ng.log
{"timestamp":1433313002.4425669,"message":"Request failed: 400:
{\"code\"=>170001, \"description\"=>\"Staging error: no available stagers\",
\"error_code\"=>\"CF-StagingError\",
\"backtrace\"=>[\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/app_stager_task.rb:26:in
`stage'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/stager.rb:45:in
`stage_app'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/app_observer.rb:57:in
`react_to_state_change'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/app_observer.rb:27:in
`updated'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/models/runtime/app.rb:534:in
`after_commit'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/model/base.rb:1920:in
`block in _save'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`block in remove_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:280:in
`remove_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:156:in
`_transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:108:in
`block in transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/connecting.rb:250:in
`block in synchronize'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/connection_pool/threaded.rb:98:in
`hold'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/connecting.rb:250:in
`synchronize'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sequel-4.21.0/lib/sequel/database/transactions.rb:97:in
`transaction'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/controllers/base/model_controller.rb:64:in
`update'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/app/controllers/base/base_controller.rb:76:in
`dispatch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/rest_controller/routes.rb:16:in
`block in define_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1602:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1602:in
`block in compile!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in
`[]'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in
`block (3 levels) in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:985:in
`route_eval'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in
`block (2 levels) in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1006:in
`block in process_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1004:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1004:in
`process_route'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:964:in
`block in route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:963:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:963:in
`route!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1076:in
`block in dispatch!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in
`block in invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in
`invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1073:in
`dispatch!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:898:in
`block in call!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in
`block in invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in
`invoke'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:898:in
`call!'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:886:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/xss_header.rb:18:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/path_traversal.rb:16:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/json_csrf.rb:18:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/frame_options.rb:31:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/nulllogger.rb:9:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/head.rb:11:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:180:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:2014:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/builder.rb:138:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/urlmap.rb:65:in
`block in call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/urlmap.rb:50:in
`each'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/urlmap.rb:50:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/rack-1.5.2/lib/rack/builder.rb:138:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/thin-1.6.3/lib/thin/connection.rb:86:in
`block in pre_process'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/thin-1.6.3/lib/thin/connection.rb:84:in
`catch'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/thin-1.6.3/lib/thin/connection.rb:84:in
`pre_process'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:1037:in
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:1037:in
`block in
spawn_threadpool'\"]}","log_level":"info","source":"cc.api","data":{"request_guid":"d916023a-d364-492b-50bb-624e5862e455::2c136550-80cb-4d5b-8ab3-6c2dd397fc1a"},"thread_id":69941342398660,"fiber_id":69941329029340,"process_id":1816,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/sinatra/vcap.rb","lineno":51,"method":"block
in registered"}
--
View this message in context: http://cf-dev.70369.x6.nabble.com/Server-error-status-code-400-error-code-170001-message-Staging-error-no-available-stagers-tp269.html
Sent from the CF Dev mailing list archive at Nabble.com.
`call'\",
\"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/vendor/bundle/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:1037:in
`block in
spawn_threadpool'\"]}","log_level":"info","source":"cc.api","data":{"request_guid":"d916023a-d364-492b-50bb-624e5862e455::2c136550-80cb-4d5b-8ab3-6c2dd397fc1a"},"thread_id":69941342398660,"fiber_id":69941329029340,"process_id":1816,"file":"/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/sinatra/vcap.rb","lineno":51,"method":"block
in registered"}
Re: Soliciting feedback for design proposal: TCP Routing
James Bayer
Shannon and team,
Thanks to all of those who worked on this proposal thus far! So many new
workloads will be enabled by adding TCP routing for CF applications. I'm
looking forward to the community feedback.
On Tue, Jun 2, 2015 at 7:06 PM, Shannon Coen <scoen(a)pivotal.io> wrote:
Currently, Cloud Foundry only supports routing of HTTP traffic to
applications. There are many use cases, especially related to IoT, for
which applications need to receive non-HTTP traffic.
Together with Atul Kshirsagar and Fermin Ordaz from GE, we've begun
initial work on a TCP Routing service that would enable routing of non-HTTP
traffic to applications running on Diego in Lattice and Cloud Foundry.
Our project proposal is open for public comment and we welcome your
feedback:
https://docs.google.com/document/d/1PZE_ieAZLew6nUKIB1eaNtDWRrZt57ffqwXch0K6lVw/edit?usp=sharing
We will be requesting this project be accepted into incubation with a
Cloud Foundry Foundation PMC.
Thank you,
Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.
--
Thank you,
James Bayer
Re: Scaling Services
Alberto A. Flores
My 2 cents:
Your database, while it's deployed alongside CF, is based on MySQL.
Depending on the configuration (YML file) you used for that release,
scaling that database is strictly limited to whatever MySQL itself can
provide. It's not CF but BOSH, together with the software (in this case
MySQL), that enables the "scaling". All CF provides is the ability to
connect apps to this db service using the "service broker" model. In other
words, CF does not scale your service; rather, the BOSH release (e.g.
cf-mysql-release) may be configured to scale using features available in
both BOSH and the software (e.g. MySQL), as in the manifest sketch below.
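To make that concrete, here is a minimal sketch of what scaling through
the BOSH deployment manifest might look like. The job names and counts
below are hypothetical illustrations and will vary with the version of
cf-mysql-release you deploy:
# Hypothetical excerpt from a cf-mysql deployment manifest
jobs:
- name: mysql_z1        # MariaDB/Galera cluster node job (name varies by release)
  instances: 3          # raising this count scales the cluster out
- name: proxy_z1        # proxy job fronting the cluster
  instances: 2
After editing the manifest, re-running `bosh deploy` applies the change;
for a Galera cluster you generally want an odd node count so the cluster
can keep quorum.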
With regards to where to save "profile pictures", I've learned that the
right answer is always driven by the "access pattern" of the data. The S3
solution may work, but if you are only archiving, you can certainly wonder
whether it's cost-effective to do it that way. The docs you cite refer to
writing data to disk as an anti-pattern. In general, CF lets you implement
the patterns described in the "12 Factor App": http://12factor.net/
With regards to scaling, there are other factors you can consider. Perhaps
put an "in-memory" store in front of your app, as a bound service that can
hold certain types of data (see the manifest sketch below). I think
there's more than one way to skin the cat.
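As a rough illustration of that binding model (the app and service names
here are hypothetical), the application manifest simply lists the service
instances the app should be bound to, and the app reads their credentials
from its VCAP_SERVICES environment variable at runtime:
# Hypothetical manifest.yml for the app
applications:
- name: profile-app
  memory: 512M
  instances: 2
  services:
  - profile-db        # MySQL instance from the cf-mysql service broker
  - profile-cache     # in-memory store (e.g. Redis) for hot data
The service instances would be created beforehand with `cf create-service`,
and `cf push` then performs the bindings listed in the manifest.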
Alberto Flores
@albertoaflores
On Tue, Jun 2, 2015 at 5:12 PM, Flávio Henrique Schuindt da Silva <
flavio.schuindt(a)gmail.com> wrote:
Hi, guys.
I'm a beginner with CF. I successfully deployed cf-mysql-release [1], and
now I can write, read, etc. from the database as a service bound to my
application.
Now, I have some questions, and it would be really great if someone could
help me.
a) OK, I have a database that works, and it's great. Imagine a scenario
where a lot of clients are accessing my app and I now have to scale. How
does CF scale the service? I mean, there must be some way to add more
nodes to the MariaDB cluster provided by [1], right?
b) What should I do if I need to save profile pictures for a user table?
Should I save them as blobs in the database, since writing data to disk is
not recommended by the CF docs because apps are isolated from each other
in the DEA?
Thank you very much for your patience and time.
[1] - https://github.com/cloudfoundry/cf-mysql-release