BOSH Manifest and directory_uuid
Alberto A. Flores
Team,
After reviewing the bosh.io site and other email lists, I can't find a good
reason why the manifest YAML file needs to have the director_uuid in it.
According to the docs found here:
http://bosh.io/docs/deployment-manifest.html#deployment
it is a "required" field. Is this so? Keeping manifests in version control
will create a dependency on a single running Director. Is this by design?
Just looking for clarification more than a rant... :)
--
View this message in context: http://cf-bosh.70367.x6.nabble.com/BOSH-Manifest-and-directory-uuid-tp45.html
Sent from the CF BOSH mailing list archive at Nabble.com.
Re: BOSH Manifest and directory_uuid
Dmitriy Kalinin
It's a safety feature so that people do not accidentally deploy a deployment
with the same name to the wrong environment. For example, if you have staging
and prod environments and both have a cf deployment.
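As an illustration (a sketch with made-up UUIDs, not taken from the thread):
both environments' manifests can name the deployment "cf", and only the
director_uuid line ties each file to one Director, so a deploy against the
wrong Director fails fast instead of overwriting the other environment:

    # cf-staging.yml
    name: cf
    director_uuid: aaaaaaaa-1111-2222-3333-444444444444  # staging Director's UUID

    # cf-prod.yml
    name: cf
    director_uuid: bbbbbbbb-5555-6666-7777-888888888888  # prod Director's UUID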
Re: BOSH Manifest and directory_uuid
Alberto A. Flores
Thanks Dmitriy, so if I “version control” a manifest, I’d have to keep a different version for “prod” and another one for “staging”? Is there a way to override it on the command line?
--
Alberto Flores
Twitter: @albertoaflores
Re: BOSH Manifest and directory_uuid
Dr Nic Williams
Try this idea: https://github.com/concourse/concourse/blob/master/manifests/bosh-lite.yml#L4
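For reference, the gist of that line (a sketch from memory - check the linked
file for the exact form) is to let ERB fetch the UUID of whichever Director
the CLI currently targets, so one manifest works against any environment:

    name: concourse
    # resolved at deploy time by the bosh CLI's ERB pass; assumes
    # `bosh status --uuid` prints the targeted Director's UUID
    director_uuid: <%= `bosh status --uuid` %>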
Re: BOSH Manifest and directory_uuid
Alberto A. Flores
Awesome idea, Dr. Nic! That works!
I appreciate the feedback!
--
Alberto Flores
Twitter: @albertoaflores
Re: Create bosh stemcell failed in AWS region cn-north-1
支雷 <lzhi3937 at gmail.com...>
I have tried the full stemcell
bosh-stemcell-2972-aws-xen-ubuntu-trusty-go_agent.tgz, but it failed: the
error "create stemcell failed: unable to find AKI:" was thrown (please find
details in my first email). And when I tried the "bosh-bootstrap deploy"
command, I got `validate_aws_region': Unknown region: "cn-north-1"
(ArgumentError). It seems cn-north-1 is not supported by the bosh aws plugin.
Any suggestions on this issue? Thanks!
2015-05-19 23:58 GMT+08:00 Wayne E. Seguin <wayneeseguin(a)starkandwayne.com>:
The issue is that there appear to not be any light stemcells in your
region; there is another recent question on the list to this effect. In
order to make progress you might want to build your own stemcell to use for
now, or try to find and download a full AWS HVM stemcell image to upload.
On Mon, May 18, 2015 at 6:12 AM, 支雷 <lzhi3937(a)gmail.com> wrote:
Hello,
I tried to deploy micro bosh in AWS region cn-north-1 in several ways,
but all failed. Any suggestions on how to deploy micro bosh in AWS region
cn-north-1? Thanks!
I created an EC2 instance (Ubuntu) in the cn-north-1 region with a public
IP, ssh'd into it and installed bosh-cli, bosh_cli_plugin_micro and
bosh_cli_plugin_aws. After that I downloaded the stemcell
bosh-stemcell-2972-aws-xen-ubuntu-trusty-go_agent.tgz and tried "bosh
micro deploy ./bosh-stemcell-2972-aws-xen-ubuntu-trusty-go_agent.tgz", which
resulted in "create stemcell failed: getaddrinfo: Name or service not
known:"
I checked the failed URL; it's "ec2.cn-north-1.amazonaws.com", which is
not accessible. I updated http.rb, changed the URL to
"ec2.cn-north-1.amazonaws.com.cn", skipped the SSL validation and tried
again; another error was thrown:
Stemcell info
-------------
Name: bosh-aws-xen-ubuntu-trusty-go_agent
Version: 2972
Started deploy micro bosh
Started deploy micro bosh > Unpacking stemcell. Done (00:00:08)
Started deploy micro bosh > Uploading stemcell
create stemcell failed: unable to find AKI:
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/aki_picker.rb:15:in `pick'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/stemcell_creator.rb:100:in `image_params'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/stemcell_creator.rb:24:in `create'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/cloud.rb:465:in `block in create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_common-1.2972.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/cloud.rb:445:in `create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:228:in `block (2 levels) in create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:85:in `step'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:227:in `block in create_stemcell'
/usr/lib/ruby/1.9.1/tmpdir.rb:83:in `mktmpdir'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:213:in `create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:118:in `create'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:98:in `block in create_deployment'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:92:in `with_lifecycle'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:98:in `create_deployment'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/cli/commands/micro.rb:179:in `perform'
/var/lib/gems/1.9.1/gems/bosh_cli-1.2972.0/lib/cli/command_handler.rb:57:in `run'
/var/lib/gems/1.9.1/gems/bosh_cli-1.2972.0/lib/cli/runner.rb:56:in `run'
/var/lib/gems/1.9.1/gems/bosh_cli-1.2972.0/bin/bosh:16:in `<top (required)>'
/usr/local/bin/bosh:23:in `load'
/usr/local/bin/bosh:23:in `<main>'
After that I installed bosh-bootstrap and executed the following command:
bosh-bootstrap deploy
I selected the AWS provider and region 10 (China (Beijing) Region
(cn-north-1)), and an error was thrown:
Confirming: Using AWS EC2/cn-north-1
/var/lib/gems/1.9.1/gems/fog-aws-0.1.1/lib/fog/aws/region_methods.rb:6:in `validate_aws_region': Unknown region: "cn-north-1" (ArgumentError)
        from /var/lib/gems/1.9.1/gems/fog-aws-0.1.1/lib/fog/aws/compute.rb:482:in `initialize'
        from /var/lib/gems/1.9.1/gems/fog-core-1.30.0/lib/fog/core/service.rb:115:in `new'
        from /var/lib/gems/1.9.1/gems/fog-core-1.30.0/lib/fog/core/service.rb:115:in `new'
        from /var/lib/gems/1.9.1/gems/fog-core-1.30.0/lib/fog/compute.rb:60:in `new'
        from /var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers/clients/aws_provider_client.rb:257:in `setup_fog_connection'
        from /var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers/clients/fog_provider_client.rb:13:in `initialize'
        from /var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers.rb:17:in `new'
        from /var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers.rb:17:in `provider_client'
        from /var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/helpers/provider.rb:6:in `provider_client'
        from /var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/address.rb:41:in `address_cli'
        from /var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/address.rb:56:in `valid_address?'
        from /var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/address.rb:19:in `execute!'
        from /var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/lib/bosh-bootstrap/cli/commands/deploy.rb:41:in `select_or_provision_public_networking'
        from /var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/lib/bosh-bootstrap/cli/commands/deploy.rb:21:in `perform'
        from /var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/lib/bosh-bootstrap/thor_cli.rb:11:in `deploy'
        from /var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor/command.rb:27:in `run'
        from /var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor/invocation.rb:126:in `invoke_command'
        from /var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor.rb:359:in `dispatch'
        from /var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor/base.rb:440:in `start'
        from /var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/bin/bosh-bootstrap:13:in `<top (required)>'
        from /usr/local/bin/bosh-bootstrap:23:in `load'
        from /usr/local/bin/bosh-bootstrap:23:in `<main>'
Re: Create bosh stemcell failed in AWS region cn-north-1
Dr Nic Williams
There are two issues. The first, the "unable to find AKI" error, is failing
inside the AWS SDK for Ruby. The second is that bosh-bootstrap uses a project
"cyoi" (choose your own infrastructure), and underneath it uses "fog" - it's
quite possible that either or both do not yet support China (it's harder to
get accounts to do testing).
BOSH calls into this library here:
https://github.com/cloudfoundry/bosh/blob/develop/bosh_aws_cpi/lib/cloud/aws/aki_picker.rb#L25
We are using aws-sdk (= 1.60.2):
https://github.com/cloudfoundry/bosh/blob/114b3cf107672cfebf444fe7db4703dd804c72cc/Gemfile.lock#L19
The latest version is 2.0.42:
https://rubygems.org/gems/aws-sdk/versions/2.0.42
So perhaps China support was added more recently and we need to bump to a
newer aws-sdk version.
Try bumping this version in the Gemfile of bosh and using that. Avoid
bosh-bootstrap until you've at least confirmed you can get the underlying
bosh_cli to work.
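As a sketch of what that bump could look like in bosh's Gemfile (illustrative
only; note that aws-sdk 2.x renamed the top-level module from AWS to Aws, so
the CPI code may need changes beyond the version pin):

    # Gemfile
    # gem 'aws-sdk', '1.60.2'  # old pin, apparently unaware of cn-north-1
    gem 'aws-sdk', '2.0.42'    # newer release that may know about the region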
--
Dr Nic Williams
Stark & Wayne LLC - consultancy for Cloud Foundry users
http://drnicwilliams.com
http://starkandwayne.com
cell +1 (415) 860-2185
twitter @drnic
Re: BOSH Manifest and directory_uuid
Tammer Saleh
We've had this problem as well, and discussed a couple of features in bosh
that could help:
1. Allow the user to tell the director what its name is, and use that in
the UUID field. And/or...
2. Allow the operator to mark a deployment as "protected," and only
require the UUID field in the deploy manifest when targeting such a
deployment.
Cheers,
Tammer Saleh
Director of Product, Pivotal CF, London
http://pivotal.io | http://tammersaleh.com | +44 7463 939332
Retrieving external ip in .erb template
Stevo Slavić <sslavic at gmail.com...>
Hello Bosh community,
Is it possible to retrieve the external IP in an .erb template?
I'd like to release/deploy Apache Kafka using Bosh, and one of the
properties to configure in Kafka's server.properties is advertised.host.name
- every instance needs to know its external IP, to advertise it to others.
Should something like:
advertised.host.name=<%= spec.networks.marshal_dump.first[1].ip %>
work?
Kind regards,
Stevo Slavic.
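For what it's worth, a sketch of the idea behind that one-liner (assuming
spec.networks is the usual OpenStruct of network name => settings, as in
other BOSH job templates; the variable name here is illustrative):

    <%
      # marshal_dump exposes the OpenStruct's underlying hash; first[1] is
      # the first network's settings struct, which should respond to .ip
      first_network = spec.networks.marshal_dump.first[1]
    %>
    advertised.host.name=<%= first_network.ip %>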
Most bosh director commands fail with a HTTP 500
Scott Taggart <staggart@...>
Hi folks,
One of my three bosh directors has gotten itself stuck in a strange state where most (but not all) operations fail. I have recreated the director with a couple of different stemcells (but the same persistent disk) and the issue persists. It looks like potentially a database issue on the director, but I have done a very quick visual check of a few tables (e.g. vms, deployments) and they seem fine at a glance... not sure what's going on.
Everything CF-related currently/previously under the director is continuing to run fine in this AZ; it's just the director that's lost it:
$ bosh deployments
+---------------------+-----------------------+----------------------------------------------+--------------+
| Name | Release(s) | Stemcell(s) | Cloud Config |
+---------------------+-----------------------+----------------------------------------------+--------------+
| cf-mysql | cf-mysql/19 | bosh-vcloud-esxi-ubuntu-trusty-go_agent/2915 | none |
+---------------------+-----------------------+----------------------------------------------+--------------+
| cf-services-contrib | cf-services-contrib/6 | bosh-vcloud-esxi-ubuntu-trusty-go_agent/2915 | none |
+---------------------+-----------------------+----------------------------------------------+--------------+
| xxxxxxx_cf | cf/208 | bosh-vcloud-esxi-ubuntu-trusty-go_agent/2915 | none |
+---------------------+-----------------------+----------------------------------------------+--------------+
Deployments total: 3
$ bosh releases
+---------------------+----------+-------------+
| Name | Versions | Commit Hash |
+---------------------+----------+-------------+
| cf | 208* | 5d00be54+ |
| cf-mysql | 19* | dfab036b+ |
| cf-services-contrib | 6* | 57fd2098+ |
+---------------------+----------+-------------+
(*) Currently deployed
(+) Uncommitted changes
Releases total: 3
$ bosh locks
No locks
$ bosh tasks
No running tasks
$ bosh vms
Deployment `cf-mysql'
HTTP 500:
$ bosh cloudcheck
Performing cloud check...
Processing deployment manifest
------------------------------
HTTP 500:
The relevant error I get from /var/vcap/sys/log/director/director.debug.log on the director is:
E, [2015-05-25 21:20:15 #1010] [] ERROR -- Director: TypeError - no implicit conversion of nil into String:
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `path'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `block in fu_list'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `map'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `fu_list'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:625:in `rm_r'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:654:in `rm_rf'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/task_remover.rb:9:in `block in remove'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:152:in `block in each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `block (2 levels) in fetch_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:720:in `block in yield_hash_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:714:in `times'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:714:in `yield_hash_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `block in fetch_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:134:in `execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:413:in `_execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `block (2 levels) in execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:425:in `check_database_errors'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `block in execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `block in synchronize'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/connection_pool/threaded.rb:104:in `hold'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `synchronize'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:801:in `execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `fetch_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:152:in `each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/task_remover.rb:8:in `remove'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/task_helper.rb:23:in `create_task'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/job_queue.rb:9:in `enqueue'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/vm_state_manager.rb:5:in `fetch_vm_state'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/controllers/deployments_controller.rb:182:in `block in <class:DeploymentsController>'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1603:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1603:in `block in compile!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `[]'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `block (3 levels) in route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:985:in `route_eval'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `block (2 levels) in route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1006:in `block in process_route'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1004:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1004:in `process_route'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:964:in `block in route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:963:in `each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:963:in `route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1076:in `block in dispatch!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `block in invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1073:in `dispatch!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:898:in `block in call!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `block in invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:898:in `call!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:886:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/xss_header.rb:18:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/path_traversal.rb:16:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/json_csrf.rb:18:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/frame_options.rb:31:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/nulllogger.rb:9:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/head.rb:13:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:180:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:2014:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/urlmap.rb:66:in `block in call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/urlmap.rb:50:in `each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/urlmap.rb:50:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/commonlogger.rb:33:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:217:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:81:in `block in pre_process'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:79:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:79:in `pre_process'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:54:in `process'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:39:in `receive_data'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:187:in `run_machine'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:187:in `run'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/backends/base.rb:63:in `start'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/server.rb:159:in `start'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/bin/bosh-director:37:in `<top (required)>'
/var/vcap/packages/director/bin/bosh-director:16:in `load'
/var/vcap/packages/director/bin/bosh-director:16:in `<main>'
I've wiped my local bosh config, re-targeted the director, and tried running bosh vms without specifying a deployment manifest (i.e. to rule the manifest out) - I still get the same 500.
Any tips appreciated!
Regarding installation of bosh
Bharath
Hi guys,
I am currently working on deploying Cloud Foundry on OpenStack. When I
run the gem install bosh_cli command, it throws an error saying host
not found. I am able to ping all the other websites from my terminal,
like Google and Wikipedia. On the same machine, if I open rubygems.org in a
web browser it works properly.
Unable to understand what the real problem could be.
regards
bharath
Re: Regarding installation of bosh
James Bayer
Can you post the actual terminal output to a gist or something?
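A couple of quick checks can also narrow this down (a sketch, assuming the failure is DNS resolution of the gem source rather than anything gem-specific):

  nslookup rubygems.org        # can the resolver your shell uses find the host?
  gem sources --list           # is the configured gem source what you expect?
  gem install bosh_cli --verbose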
--
Thank you,
James Bayer
Re: Most bosh director commands fail with a HTTP 500
Dmitriy Kalinin
There is currently a bug in the Director's task cleanup: the Director tries to clean up task logs for tasks that do not have an associated directory on disk.
https://www.pivotaltracker.com/story/show/95458780 will fix this.
To repair the Director until we release a bug fix:
- ssh into the Director VM as vcap
- run /var/vcap/jobs/director/bin/director_ctl console, which opens a console to the Director DB
- run Bosh::Director::Models::Task.where(output: nil).update(output: '/tmp/123'), which points tasks without task log directories at a dummy destination; the Director will be happy to run rm -rf /tmp/123 when it cleans up tasks
After that you should be able to run `bosh vms` and other tasks again.
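Put together, the session looks something like this (a sketch of the steps above; the number returned by update is just the count of affected task rows):

  # on the Director VM, logged in as vcap
  $ /var/vcap/jobs/director/bin/director_ctl console
  irb> Bosh::Director::Models::Task.where(output: nil).update(output: '/tmp/123')
  => 17  # tasks that had no task log directory now point at a harmless dummy path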
Re: Retrieving external ip in .erb template
Dmitriy Kalinin
The Concourse release (github.com/concourse/concourse) does something like this:
https://github.com/concourse/concourse/blob/master/jobs/atc/templates/atc_ctl.erb#L18-L39
Your example should work as well.
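For reference, a minimal sketch of that pattern in a job template (advertised.host.name is the Kafka property from the question; spec.networks is the standard BOSH template binding, and marshal_dump exposes its underlying name => settings hash):

  <% first_net = spec.networks.marshal_dump.first[1] %>
  advertised.host.name=<%= first_net.ip %>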
On Fri, May 22, 2015 at 8:16 AM, Stevo Slavić <sslavic(a)gmail.com> wrote:
Hello Bosh community,
Is it possible to retrieve external ip in .erb template?
I'd like to release/deploy Apache Kafka using Bosh, and one of the properties to configure in Kafka server.properties is advertised.host.name; every instance needs to know its external IP, to advertise it to others.
Should something like:
advertised.host.name=<%= spec.networks.marshal_dump.first[1].ip %>
work?
Kind regards,
Stevo Slavic.
Re: Create bosh stemcell failed in AWS region cn-north-1
Dmitriy Kalinin
It seems like this method cannot find appropriate AKIs:
https://github.com/cloudfoundry/bosh/blob/master/bosh_aws_cpi/lib/cloud/aws/aki_picker.rb#L48-L59
I have just requested an account from AWS to access the China region and will try to reproduce the problem.
On Wed, May 20, 2015 at 8:37 PM, Dr Nic Williams <drnicwilliams(a)gmail.com>
wrote:
There are two issues. The second is that bosh-bootstrap uses a project called "cyoi" (choose your own infrastructure), which in turn uses "fog"; it's quite possible that either or both do not yet support China (it's harder to get accounts there to do testing).
The former is failing inside the AWS SDK for Ruby. BOSH calls into the library here:
https://github.com/cloudfoundry/bosh/blob/develop/bosh_aws_cpi/lib/cloud/aws/aki_picker.rb#L25
We are using aws-sdk (= 1.60.2):
https://github.com/cloudfoundry/bosh/blob/114b3cf107672cfebf444fe7db4703dd804c72cc/Gemfile.lock#L19
The latest version is 2.0.42:
https://rubygems.org/gems/aws-sdk/versions/2.0.42
So perhaps China support was added more recently and we need to bump to a newer aws-sdk version. Try bumping this version in the Gemfile of bosh and using that.
Avoid bosh-bootstrap until you've at least confirmed you can get the underlying bosh_cli to work.
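A sketch of that experiment in a bosh source checkout (note that aws-sdk 2.x renamed the top-level namespace from AWS to Aws, so jumping straight to 2.0.42 would likely break bosh_aws_cpi; letting bundler pick the newest 1.x release is the safer first try, and exactly which release added cn-north-1 would need checking against the aws-sdk changelog):

  # Gemfile: relax the pin from (= 1.60.2) to the 1.x series
  gem 'aws-sdk', '~> 1.60'

  $ bundle update aws-sdk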
On Wed, May 20, 2015 at 8:17 PM, 支雷 <lzhi3937(a)gmail.com> wrote:
I have tried the full stemcell bosh-stemcell-2972-aws-xen-ubuntu-trusty-go_agent.tgz, but it failed: the error "create stemcell failed: unable to find AKI:" was thrown (please find details in my first email). And when I tried the "bosh-bootstrap deploy" command, I got `validate_aws_region': Unknown region: "cn-north-1" (ArgumentError). It seems cn-north-1 is not supported by the bosh aws plugin. Any suggestions on this issue? Thanks!
2015-05-19 23:58 GMT+08:00 Wayne E. Seguin <wayneeseguin(a)starkandwayne.com>:
The issue is that there appear to be no light stemcells in your region; there is another recent question on the list to this effect. In order to make progress you might want to build your own stemcell to use for now, or try to find and download a full AWS HVM stemcell image to upload.
On Mon, May 18, 2015 at 6:12 AM, 支雷 <lzhi3937(a)gmail.com> wrote:
Hello,
I tried to deploy micro bosh in AWS region cn-north-1 in several ways, but all failed. Any suggestions on how to deploy micro bosh in AWS region cn-north-1? Thanks!
I created an EC2 instance (ubuntu) in the cn-north-1 region with a public IP, ssh'd into it, and installed bosh-cli, bosh_cli_plugin_micro and bosh_cli_plugin_aws. After that I downloaded the stemcell bosh-stemcell-2972-aws-xen-ubuntu-trusty-go_agent.tgz and tried "bosh micro deploy ./bosh-stemcell-2972-aws-xen-ubuntu-trusty-go_agent.tgz", which resulted in "create stemcell failed: getaddrinfo: Name or service not known:"
I checked the failing URL; it is "ec2.cn-north-1.amazonaws.com", which is not accessible. I updated http.rb to change the URL to "ec2.cn-north-1.amazonaws.com.cn", skipped the SSL validation, and tried again; another error was thrown:
Stemcell info
-------------
Name: bosh-aws-xen-ubuntu-trusty-go_agent
Version: 2972
Started deploy micro bosh
Started deploy micro bosh > Unpacking stemcell. Done (00:00:08)
Started deploy micro bosh > Uploading stemcell"
create stemcell failed: unable to find AKI:
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/aki_picker.rb:15:in `pick'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/stemcell_creator.rb:100:in `image_params'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/stemcell_creator.rb:24:in `create'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/cloud.rb:465:in `block in create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_common-1.2972.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/cloud.rb:445:in `create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:228:in `block (2 levels) in create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:85:in `step'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:227:in `block in create_stemcell'
/usr/lib/ruby/1.9.1/tmpdir.rb:83:in `mktmpdir'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:213:in `create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:118:in `create'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:98:in `block in create_deployment'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:92:in `with_lifecycle'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:98:in `create_deployment'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/cli/commands/micro.rb:179:in `perform'
/var/lib/gems/1.9.1/gems/bosh_cli-1.2972.0/lib/cli/command_handler.rb:57:in `run'
/var/lib/gems/1.9.1/gems/bosh_cli-1.2972.0/lib/cli/runner.rb:56:in `run'
/var/lib/gems/1.9.1/gems/bosh_cli-1.2972.0/bin/bosh:16:in `<top (required)>'
/usr/local/bin/bosh:23:in `load'
/usr/local/bin/bosh:23:in `<main>'
After that I installed bosh-bootstrap and executed the following command:
bosh-bootstrap deploy
I selected the AWS provider and region 10 (China (Beijing) Region (cn-north-1)), and an error was thrown:
Confirming: Using AWS EC2/cn-north-1
/var/lib/gems/1.9.1/gems/fog-aws-0.1.1/lib/fog/aws/region_methods.rb:6:in `validate_aws_region': Unknown region: "cn-north-1" (ArgumentError)
from /var/lib/gems/1.9.1/gems/fog-aws-0.1.1/lib/fog/aws/compute.rb:482:in `initialize'
from /var/lib/gems/1.9.1/gems/fog-core-1.30.0/lib/fog/core/service.rb:115:in `new'
from /var/lib/gems/1.9.1/gems/fog-core-1.30.0/lib/fog/core/service.rb:115:in `new'
from /var/lib/gems/1.9.1/gems/fog-core-1.30.0/lib/fog/compute.rb:60:in `new'
from /var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers/clients/aws_provider_client.rb:257:in `setup_fog_connection'
from /var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers/clients/fog_provider_client.rb:13:in `initialize'
from /var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers.rb:17:in `new'
from /var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers.rb:17:in `provider_client'
from /var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/helpers/provider.rb:6:in `provider_client'
from /var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/address.rb:41:in `address_cli'
from /var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/address.rb:56:in `valid_address?'
from /var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/address.rb:19:in `execute!'
from /var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/lib/bosh-bootstrap/cli/commands/deploy.rb:41:in `select_or_provision_public_networking'
from /var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/lib/bosh-bootstrap/cli/commands/deploy.rb:21:in `perform'
from /var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/lib/bosh-bootstrap/thor_cli.rb:11:in `deploy'
from /var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor/command.rb:27:in `run'
from /var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor/invocation.rb:126:in `invoke_command'
from /var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor.rb:359:in `dispatch'
from /var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor/base.rb:440:in `start'
from /var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/bin/bosh-bootstrap:13:in `<top (required)>'
from /usr/local/bin/bosh-bootstrap:23:in `load'
from /usr/local/bin/bosh-bootstrap:23:in `<main>'
--
Dr Nic Williams
Stark & Wayne LLC - consultancy for Cloud Foundry users
http://drnicwilliams.com
http://starkandwayne.com
cell +1 (415) 860-2185
twitter @drnic
Re: Changeing IP Addresses of containers
Dmitriy Kalinin
Sorry for the late response. Typically the following error message (Creating container: network already acquired) is displayed when a container was not properly deleted from a previous deployment. Currently there is no easy way to delete containers without accessing the garden HTTP API.
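In case it helps, a rough sketch of what that looks like from inside the VM (this assumes garden is listening on its usual bosh-lite port 7777 and exposes list/delete endpoints; please verify against the garden API docs for your version):

  curl http://127.0.0.1:7777/containers                    # list container handles
  curl -X DELETE http://127.0.0.1:7777/containers/HANDLE   # destroy one, freeing its subnet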
On Sun, May 17, 2015 at 10:40 AM, Hildebrandt Andre <myself(a)andrejagusch.de>
wrote:
I want to run two Cloud Foundry instances on my notebook to try something out. I have a problem setting the IP addresses to a range in one of them, so that they can be resolved from the host machine (i.e., the notebook running the two different VMs that run the warden containers).
Each VM has its own IP: machine A has 192.168.50.4, machine B has 192.168.59.4. I have created the following routes on the notebook:
10.244/20 192.168.59.4 UGSc 1 0 vboxnet
10.246/20 192.168.50.4 UGSc 0 4 vboxnet
I have changed the deployment manifest of machine A so it should only use
IP addresses starting with 10.245. I then run:
bosh deploy --redact-diff
This fails with the following error:
Started creating bound missing vms > router_z1/0. Failed: Creating VM
with agent ID '35f449ee-80f9-4d04-b165-78c9aa9237c9': Creating container:
network already acquired: 10.246.0.32/30 (00:00:00)
Error 100: Creating VM with agent ID
'35f449ee-80f9-4d04-b165-78c9aa9237c9': Creating container: network already
acquired: 10.246.0.32/30
I do not understand that error message and there is no VM or container
that uses that IP address on my machine. Any hint as to where I’m going
wrong or what I could try out to get this going would be greatly
appreciated.
Best Regards,
André
Re: Retrieving external ip in .erb template
Stevo Slavić <sslavic at gmail.com...>
Thanks for the reply, Dmitriy!
I need a script that gives me the static VIP of the instance, one which does not change (without changes to the deployment manifest). I tried this and found that spec.networks.marshal_dump.first[1].ip gives me the elastic IP, not the static VIP of the instance.
I'm worried because templates only get applied during "bosh deploy". If events other than "bosh deploy" can change the elastic IP without templates being re-evaluated, then the advertised IP in the configuration file will become stale, out of sync with the VM's actual elastic IP; Kafka will not be reachable through the advertised IP, although it will be accessible through the static VIP, resulting in all sorts of failures.
So, is it possible for the elastic IP to change without templates being re-evaluated? If yes, then I need a way to determine the instance's static VIP in the configuration file template.
Kind regards,
Stevo Slavic.
On Wed, May 27, 2015 at 3:02 AM, Dmitriy Kalinin <dkalinin(a)pivotal.io>
wrote:
Re: Retrieving external ip in .erb template
Gwenn Etourneau
Question: normally your Kafka node will bind to all interfaces, so something like <%= spec.networks.yournetworks.ip %> should work, no?
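If the instance has both a private network and a vip (elastic IP) network, one way to prefer the static address is to scan for the vip entry. A hedged ERB sketch (this assumes each network entry is an OpenStruct and that vip networks carry type == 'vip' in the instance spec, which may vary by BOSH version):

  <%
    nets = spec.networks.marshal_dump
    vip  = nets.values.find { |n| n.respond_to?(:type) && n.type.to_s == 'vip' }
    ip   = (vip || nets.values.first).ip
  %>
  advertised.host.name=<%= ip %>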
can errand job be co-located with normal(service) jobs in one VM deployment?
Tina
Hi,
I have a use case where there are 4 jobs that I'd like to deploy on the same VM: 3 of them are normal (service) jobs that will be monitored, and 1 is not a normal job; I'd like to run it manually, and it needs no monitoring.
I am thinking of using an errand job, but I don't know how to deploy an errand job together with normal jobs on the same VM. Is it possible? If so, can you let me know, or send me a sample YAML file?
Thanks!
Tina
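For reference, a minimal sketch of how an errand is declared in a deployment manifest of this era; note that it is defined as its own job on its own VM, and whether it can instead share a VM with service jobs is exactly the open question here (names are illustrative):

  jobs:
  - name: my-errand
    templates:
    - {name: my-errand, release: my-release}
    lifecycle: errand
    instances: 1
    resource_pool: small
    networks:
    - name: default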
Multi-AZ CF Deployment in Openstack
ryunata <ricky.yunata@...>
I tried to deploy Cloud Foundry across multiple availability zones on OpenStack infrastructure. I have defined the zones under meta, but it seems that CF was deployed according to the weights of my availability zones in OpenStack, not the zones I specified in the manifest file. How can I configure CF so that it is deployed to the zones I assigned? This is what I've set in my manifest file:
director_uuid: DIRECTOR_UUID
meta:
zones:
z1: zone_1
z2: zone_2
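For what it's worth, defining meta.zones by itself typically has no effect unless the resource pools reference those values; a hedged sketch of the usual spiff-style wiring in cf-release templates (pool names are illustrative, and the cloud_properties key for the OpenStack CPI is availability_zone):

  resource_pools:
  - name: runner_z1
    cloud_properties:
      availability_zone: (( meta.zones.z1 ))
  - name: runner_z2
    cloud_properties:
      availability_zone: (( meta.zones.z2 ))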
--
View this message in context: http://cf-bosh.70367.x6.nabble.com/Multi-AZ-CF-Deployment-in-Openstack-tp64.html
Sent from the CF BOSH mailing list archive at Nabble.com.