BOSH Manifest and directory_uuid
Alberto A. Flores
Team,
After reviewing the bosh.io site and other mailing lists, I can't find a good reason why the manifest YAML file needs to have the director_uuid in it. According to the docs found here: http://bosh.io/docs/deployment-manifest.html#deployment it is a "required" field. Is this so? Keeping manifests in version control will create a dependency on a single running Director. Is this by design? Just looking for clarification more than a rant... :)
Re: BOSH Manifest and directory_uuid
Dmitriy Kalinin
It's a safety feature so that people do not accidentally deploy a deployment with the same name to the wrong environment. For example, if you have staging and prod environments and both have a cf deployment.
On Wed, May 20, 2015 at 10:30 AM, albertoaflores <aaflores(a)gmail.com> wrote:
Team,
Re: BOSH Manifest and directory_uuid
Alberto A. Flores
Thanks Dmitriy, so if I “version control” a manifest, I’d have to keep a different version for “prod” and another one for “staging”? Is there a way to override it on the command line?
-- Alberto Flores Twitter: @albertoaflores
Re: BOSH Manifest and directory_uuid
Dr Nic Williams
Try this idea https://github.com/concourse/concourse/blob/master/manifests/bosh-lite.yml#L4
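The linked line, at the time of this thread, avoided hard-coding the UUID by shelling out from ERB, so the value resolves against whichever Director you currently target. A minimal sketch of the idea (the surrounding fields are illustrative):

    ---
    name: concourse
    # evaluated when the Ruby bosh CLI loads the manifest at deploy time
    director_uuid: <%= `bosh status --uuid` %>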
On Wed, May 20, 2015 at 2:25 PM, Alberto Flores <aaflores(a)gmail.com> wrote:
Thanks Dmitriy,
Re: BOSH Manifest and directory_uuid
Alberto A. Flores
Awesome idea, Dr. Nic..! That works!
I appreciate the feedback!
-- Alberto Flores Twitter: @albertoaflores
Re: Create bosh stemcell failed in AWS region cn-north-1
支雷 <lzhi3937 at gmail.com...>
I have tried the full stemcell bosh-stemcell-2972-aws-xen-ubuntu-trusty-go_agent.tgz, but it failed; the error "create stemcell failed: unable to find AKI:" was thrown (please find the details in my first email). And when I tried the "bosh-bootstrap deploy" command, I got `validate_aws_region': Unknown region: "cn-north-1" (ArgumentError). It seems cn-north-1 is not supported by the bosh aws plugin. Any suggestions on this issue? Thanks!
2015-05-19 23:58 GMT+08:00 Wayne E. Seguin <wayneeseguin(a)starkandwayne.com>:
The issue is that there appear to not be any light stemcells in your
Re: Create bosh stemcell failed in AWS region cn-north-1
Dr Nic Williams
There are two issues. The second is that bosh-bootstrap uses a project called "cyoi" (choose your own infrastructure), which underneath uses "fog"; it's quite possible that either or both do not yet support China (it's harder to get accounts to do testing).
The former is failing inside the AWS SDK for Ruby. BOSH calls into this library here: https://github.com/cloudfoundry/bosh/blob/develop/bosh_aws_cpi/lib/cloud/aws/aki_picker.rb#L25
We are using aws-sdk (= 1.60.2): https://github.com/cloudfoundry/bosh/blob/114b3cf107672cfebf444fe7db4703dd804c72cc/Gemfile.lock#L19
The latest version is 2.0.42: https://rubygems.org/gems/aws-sdk/versions/2.0.42
So perhaps China support was added more recently and we need to bump to a newer aws-sdk version. Try bumping this version in the Gemfile of bosh and using that. Avoid bosh-bootstrap until you've at least confirmed you can get the underlying bosh_cli to work.
On Wed, May 20, 2015 at 8:17 PM, 支雷 <lzhi3937(a)gmail.com> wrote:
I have tried full stemcell
--
Dr Nic Williams
Stark & Wayne LLC - consultancy for Cloud Foundry users
http://drnicwilliams.com
http://starkandwayne.com
cell +1 (415) 860-2185
twitter @drnic
Re: BOSH Manifest and directory_uuid
Tammer Saleh
We've had this problem as well, and discussed a couple of features in bosh that could help:
1. Allow the user to tell the director what its name is, and use that in the UUID field.
and/or...
2. Allow the operator to mark a deployment as "protected," and only require the UUID field in the deploy manifest when targeting such a deployment.
Cheers,
Tammer Saleh
Director of Product, Pivotal CF, London
http://pivotal.io | http://tammersaleh.com | +44 7463 939332
On Wed, May 20, 2015 at 10:31 PM, Alberto Flores <aaflores(a)gmail.com> wrote:
Awesome idea, Dr. Nic..! That works!
Retrieving external ip in .erb template
Stevo Slavić <sslavic at gmail.com...>
Hello Bosh community,
Is it possible to retrieve the external IP in an .erb template? I'd like to release/deploy Apache Kafka using BOSH, and one of the properties to configure in Kafka's server.properties is advertised.host.name: every instance needs to know its external IP, to advertise it to others. Should something like advertised.host.name=<%= spec.networks.marshal_dump.first[1].ip %> work?
Kind regards,
Stevo Slavic.
Most bosh director commands fail with a HTTP 500
Scott Taggart <staggart@...>
Hi folks,
One of my three bosh directors has gotten itself stuck in a strange state where most (but not all) operations fail. I have recreated the director with a couple of different stemcells (but the same persistent disk) and the issue persists. It looks like potentially a database issue on the director, but I have done a very quick visual check of a few tables (e.g. vms, deployments) and they seem fine at a glance... not sure what's going on. Everything CF-related currently/previously under the director is continuing to run fine in this AZ; it's just the director that's lost it:

$ bosh deployments
+---------------------+-----------------------+----------------------------------------------+--------------+
| Name                | Release(s)            | Stemcell(s)                                  | Cloud Config |
+---------------------+-----------------------+----------------------------------------------+--------------+
| cf-mysql            | cf-mysql/19           | bosh-vcloud-esxi-ubuntu-trusty-go_agent/2915 | none         |
+---------------------+-----------------------+----------------------------------------------+--------------+
| cf-services-contrib | cf-services-contrib/6 | bosh-vcloud-esxi-ubuntu-trusty-go_agent/2915 | none         |
+---------------------+-----------------------+----------------------------------------------+--------------+
| xxxxxxx_cf          | cf/208                | bosh-vcloud-esxi-ubuntu-trusty-go_agent/2915 | none         |
+---------------------+-----------------------+----------------------------------------------+--------------+
Deployments total: 3

$ bosh releases
+---------------------+----------+-------------+
| Name                | Versions | Commit Hash |
+---------------------+----------+-------------+
| cf                  | 208*     | 5d00be54+   |
| cf-mysql            | 19*      | dfab036b+   |
| cf-services-contrib | 6*       | 57fd2098+   |
+---------------------+----------+-------------+
(*) Currently deployed
(+) Uncommitted changes
Releases total: 3

$ bosh locks
No locks

$ bosh tasks
No running tasks

$ bosh vms
Deployment `cf-mysql'
HTTP 500:

$ bosh cloudcheck
Performing cloud check...
Processing deployment manifest
------------------------------
HTTP 500:

The relevant error I get from /var/vcap/sys/log/director/director.debug.log on the director is:

E, [2015-05-25 21:20:15 #1010] [] ERROR -- Director: TypeError - no implicit conversion of nil into String:
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `path'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `block in fu_list'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `map'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `fu_list'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:625:in `rm_r'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:654:in `rm_rf'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/task_remover.rb:9:in `block in remove'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:152:in `block in each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `block (2 levels) in fetch_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:720:in `block in yield_hash_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:714:in `times'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:714:in `yield_hash_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `block in fetch_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:134:in `execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:413:in `_execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `block (2 levels) in execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:425:in `check_database_errors'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `block in execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `block in synchronize'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/connection_pool/threaded.rb:104:in `hold'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `synchronize'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:801:in `execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `fetch_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:152:in `each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/task_remover.rb:8:in `remove'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/task_helper.rb:23:in `create_task'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/job_queue.rb:9:in `enqueue'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/vm_state_manager.rb:5:in `fetch_vm_state'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/controllers/deployments_controller.rb:182:in `block in <class:DeploymentsController>'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1603:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1603:in `block in compile!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `[]'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `block (3 levels) in route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:985:in `route_eval'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `block (2 levels) in route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1006:in `block in process_route'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1004:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1004:in `process_route'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:964:in `block in route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:963:in `each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:963:in `route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1076:in `block in dispatch!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `block in invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1073:in `dispatch!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:898:in `block in call!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `block in invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:898:in `call!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:886:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/xss_header.rb:18:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/path_traversal.rb:16:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/json_csrf.rb:18:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/frame_options.rb:31:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/nulllogger.rb:9:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/head.rb:13:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:180:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:2014:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/urlmap.rb:66:in `block in call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/urlmap.rb:50:in `each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/urlmap.rb:50:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/commonlogger.rb:33:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:217:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:81:in `block in pre_process'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:79:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:79:in `pre_process'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:54:in `process'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:39:in `receive_data'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:187:in `run_machine'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:187:in `run'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/backends/base.rb:63:in `start'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/server.rb:159:in `start'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/bin/bosh-director:37:in `<top (required)>'
/var/vcap/packages/director/bin/bosh-director:16:in `load'
/var/vcap/packages/director/bin/bosh-director:16:in `<main>'

I've wiped my local bosh config, re-targeted the director, and tried running bosh vms without specifying a deployment manifest (i.e. to rule the manifest out); I still get the same 500.
Any tips appreciated!
Regarding installation of bosh
Bharath
Hi guys,
I am right now working on deploying Cloud Foundry on OpenStack. When I run the gem install bosh_cli command, it throws an error saying the host was not found. I am able to ping other websites from my terminal, such as Google and Wikipedia, and on the same machine, if I open rubygems.org in a web browser, it works properly. Unable to understand what the real problem could be.
regards
bharath
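A few generic checks that usually narrow this kind of failure down (commands illustrative, not from the thread):

    nslookup rubygems.org          # does DNS resolve from this shell?
    curl -I https://rubygems.org   # is the gem host reachable at all?
    gem sources --list             # is https://rubygems.org/ the configured source?
    gem install bosh_cli --no-ri --no-rdoc

If the browser works but the terminal does not, a proxy configured only in the browser (or http_proxy/https_proxy variables missing from the shell) is a common cause.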
Re: Regarding installation of bosh
James Bayer
Can you post the actual terminal output to a gist or something?
On Tue, May 26, 2015 at 12:45 AM, Bharath Posa <bharathp(a)vedams.com> wrote:
Hi guys, --
Thank you, James Bayer
Re: Most bosh director commands fail with a HTTP 500
Dmitriy Kalinin
There currently exists a problem in the Director during task cleanup: the Director tries to clean up task logs for tasks that do not have an associated directory on disk. https://www.pivotaltracker.com/story/show/95458780 will fix this.
To fix the Director until we release a bug fix:
- ssh as vcap into the Director VM
- run /var/vcap/jobs/director/bin/director_ctl console (opens a console to the Director DB)
- run Bosh::Director::Models::Task.where(output: nil).update(output: '/tmp/123') (updates tasks without task log directories to a dummy destination; the Director will be happy to run rm -rf /tmp/123 when it cleans up tasks)
After that you should be able to run `bosh vms` and other tasks again.
On Mon, May 25, 2015 at 2:27 PM, Scott Taggart <staggart(a)skyscapecloud.com> wrote:
Hi folks,
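Spelled out as a console session, the workaround looks roughly like this (the prompts and the returned row count are illustrative):

    vcap@director:~$ /var/vcap/jobs/director/bin/director_ctl console
    irb> Bosh::Director::Models::Task.where(output: nil).update(output: '/tmp/123')
    => 3   # Sequel's update returns the number of rows touched; yours will differ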
Re: Retrieving external ip in .erb template
Dmitriy Kalinin
The Concourse release (github.com/concourse/concourse) does something like this: https://github.com/concourse/concourse/blob/master/jobs/atc/templates/atc_ctl.erb#L18-L39
Your example should work also.
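The pattern of "take the first network that actually has an IP" can be sketched in ERB like this (a sketch only; the variable names are illustrative, not copied from atc_ctl.erb):

    <%
      # spec.networks is an OpenStruct keyed by network name; dump it to a
      # plain Hash and take the first entry that has an ip set.
      external_ip = nil
      spec.networks.marshal_dump.each do |_name, network|
        if network.respond_to?(:ip) && network.ip
          external_ip = network.ip
          break
        end
      end
    %>
    advertised.host.name=<%= external_ip %>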
On Fri, May 22, 2015 at 8:16 AM, Stevo Slavić <sslavic(a)gmail.com> wrote:
Hello Bosh community,
Re: Create bosh stemcell failed in AWS region cn-north-1
Dmitriy Kalinin
It seems like this method cannot find appropriate AKIs: https://github.com/cloudfoundry/bosh/blob/master/bosh_aws_cpi/lib/cloud/aws/aki_picker.rb#L48-L59
I just requested an account from AWS to access the China region and will try to reproduce the problem.
On Wed, May 20, 2015 at 8:37 PM, Dr Nic Williams <drnicwilliams(a)gmail.com> wrote:
There are two issues - the second is that bosh-bootstrap uses a project
Re: Changing IP Addresses of containers
Dmitriy Kalinin
Sorry for the late response. Typically the following error message (Creating container: network already acquired) is displayed when a container was not properly deleted from a previous deployment. Currently there is no easy way to delete containers without accessing the Garden HTTP API.
On Sun, May 17, 2015 at 10:40 AM, Hildebrandt Andre <myself(a)andrejagusch.de> wrote:
I want to run two cloud foundry instances on my notebook to try something
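For anyone who does need to clear a stale container by hand, garden on a stock bosh-lite listened on 127.0.0.1:7777; a hypothetical session (endpoints assumed from that era's garden-linux REST API, so verify against your version):

    curl http://127.0.0.1:7777/containers                    # list container handles
    curl -X DELETE http://127.0.0.1:7777/containers/<handle> # destroy one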
Re: Retrieving external ip in .erb template
Stevo Slavić <sslavic at gmail.com...>
Thanks for reply Dmitriy!
I needed a script that gives me the static VIP of the instance, one which does not change (without changes to the deployment manifest). I tried and found out that spec.networks.marshal_dump.first[1].ip gives me the elastic IP, not the static VIP of the instance.
I'm worried: if templates get applied only during "bosh deploy", and events other than "bosh deploy" are possible where the elastic IP can change without templates being reevaluated, then the advertised IP in the configuration file will be stale/outdated, out of sync with the actual elastic IP of the VM, and Kafka will not be reachable through the advertised IP (though it will be accessible through the static VIP), resulting in all sorts of failures.
So, is it possible that the elastic IP changes without templates being reevaluated? If yes, then I need a way to determine the instance's static VIP in the configuration file template.
Kind regards,
Stevo Slavic.
On Wed, May 27, 2015 at 3:02 AM, Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote:
Concourse release (github.com/concourse/concourse) does something like
Re: Retrieving external ip in .erb template
Gwenn Etourneau
Question: normally your Kafka node will bind to all interfaces, so something like <%= spec.networks.yournetworks.ip %> should work, no?
On Wed, May 27, 2015 at 3:49 PM, Stevo Slavić <sslavic(a)gmail.com> wrote:
Thanks for reply Dmitriy!
can errand job be co-located with normal(service) jobs in one VM deployment?
Tina
Hi,
I have a use case where there are 4 jobs that I'd like to deploy on the same VM: 3 of them are normal jobs (service jobs) that will be monitored, and 1 is not a normal job; I'd like to run it manually, and no monitoring of it is needed. I am thinking of using an errand job, but I don't know how to deploy an errand job with normal jobs on the same VM. Is it possible? If so, can you let me know or send me a sample yaml file?
Thanks!
Tina
Multi-AZ CF Deployment in Openstack
ryunata <ricky.yunata@...>
I tried to deploy Cloud Foundry across multiple availability zones using OpenStack infrastructure. I have defined the zones under meta; however, it seems that CF was deployed according to the weight of my availability zones in OpenStack and not based on the zones I specified in the manifest file. How can I configure CF so that it is deployed to the zones I assigned? This is what I've set in my manifest file:

director_uuid: DIRECTOR_UUID
meta:
  zones:
    z1: zone_1
    z2: zone_2
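For context, zones declared under meta only take effect where something consumes them; in the spiff-based cf-release templates of that era the resource pools referenced them roughly like this (a sketch; pool names and paths are illustrative):

    resource_pools:
    - name: small_z1
      cloud_properties:
        availability_zone: (( meta.zones.z1 ))
    - name: small_z2
      cloud_properties:
        availability_zone: (( meta.zones.z2 ))

If a pool's cloud_properties never sets availability_zone, OpenStack's scheduler places the VM on its own.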