Re: Migrate some deployments from one bosh to another
Dr Nic Williams
I guess technically a deployment is just a bunch of rows in the bosh Postgres database; if you move them to another bosh, then that bosh will think it owns those VMs.
At the same time, initially the VMs will still be listening to the original bosh NATS & blobstore. But perhaps on the new bosh you do a recreate of all VMs in the deployment and it will replace all the settings on each VM so that they call home to the new bosh. Somewhere along the way you'd remove the same rows from the old bosh director's Postgres DB.

The new bosh would need the blobs - the releases and compiled packages - but I guess they could be recreated if you did a "bosh deploy" rather than a "bosh recreate".

Conceptually it should be possible - it's just rows in a database and some blobs; and some VMs who need an attitude adjustment about who's the boss.

Dr Nic

On Mon, Feb 27, 2017 at 7:25 PM +1000, "Grifalconi, Michael" <michael.grifalconi(a)sap.com> wrote:
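The idea above could be sketched as a sequence of commands. Everything here is hypothetical: the actual director schema (table names like `deployments`, `instances`, `vms`) and their foreign-key relationships vary by BOSH version, so treat this as a conceptual outline rather than a recipe:

```shell
# --- Conceptual sketch only; table names, hosts, and flags are illustrative ---

# 1. On the old director: export the rows that describe the deployment.
#    (The real schema has many linked tables; the -t flags are examples.)
pg_dump -h OLD_DIRECTOR_DB -U postgres bosh \
  --data-only -t deployments -t instances -t vms > my-deployment-rows.sql

# 2. On the new director: import those rows into its database.
psql -h NEW_DIRECTOR_DB -U postgres bosh < my-deployment-rows.sql

# 3. Upload the same releases and stemcells so the new director can
#    rebuild its blobs, then redeploy with --recreate so each agent is
#    repointed at the new director's NATS and blobstore.
bosh deployment my-deployment.yml
bosh deploy --recreate

# 4. Finally, delete the same rows from the old director's database so it
#    no longer believes it owns those VMs.
```

Step 3 uses the v1 CLI syntax current at the time of the thread; the key point is the recreate, which rewrites each VM's agent settings.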
Migrate some deployments from one bosh to another
Grifalconi, Michael
Hello all,
I would just bring to your attention the discussion I opened on GitHub: https://github.com/cloudfoundry/bosh/issues/1601

I am looking for the best way to achieve that, or the most valid reason why I should not even try it :)

Thanks and regards,
Michael
Re: Documentation Update AWS
David Sabeti
Hey Leandro,
Thanks for this feedback. One of the things the CF team has been working on is better tooling around deploying Cloud Foundry, including IaaS setup. A lot of this work is still in flight, which is why the CF docs don't have any information about it yet; when the work is complete, there will definitely be an overhaul of the docs to explain how to use the new tools.

In your outline, I think I'm noticing the following concerns:

1. How do I set up an AWS account so that I can deploy a CF? What are the requirements (for example, quotas) for my account?
2. Can we default to using the most up-to-date instances?
3. Once my account is set up, how do I deploy CF (with Diego)?
4. How do I understand what I'm deploying and how I can modify it on my own?

Let me know if I've understood your questions correctly; I'll make sure that we take this feedback into account when we build the docs for the new tools. In the meantime, I'm happy to point people to the new tools, provided that I emphasize that *these tools are not ready for production and are still undergoing active development. You should use these at your own risk.*

- To set up IaaS for CF: Take a look at "bbl" (short for bosh-bootloader <https://github.com/cloudfoundry/bosh-bootloader>). This is a tool that does all the work to get you to a BOSH director -- it creates IaaS resources like VPCs, NAT boxes, etc., and then creates a BOSH director that you can target.
- To deploy CF: Take a look at cf-deployment <https://github.com/cloudfoundry/cf-deployment>. It contains a BOSH manifest for deploying Cloud Foundry (with Diego!). It also uses new BOSH features to make manifest generation a good deal simpler. This will be the future for how to deploy CF when we deprecate cf-release.

Again, these aren't ready for prime time yet, but they should be good for development, testing, and experimentation -- not to mention that we'd love feedback on this from the community. Feel free to follow up if you have any questions.
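For anyone who wants to peek ahead, the basic flow with the two tools looks roughly like the sketch below. Command names and flags are as of early 2017 and likely to change while the tools are under active development; the environment alias "my-env" and the system_domain value are placeholders, and cf-deployment also expects a vars store for generated credentials (see its README):

```shell
# Stand up IaaS resources plus a BOSH director on AWS.
# AWS credentials are supplied via environment variables or flags;
# the exact invocation depends on the bbl version -- see the bbl README.
bbl up

# Deploy Cloud Foundry from the cf-deployment manifest using the new
# (v2) BOSH CLI that cf-deployment is built around:
git clone https://github.com/cloudfoundry/cf-deployment
bosh -e my-env -d cf deploy cf-deployment/cf-deployment.yml \
  -v system_domain=example.com
```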
David Sabeti
Product Manager, CF Release Integration Team

On Thu, Feb 23, 2017 at 12:04 AM Leandro David Cacciagioni <leandro.21.2008(a)gmail.com> wrote:
AWS-Equivalent VM Types on GCP
Daniel Jones
Hi all,
Some of the team went through the rather tedious task of creating a cloud-config that defines VM types on GCP that are equivalent to named AWS instance types. To save anyone else having to spend an hour wearing out their CMD, F, C, and V keys, we figured we'd share it: https://github.com/EngineerBetter/aws-equivalent-gcp-cloud-config

Regards,
Daniel Jones - CTO
+44 (0)79 8000 9153
@DanielJonesEB <https://twitter.com/DanielJonesEB>
*EngineerBetter* Ltd <http://www.engineerbetter.com> - UK Cloud Foundry Specialists
Re: 3363.x warden stemcell ssh problem
Dmitriy Kalinin
3363.9 warden stemcell fixes the problem.
On Fri, Feb 17, 2017 at 4:49 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote:
Documentation Update AWS
Leandro David Cacciagioni
Guys,
I have had a few issues over the last weeks deploying Cloud Foundry to AWS, and I think it would be nice to:

- Update the docs so that it's possible to deploy without using the bosh aws gem (it's broken, at least with ruby 2.3.3/rbenv on Fedora Linux 25). It's also not nice that this gem deletes everything when you try to tear down what it created: it deletes not only what it created, but everything else in the account if possible.
- Update the docs to specify the minimum AWS quotas required for a minimal HA deployment.
- Update the stub sample to use m4 instances, since these have been in AWS for quite some time and are already available in most AWS regions.
- Beyond AWS, a detailed step-by-step guide to deploying CF with Diego enabled would be nice, since for those who have never touched CF it is a little difficult to understand all the moving parts. In my case I gave more than a week of training, at least one hour per day, before people caught on to the concepts and understood the basics of how CF works (forget about having them modify a bosh deployment to match a Diego deployment if that's not in the docs).

Hope this helps us all.

Cheers,
Leandro.-
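On the quota point above: one way to inspect the current per-region EC2 on-demand instance limit from the command line, assuming a configured aws CLI (this reports the legacy account-level attribute, which is what mattered for sizing an HA deployment at the time):

```shell
# Print the account's maximum number of on-demand EC2 instances
# for the currently configured region.
aws ec2 describe-account-attributes --attribute-names max-instances
```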
[IMPORTANT] 3363.x azure stemcell may cause data loss on persistent disks
Dmitriy Kalinin
hey all,
DO NOT USE 3363.x *azure* stemcells for upgrades until we ship a new 3363.x stemcell. it contains an agent that will try to revert your partitioned disk back to an older version of the partitioner, which unfortunately will corrupt the data on disk.

sorry for the inconvenience,
dmitriy
Re: smoke_tests errand job [fail]
Tomasz Kapek
Thanks Eric, you are right. I have added the proper entries into my stub and it works ;)
Re: smoke_tests errand job [fail]
Eric Malm <emalm@...>
Hi, Tomasz,
Looks like you need to supply some properties for the smoke-tests errand in your CF deployment manifest. The spec listing the relevant properties for the smoke-tests job template is at https://github.com/cloudfoundry/cf-release/blob/v252/jobs/smoke-tests/spec. I don't believe the manifest-generation script and associated templates in the cf-release repo automatically populate any of those properties, so if you're using that script you likely need to supply them explicitly in a stub file.

Best,
Eric

On Mon, Feb 20, 2017 at 2:41 AM, Tomasz Kapek <kapekto1(a)gmail.com> wrote:
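Following Eric's suggestion, a property stub might look something like the fragment below. The property names should be checked against the smoke-tests job spec linked above for your cf-release version, and every value here is a placeholder:

```yaml
properties:
  smoke_tests:
    api: https://api.example.com
    apps_domain: example.com
    user: smoke-tests-user
    password: some-password
    org: SMOKE_TEST_ORG
    space: SMOKE_TEST_SPACE
    skip_ssl_validation: true
```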
smoke_tests errand job [fail]
Tomasz Kapek
Hello.
I'm trying to launch an errand job from the bosh CLI on a Cloud Foundry deployment:

# bosh run errand smoke_tests --keep-alive

and I got the following error:

################################################################################################################
go version go1.7.4 linux/amd64
CONFIG=/var/vcap/jobs/smoke-tests/bin/config.json
CONFIG=/var/vcap/jobs/smoke-tests/bin/config.json
GOPATH=/var/vcap/packages/smoke-tests
GOROOT=/var/vcap/data/packages/golang1.7/46105d4480ca083a6cafb8ca307d5e5084c655c4.1-74efcbd4bdbb840a9d80c5793cbb431b210d5b9a
OLDPWD=/var/vcap/bosh
PATH=/var/vcap/packages/smoke-tests/bin:/var/vcap/packages/cli/bin:/var/vcap/data/packages/golang1.7/46105d4480ca083a6cafb8ca307d5e5084c655c4.1-74efcbd4bdbb840a9d80c5793cbb431b210d5b9a/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/var/vcap/packages/smoke-tests/src/github.com/cloudfoundry/cf-smoke-tests
SHLVL=1
TMPDIR=/var/vcap/data/tmp
_=/usr/bin/env
################################################################################################################
Running smoke tests...
panic: missing configuration 'api'

After logging into the smoke_tests/0 VM I found:

smoke_tests/32e93888-c340-48bd-9f1f-ea7d3abec927:/var/vcap/jobs/smoke-tests/bin# cat config.json
{
  "suite_name" : "CF_SMOKE_TESTS",
  "api" : "",
  "apps_domain" : "",
  "user" : "",
  "password" : "",
  "org" : "",
  "space" : "",
  "use_existing_org" : false,
  "use_existing_space" : false,
  "logging_app" : "",
  "runtime_app" : "",
  "skip_ssl_validation" : false,
  "syslog_drain_port" : 514,
  "syslog_ip_address" : "10.11.0.61",
  "enable_windows_tests" : false,
  "backend" : "",
  "enable_etcd_cluster_check_tests" : false,
  "etcd_ip_address" : ""
}

It seems like config.json has no proper values... Any ideas how to fix it?
3363.x warden stemcell ssh problem
Dmitriy Kalinin
hey all,
3363 and 3363.1 warden stemcells that were published recently have an agent that did not account for newer stemcell security settings (ssh access is locked down to users in the bosh_sshers group only). the upcoming 3363.5 stemcell will resolve this problem.

sorry for the inconvenience,
dmitriy
Re: Bosh-init deploy issue : while Installing CPI: cpi.json.erb' for vsphere_cpi/0 (line 38: #<NoMethodError: undefined method `each' for "<CLUSTER>f":String>) (RuntimeError)
ML D
Dmitriy
Thank you so much! Best regards
Re: Bosh-init deploy issue : while Installing CPI: cpi.json.erb' for vsphere_cpi/0 (line 38: #<NoMethodError: undefined method `each' for "<CLUSTER>f":String>) (RuntimeError)
Dmitriy Kalinin
Looks like you've placed a cluster name (string) where it was expecting an array. Check out the example in this section: https://bosh.io/docs/vsphere-cpi.html#resource-pools

We should probably improve the error message. Story: https://www.pivotaltracker.com/story/show/140085113

On Thu, Feb 16, 2017 at 10:10 AM, Marc-Laurent Delaruelle <marc-laurent.delaruelle(a)renault.com> wrote:
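To make the fix concrete: per the vSphere CPI docs, `clusters` must be a YAML array rather than a bare string, which is exactly what the `undefined method 'each' for "ch00clf":String` error is complaining about. A minimal corrected fragment, using the cluster name from the original report, might look like the following (whether entries may be plain strings or must be one-key hashes depends on the CPI version, so check the linked example):

```yaml
datacenters:
- name: CloudFoundry
  clusters:
  - ch00clf: {}
```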
Bosh-init deploy issue : while Installing CPI: cpi.json.erb' for vsphere_cpi/0 (line 38: #<NoMethodError: undefined method `each' for "<CLUSTER>f":String>) (RuntimeError)
ML D
Hi all,
I'm a newbie in Cloud Foundry and I would like to try to set it up on vSphere to begin with, before trying on AWS. On vSphere I hit an issue at install time. I tried with the default ruby version included in CentOS 7.3, and also ruby 2.1.10 and ruby 2.4.0. The user used by bosh-init is full admin on vCenter. But I keep getting the same error:

templates/cpi.json.erb' for vsphere_cpi/0 (line 38: #<NoMethodError: undefined method `each' for "ch00clf":String>) (RuntimeError)

ch00clf is the name of the vSphere cluster to use to deploy Cloud Foundry.

The bosh.yml file contains:

vcenter: &vcenter
  address: xx.xx.xx.xx
  user: xxxx
  password: xxxxxxxx
  datacenters:
  - name: CloudFoundry
    vm_folder: Bosh-community-vms
    template_folder: Bosh-community-stemcells
    datastore_pattern: DS00*
    persistent_datastore_pattern: DS00*
    disk_path: boshdisks
    clusters: ch00clf

agent: {mbus: "nats://nats:nats-password(a)10.252.21.223:4222"}

See the log below. Thanks a lot.

MLD

Deployment manifest: '/root/bosh/bosh.yml'
Deployment state: '/root/bosh/bosh-state.json'

Started validating
  Downloading release 'bosh'... Skipped [Found in local cache] (00:00:00)
  Validating release 'bosh'... Finished (00:00:03)
  Downloading release 'bosh-vsphere-cpi'... Skipped [Found in local cache] (00:00:00)
  Validating release 'bosh-vsphere-cpi'... Finished (00:00:00)
  Validating cpi release... Finished (00:00:00)
  Validating deployment manifest... Finished (00:00:00)
  Downloading stemcell... Skipped [Found in local cache] (00:00:00)
  Validating stemcell... Finished (00:00:05)
Finished validating (00:00:09)

Started installing CPI
  Compiling package 'vsphere_cpi_ruby/e929f50e95ef815d8d276fd69671beb106c1b4ed'... Finished (00:00:00)
  Compiling package 'vsphere_cpi_mkisofs/72aac8fb0c0089065a00ef38a4e30d7d0e5a16ea'... Finished (00:00:00)
  Compiling package 'vsphere_cpi/b51773fb362ed051b90eaaefaa27deb384d242b7'... Finished (00:00:00)
  Installing packages... Finished (00:00:01)
  Rendering job templates...
Failed (00:00:00)
Failed installing CPI (00:00:01)

Command 'deploy' failed:
  Installing CPI:
    Rendering and uploading Jobs:
      Rendering job templates for installation:
        Rendering templates for job 'vsphere_cpi/8382dbc1792f09fec98862bdd9104f86d09f20b1':
          Rendering template src: cpi.json.erb, dst: config/cpi.json:
            Rendering template src: /root/.bosh_init/installations/d1637fbb-cecd-459e-406f-932f52c458c1/tmp/bosh-init-release181491122/extracted_jobs/vsphere_cpi/templates/cpi.json.erb, dst: /root/.bosh_init/installations/d1637fbb-cecd-459e-406f-932f52c458c1/tmp/rendered-jobs513954989/config/cpi.json:
              Running ruby to render templates:
                Running command: 'ruby /root/.bosh_init/installations/d1637fbb-cecd-459e-406f-932f52c458c1/tmp/erb-renderer169293210/erb-render.rb /root/.bosh_init/installations/d1637fbb-cecd-459e-406f-932f52c458c1/tmp/erb-renderer169293210/erb-context.json /root/.bosh_init/installations/d1637fbb-cecd-459e-406f-932f52c458c1/tmp/bosh-init-release181491122/extracted_jobs/vsphere_cpi/templates/cpi.json.erb /root/.bosh_init/installations/d1637fbb-cecd-459e-406f-932f52c458c1/tmp/rendered-jobs513954989/config/cpi.json', stdout: '', stderr: '/root/.bosh_init/installations/d1637fbb-cecd-459e-406f-932f52c458c1/tmp/erb-renderer169293210/erb-render.rb:189:in `rescue in render': Error filling in template '/root/.bosh_init/installations/d1637fbb-cecd-459e-406f-932f52c458c1/tmp/bosh-init-release181491122/extracted_jobs/vsphere_cpi/templates/cpi.json.erb' for vsphere_cpi/0 (line 38: #<NoMethodError: undefined method `each' for "ch00clf":String>) (RuntimeError)
                  from /root/.bosh_init/installations/d1637fbb-cecd-459e-406f-932f52c458c1/tmp/erb-renderer169293210/erb-render.rb:175:in `render'
                  from /root/.bosh_init/installations/d1637fbb-cecd-459e-406f-932f52c458c1/tmp/erb-renderer169293210/erb-render.rb:200:in `<main>'
                ': exit status 1
Re: bosh-lite : scripts/update
Rune Engseth <rune.engseth@...>
A workaround is found here:
https://github.com/niemeyer/gopkg/issues/50

2017-02-14 8:48 GMT+01:00 Rune Engseth <rune.engseth(a)gmail.com>:
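For readers landing on this thread later: the workaround discussed in that issue is to tell git to follow redirects for gopkg.in again, since git 2.11 stopped following them by default for dumb-HTTP remotes. Assuming that is indeed the failure mode behind the HTTP 301 errors:

```shell
# Allow git to follow gopkg.in's 301 redirect to the real remote (GitHub)
git config --global 'http.https://gopkg.in.followRedirects' true
```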
Re: bosh-lite : scripts/update
Rune Engseth <rune.engseth@...>
Hi
Thanks for the reply. I'm on git version 2.11.1, which should be the latest.

2017-02-13 21:30 GMT+01:00 David Sabeti <dsabeti(a)pivotal.io>:
Re: bosh-lite : scripts/update
David Sabeti
Hi Rune,
What version of git are you using to clone the repo? I'm pretty sure the issue is that "https://gopkg.in/yaml.v2" is not the actual URL where the repo is hosted; gopkg.in is just a proxy to the actual remote (probably GitHub). I know that older versions of git (versions prior to, and including, 1.7.9.5) don't follow the 301s/redirects. You may be able to solve this by upgrading your version of git. You can see some discussion about this here: https://github.com/spf13/hugo/issues/297

David

On Mon, Feb 13, 2017 at 9:39 AM Rune Engseth <rune.engseth(a)gmail.com> wrote:
bosh-lite : scripts/update
Rune Engseth <rune.engseth@...>
Hi. Having my first go at bosh-lite and CF.
After installing bosh-lite / vagrant / virtualbox, step 2 "Create a deployment manifest" fails (https://docs.cloudfoundry.org/deploying/boshlite/create_a_manifest.html). When running ./scripts/update, the command fails due to errors cloning a number of git repos. I'm on the master branch. Does anyone know how to fix this?

*Example of failing git-repos*
------------
Cloning into '/Users/engrun/Development/git-repos/cf-release/src/loggregator/src/gopkg.in/yaml.v2'...
error: RPC failed; HTTP 301 curl 22 The requested URL returned error: 301
fatal: The remote end hung up unexpectedly
fatal: clone of 'https://gopkg.in/yaml.v2' into submodule path '/Users/engrun/Development/git-repos/cf-release/src/loggregator/src/gopkg.in/yaml.v2' failed
Failed to clone 'src/gopkg.in/yaml.v2'. Retry scheduled
Cloning into '/Users/engrun/Development/git-repos/cf-release/src/loggregator/src/gopkg.in/check.v1'...
error: RPC failed; HTTP 301 curl 22 The requested URL returned error: 301
fatal: The remote end hung up unexpectedly
fatal: clone of 'https://gopkg.in/check.v1' into submodule path '/Users/engrun/Development/git-repos/cf-release/src/loggregator/src/gopkg.in/check.v1' failed
Failed to clone 'src/gopkg.in/check.v1' a second time, aborting
Submodule path 'src/nats-release': checked out '7ec89b9cf5a7985184e0a58a71be671d2911545e'
Submodule path 'src/nats-release/src/github.com/nats-io/gnatsd': checked out '86e883ce7d54d925e3018a761da38074eb732218'
Submodule path 'src/nodejs-buildpack-release': checked out 'b1bc4ee09a95662108b04112e47c4f1acdf9f76b'
Submodule path 'src/php-buildpack-release': checked out '9f1b901cb4080f32ba0b11dd3ccdca7ca57e0f4a'
Submodule path 'src/postgres-release': checked out '4e1ae03f22a6dcb0c8796dc6e678c93feb4e586a'
Submodule path 'src/python-buildpack-release': checked out 'ffe3b09f3886887c470e5f627bb96986443be810'
Submodule path 'src/routing-release': checked out '661c5bc41d32f62d52ffe84dc206172e6938403b'
Cloning into '/Users/engrun/Development/git-repos/cf-release/src/routing-release/src/gopkg.in/yaml.v2'...
error: RPC failed; HTTP 301 curl 22 The requested URL returned error: 301
fatal: The remote end hung up unexpectedly
fatal: clone of 'https://gopkg.in/yaml.v2' into submodule path '/Users/engrun/Development/git-repos/cf-release/src/routing-release/src/gopkg.in/yaml.v2' failed
Failed to clone 'src/gopkg.in/yaml.v2'. Retry scheduled
Cloning into '/Users/engrun/Development/git-repos/cf-release/src/routing-release/src/gopkg.in/yaml.v2'...
error: RPC failed; HTTP 301 curl 22 The requested URL returned error: 301
fatal: The remote end hung up unexpectedly
fatal: clone of 'https://gopkg.in/yaml.v2' into submodule path '/Users/engrun/Development/git-repos/cf-release/src/routing-release/src/gopkg.in/yaml.v2' failed
Failed to clone 'src/gopkg.in/yaml.v2' a second time, aborting
Submodule path 'src/ruby-buildpack-release': checked out '3b46e8260202f4f2f9a3cc999fc5a46cff5739fc'
Submodule path 'src/smoke-tests': checked out '158d6b008ac18516287af59ccb4c16c208e87bf0'
Submodule path 'src/staticfile-buildpack-release': checked out '7f10b3a34d0ad75b934130b6b2e24a8d91aeb6d3'
Submodule path 'src/uaa-release': checked out '931b883be50ada66631d4193f0faf4693e022249'
Submodule path 'src/uaa-release/src/uaa': checked out '4e652a6b5279ea99077fcfc46c5b8842267806c8'
Failed to recurse into submodule path 'src/consul-release'
Failed to recurse into submodule path 'src/etcd-release'
Failed to recurse into submodule path 'src/loggregator'
Failed to recurse into submodule path 'src/routing-release'

regards
Rune
How to upgrade vSphere stemcell to hardware version 9
Jan-Nic B.
Is there a way to upgrade a vSphere stemcell to hardware version 9?
Our current issue with hardware version 8 is the missing CPU instruction rdrand. Based on the post https://content.pivotal.io/blog/challenges-with-randomness-in-multi-tenant-linux-container-platforms, having rdrand would allow us to use /dev/random and never be blocked due to the entropy pool being empty. We are currently experiencing the issue discussed in that blog post: containers blocked on /dev/random and failing to start.
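A quick way to see whether a given stemcell VM is affected: check for the rdrand CPU flag and look at the kernel's available entropy. This assumes a Linux guest with /proc mounted:

```shell
# Does the guest CPU advertise the rdrand instruction? With hardware
# version 8 the flag is typically hidden from the guest even when the
# host CPU supports it.
grep -q rdrand /proc/cpuinfo && echo "rdrand available" || echo "rdrand NOT available"

# How much entropy does the kernel currently have? Reads from
# /dev/random block when this pool is exhausted.
cat /proc/sys/kernel/random/entropy_avail
```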
Re: how to unlock a deployment
Lin, Lynn <Lynn.X.Lin@...>
Never mind , I found it
From: Lin, Lynn
Sent: Thursday, February 09, 2017 7:07 PM
To: 'Discussions about the Cloud Foundry BOSH project.' <cf-bosh(a)lists.cloudfoundry.org>
Subject: RE: [cf-bosh] Re: how to unlock a deployment

Do you mean "bosh cleanup"? I don't find bosh-cli available on my Ubuntu machine, and I googled the setup procedure with no luck.

From: Konstantin Semenov [mailto:ksemenov(a)pivotal.io]
Sent: Thursday, February 09, 2017 5:52 PM
To: Discussions about the Cloud Foundry BOSH project. <cf-bosh(a)lists.cloudfoundry.org>
Subject: [cf-bosh] Re: how to unlock a deployment

try `bosh-cli clean-up`

On Thu, Feb 9, 2017 at 9:00 AM Lynn Lin <lynn.lin(a)emc.com> wrote:

when I do a deployment and want to cancel it after the deployment has started, I find it is locked (bosh locks) and try to cancel the tasks (bosh cancel task XX). However, several hours later it is still in the canceling state. Any idea how to unlock it?

--
Best regards,
Konstantin Semenov
Principal Software Engineer
Pivotal Labs, Dublin, Ireland
Employee ID 165389