restoring bosh deployment state failing
nshrest6@...
Hi ... I had a BOSH director running, which I updated to the current vSphere CPI release 48. That update failed due to the known Ruby 2.4 issue, so I tried reverting back to the old versions, and now I am running into these issues:
```
Started deploying
Waiting for the agent on VM 'vm-c3e42263-5167-467d-bda5-04e8762f63ec'... Failed (00:00:09)
Deleting VM 'vm-c3e42263-5167-467d-bda5-04e8762f63ec'... Finished (00:00:08)
Creating VM for instance 'bosh/0' from stemcell 'sc-5eae3672-c5cb-4351-8bf8-7972b464d0b4'... Finished (00:01:07)
Waiting for the agent on VM 'vm-dc84d899-5444-483d-aeb6-1a247a04a56d' to be ready... Finished (00:00:27)
Attaching disk 'disk-22e02c8a-b143-4534-a640-85705067887c' to VM 'vm-dc84d899-5444-483d-aeb6-1a247a04a56d'... Finished (00:00:18)
Creating disk... Finished (00:00:07)
Attaching disk 'disk-a8e62197-dfed-4596-b07e-4cf9686e852e' to VM 'vm-dc84d899-5444-483d-aeb6-1a247a04a56d'... Finished (00:00:18)
Migrating disk content from 'disk-22e02c8a-b143-4534-a640-85705067887c' to 'disk-a8e62197-dfed-4596-b07e-4cf9686e852e'... Finished (00:01:57)
Detaching disk 'disk-22e02c8a-b143-4534-a640-85705067887c'... Finished (00:00:10)
Deleting disk 'disk-22e02c8a-b143-4534-a640-85705067887c'... Finished (00:00:04)
Rendering job templates... Finished (00:00:06)
Compiling package 'openjdk_1.8.0/a6b85c1cd75382025bbfa49abb737015575aec44'... Skipped [Package already compiled] (00:00:01)
Compiling package 'ruby/c1086875b047d112e46756dcb63d8f19e63b3ac4'... Skipped [Package already compiled] (00:00:00)
Compiling package 'mysql/b7e73acc0bfe05f1c6cbfd97bf92d39b0d3155d5'... Skipped [Package already compiled] (00:00:00)
Compiling package 'libpq/826813f983d38b4b4a95bb8a3df1a2d0efab14b0'... Skipped [Package already compiled] (00:00:00)
Compiling package 'vsphere_cpi_ruby/14067294a0cd16a61646eedc3de4e9ed22d46076'... Finished (00:02:20)
Compiling package 'credhub/c113daadcde5f2add56fb8f62313a96c6e98697e'... Skipped [Package already compiled] (00:00:01)
Compiling package 'vsphere_cpi_mkisofs/72aac8fb0c0089065a00ef38a4e30d7d0e5a16ea'... Finished (00:02:44)
Compiling package 'verify_multidigest/8fc5d654cebad7725c34bb08b3f60b912db7094a'... Skipped [Package already compiled] (00:00:00)
Compiling package 'lunaclient/b922e045db5246ec742f0c4d1496844942d6167a'... Skipped [Package already compiled] (00:00:00)
Compiling package 'bosh-gcscli/83d331c7b6d04de64cd5257a47e1e92021cb4c8a'... Skipped [Package already compiled] (00:00:00)
Compiling package 'postgres/3b1089109c074984577a0bac1b38018d7a2890ef'... Skipped [Package already compiled] (00:00:00)
Compiling package 'uaa_utils/20557445bf996af17995a5f13bf5f87000600f2e'... Skipped [Package already compiled] (00:00:00)
Compiling package 's3cli/bb1c1976d221fdadf13a6bc873896cd5e2433580'... Skipped [Package already compiled] (00:00:00)
Compiling package 'pg_utils_9.4/dbd00a0758a5e6225e1121bfd444db6ec59204ee'... Skipped [Package already compiled] (00:00:00)
Compiling package 'davcli/5f08f8d5ab3addd0e11171f739f072b107b30b8c'... Skipped [Package already compiled] (00:00:00)
Compiling package 'director/ea00c83b4558293b1956564a4532e1af562ea6e0'... Skipped [Package already compiled] (00:00:01)
Compiling package 'postgres-9.4/1da82648840de67015d379264846a447118261a7'... Skipped [Package already compiled] (00:00:00)
Compiling package 'nats/63ae42eb73527625307ff522fb402832b407321d'... Skipped [Package already compiled] (00:00:00)
Compiling package 'vsphere_cpi/e6c27f384060c8d0260f6f0310853d1a886b1128'... Finished (00:00:57)
Compiling package 'nginx/57ca1d048957399c500e0f5fd3275ed4c6d4f762'... Skipped [Package already compiled] (00:00:00)
Compiling package 'mariadb_10.1.23/6ab14e132241110cff0dc160137b71a967d29d53'... Skipped [Package already compiled] (00:00:00)
Compiling package 'uaa/33da697bb3343793c762f06970868565a71d053a'... Skipped [Package already compiled] (00:00:03)
Compiling package 'health_monitor/aa43dacd332bda1131b141aada0ca45b4302273c'... Skipped [Package already compiled] (00:00:00)
Updating instance 'bosh/0'... Finished (00:01:18)
Waiting for instance 'bosh/0' to be running... Failed (00:06:00)
Failed deploying (00:18:44)
Stopping registry... Finished (00:00:00)
Cleaning up rendered CPI jobs... Finished (00:00:00)
```
I logged in to the BOSH director and `monit summary` shows:
```
/:/var/vcap/sys/log# monit summary
The Monit daemon 5.2.5 uptime: 5m
Process 'nats' running
Process 'postgres' running
Process 'blobstore_nginx' running
Process 'director' not monitored
Process 'worker_1' not monitored
Process 'worker_2' not monitored
Process 'worker_3' not monitored
Process 'director_scheduler' running
Process 'director_nginx' running
Process 'health_monitor' running
Process 'uaa' running
Process 'credhub' Does not exist
System 'system_localhost' running
```
I am confused about where to start troubleshooting. Has anyone encountered a similar issue while restoring a BOSH director?
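For anyone who lands in a similar monit state, a typical first step (assuming the standard /var/vcap layout on the director VM; exact log file names can vary between director versions) is to read the logs of the processes that show up as "not monitored" and ask monit to start them again:
```
# On the director VM (as root). Paths assume the standard /var/vcap
# layout; exact log file names can vary between director versions.
tail -n 200 /var/vcap/sys/log/director/*.log

# Ask monit to start the unmonitored processes and check their state:
monit start director
monit start worker_1 && monit start worker_2 && monit start worker_3
monit summary
```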
|
|
GCP 3586.18 Stemcell Issues
Michael Xu <mxu@...>
Hello Cloud Foundry!

The BOSH team is currently investigating an issue with the Google Cloud Platform 3586.18 light and full stemcells. BOSH deployments on GCP using either of these stemcells will fail, resulting in a timeout with unresponsive agents. In the meantime, please use the 3586.16 version. Currently this `unresponsive agent` issue seems to manifest only when deploying to GCP with this specific version, but if you are experiencing similar deployment failures in other cases, please let us know! Feel free to join us in the #bosh CF Slack channel; we are happy to help.

Thanks,
Michael Xu && BOSH team
|
|
post hook into bosh delete vm command
estein@...
Hi,
We are interested in deploying a Zabbix agent via a BOSH package. We want the package to automatically add the VM to the Zabbix monitor once it is created, which I believe can be done via post-start or post-deploy. However, I can't figure out how to do the opposite: when a VM is deleted, we need its entry in the Zabbix monitor to be deleted as well. I know the APIs that allow the deletion, and we have a Java application that can do it, but is there a way to hook into that delete mechanism? Drain/post-stop is called regardless of whether the VM is stopped or deleted, so I don't think that will work. Hoping someone knows. Thanks.
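For the registration half mentioned above, a minimal post-start sketch, assuming Zabbix's JSON-RPC `host.create` API; ZABBIX_URL, ZABBIX_AUTH_TOKEN, and the group id are hypothetical placeholders that would normally come from job properties:
```
#!/bin/bash
# post-start (sketch): register this VM with Zabbix via its JSON-RPC API.
# ZABBIX_URL, ZABBIX_AUTH_TOKEN and the group id are hypothetical
# placeholders that would normally come from job properties.
set -e

host_name=$(hostname)
host_ip=$(hostname -I | awk '{print $1}')

curl -s -X POST "${ZABBIX_URL}/api_jsonrpc.php" \
  -H 'Content-Type: application/json-rpc' \
  -d @- <<EOF
{
  "jsonrpc": "2.0",
  "method": "host.create",
  "params": {
    "host": "${host_name}",
    "interfaces": [{"type": 1, "main": 1, "useip": 1,
                    "ip": "${host_ip}", "dns": "", "port": "10050"}],
    "groups": [{"groupid": "2"}]
  },
  "auth": "${ZABBIX_AUTH_TOKEN}",
  "id": 1
}
EOF
```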
|
|
Re: post hook into bosh delete vm command
adrian.kurt@...
Hi
First of all, I suggest adding your Zabbix integration as a regular BOSH release instead of a package. You can use the runtime config to add it to all deployed VMs, as in the sketch below.
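A rough sketch of that approach (the release and job names below are made-up placeholders for your own Zabbix release):
```
# Sketch: a runtime config that adds a zabbix-agent job to every VM
# the director deploys. Release/job names are placeholders.
cat > runtime-config.yml <<'EOF'
releases:
- name: zabbix-agent
  version: latest

addons:
- name: zabbix-agent
  jobs:
  - name: zabbix-agent
    release: zabbix-agent
EOF

bosh update-runtime-config runtime-config.yml
```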
If I remember correctly, there are two parameters passed to the drain script, and based on those you should be able to find out whether the VM is about to be deleted.
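An easy way to check what the drain script actually receives is to log its arguments and the BOSH-provided environment; note that a drain script must print an integer (seconds to wait) to stdout, so debugging output has to go somewhere else. The log path below is a placeholder for your job's log directory:
```
#!/bin/bash
# drain (sketch): record the arguments and BOSH_* environment so you
# can inspect what is passed on a stop versus a delete.
{
  echo "drain args: $*"
  env | grep '^BOSH_'
} >> /var/vcap/sys/log/zabbix-agent/drain.log

# Drain scripts must print the number of seconds to wait; 0 means done.
echo 0
```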
Kind regards Adrian
|
|
Re: post hook into bosh delete vm command
Hi, I recently contributed the details about this « cluster scale-in » condition to the BOSH documentation: And there are examples here: and here: But for detecting BOSH-managed nodes showing up and going away, maybe you should see how the Prometheus team solves this problem. In particular, look at how the node_exporter lists all the nodes managed by the BOSH director in order to feed the Prometheus system with an accurate list. Best, /Benjamin GANDON
|
|
Re: post hook into bosh delete vm command
estein@...
That actually does sound perfect. I'll check it out.
|
|
Re: post hook into bosh delete vm command
estein@...
Sorry, yes, I meant as a bosh release. I'm still getting used to the nomenclature.
|
|
Re: post hook into bosh delete vm command
Marco Voelz
Hi,
Please note that all of the things Benjamin mentioned are mere workarounds rather than a desirable solution. We're looking for feedback on the proposal at https://github.com/cloudfoundry/bosh-notes/blob/master/proposals/drain-actions.md, which would introduce a way for drain scripts to detect state changes.
Warm regards Marco
|
|
Incubation proposal: CF Containerization
Cornelius Schumacher <cschum@...>
Hi all,
We would like to propose the CF Containerization effort for incubation in the BOSH PMC. The full proposal can be found here: https://docs.google.com/document/d/1_IvFf-cCR4_Hxg-L7Z_R51EKhZfBqlprrs5NgC2iO2w/edit

As a first step towards this, we are proposing the Fissile code base as a starting point, with the goal of transforming it in the direction of the above proposal. Fissile is a tool that allows developers to convert existing BOSH releases to Docker images and deploy them to Kubernetes. Fissile is currently used in SUSE CAP (https://www.suse.com/products/cloud-application-platform) and IBM Cloud Foundry Enterprise Environment (https://console.bluemix.net/docs/cloud-foundry/). Fissile is fully open source and can currently be found on GitHub at https://github.com/SUSE/fissile

The project would follow a distributed committer model.

Project Lead: Vlad Iovanov
Initial Committers:
- Jan Dubois (SUSE)
- Mark Yen (SUSE)
- Mario Manno (SUSE)
- Enrique Encalada (IBM)
- Matthias Diester (IBM)
- Gong (Grace) Zhang (IBM)

SAP is also currently evaluating additional staffing. We are looking forward to your questions and comments.

Best Regards,
Cornelius
--
Cornelius Schumacher <cschum@...>
|
|
variable blob package name?
estein@...
All,
We have a scenario where a BOSH package will be deployed to two different servers, and those servers require different versions of the package. Everything else is the same except the version of the package. Is there a way to dynamically choose the required package version via a parameter in the manifest file? Or do I simply have to copy the job and package, update the associated packaging/spec files, and/or create two separate releases (one for version 1 and another for version 2)?

For example: server 1 requires mytar-1.2.2.tar.gz and server 2 requires mytar-2.2.2.tar.gz. Ideally there would be a single packaging line like: tar -xvf mytar/mytar-${VERSION}.tar.gz

Is what I'm trying to do possible, or does it require duplicating the jobs/packages, with each one labeled accordingly in the release (or two separate releases)?
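Package contents are fixed when the release is created, so a manifest parameter cannot change which blob a package unpacks. One possible workaround, sketched below with illustrative names, is to ship both versions in one package and let the job select one at runtime via a property:
```
# packaging script (sketch): install both blob versions side by side.
# The job's pre-start or ctl script can then symlink the version
# selected by a manifest property into place.
set -e

for version in 1.2.2 2.2.2; do
  mkdir -p "${BOSH_INSTALL_TARGET}/${version}"
  tar -xzf "mytar/mytar-${version}.tar.gz" -C "${BOSH_INSTALL_TARGET}/${version}"
done
```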
|
|
Forgot password
mounika.k@...
Hi,
We are unable to find the API endpoints for "forgot password" in the UAA server. Can anyone point me to them? Thank you
|
|
Re: Forgot password
Ronak Banka
Hi Mounika, Try this: Thanks, Ronak
|
|
Re: Forgot password
mounika.k@...
But I was asking about forgot password, not change password. Thanks
|
|
Re: Forgot password
Ronak Banka
Mounika, "Forgot password" is functionality that can be implemented in different ways. On the backend, it will use the change-password API call to make the change. Thanks
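For reference, the underlying change-password call would typically look something like this against UAA's SCIM API (server, user id, token, and passwords below are all placeholders):
```
# PUT /Users/{userId}/password on the UAA server; all values below
# are placeholders.
curl -X PUT "https://uaa.example.com/Users/${USER_ID}/password" \
  -H "Authorization: Bearer ${ADMIN_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"oldPassword": "current-secret", "password": "new-secret"}'
```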
|
|
Xenial stemcells now available: migration plan
Hi everyone. My name is Frédéric. I joined the BOSH team as a product manager recently, and work from Pivotal's Toronto office. Nice to meet you all!

A few weeks ago, the BOSH team introduced a new stemcell line based on Ubuntu 16.04 (Xenial Xerus) on bosh.io. For the time being, this line will be maintained in parallel with the previous ones, based on Ubuntu 14.04 (Trusty Tahr). Canonical will provide security updates for Trusty until April 2019 per their official support lifecycle policy. Because Canonical will no longer provide security updates for Trusty after April 2019, we strongly recommend users start migrating towards the Xenial-based stemcell line now.

The BOSH team will continue to support the current 3586.x line of Trusty-based stemcells with upstream security patches until the CFAR migration to Xenial is complete. We do not plan on releasing any new major versions of Trusty-based stemcells, unless consumers have a specific request for a new major, and instead will focus on Xenial going forward. We are currently evaluating when we will retire the Trusty-based stemcells, and are looking for feedback from the community about technical blockers that could impede the adoption of Xenial stemcells.

If you are a release author, please take the time to verify your software on Xenial-based stemcells at the earliest opportunity. As a reminder, operators using cf-deployment must keep in mind that the repository will switch to Xenial-based stemcells well before April 2019 and should plan accordingly (more details on this can be found here).

Frédéric Desbiens
Product Manager | Pivotal Cloud Foundry BOSH
|
|
Re: Xenial stemcells now available: migration plan
Dr Nic Williams
Frédéric, when will Xenial stemcells move from X to X.Y version numbering, with less frequent updates to the major version X? This would help compiled releases not have to update as often.
Nic
|
|
Re: Xenial stemcells now available: migration plan
Hi Nic. I expect this will happen in the near future. However, we still have some things to figure out and cannot commit to a date just yet. We will post an announcement to the list as soon as we have come to a decision. Best,
|
|
Re: post hook into bosh delete vm command
Indeed, thank you for mentioning it Marco!
I was focused on providing a working solution in my answer, not broadening the problem too much. I definitely support this proposal. From the very start I found the {"persistent_disk":0} solution very workaround-ish indeed. For an in-memory database, or anything stateless that needs draining, this current-state-of-the-art workaround would not work. In what form are you expecting feedback or support? Best, Benjamin GANDON
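For anyone looking for the workaround being discussed: if I recall the drain documentation correctly, BOSH exposes the current and next instance states to the drain script through the BOSH_JOB_STATE and BOSH_JOB_NEXT_STATE environment variables (paths to JSON files), and a persistent_disk of 0 in the next state is the hint that the instance is going away:
```
#!/bin/bash
# drain (sketch): detect the scale-in/delete case via the
# persistent_disk workaround discussed above. Assumes jq is on
# the PATH and that BOSH_JOB_NEXT_STATE points at a JSON file.
next_disk=$(jq '.persistent_disk' "${BOSH_JOB_NEXT_STATE}")

if [ "${next_disk}" = "0" ]; then
  # Instance is being removed: deregister it from external systems here.
  :
fi

# Drain scripts must print the number of seconds to wait; 0 means done.
echo 0
```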
|
|
Re: Incubation proposal: CF Containerization
Dmitriy Kalinin
Thank you for submitting this proposal. Let's shoot for collecting and resolving most of the comments over the next month, by Aug 10th, and voting at that time to incubate it in the BOSH PMC.
|
|
BOSH PMC Lead - Call for Nominations
Chip Childers
All,

Dmitry Kalinin has decided to step down from his formal role in the CFF as the PMC Lead for the BOSH PMC. As such, nominations are now open for that role. Dmitry has nominated Marco Voelz (from SAP) to take on this role (and Marco has acknowledged his interest). If any CFF member participating in a BOSH PMC project wishes to nominate someone else, please do so by the end of the week by contacting me directly. Nominations will close by Monday, July 23. After the nomination window, we will hold a vote of the BOSH PMC to select the new lead. Thanks, and please let me know if you have any questions about this.

For reference, the PMC lead is defined in the CFF's Dev Governance Policy here > https://www.cloudfoundry.org/wp-content/uploads/2017/01/CFF_Development_Governance.pdf

Relevant text here: "The PMC Lead is responsible for coordinating the activities of their respective PMCs, assisting in prioritizing that PMC's Projects' Backlog, assisting in resolving disputes within that PMC, and meeting with their respective PMC members. [snip] (PMC Leads) shall be either (1) a Project Lead participating in one of the PMC's Projects or (2) a participant from a Member that has at least one Dedicated Committer in that PMC's Active Projects. PMC Leads should generally be the most qualified Project Lead in that PMC. [snip] the PMC Lead/s may be nominated by any PMC Member, and elected by the PMC members using Governance by Contribution within that PMC."

-chip

Chip Childers
CTO, Cloud Foundry Foundation
1.267.250.0815
|
|