Problem with missing routes due to recent DHCP -> static change
Aaron Huber
We were testing a newer version of the stemcells along with bosh-init in our
lab on OpenStack and ran into an unexpected issue. Our SDN configuration requires static routes to be added via DHCP to reach the metadata web service (169.254.169.254). We have BOSH configured to use manual IP addresses, but previously the stemcells were still configured to use DHCP in that case, so the routes were working fine. In the new configuration, the BOSH agent manually configures the IP settings (switching from DHCP to static), which is a good thing for stability, but we lose the route.

This results in a failed deployment: the agent continuously polls the user-data in the metadata web service to get the registry URL for fetching updated configuration, and it is never able to mount a persistent disk because it never gets that configuration.

I don't think having static routes is an unusual configuration for OpenStack, so at the moment "manual" addressing is not going to work in that case. Can logic be added to also convert any static routes picked up via DHCP when the agent switches the network config to static?

Aaron Huber
Intel Corporation

--
View this message in context: http://cf-bosh.70367.x6.nabble.com/Problem-with-missing-routes-due-to-recent-DHCP-static-change-tp105.html
Sent from the CF BOSH mailing list archive at Nabble.com.
Re: Problem with missing routes due to recent DHCP -> static change
Dmitriy Kalinin
Ah, that's interesting. Are you seeing that the network manager unsets the
static routes when eth0 or some other interface gets reloaded?

On Mon, Jun 1, 2015 at 10:51 AM, aaron_huber <aaron.m.huber(a)intel.com> wrote:
We were testing a newer version of the stemcells along with bosh-init in
Re: Problem with missing routes due to recent DHCP -> static change
Aaron Huber
Yes, once the /etc/network/interfaces file is converted to "static" and it
does an ifdown/ifup, the route disappears because it is no longer being added by the DHCP client. Technically I think the best solution would be to add any routes that were configured via DHCP to the interfaces file (at least on Ubuntu; see http://askubuntu.com/questions/548940/add-static-route-in-ubuntu-14-04). I was just poking around for the best place to find the info.

The /var/lib/dhcp/dhclient.eth0.leases file will contain an entry like the following that specifies the route information retrieved from DHCP:

  option rfc3442-classless-static-routes 32,169,254,169,254,10,65,25,10;

That would be equivalent to:

  post-up route add 169.254.169.254/32 gw 10.65.25.10

Aaron
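That lease option is the RFC 3442 classless-static-routes encoding: a prefix length, the significant destination octets, then the gateway octets. The mapping to a route line can be sketched with a small decoder (an illustration of the encoding, not the BOSH agent's actual code):

```python
def parse_rfc3442(option):
    """Decode dhclient's rfc3442-classless-static-routes option value
    (a comma-separated octet list) into (destination_cidr, gateway) pairs."""
    octets = [int(o) for o in option.split(",")]
    routes = []
    i = 0
    while i < len(octets):
        prefix = octets[i]
        n = (prefix + 7) // 8  # number of significant destination octets
        dest = octets[i + 1:i + 1 + n] + [0] * (4 - n)  # zero-pad to 4 octets
        gw = octets[i + 1 + n:i + 5 + n]
        routes.append(("%d.%d.%d.%d/%d" % (*dest, prefix),
                       "%d.%d.%d.%d" % tuple(gw)))
        i += 5 + n  # prefix byte + dest octets + 4 gateway octets
    return routes

print(parse_rfc3442("32,169,254,169,254,10,65,25,10"))
# → [('169.254.169.254/32', '10.65.25.10')]
```

The decoded pair corresponds exactly to the `post-up route add 169.254.169.254/32 gw 10.65.25.10` line above.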
cf-stub.yml example with minimum or required info
Ali
Hi All,
I'm running into manifest problems during deployment of CF on vSphere 5.5. Most of the errors are about missing/incorrect properties in cf-stub.yml (and, as a result, in cf-deployment.yml). I'm using spiff to generate cf-deployment, and as I understand it, editing cf-deployment.yml directly is not recommended. My question is: is there an example of cf-stub.yml which includes all the required info for a deployment? I'm following the docs, but the cf-stub.yml here http://docs.cloudfoundry.org/deploying/cf-stub-vsphere.html is missing much of the required info; looking online I did not find much, and most of what I found was outdated.

I'm totally new to CF and apologize if my question is basic.

Thank you
Ali
Re: Create bosh stemcell failed in AWS region cn-north-1
王小锋 <zzuwxf at gmail.com...>
Hi, Wayne
I also met the same issue as 支雷. Could you please let us know how to create a custom stemcell? Is there any guide? Thanks a lot.

2015-06-01 20:23 GMT+08:00 Wayne E. Seguin <wayneeseguin(a)starkandwayne.com>:
支雷,
Re: Bosh deploy failed on AWS-Failed loading settings via fetcher
Mark Wong <mark.wy.wong@...>
After I stopped all the VMs, including the director, the NAT VM, and the VM running the
bosh CLI, I tried to redeploy. The VMs get started, but soon after I am able to ssh into them, they get shut down. I attached the debug log.
Re: Most bosh director commands fail with a HTTP 500
Dmitriy Kalinin
BOSH release 169 (stemcell version 2978) fixes this problem
(https://github.com/cloudfoundry/bosh/commit/24797724994d5a59f98828e477a738c9b643c78a).

On Thu, May 28, 2015 at 2:18 AM, Scott Taggart <staggart(a)skyscapecloud.com> wrote:
Thanks Dmitriy – this fixed our issue :)
Re: Most bosh director commands fail with a HTTP 500
Scott Taggart <staggart@...>
Great! Thanks for following up Dmitriy, I'll roll out the new stemcell ASAP to my directors.

From: "Dmitriy Kalinin" <dkalinin(a)pivotal.io>
To: "Scott Taggart" <staggart(a)skyscapecloud.com>
Cc: "cf-bosh" <cf-bosh(a)lists.cloudfoundry.org>
Sent: Tuesday, 2 June, 2015 19:17:47
Subject: Re: [cf-bosh] Most bosh director commands fail with a HTTP 500

BOSH release 169 (stemcell version 2978) fixes this problem
(https://github.com/cloudfoundry/bosh/commit/24797724994d5a59f98828e477a738c9b643c78a).

On Thu, May 28, 2015 at 2:18 AM, Scott Taggart <staggart(a)skyscapecloud.com> wrote:
Thanks Dmitriy – this fixed our issue :)

From: Dmitriy Kalinin [mailto:dkalinin(a)pivotal.io]
Sent: 26 May 2015 19:54
To: Scott Taggart
Cc: CF BOSH Mailing List
Subject: Re: [cf-bosh] Most bosh director commands fail with a HTTP 500

There currently exists a problem in the Director during task cleanup: the Director tries to clean up task logs for tasks that do not have an associated directory on disk. https://www.pivotaltracker.com/story/show/95458780 will fix this.

To fix the Director until we release a bug fix:
- ssh as vcap into the Director VM
- run /var/vcap/jobs/director/bin/director_ctl console - opens up a console to the Director DB
- run Bosh::Director::Models::Task.where(output: nil).update(output: '/tmp/123') - updates tasks without task log directories to a dummy destination; the Director will be happy to run rm -rf /tmp/123 when it cleans up tasks.

After that you should be able to run `bosh vms` and other tasks again.

On Mon, May 25, 2015 at 2:27 PM, Scott Taggart <staggart(a)skyscapecloud.com> wrote:
Hi folks,

One of my three bosh directors has gotten itself stuck in a strange state where most (but not all) operations fail. I have recreated the director with a couple of different stemcells (but the same persistent disk) and the issue persists. It looks like potentially a database issue on the director, but I have done a very quick visual check of a few tables (e.g. vms, deployments) and they seem fine at a glance... not sure what's going on.
Everything CF-related currently/previously under the director is continuing to run fine in this AZ, it's just the director that's lost it:

$ bosh deployments
+---------------------+-----------------------+----------------------------------------------+--------------+
| Name                | Release(s)            | Stemcell(s)                                  | Cloud Config |
+---------------------+-----------------------+----------------------------------------------+--------------+
| cf-mysql            | cf-mysql/19           | bosh-vcloud-esxi-ubuntu-trusty-go_agent/2915 | none         |
+---------------------+-----------------------+----------------------------------------------+--------------+
| cf-services-contrib | cf-services-contrib/6 | bosh-vcloud-esxi-ubuntu-trusty-go_agent/2915 | none         |
+---------------------+-----------------------+----------------------------------------------+--------------+
| xxxxxxx_cf          | cf/208                | bosh-vcloud-esxi-ubuntu-trusty-go_agent/2915 | none         |
+---------------------+-----------------------+----------------------------------------------+--------------+

Deployments total: 3

$ bosh releases
+---------------------+----------+-------------+
| Name                | Versions | Commit Hash |
+---------------------+----------+-------------+
| cf                  | 208*     | 5d00be54+   |
| cf-mysql            | 19*      | dfab036b+   |
| cf-services-contrib | 6*       | 57fd2098+   |
+---------------------+----------+-------------+
(*) Currently deployed
(+) Uncommitted changes

Releases total: 3

$ bosh locks
No locks

$ bosh tasks
No running tasks

$ bosh vms
Deployment `cf-mysql'
HTTP 500:

$ bosh cloudcheck
Performing cloud check...
Processing deployment manifest
------------------------------
HTTP 500:

The relevant error I get from /var/vcap/sys/log/director/director.debug.log on the director is:

E, [2015-05-25 21:20:15 #1010] [] ERROR -- Director: TypeError - no implicit conversion of nil into String:
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `path'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `block in fu_list'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `map'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `fu_list'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:625:in `rm_r'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:654:in `rm_rf'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/task_remover.rb:9:in `block in remove'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:152:in `block in each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `block (2 levels) in fetch_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:720:in `block in yield_hash_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:714:in `times'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:714:in `yield_hash_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `block in fetch_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:134:in `execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:413:in `_execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `block (2 levels) in execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:425:in `check_database_errors'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `block in execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `block in synchronize'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/connection_pool/threaded.rb:104:in `hold'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `synchronize'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:801:in `execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `fetch_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:152:in `each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/task_remover.rb:8:in `remove'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/task_helper.rb:23:in `create_task'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/job_queue.rb:9:in `enqueue'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/vm_state_manager.rb:5:in `fetch_vm_state'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/controllers/deployments_controller.rb:182:in `block in <class:DeploymentsController>'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1603:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1603:in `block in compile!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `[]'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `block (3 levels) in route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:985:in `route_eval'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `block (2 levels) in route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1006:in `block in process_route'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1004:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1004:in `process_route'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:964:in `block in route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:963:in `each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:963:in `route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1076:in `block in dispatch!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `block in invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1073:in `dispatch!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:898:in `block in call!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `block in invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:898:in `call!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:886:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/xss_header.rb:18:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/path_traversal.rb:16:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/json_csrf.rb:18:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/frame_options.rb:31:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/nulllogger.rb:9:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/head.rb:13:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:180:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:2014:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/urlmap.rb:66:in `block in call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/urlmap.rb:50:in `each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/urlmap.rb:50:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/commonlogger.rb:33:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:217:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:81:in `block in pre_process'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:79:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:79:in `pre_process'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:54:in `process'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:39:in `receive_data'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:187:in `run_machine'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:187:in `run'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/backends/base.rb:63:in `start'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/server.rb:159:in `start'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/bin/bosh-director:37:in `<top (required)>'
/var/vcap/packages/director/bin/bosh-director:16:in `load'
/var/vcap/packages/director/bin/bosh-director:16:in `<main>'

I've wiped my local bosh config and re-targetted the director and tried running bosh vms without specifying a deployment manifest (i.e. to rule the manifest out) - still get the same 500. Any tips appreciated!

Notice: This message contains information that may be privileged or confidential and is the property of Skyscape. It is intended only for the person to whom it is addressed. If you are not the intended recipient, you are not authorised to read, print, retain, copy, disseminate, distribute, or use this message or any part thereof. If you receive this message in error, please notify the sender immediately and delete all copies of this message. Skyscape reserves the right to monitor all e-mail communications through its networks. Skyscape Cloud Services Limited is registered in England and Wales: Company No: 07619797. Registered office: Hartham Park, Hartham, Corsham, Wiltshire SN13 0RP.

______________________________________________________________________
This email has been scanned by the Symantec Email Security.cloud service.
______________________________________________________________________

_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh
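The TypeError in that trace is the Director's task cleanup calling FileUtils.rm_rf with a nil task-log path (a task row whose output column is NULL). A hedged Python analogue of the failure and of the dummy-path workaround from Dmitriy's reply (illustrative only, not the Director's actual code; no files are touched):

```python
import os

def cleanup_task_logs(tasks):
    # Stand-in for the Director's cleanup: it effectively runs
    # rm_rf(task.output). A NULL output column makes that rm_rf(nil),
    # i.e. "no implicit conversion of nil into String" in Ruby; here,
    # os.path.join raises TypeError on None the same way.
    for task in tasks:
        path = os.path.join("/", task["output"])  # TypeError if output is None
        print("would rm -rf", path)

tasks = [{"output": None}]
try:
    cleanup_task_logs(tasks)
except TypeError:
    print("cleanup blows up on a task with no log directory")

# Analogue of the Sequel one-liner in the workaround: point NULL
# outputs at a harmless dummy path so cleanup can proceed.
for task in tasks:
    task["output"] = task["output"] or "/tmp/123"
cleanup_task_logs(tasks)  # succeeds; rm -rf of the dummy path is harmless
```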
Re: Create bosh stemcell failed in AWS region cn-north-1
Wayne E. Seguin
Absolutely 小锋,
This is the guide I followed in order to build my own custom stemcell for a client: https://github.com/cloudfoundry/bosh/blob/master/bosh-stemcell/README.md

~Wayne

On Mon, Jun 1, 2015 at 11:12 PM, 王小锋 <zzuwxf(a)gmail.com> wrote:
Hi, Wayne
Migrating a full-stack bosh deployment to bosh-init
Allan Espinosa
Hi,
We currently have a binary bosh [1] setup. However, we would like to transition to bosh-init to avoid having to manage multiple bosh deployments. I'm looking at how to regenerate the state file described in [2]. I can find my VM CID from "bosh vms bosh-meta --details", but I can't get the other information from the director. Are there other places to retrieve the information, or do I have to poke at things below the CPI (vSphere in our case)? We're using the vSphere CPI for our deployment.

Thanks
Allan

[1] https://blog.starkandwayne.com/2014/07/10/resurrecting-bosh-with-binary-boshes/
[2] https://bosh.io/docs/using-bosh-init.html#recover-deployment-state
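For reference, the deployment state file bosh-init reads is a small JSON document. A rough sketch of assembling one is below; the field names follow my reading of the recover-deployment-state doc in [2] and may differ by bosh-init version, and every value shown is a placeholder (only current_vm_cid would come from "bosh vms --details"):

```python
import json

# All CIDs below are placeholders, not real identifiers; verify the
# exact key set against the bosh.io recover-deployment-state doc.
state = {
    "director_id": "placeholder-director-uuid",
    "current_vm_cid": "vm-placeholder-cid",          # from bosh vms --details
    "current_stemcell_id": "stemcell-placeholder-id",
    "current_disk_id": "disk-placeholder-id",        # from the IaaS layer
    "disks": [{"id": "disk-placeholder-id", "size": 20480,
               "cloud_properties": {}}],
}
print(json.dumps(state, indent=2))
```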
Re: Migrating a full-stack bosh deployment to bosh-init
Gwenn Etourneau
The disk ID should be present at the IaaS layer, so I guess if you look into
vSphere you should find it. If I remember correctly, it should be of the form disk-some-uuid: vCenter > your VM > Edit Settings > your disk, and it should be in the first field (/disk-some-uuid.vmdk). But I don't have a vCenter to check right now...

On Wed, Jun 3, 2015 at 11:47 AM, Espinosa, Allan | Allan | OPS <allan.espinosa(a)rakuten.com> wrote:
Hi,
Re: Resuming UAA work
David Ehringer
What are some of the functions that a read-only user scope would be able to
perform? I really like the idea of a read-only scope, but it seems like today there are only a few functions that aren't intended to modify the state of the system or that can't indirectly allow modification of the system (e.g. bosh ssh/scp).

By far the biggest authorization requirement we get from our security teams is being able to provide a level of "admin" access that can perform most functions but can't access credentials and sensitive information. Simply hooking in UAA obviously doesn't help with this, as it is deeply related to how deployment manifests work in general. But I mention it because this is the type of authorization and access-control requirement our security teams are giving us.
CF install failing on OpenStack
eoghank
Hi,
I have a baremetal install of OpenStack on Ubuntu 14.04 and am having issues with the bosh install. All the endpoints are correctly configured and I have run through all the pre-req tests for a CF install on OpenStack. The install is failing with the error below. Can anyone provide any pointers as to what could be causing this?

{"type": "step_started", "id": "microbosh.setting_manifest"}
Running "bundle exec bosh -n micro deployment micro/"
Deployment set to '/var/tempest/workspaces/default/deployments/micro/micro_bosh.yml'
{"type": "step_finished", "id": "microbosh.setting_manifest"}
{"type": "step_started", "id": "microbosh.deploying"}
Running "bundle exec bosh -n micro deploy /var/tempest/stemcells/bosh-stemcell-2975-openstack-kvm-ubuntu-trusty-go_agent-raw.tgz --update-if-exists"

Verifying stemcell...
File exists and readable    OK
Verifying tarball...
Read tarball                OK
Manifest exists             OK
Stemcell image file         OK
Stemcell properties         OK

Stemcell info
-------------
Name:    bosh-openstack-kvm-ubuntu-trusty-go_agent-raw
Version: 2975

/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/volume.rb:191: warning: duplicated key at line 196 ignored: :openstack_region
Started deploy micro bosh
  Started deploy micro bosh > Unpacking stemcell. Done (00:00:05)
  Started deploy micro bosh > Uploading stemcell. Done (00:00:53)
  Started deploy micro bosh > Creating VM from 1552e46e-8291-461f-966a-ac6332d313be. Done (00:00:44)
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/volume.rb:191: warning: duplicated key at line 196 ignored: :openstack_region
  Started deploy micro bosh > Waiting for the agent. Done (00:02:19)
  Started deploy micro bosh > Updating persistent disk
  Started deploy micro bosh > Create disk
log writing failed. can't be called from trap context
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/socket.rb:108:in `getaddrinfo': getaddrinfo: Name or service not known (SocketError) (Excon::Errors::SocketError)
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/socket.rb:108:in `connect'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/socket.rb:28:in `initialize'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:389:in `new'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:389:in `socket'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:106:in `request_call'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/mock.rb:47:in `request_call'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/instrumentor.rb:19:in `block in request_call'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/excon_logging_instrumentor.rb:10:in `instrument'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/instrumentor.rb:18:in `request_call'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/base.rb:15:in `request_call'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/base.rb:15:in `request_call'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/base.rb:15:in `request_call'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:233:in `request'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-core-1.30.0/lib/fog/core/connection.rb:81:in `request'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/volume.rb:156:in `request'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/requests/volume/create_volume.rb:19:in `create_volume'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/models/volume/volume.rb:29:in `save'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-core-1.30.0/lib/fog/core/collection.rb:51:in `create'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/cloud.rb:425:in `block (2 levels) in create_disk'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/helpers.rb:26:in `with_openstack'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/cloud.rb:425:in `block in create_disk'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_common-1.2975.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/cloud.rb:403:in `create_disk'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:282:in `block in create_disk'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:85:in `step'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:280:in `create_disk'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:352:in `update_persistent_disk'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:137:in `block in create'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:85:in `step'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:136:in `create'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:98:in `block in create_deployment'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:92:in `with_lifecycle'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:98:in `create_deployment'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/cli/commands/micro.rb:179:in `perform'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli-1.2975.0/lib/cli/command_handler.rb:57:in `run'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli-1.2975.0/lib/cli/runner.rb:56:in `run'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli-1.2975.0/bin/bosh:16:in `<top (required)>'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/bin/bosh:23:in `load'
  from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/bin/bosh:23:in `<main>'

{"type": "step_finished", "id": "microbosh.deploying"}
Exited with 1.

Thanks,
Eoghan
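The root failure is excon's getaddrinfo call: the box running the deploy cannot resolve the hostname of the OpenStack volume (Cinder) endpoint. A quick check that mirrors that call (the endpoint hostname to test is whatever your own config points at, so treat any hostname here as a placeholder):

```python
import socket

def resolves(hostname, port=443):
    """Mirror excon's failing getaddrinfo call: True if this box can
    resolve `hostname`, False if resolution fails the same way."""
    try:
        socket.getaddrinfo(hostname, port)
        return True
    except socket.gaierror:
        return False

# Substitute the volume-endpoint hostname from your OpenStack service
# catalog / micro_bosh.yml; "localhost" is only a sanity check.
print(resolves("localhost"))
```

If this returns False for your endpoint on the deploying box but works elsewhere, look at /etc/resolv.conf or proxy-based DNS, as discussed in the replies below.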
Re: Resuming UAA work
Dmitriy Kalinin
I've updated https://github.com/cloudfoundry/bosh-notes/blob/master/uaa.md
to list out the planned viewable resources for read-only users.

> By far the biggest authorization requirement we get from our security teams is being able to provide a level of "admin" access that can perform most functions but can't access credentials and sensitive information.

What kind of "admin" access do you think should be provided?

On Wed, Jun 3, 2015 at 8:07 AM, dehringer <david.ehringer(a)gmail.com> wrote:
What are some of the functions that a read-only user scope would be able to
Re: Migrating a full-stack bosh deployment to bosh-init
Dmitriy Kalinin
Unfortunately, we do not show persistent disk IDs in any of the CLI commands
*yet*. You can either look at the vSphere settings for the VM to see which disk is attached, or look at /var/vcap/bosh/settings.json on the VM that has the disk attached.

On Tue, Jun 2, 2015 at 10:37 PM, Gwenn Etourneau <getourneau(a)pivotal.io> wrote:
disk id should be present on the IaaS layer, so I guess if you look into
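Pulling the disk CID out of the agent's settings file can be sketched as below. The "disks"/"persistent" layout is an assumption based on typical agent settings files of this era, so verify against your stemcell version; the sample CID is made up:

```python
import json

def persistent_disk_cids(settings):
    """Return the persistent disk CIDs recorded in parsed agent settings
    (the contents of /var/vcap/bosh/settings.json on the VM that has the
    disk attached). Key layout is an assumption; check your stemcell."""
    return sorted(settings.get("disks", {}).get("persistent", {}).keys())

# Hypothetical fragment of a settings.json, CID invented for illustration:
sample = json.loads('{"disks": {"persistent": {"disk-26a4b8e0": "/dev/sdc"}}}')
print(persistent_disk_cids(sample))  # → ['disk-26a4b8e0']
```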
Re: CF install failing on OpenStack
Looks like the OpenStack endpoint DNS name is not resolved from the local
box. Have you checked against typos in the yml config? There is a bosh micro log file which may provide additional traces. Does your network setup require use of an http_proxy to reach the OpenStack endpoint (with the proxy doing DNS resolution that is not available on the local box)?

Hope this can help,

Guillaume.

On Wed, Jun 3, 2015 at 7:45 PM, Eoghan <eoghank(a)gmail.com> wrote:
Hi,
Re: cf-stub.yml example with minimum or required info
ryunata <ricky.yunata@...>
I have the same problem too. I'm running on OpenStack. I would appreciate it if
someone could also give an example of a stub file and a cf-deployment manifest for OpenStack.
Re: cf-stub.yml example with minimum or required info
CF Runtime
Hi Ali,
We try to keep those docs up to date, but it is possible they are missing some pieces. Can you tell me what errors you are getting?

Joseph Palermo
CF Runtime Team
Re: CF install failing on OpenStack
Gwenn Etourneau
I got the same problem, and it was due to DNS
on the OpsManager box (/etc/resolv.conf).

Sent from my iPhone

On 4 June 2015 at 05:28, Guillaume Berche <bercheg(a)gmail.com> wrote:
Looks like the openstack endpoint DNS name is not resolved from the local box. Have you checked against typo in the yml config ? There is a bosh micro log file which may provide additional traces. Is your network set up requiring use of an http_proxy to reach the openstack endpoint (and the proxy is doing DNS resolution, which not available on the local box) ? Hope this can help, Guillaume. On Wed, Jun 3, 2015 at 7:45 PM, Eoghan <eoghank(a)gmail.com> wrote: Hi,
Re: CF install failing on OpenStack
Aristoteles Neto
I’ve just updated from stemcell 2905 to 2978, only to find out that the DNS address was missing from the resolv.conf.

I’m currently having a look to try and determine why, but it might pay to ensure you have a DNS address in your resolv.conf (or try an earlier stemcell).

Regards,

-- Neto

On 4/06/2015, at 8:27, Guillaume Berche <bercheg(a)gmail.com> wrote:
Looks like the openstack endpoint DNS name is not resolved from the local box. Have you checked against typo in the yml config ? There is a bosh micro log file which may provide additional traces.