
Re: cf-stub.yml example with minimum or required info

Ali
 

Thanks Joseph for your help; please see the error below:

20JXXW:cf-release ali00$ ./generate_deployment_manifest vsphere cf-stub.yml > cf-deployment.yml
2015/06/04 13:34:50 error generating manifest: unresolved nodes:
(( static_ips(12) )) in ./templates/cf-infrastructure-vsphere.yml jobs.[5].networks.[0].static_ips
(( static_ips(16) )) in ./templates/cf-infrastructure-vsphere.yml jobs.[8].networks.[0].static_ips
(( static_ips(14, 15) )) in ./templates/cf-infrastructure-vsphere.yml jobs.[15].networks.[0].static_ips
(( static_ips(17, 18, 19) )) in ./templates/cf-infrastructure-vsphere.yml jobs.[17].networks.[0].static_ips
(( jobs.postgres_z1.networks.cf1.static_ips.[0] )) in dynaml properties.databases.address
(( properties.databases.address )) in dynaml properties.ccdb.address
(( properties.databases.address )) in dynaml properties.uaadb.address
M-2XX0JW:cf-release ali00$


I do not want to bug the cf-bosh alias with every error I run into, so my ask is to find a sample cf-stub.yml with all the minimum required values; I'm sure I'm missing a lot :). When I first ran the sample online here, http://docs.cloudfoundry.org/deploying/cf-stub-vsphere.html, I got an error regarding “Error 40001: Required property `range' was not specified in object”; after I added the “range” property I got the error above.

I'm looking to build a POC CF with minimum effort. I have one network (10.166.166.0/23) and vSphere 5.x, and I want to use it for both CF networks (cf1 and cf2). I'm not sure how many IPs I need on each network. Do I also have to specify node specs and vSphere info in cf-stub.yml? I do not see a section for them.
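
A note on sizing: the (( static_ips(n) )) entries in the error above pick the nth address out of the job network's static range, so indices up to 19 suggest cf1 needs a static range of at least 20 addresses. A minimal sketch of one subnet, with placeholder addresses rather than a verified layout:

networks:
- name: cf1
  subnets:
  - range: 10.166.166.0/23
    gateway: 10.166.166.1
    static:
    - 10.166.166.100 - 10.166.166.130   # at least 20 addresses so static_ips(0..19) can resolve
    reserved:
    - 10.166.166.2 - 10.166.166.99      # keep reserved and static ranges inside the /23 and non-overlapping
    dns: [10.166.168.183]
    cloud_properties:
      name: '10.166.166.x'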

I also tried bosh-lite and it worked fine on Ubuntu 14.

Here is my cf-stub.yml in case you want to have a look:


# The following line helps maintain current documentation at http://docs.cloudfoundry.org.
# code_snippet cf-stub-vsphere start
---
name: cloudfoundry
director_uuid: b9a1bf7b-952f-48e1-a496-f6543d7a782c

releases:
- name: cf-210
  version: latest

networks:
- name: cf1
  subnets:
  - range: 10.166.166.0/23
    gateway: 10.195.76.1
    static:
    - 10.166.166.104 - 10.166.166.115
    reserved:
    # .1 is special
    - 10.166.166.2 - 10.166.166.101
    - 10.166.166.120 - 10.194.167.254
    # .255 is special
    dns: [10.166.168.183]
    cloud_properties:
      name: '10.166.166.x'
- name: cf2
  subnets:
  - range: 10.166.166.0/23
    gateway: 10.166.166.1
    static:
    - 10.166.166.120 - 10.166.166.140
    reserved:
    # .1 is special
    - 10.166.166.2 - 10.166.166.101
    - 10.166.166.120 - 10.195.167.254
    # .255 is special
    dns: [10.166.168.183]
    cloud_properties:
      name: '10.166.166.x'
jobs:
  ha_proxy_z1:
    properties:
      ha_proxy:
        disable_http: true

properties:
  cc:
    droplets:
      droplet_directory_key: the_key
    buildpacks:
      buildpack_directory_key: bd_key
    staging_upload_user: username
    staging_upload_password: password
    bulk_api_password: password
    db_encryption_key: the_key
  dea_next:
    disk_mb: 2048
    memory_mb: 1024
  loggregator_endpoint:
    shared_secret: loggregator_endpoint_secret
  nats:
    user: nats_user
    password: nats_password
  router:
    enable_ssl: true
    ssl_cert: |
      -----BEGIN CERTIFICATE-----
      MIIDBjCCAe4CCQCz3nn1SWrDdTANBgkqhkiG9w0BAQUFADBFMQswCQYDVQQGEwJB
      VTETMBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0
      cyBQdHkgTHRkMB4XDTE1MDMwMzE4NTMyNloXDTE2MDMwMjE4NTMyNlowRTELMAkG
      A1UEBhMCQVUxEzARBgNVBAgTClNvbWUtU3RhdGUxITAfBgNVBAoTGEludGVybmV0
      IFdpZGdpdHMgUHR5IEx0ZDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
      AKtTK9xq/ycRO3fWbk1abunYf9CY6sl0Wlqm9UPMkI4j0itY2OyGyn1YuCCiEdM3
      b8guGSWB0XSL5PBq33e7ioiaH98UEe+Ai+TBxnJsro5WQ/TMywzRDhZ4E7gxDBav
      88ZY+y7ts0HznfxqEIn0Gu/UK+s6ajYcIy7d9L988+hA3K1FSdes8MavXhrI4xA1
      fY21gESfFkD4SsqvrkISC012pa7oVw1f94slIVcAG+l9MMAkatBGxgWAQO6kxk5o
      oH1Z5q2m0afeQBfFqzu5lCITLfgTWCUZUmbF6UpRhmD850/LqNtryAPrLLqXxdig
      OHiWqvFpCusOu/4z1uGC5xECAwEAATANBgkqhkiG9w0BAQUFAAOCAQEAV5RAFVQy
      8Krs5c9ebYRseXO6czL9/Rfrt/weiC1XLcDkE2i2yYsBXazMYr58o4hACJwe2hoC
      bihBZ9XnVpASEYHDLwDj3zxFP/bTuKs7tLhP7wz0lo8i6k5VSPAGBq2kjc/cO9a3
      TMmLPks/Xm42MCSWGDnCEX1854B3+JK3CNEGqSY7FYXU4W9pZtHPZ3gBoy0ymSpg
      mpleiY1Tbn5I2X7vviMW7jeviB5ivkZaXtObjyM3vtPLB+ILpa15ZhDSE5o71sjA
      jXqrE1n5o/GXHX+1M8v3aJc30Az7QAqWohW/tw5SoiSmVQZWd7gFht9vSzaH2WgO
      LwcpBC7+cUJEww==
      -----END CERTIFICATE-----
    ssl_key: |
      -----BEGIN RSA PRIVATE KEY-----
      MIIEpAIBAAKCAQEAq1Mr3Gr/JxE7d9ZuTVpu6dh/0JjqyXRaWqb1Q8yQjiPSK1jY
      7IbKfVi4IKIR0zdvyC4ZJYHRdIvk8Grfd7uKiJof3xQR74CL5MHGcmyujlZD9MzL
      DNEOFngTuDEMFq/zxlj7Lu2zQfOd/GoQifQa79Qr6zpqNhwjLt30v3zz6EDcrUVJ
      16zwxq9eGsjjEDV9jbWARJ8WQPhKyq+uQhILTXalruhXDV/3iyUhVwAb6X0wwCRq
      0EbGBYBA7qTGTmigfVnmrabRp95AF8WrO7mUIhMt+BNYJRlSZsXpSlGGYPznT8uo
      22vIA+ssupfF2KA4eJaq8WkK6w67/jPW4YLnEQIDAQABAoIBAQCDVqpcOoZKK9K8
      Bt3eXQKEMJ2ji2cKczFFJ5MEm9EBtoJLCryZbqfSue3Fzpj9pBUEkBpk/4VT5F7o
      0/Vmc5Y7LHRcbqVlRtV30/lPBPQ4V/eWtly/AZDcNsdfP/J1fgPSvaoqCr2ORLWL
      qL/vEfyIeM4GcWy0+JMcPbmABslw9O6Ptc5RGiP98vCLHQh/++sOtj6PH1pt+2X/
      Uecv3b1Hk/3Oe+M8ySorJD3KA94QTRnKX+zubkxRg/zCAki+as8rQc/d+BfVG698
      ylUT5LVLNuwbWnffY2Zt5x5CDqH01mJnHmxzQEfn68rb3bGFaYPEn9EP+maQijv6
      SsUM9A3lAoGBAODRDRn4gEIxjPICp6aawRrMDlRc+k6IWDF7wudjxJlaxFr2t7FF
      rFYm+jrcG6qMTyq+teR8uHpcKm9X8ax0L6N6gw5rVzIeIOGma/ZuYIYXX2XJx5SW
      SOas1xW6qEIbOMv+Xu9w2SWbhTgyRmtlxxjr2e7gQLz9z/vuTReJpInnAoGBAMMW
      sq5lqUfAQzqxlhTobQ7tnB48rUQvkGPE92SlDj2TUt9phek2/TgRJT6mdcozvimt
      JPhxKg3ioxG8NPmN0EytjpSiKqlxS1R2po0fb75vputfpw16Z8/2Vik+xYqNMTLo
      SpeVkHu7fbtNYEK2qcU44OyOZ/V+5Oo9TuBIFRhHAoGACkqHhwDRHjaWdR2Z/w5m
      eIuOvF3lN2MWZm175ouynDKDeoaAsiS2VttB6R/aRFxX42UHfoYXC8LcTmyAK5zF
      8X3SMf7H5wtqBepQVt+Gm5zGSSqLcEnQ3H5c+impOh105CGoxt0rk4Ui/AeRIalv
      C70AJOcvD3eu5aFq9gDe/1ECgYBAhkVbASzYGnMh+pKVH7rScSxto8v6/XBYT1Ez
      7JOlMhD667/qvtFJtgIHkq7qzepbhnTv5x3tscQVnZY34/u9ILpD1s8dc+dibEvx
      6S/gYLVorB5ois/DLMqaobRcew6Gs+XX9RPwmLahOJpZ9mh4XrOmCgPAYtP71YM9
      ExpHCQKBgQCMMDDWGMRdFMJgXbx1uMere7OoniBdZaOexjbglRh1rMVSXqzBoU8+
      yhEuHGAsHGWQdSBHnqRe9O0Bj/Vlw2VVEaJeL1ewRHb+jXSnuKclZOJgMsJAvgGm
      SOWIahDrATA4g1T6yLBWQPhj3ZXD3eCMxT1Q3DvpG1DjgvXwmXQJAA==
      -----END RSA PRIVATE KEY-----
    cipher_suites: TLS_RSA_WITH_RC4_128_SHA:TLS_RSA_WITH_AES_128_CBC_SHA
    status:
      user: router_user
      password: router_password
  login:
    logout:
      redirect:
        parameter:
          disable: false
  uaa:
    admin:
      client_secret: admin_secret
    batch:
      username: batch_username
      password: batch_password
    cc:
      client_secret: cc_client_secret
    clients:
      app-direct:
        secret: app-direct_secret
      developer_console:
        secret: developer_console_secret
      login:
        secret: login_client_secret
      notifications:
        secret: notification_secret
      doppler:
        secret: doppler_secret
      cloud_controller_username_lookup:
        secret: cloud_controller_username_lookup_secret
      gorouter:
        secret: gorouter_secret
    jwt:
      verification_key: vk
      signing_key: sk
    scim:
      users:
      - admin|fakepassword|scim.write,scim.read,openid,cloud_controller.admin,doppler.firehose

# code_snippet cf-stub-vsphere end
# The previous line helps maintain current documentation at http://docs.cloudfoundry.org.




Thank you

Ahmed





From: CF Runtime <cfruntime(a)gmail.com>
Reply-To: "Discussions about the Cloud Foundry BOSH project." <cf-bosh(a)lists.cloudfoundry.org>
Date: Wednesday, June 3, 2015 at 5:40 PM
To: "cf-bosh(a)lists.cloudfoundry.org" <cf-bosh(a)lists.cloudfoundry.org>
Subject: Re: [cf-bosh] cf-stub.yml example with minimum or required info

Hi Ali,

We try to keep those docs up to date, but it is possible they are missing some pieces.

Can you tell me what errors you are getting?

Joseph Palermo
CF Runtime Team


Re: CF install failing on OpenStack

eoghank
 

Thanks Guillaume, it looks like DNS resolution of the Cinder endpoint was
causing the volume failure.

Eoghan



--
View this message in context: http://cf-bosh.70367.x6.nabble.com/cf-bosh-CF-install-failing-on-OpenStack-tp117p125.html
Sent from the CF BOSH mailing list archive at Nabble.com.


Re: CF install failing on OpenStack

Aristoteles Neto
 

I’ve just updated from stemcell 2905 to 2978, only to find out that the DNS address was missing from resolv.conf.

I’m currently having a look to try and determine why, but it might pay to ensure you have a DNS address in your resolv.conf (or try an earlier stemcell).
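
A quick sketch of what to check on the affected box (the nameserver value is only an example; use whatever DNS serves your environment, and the endpoint hostname below is a placeholder):

cat /etc/resolv.conf
# expect at least one line like:
#   nameserver 10.0.0.2
# then confirm the OpenStack/Cinder endpoint hostname actually resolves:
nslookup <your-cinder-endpoint-hostname>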

Regards,

-- Neto

On 4/06/2015, at 8:27, Guillaume Berche <bercheg(a)gmail.com> wrote:

Looks like the openstack endpoint DNS name is not resolved from the local box. Have you checked against typo in the yml config ? There is a bosh micro log file which may provide additional traces.

Is your network set up requiring use of an http_proxy to reach the openstack endpoint (and the proxy is doing DNS resolution, which not available on the local box) ?

Hope this can help,

Guillaume.

On Wed, Jun 3, 2015 at 7:45 PM, Eoghan <eoghank(a)gmail.com> wrote:
Hi,

I have an baremetal install of OpenStack on Ubuntu 14.04 and am having issues with the bosh install. All the endpoints are correctly configured and I have run through all the pre-req tests for a CF install on openstack.

The install is failing with this error. Can anyone provide any pointers on as to what could be causing this?

{"type": "step_started", "id": "microbosh.setting_manifest"}

Running "bundle exec bosh -n micro deployment micro/"

Deployment set to '/var/tempest/workspaces/default/deployments/micro/micro_bosh.yml'

{"type": "step_finished", "id": "microbosh.setting_manifest"}

{"type": "step_started", "id": "microbosh.deploying"}

Running "bundle exec bosh -n micro deploy /var/tempest/stemcells/bosh-stemcell-2975-openstack-kvm-ubuntu-trusty-go_agent-raw.tgz --update-if-exists"


Verifying stemcell...

File exists and readable OK

Verifying tarball...

Read tarball OK

Manifest exists OK

Stemcell image file OK

Stemcell properties OK


Stemcell info

-------------

Name: bosh-openstack-kvm-ubuntu-trusty-go_agent-raw

Version: 2975


/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/volume.rb:191: warning: duplicated key at line 196 ignored: :openstack_region

Started deploy micro bosh

Started deploy micro bosh > Unpacking stemcell. Done (00:00:05)

Started deploy micro bosh > Uploading stemcell. Done (00:00:53)

Started deploy micro bosh > Creating VM from 1552e46e-8291-461f-966a-ac6332d313be. Done (00:00:44)

/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/volume.rb:191: warning: duplicated key at line 196 ignored: :openstack_region

Started deploy micro bosh > Waiting for the agent. Done (00:02:19)

Started deploy micro bosh > Updating persistent disk

Started deploy micro bosh > Create disklog writing failed. can't be called from trap context

/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/socket.rb:108:in `getaddrinfo': getaddrinfo: Name or service not known (SocketError) (Excon::Errors::SocketError)

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/socket.rb:108:in `connect'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/socket.rb:28:in `initialize'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:389:in `new'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:389:in `socket'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:106:in `request_call'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/mock.rb:47:in `request_call'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/instrumentor.rb:19:in `block in request_call'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/excon_logging_instrumentor.rb:10:in `instrument'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/instrumentor.rb:18:in `request_call'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/base.rb:15:in `request_call'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/base.rb:15:in `request_call'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/base.rb:15:in `request_call'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:233:in `request'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-core-1.30.0/lib/fog/core/connection.rb:81:in `request'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/volume.rb:156:in `request'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/requests/volume/create_volume.rb:19:in `create_volume'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/models/volume/volume.rb:29:in `save'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-core-1.30.0/lib/fog/core/collection.rb:51:in `create'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/cloud.rb:425:in `block (2 levels) in create_disk'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/helpers.rb:26:in `with_openstack'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/cloud.rb:425:in `block in create_disk'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_common-1.2975.0/lib/common/thread_formatter.rb:49:in `with_thread_name'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/cloud.rb:403:in `create_disk'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:282:in `block in create_disk'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:85:in `step'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:280:in `create_disk'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:352:in `update_persistent_disk'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:137:in `block in create'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:85:in `step'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:136:in `create'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:98:in `block in create_deployment'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:92:in `with_lifecycle'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:98:in `create_deployment'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/cli/commands/micro.rb:179:in `perform'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli-1.2975.0/lib/cli/command_handler.rb:57:in `run'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli-1.2975.0/lib/cli/runner.rb:56:in `run'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli-1.2975.0/bin/bosh:16:in `<top (required)>'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/bin/bosh:23:in `load'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/bin/bosh:23:in `<main>'

{"type": "step_finished", "id": "microbosh.deploying"}

Exited with 1.


Thanks,
Eoghan

_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh


_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh


Re: CF install failing on OpenStack

Gwenn Etourneau
 

I got the same problem and it was due to DNS
on the opsmanager box.

/etc/resolv.conf


Envoyé de mon iPhone

Le 4 juin 2015 à 05:28, Guillaume Berche <bercheg(a)gmail.com> a écrit :

Looks like the openstack endpoint DNS name is not resolved from the local
box. Have you checked against typo in the yml config ? There is a bosh
micro log file which may provide additional traces.

Is your network set up requiring use of an http_proxy to reach the
openstack endpoint (and the proxy is doing DNS resolution, which not
available on the local box) ?

Hope this can help,

Guillaume.

On Wed, Jun 3, 2015 at 7:45 PM, Eoghan <eoghank(a)gmail.com> wrote:

Hi,

I have an baremetal install of OpenStack on Ubuntu 14.04 and am having
issues with the bosh install. All the endpoints are correctly configured
and I have run through all the pre-req tests for a CF install on openstack.

The install is failing with this error. Can anyone provide any pointers on
as to what could be causing this?

{"type": "step_started", "id": "microbosh.setting_manifest"}

Running "bundle exec bosh -n micro deployment micro/"

Deployment set to
'/var/tempest/workspaces/default/deployments/micro/micro_bosh.yml'

{"type": "step_finished", "id": "microbosh.setting_manifest"}

{"type": "step_started", "id": "microbosh.deploying"}

Running "bundle exec bosh -n micro deploy
/var/tempest/stemcells/bosh-stemcell-2975-openstack-kvm-ubuntu-trusty-go_agent-raw.tgz
--update-if-exists"



Verifying stemcell...

File exists and readable OK

Verifying tarball...

Read tarball OK

Manifest exists OK

Stemcell image file OK

Stemcell properties OK



Stemcell info

-------------

Name: bosh-openstack-kvm-ubuntu-trusty-go_agent-raw

Version: 2975



/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/volume.rb:191:
warning: duplicated key at line 196 ignored: :openstack_region

Started deploy micro bosh

Started deploy micro bosh > Unpacking stemcell. Done (00:00:05)

Started deploy micro bosh > Uploading stemcell. Done (00:00:53)

Started deploy micro bosh > Creating VM from
1552e46e-8291-461f-966a-ac6332d313be. Done (00:00:44)

/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/volume.rb:191:
warning: duplicated key at line 196 ignored: :openstack_region

Started deploy micro bosh > Waiting for the agent. Done (00:02:19)

Started deploy micro bosh > Updating persistent disk

Started deploy micro bosh > Create disklog writing failed. can't be
called from trap context

/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/socket.rb:108:in
`getaddrinfo': getaddrinfo: Name or service not known (SocketError)
(Excon::Errors::SocketError)

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/socket.rb:108:in
`connect'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/socket.rb:28:in
`initialize'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:389:in
`new'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:389:in
`socket'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:106:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/mock.rb:47:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/instrumentor.rb:19:in
`block in request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/excon_logging_instrumentor.rb:10:in
`instrument'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/instrumentor.rb:18:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/base.rb:15:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/base.rb:15:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/base.rb:15:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:233:in
`request'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-core-1.30.0/lib/fog/core/connection.rb:81:in
`request'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/volume.rb:156:in
`request'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/requests/volume/create_volume.rb:19:in
`create_volume'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/models/volume/volume.rb:29:in
`save'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-core-1.30.0/lib/fog/core/collection.rb:51:in
`create'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/cloud.rb:425:in
`block (2 levels) in create_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/helpers.rb:26:in
`with_openstack'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/cloud.rb:425:in
`block in create_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_common-1.2975.0/lib/common/thread_formatter.rb:49:in
`with_thread_name'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/cloud.rb:403:in
`create_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:282:in
`block in create_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:85:in
`step'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:280:in
`create_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:352:in
`update_persistent_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:137:in
`block in create'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:85:in
`step'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:136:in
`create'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:98:in
`block in create_deployment'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:92:in
`with_lifecycle'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:98:in
`create_deployment'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/cli/commands/micro.rb:179:in
`perform'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli-1.2975.0/lib/cli/command_handler.rb:57:in
`run'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli-1.2975.0/lib/cli/runner.rb:56:in
`run'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli-1.2975.0/bin/bosh:16:in
`<top (required)>'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/bin/bosh:23:in
`load'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/bin/bosh:23:in
`<main>'

{"type": "step_finished", "id": "microbosh.deploying"}

Exited with 1.

Thanks,
Eoghan

_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh

_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh


Re: cf-stub.yml example with minimum or required info

CF Runtime
 

Hi Ali,

We try to keep those docs up to date, but it is possible they are missing
some pieces.

Can you tell me what errors you are getting?

Joseph Palermo
CF Runtime Team


Re: cf-stub.yml example with minimum or required info

ryunata <ricky.yunata@...>
 

I have the same problem too. I'm running on OpenStack. I would appreciate it if
someone could also give an example stub file and cf-deployment manifest
file for OpenStack.



--
View this message in context: http://cf-bosh.70367.x6.nabble.com/cf-bosh-cf-stub-yml-example-with-minimum-or-required-info-tp108p121.html
Sent from the CF BOSH mailing list archive at Nabble.com.


Re: CF install failing on OpenStack

Guillaume Berche
 

Looks like the openstack endpoint DNS name is not resolved from the local
box. Have you checked for typos in the yml config? There is a bosh
micro log file which may provide additional traces.

Is your network setup requiring use of an http_proxy to reach the
openstack endpoint (and the proxy is doing DNS resolution, which is not
available on the local box)?

Hope this can help,

Guillaume.

On Wed, Jun 3, 2015 at 7:45 PM, Eoghan <eoghank(a)gmail.com> wrote:

Hi,

I have an baremetal install of OpenStack on Ubuntu 14.04 and am having
issues with the bosh install. All the endpoints are correctly configured
and I have run through all the pre-req tests for a CF install on openstack.

The install is failing with this error. Can anyone provide any pointers on
as to what could be causing this?

{"type": "step_started", "id": "microbosh.setting_manifest"}

Running "bundle exec bosh -n micro deployment micro/"

Deployment set to
'/var/tempest/workspaces/default/deployments/micro/micro_bosh.yml'

{"type": "step_finished", "id": "microbosh.setting_manifest"}

{"type": "step_started", "id": "microbosh.deploying"}

Running "bundle exec bosh -n micro deploy
/var/tempest/stemcells/bosh-stemcell-2975-openstack-kvm-ubuntu-trusty-go_agent-raw.tgz
--update-if-exists"



Verifying stemcell...

File exists and readable OK

Verifying tarball...

Read tarball OK

Manifest exists OK

Stemcell image file OK

Stemcell properties OK



Stemcell info

-------------

Name: bosh-openstack-kvm-ubuntu-trusty-go_agent-raw

Version: 2975



/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/volume.rb:191:
warning: duplicated key at line 196 ignored: :openstack_region

Started deploy micro bosh

Started deploy micro bosh > Unpacking stemcell. Done (00:00:05)

Started deploy micro bosh > Uploading stemcell. Done (00:00:53)

Started deploy micro bosh > Creating VM from
1552e46e-8291-461f-966a-ac6332d313be. Done (00:00:44)

/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/volume.rb:191:
warning: duplicated key at line 196 ignored: :openstack_region

Started deploy micro bosh > Waiting for the agent. Done (00:02:19)

Started deploy micro bosh > Updating persistent disk

Started deploy micro bosh > Create disklog writing failed. can't be
called from trap context

/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/socket.rb:108:in
`getaddrinfo': getaddrinfo: Name or service not known (SocketError)
(Excon::Errors::SocketError)

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/socket.rb:108:in
`connect'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/socket.rb:28:in
`initialize'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:389:in
`new'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:389:in
`socket'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:106:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/mock.rb:47:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/instrumentor.rb:19:in
`block in request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/excon_logging_instrumentor.rb:10:in
`instrument'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/instrumentor.rb:18:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/base.rb:15:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/base.rb:15:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/base.rb:15:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:233:in
`request'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-core-1.30.0/lib/fog/core/connection.rb:81:in
`request'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/volume.rb:156:in
`request'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/requests/volume/create_volume.rb:19:in
`create_volume'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/models/volume/volume.rb:29:in
`save'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-core-1.30.0/lib/fog/core/collection.rb:51:in
`create'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/cloud.rb:425:in
`block (2 levels) in create_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/helpers.rb:26:in
`with_openstack'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/cloud.rb:425:in
`block in create_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_common-1.2975.0/lib/common/thread_formatter.rb:49:in
`with_thread_name'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/cloud.rb:403:in
`create_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:282:in
`block in create_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:85:in
`step'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:280:in
`create_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:352:in
`update_persistent_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:137:in
`block in create'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:85:in
`step'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:136:in
`create'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:98:in
`block in create_deployment'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:92:in
`with_lifecycle'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:98:in
`create_deployment'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/cli/commands/micro.rb:179:in
`perform'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli-1.2975.0/lib/cli/command_handler.rb:57:in
`run'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli-1.2975.0/lib/cli/runner.rb:56:in
`run'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli-1.2975.0/bin/bosh:16:in
`<top (required)>'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/bin/bosh:23:in
`load'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/bin/bosh:23:in
`<main>'

{"type": "step_finished", "id": "microbosh.deploying"}

Exited with 1.

Thanks,
Eoghan

_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh


Re: Migrating a full-stack bosh deployment to bosh-init

Dmitriy Kalinin
 

Unfortunately we do not show persistent disk IDs in any of the CLI commands
*yet*. You can either look at the vsphere settings for the VM for which
disk is attached or look at the /var/vcap/bosh/settings.json on the VM that
has the disk attached.
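
For reference, a rough sketch of the relevant piece of /var/vcap/bosh/settings.json; the persistent disk CID appears as a key under disks.persistent (the CID and values below are made up, and the value format varies by CPI):

{
  "disks": {
    "system": "/dev/sda",
    "ephemeral": "/dev/sdb",
    "persistent": {
      "disk-3f2a9c1e-0000-0000-0000-000000000000": "2"
    }
  }
}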

On Tue, Jun 2, 2015 at 10:37 PM, Gwenn Etourneau <getourneau(a)pivotal.io>
wrote:

disk id should be present on the IaaS layer, so I guess if you look into
Vsphere you should find it.
If I remember should be the form disk-some-uuid,
vcenter>yourvm>editsetting>yourdisk and should be in the first field
(/disk-some-uui.vmdk).

But I don't have any vcenter to check now ...

On Wed, Jun 3, 2015 at 11:47 AM, Espinosa, Allan | Allan | OPS <
allan.espinosa(a)rakuten.com> wrote:

Hi,

We currently have a binary bosh [1] setup. However we would like to
transition to bosh-init to prevent having to manage multiple bosh
deployments.

I'm looking at how to regenerate the state file described in [2]. I can
find my VM CID from "bosh vms bosh-meta --details" but can't get the other
information from the director. Is there other places to retrieve the
information? Or do I have to poke things below the cpi (vSphere in our
case).

We're using the vsphere cpi for our deployment.

Thanks
Allan

[1]
https://blog.starkandwayne.com/2014/07/10/resurrecting-bosh-with-binary-boshes/
[2] https://bosh.io/docs/using-bosh-init.html#recover-deployment-state
_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh

_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh


Re: Resuming UAA work

Dmitriy Kalinin
 

I've updated https://github.com/cloudfoundry/bosh-notes/blob/master/uaa.md
to list out planned viewable resources by read-only users.

By far the biggest authorization requirement we get from our security
teams is being able to provide a level of "admin" access that can perform
most functions but can't access credentials and sensitive information.

What kind of "admin" access do you think should be provided?





On Wed, Jun 3, 2015 at 8:07 AM, dehringer <david.ehringer(a)gmail.com> wrote:

What are some of the functions that a read-only user scope would be able to
perform. I really like the idea of a read-only scope but it seems like
today
there are only a few functions that aren't intended to modify the state of
the system or indirectly can allow for modification of the system (e.g.
bosh
ssh/scp).

By far the biggest authorization requirement we get from our security teams
is being able to provide a level of "admin" access that can perform most
functions but can't access credentials and sensitive information. Simply
hooking in UAA obviously doesn't help with this as this is deeply related
to
how deployment manifests work in general. But I mention it because this is
the type of authorization and access control requirements our security
teams
are providing.



--
View this message in context:
http://cf-bosh.70367.x6.nabble.com/cf-bosh-Resuming-UAA-work-tp75p116.html
Sent from the CF BOSH mailing list archive at Nabble.com.
_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh


CF install failing on OpenStack

eoghank
 

Hi,

I have a bare-metal install of OpenStack on Ubuntu 14.04 and am having
issues with the bosh install. All the endpoints are correctly configured
and I have run through all the pre-req tests for a CF install on OpenStack.

The install is failing with the error below. Can anyone provide any pointers
as to what could be causing this?

{"type": "step_started", "id": "microbosh.setting_manifest"}

Running "bundle exec bosh -n micro deployment micro/"

Deployment set to
'/var/tempest/workspaces/default/deployments/micro/micro_bosh.yml'

{"type": "step_finished", "id": "microbosh.setting_manifest"}

{"type": "step_started", "id": "microbosh.deploying"}

Running "bundle exec bosh -n micro deploy
/var/tempest/stemcells/bosh-stemcell-2975-openstack-kvm-ubuntu-trusty-go_agent-raw.tgz
--update-if-exists"



Verifying stemcell...

File exists and readable OK

Verifying tarball...

Read tarball OK

Manifest exists OK

Stemcell image file OK

Stemcell properties OK



Stemcell info

-------------

Name: bosh-openstack-kvm-ubuntu-trusty-go_agent-raw

Version: 2975



/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/volume.rb:191:
warning: duplicated key at line 196 ignored: :openstack_region

Started deploy micro bosh

Started deploy micro bosh > Unpacking stemcell. Done (00:00:05)

Started deploy micro bosh > Uploading stemcell. Done (00:00:53)

Started deploy micro bosh > Creating VM from
1552e46e-8291-461f-966a-ac6332d313be. Done (00:00:44)

/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/volume.rb:191:
warning: duplicated key at line 196 ignored: :openstack_region

Started deploy micro bosh > Waiting for the agent. Done (00:02:19)

Started deploy micro bosh > Updating persistent disk

Started deploy micro bosh > Create disklog writing failed. can't be
called from trap context

/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/socket.rb:108:in
`getaddrinfo': getaddrinfo: Name or service not known (SocketError)
(Excon::Errors::SocketError)

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/socket.rb:108:in
`connect'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/socket.rb:28:in
`initialize'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:389:in
`new'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:389:in
`socket'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:106:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/mock.rb:47:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/instrumentor.rb:19:in
`block in request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/excon_logging_instrumentor.rb:10:in
`instrument'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/instrumentor.rb:18:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/base.rb:15:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/base.rb:15:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/middlewares/base.rb:15:in
`request_call'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/excon-0.45.3/lib/excon/connection.rb:233:in
`request'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-core-1.30.0/lib/fog/core/connection.rb:81:in
`request'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/volume.rb:156:in
`request'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/requests/volume/create_volume.rb:19:in
`create_volume'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-1.27.0/lib/fog/openstack/models/volume/volume.rb:29:in
`save'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/fog-core-1.30.0/lib/fog/core/collection.rb:51:in
`create'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/cloud.rb:425:in
`block (2 levels) in create_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/helpers.rb:26:in
`with_openstack'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/cloud.rb:425:in
`block in create_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_common-1.2975.0/lib/common/thread_formatter.rb:49:in
`with_thread_name'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_openstack_cpi-1.2975.0/lib/cloud/openstack/cloud.rb:403:in
`create_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:282:in
`block in create_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:85:in
`step'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:280:in
`create_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:352:in
`update_persistent_disk'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:137:in
`block in create'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:85:in
`step'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:136:in
`create'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:98:in
`block in create_deployment'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:92:in
`with_lifecycle'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/deployer/instance_manager.rb:98:in
`create_deployment'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli_plugin_micro-1.2975.0/lib/bosh/cli/commands/micro.rb:179:in
`perform'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli-1.2975.0/lib/cli/command_handler.rb:57:in
`run'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli-1.2975.0/lib/cli/runner.rb:56:in
`run'

from
/home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/gems/bosh_cli-1.2975.0/bin/bosh:16:in
`<top (required)>'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/bin/bosh:23:in
`load'

from /home/tempest-web/tempest/web/vendor/bundle/ruby/2.2.0/bin/bosh:23:in
`<main>'

{"type": "step_finished", "id": "microbosh.deploying"}

Exited with 1.

Thanks,
Eoghan


Re: Resuming UAA work

David Ehringer
 

What are some of the functions that a read-only user scope would be able to
perform? I really like the idea of a read-only scope, but it seems like today
there are only a few functions that aren't intended to modify the state of
the system or that can't indirectly allow modification of the system (e.g. bosh
ssh/scp).

By far the biggest authorization requirement we get from our security teams
is being able to provide a level of "admin" access that can perform most
functions but can't access credentials and sensitive information. Simply
hooking in UAA obviously doesn't help with this as this is deeply related to
how deployment manifests work in general. But I mention it because this is
the type of authorization and access control requirements our security teams
are providing.



--
View this message in context: http://cf-bosh.70367.x6.nabble.com/cf-bosh-Resuming-UAA-work-tp75p116.html
Sent from the CF BOSH mailing list archive at Nabble.com.


Re: Migrating a full-stack bosh deployment to bosh-init

Gwenn Etourneau
 

The disk id should be present at the IaaS layer, so I guess if you look into
vSphere you should find it.
If I remember correctly it should be of the form disk-some-uuid; under
vCenter > your VM > Edit Settings > your disk it should be in the first field
(/disk-some-uuid.vmdk).

But I don't have any vcenter to check now ...

On Wed, Jun 3, 2015 at 11:47 AM, Espinosa, Allan | Allan | OPS <
allan.espinosa(a)rakuten.com> wrote:

Hi,

We currently have a binary bosh [1] setup. However we would like to
transition to bosh-init to prevent having to manage multiple bosh
deployments.

I'm looking at how to regenerate the state file described in [2]. I can
find my VM CID from "bosh vms bosh-meta --details" but can't get the other
information from the director. Is there other places to retrieve the
information? Or do I have to poke things below the cpi (vSphere in our
case).

We're using the vsphere cpi for our deployment.

Thanks
Allan

[1]
https://blog.starkandwayne.com/2014/07/10/resurrecting-bosh-with-binary-boshes/
[2] https://bosh.io/docs/using-bosh-init.html#recover-deployment-state
_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh


Migrating a full-stack bosh deployment to bosh-init

Allan Espinosa
 

Hi,

We currently have a binary bosh [1] setup. However, we would like to transition to bosh-init to avoid having to manage multiple bosh deployments.

I'm looking at how to regenerate the state file described in [2]. I can find my VM CID from "bosh vms bosh-meta --details" but can't get the other information from the director. Are there other places to retrieve the information? Or do I have to poke at things below the CPI (vSphere in our case)?

We're using the vsphere cpi for our deployment.

Thanks
Allan

[1] https://blog.starkandwayne.com/2014/07/10/resurrecting-bosh-with-binary-boshes/
[2] https://bosh.io/docs/using-bosh-init.html#recover-deployment-state


Re: Create bosh stemcell failed in AWS region cn-north-1

Wayne E. Seguin
 

Absolutely 小锋,

This is the guide I followed in order to build my own custom stemcell for a
client:
https://github.com/cloudfoundry/bosh/blob/master/bosh-stemcell/README.md

~Wayne

On Mon, Jun 1, 2015 at 11:12 PM, 王小锋 <zzuwxf(a)gmail.com> wrote:

Hi, Wayne

I also met the same issue as 支雷, could you please let us know how to
create custom stemcell? Is there any guide? thanks a lot.

2015-06-01 20:23 GMT+08:00 Wayne E. Seguin <wayneeseguin(a)starkandwayne.com>:
支雷,

Have you tried creating your own custom stemcell yet while you wait?

~Wayne

On Fri, May 29, 2015 at 5:18 AM, 支雷 <lzhi3937(a)gmail.com> wrote:

I have been blocked by this issue for two weeks, and have no progress. I
am looking forward to you to solve this problem. Thanks a lot.

2015-05-27 9:11 GMT+08:00 Dmitriy Kalinin <dkalinin(a)pivotal.io>:

It seems like this method cannot find appropriate AKIs:
https://github.com/cloudfoundry/bosh/blob/master/bosh_aws_cpi/lib/cloud/aws/aki_picker.rb#L48-L59

I just requested account from AWS to access China region and try to
reproduce the problem.

On Wed, May 20, 2015 at 8:37 PM, Dr Nic Williams <
drnicwilliams(a)gmail.com> wrote:

There are two issues - the second is that bosh-bootstrap uses a
project "cyoi" (choose your own infrastructure) and underneath it uses
"fog" - it's quite possible that either or both do not yet support China
(it's harder to get accounts to do testing).

The former is failing inside AWS SDK for Ruby.

BOSH calls into this library here:
https://github.com/cloudfoundry/bosh/blob/develop/bosh_aws_cpi/lib/cloud/aws/aki_picker.rb#L25

We are using aws-sdk (= 1.60.2)
https://github.com/cloudfoundry/bosh/blob/114b3cf107672cfebf444fe7db4703dd804c72cc/Gemfile.lock#L19

The latest version is 2.0.42
https://rubygems.org/gems/aws-sdk/versions/2.0.42

So perhaps China support was added more recently and we need to bump
to newer aws-sdk version.

Try bumping this version in the Gemfile of bosh and using that.

Avoid bosh-bootstrap until you've at least confirmed you can get the
underlying bosh_cli to work.
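
A minimal sketch of the Gemfile change being suggested (untested; note that aws-sdk 2.x also changes the Ruby API namespace, so the CPI code may need updates as well):

# in bosh's Gemfile
gem 'aws-sdk', '2.0.42'   # previously pinned to 1.60.2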


On Wed, May 20, 2015 at 8:17 PM, 支雷 <lzhi3937(a)gmail.com> wrote:

I have tried full stemcell
bosh-stemcell-2972-aws-xen-ubuntu-trusty-go_agent.tgz, but failed, error
"create stemcell failed: unable to find AKI:" was thrown (please find
details in my first email). And when I tried to "bosh-bootstrap deploy"
command, I got `validate_aws_region': Unknown region: "cn-north-1"
(ArgumentError). Seems cn-north-1 is not supported by bosh aws plugin. Any
suggestions on this issue? Thanks!

2015-05-19 23:58 GMT+08:00 Wayne E. Seguin <
wayneeseguin(a)starkandwayne.com>:

The issue is that there appear to not be any light stemcells in your
region, there is another recent question on the list to this effect. In
order to make progress you might want to build your own stemcell to use for
now or try to find and download a full aws hvm stemcell image to upload.

On Mon, May 18, 2015 at 6:12 AM, 支雷 <lzhi3937(a)gmail.com> wrote:

Hello,

I tried to deploy micro bosh in AWS region cn-north-1 in several
ways, but all failed. Any suggestions on how to deploy micro bosh in AWS
region cn-north-1? Thanks!

I created an EC2 instance (ubuntu) in the cn-north-1 region with a
public ip, ssh'd into it and installed bosh-cli, bosh_cli_plugin_micro and
bosh_cli_plugin_aws. After that I downloaded stemcell
bosh-stemcell-2972-aws-xen-ubuntu-trusty-go_agent.tgz, and tried " bosh
micro deploy ./bosh-stemcell-2972-aws-xen-ubuntu-trusty-go_agent.tgz" which
resulted in "create stemcell failed: getaddrinfo: Name or service not
known:"

I checked the failed URL; it's "ec2.cn-north-1.amazonaws.com",
which is not accessible. I updated the http.rb and changed the url to "
ec2.cn-north-1.amazonaws.com.cn", escaped the ssl validation, and
tried again; another error was thrown:

Stemcell info
-------------
Name: bosh-aws-xen-ubuntu-trusty-go_agent
Version: 2972

Started deploy micro bosh
Started deploy micro bosh > Unpacking stemcell. Done (00:00:08)
Started deploy micro bosh > Uploading stemcell"
create stemcell failed: unable to find AKI:
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/aki_picker.rb:15:in
`pick'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/stemcell_creator.rb:100:in
`image_params'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/stemcell_creator.rb:24:in
`create'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/cloud.rb:465:in
`block in create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_common-1.2972.0/lib/common/thread_formatter.rb:49:in
`with_thread_name'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/cloud.rb:445:in
`create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:228:in
`block (2 levels) in create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:85:in
`step'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:227:in
`block in create_stemcell'
/usr/lib/ruby/1.9.1/tmpdir.rb:83:in `mktmpdir'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:213:in
`create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:118:in
`create'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:98:in
`block in create_deployment'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:92:in
`with_lifecycle'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:98:in
`create_deployment'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/cli/commands/micro.rb:179:in
`perform'
/var/lib/gems/1.9.1/gems/bosh_cli-1.2972.0/lib/cli/command_handler.rb:57:in
`run'
/var/lib/gems/1.9.1/gems/bosh_cli-1.2972.0/lib/cli/runner.rb:56:in
`run'
/var/lib/gems/1.9.1/gems/bosh_cli-1.2972.0/bin/bosh:16:in `<top
(required)>'
/usr/local/bin/bosh:23:in `load'
/usr/local/bin/bosh:23:in `<main>'

After that I installed bosh-bootstrap and executed the following
command:

bosh-bootstrap deploy

and I selected the AWS provider and region 10 (China (Beijing) Region
(cn-north-1)); an error was thrown:

Confirming: Using AWS EC2/cn-north-1
/var/lib/gems/1.9.1/gems/fog-aws-0.1.1/lib/fog/aws/region_methods.rb:6:in
`validate_aws_region': Unknown region: "cn-north-1" (ArgumentError)
from
/var/lib/gems/1.9.1/gems/fog-aws-0.1.1/lib/fog/aws/compute.rb:482:in
`initialize'
from
/var/lib/gems/1.9.1/gems/fog-core-1.30.0/lib/fog/core/service.rb:115:in
`new'
from
/var/lib/gems/1.9.1/gems/fog-core-1.30.0/lib/fog/core/service.rb:115:in
`new'
from
/var/lib/gems/1.9.1/gems/fog-core-1.30.0/lib/fog/compute.rb:60:in `new'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers/clients/aws_provider_client.rb:257:in
`setup_fog_connection'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers/clients/fog_provider_client.rb:13:in
`initialize'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers.rb:17:in `new'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers.rb:17:in
`provider_client'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/helpers/provider.rb:6:in
`provider_client'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/address.rb:41:in
`address_cli'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/address.rb:56:in
`valid_address?'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/address.rb:19:in
`execute!'
from
/var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/lib/bosh-bootstrap/cli/commands/deploy.rb:41:in
`select_or_provision_public_networking'
from
/var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/lib/bosh-bootstrap/cli/commands/deploy.rb:21:in
`perform'
from
/var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/lib/bosh-bootstrap/thor_cli.rb:11:in
`deploy'
from
/var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor/command.rb:27:in `run'
from
/var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor/invocation.rb:126:in
`invoke_command'
from /var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor.rb:359:in
`dispatch'
from
/var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor/base.rb:440:in `start'
from
/var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/bin/bosh-bootstrap:13:in
`<top (required)>'
from /usr/local/bin/bosh-bootstrap:23:in `load'
from /usr/local/bin/bosh-bootstrap:23:in `<main>'


_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh

_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh


--
Dr Nic Williams
Stark & Wayne LLC - consultancy for Cloud Foundry users
http://drnicwilliams.com
http://starkandwayne.com
cell +1 (415) 860-2185
twitter @drnic



Re: Most bosh director commands fail with a HTTP 500

Scott Taggart <staggart@...>
 

Great! Thanks for following up, Dmitriy. I'll roll out the new stemcell ASAP to my directors.


From: "Dmitriy Kalinin" <dkalinin(a)pivotal.io>
To: "Scott Taggart" <staggart(a)skyscapecloud.com>
Cc: "cf-bosh" <cf-bosh(a)lists.cloudfoundry.org>
Sent: Tuesday, 2 June, 2015 19:17:47
Subject: Re: [cf-bosh] Most bosh director commands fail with a HTTP 500

BOSH release 169 (stemcell version 2978) fixes this problem ( https://github.com/cloudfoundry/bosh/commit/24797724994d5a59f98828e477a738c9b643c78a ).

On Thu, May 28, 2015 at 2:18 AM, Scott Taggart < staggart(a)skyscapecloud.com > wrote:





Thanks Dmitriy – this fixed our issue :)



From: Dmitriy Kalinin [mailto: dkalinin(a)pivotal.io ]
Sent: 26 May 2015 19:54
To: Scott Taggart
Cc: CF BOSH Mailing List
Subject: Re: [cf-bosh] Most bosh director commands fail with a HTTP 500





There currently exists a problem in the Director during task cleanup. Director tries to clean up task logs for the tasks that do not have associated directory on disk. https://www.pivotaltracker.com/story/show/95458780 will fix this.





To fix the Director until we release a bug fix:


- ssh as vcap into the Director VM


- run /var/vcap/jobs/director/bin/director_ctl console


- opens up console to the Director DB


- run Bosh::Director::Models::Task.where(output: nil).update(output: '/tmp/123')


- updates tasks without task log directories to a dummy destination; Director will be happy to run rm -rf /tmp/123 when it cleans up tasks.





After that you should be able to run `bosh vms` and other tasks again.
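
A minimal sketch of that console session, assuming `/var/vcap/jobs/director/bin/director_ctl console` drops you into a Ruby console with the Director's Sequel models loaded, as described above (the count line is only an added sanity check, not part of the original instructions):

  # inside the console opened by director_ctl console
  affected = Bosh::Director::Models::Task.where(output: nil)
  puts "tasks without a task log directory: #{affected.count}"
  affected.update(output: '/tmp/123')  # point them at a throwaway path for cleanup to rm -rf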








On Mon, May 25, 2015 at 2:27 PM, Scott Taggart < staggart(a)skyscapecloud.com > wrote:


Hi folks,





One of my three bosh directors has gotten itself stuck in a strange state where most (but not all) operations fail. I have recreated the director with a couple of different stemcells (but the same persistent disk) and the issue persists. It looks like it could be a database issue on the director, but I have done a very quick visual check of a few tables (e.g. vms, deployments) and they seem fine at a glance... not sure what's going on.





Everything CF-related currently/previously under the director is continuing to run fine in this AZ; it's just the director that's lost it:





$ bosh deployments

+---------------------+-----------------------+----------------------------------------------+--------------+
| Name | Release(s) | Stemcell(s) | Cloud Config |
+---------------------+-----------------------+----------------------------------------------+--------------+
| cf-mysql | cf-mysql/19 | bosh-vcloud-esxi-ubuntu-trusty-go_agent/2915 | none |
+---------------------+-----------------------+----------------------------------------------+--------------+
| cf-services-contrib | cf-services-contrib/6 | bosh-vcloud-esxi-ubuntu-trusty-go_agent/2915 | none |
+---------------------+-----------------------+----------------------------------------------+--------------+
| xxxxxxx_cf | cf/208 | bosh-vcloud-esxi-ubuntu-trusty-go_agent/2915 | none |
+---------------------+-----------------------+----------------------------------------------+--------------+

Deployments total: 3





$ bosh releases

+---------------------+----------+-------------+
| Name | Versions | Commit Hash |
+---------------------+----------+-------------+
| cf | 208* | 5d00be54+ |
| cf-mysql | 19* | dfab036b+ |
| cf-services-contrib | 6* | 57fd2098+ |
+---------------------+----------+-------------+
(*) Currently deployed
(+) Uncommitted changes

Releases total: 3





$ bosh locks
No locks



$ bosh tasks
No running tasks





$ bosh vms
Deployment `cf-mysql'
HTTP 500:





$ bosh cloudcheck
Performing cloud check...

Processing deployment manifest
------------------------------
HTTP 500:





The relevant error I get from /var/vcap/sys/log/director/director.debug.log on the director is:


E, [2015-05-25 21:20:15 #1010] [] ERROR -- Director: TypeError - no implicit conversion of nil into String:
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `path'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `block in fu_list'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `map'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `fu_list'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:625:in `rm_r'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:654:in `rm_rf'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/task_remover.rb:9:in `block in remove'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:152:in `block in each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `block (2 levels) in fetch_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:720:in `block in yield_hash_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:714:in `times'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:714:in `yield_hash_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `block in fetch_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:134:in `execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:413:in `_execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `block (2 levels) in execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:425:in `check_database_errors'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `block in execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `block in synchronize'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/connection_pool/threaded.rb:104:in `hold'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `synchronize'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:801:in `execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `fetch_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:152:in `each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/task_remover.rb:8:in `remove'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/task_helper.rb:23:in `create_task'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/job_queue.rb:9:in `enqueue'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/vm_state_manager.rb:5:in `fetch_vm_state'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/controllers/deployments_controller.rb:182:in `block in <class:DeploymentsController>'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1603:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1603:in `block in compile!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `[]'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `block (3 levels) in route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:985:in `route_eval'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `block (2 levels) in route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1006:in `block in process_route'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1004:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1004:in `process_route'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:964:in `block in route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:963:in `each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:963:in `route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1076:in `block in dispatch!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `block in invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1073:in `dispatch!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:898:in `block in call!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `block in invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:898:in `call!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:886:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/xss_header.rb:18:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/path_traversal.rb:16:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/json_csrf.rb:18:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/frame_options.rb:31:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/nulllogger.rb:9:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/head.rb:13:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:180:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:2014:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/urlmap.rb:66:in `block in call'


/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/urlmap.rb:50:in `each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/urlmap.rb:50:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/commonlogger.rb:33:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:217:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:81:in `block in pre_process'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:79:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:79:in `pre_process'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:54:in `process'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:39:in `receive_data'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:187:in `run_machine'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:187:in `run'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/backends/base.rb:63:in `start'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/server.rb:159:in `start'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/bin/bosh-director:37:in `<top (required)>'
/var/vcap/packages/director/bin/bosh-director:16:in `load'
/var/vcap/packages/director/bin/bosh-director:16:in `<main>'





I've wiped my local bosh config, re-targeted the director, and tried running bosh vms without specifying a deployment manifest (i.e. to rule the manifest out) - I still get the same 500.





Any tips appreciated!




Re: Most bosh director commands fail with a HTTP 500

Dmitriy Kalinin
 

BOSH release 169 (stemcell version 2978) fixes this problem ( https://github.com/cloudfoundry/bosh/commit/24797724994d5a59f98828e477a738c9b643c78a ).


Re: Bosh deploy failed on AWS-Failed loading settings via fetcher

Mark Wong <mark.wy.wong@...>
 

After I stopped all the VMs, including the director, the NAT instance, and the
VM running the bosh CLI, I tried to redeploy again. The VMs get started, but
soon after I am able to ssh into them they get shut down. I have attached the debug log.


Re: Create bosh stemcell failed in AWS region cn-north-1

王小锋 <zzuwxf at gmail.com...>
 

Hi, Wayne

I also ran into the same issue as 支雷. Could you please let us know how to create
a custom stemcell? Is there a guide? Thanks a lot.

2015-06-01 20:23 GMT+08:00 Wayne E. Seguin <wayneeseguin(a)starkandwayne.com>:

支雷,

Have you tried creating your own custom stemcell yet while you wait?

~Wayne

On Fri, May 29, 2015 at 5:18 AM, 支雷 <lzhi3937(a)gmail.com> wrote:

I have been blocked by this issue for two weeks and have made no progress. I
am looking forward to your solving this problem. Thanks a lot.

2015-05-27 9:11 GMT+08:00 Dmitriy Kalinin <dkalinin(a)pivotal.io>:

It seems like this method cannot find appropriate AKIs:
https://github.com/cloudfoundry/bosh/blob/master/bosh_aws_cpi/lib/cloud/aws/aki_picker.rb#L48-L59

I just requested an account from AWS to access the China region and will try to
reproduce the problem.
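
A hedged diagnostic sketch (not from this thread): listing the Amazon-owned kernel images (AKIs) that DescribeImages returns in a region, using the aws-sdk 1.x gem that bosh pins, to see whether the region exposes any AKIs for the picker to choose from. The region, endpoint, and credentials below are placeholders:

  require 'aws-sdk'  # aws-sdk ~> 1.x

  ec2 = AWS::EC2.new(
    region:            'cn-north-1',
    # ec2_endpoint:    'ec2.cn-north-1.amazonaws.com.cn',  # the .com.cn endpoint mentioned earlier
    access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'])

  # the 'image-type' filter restricts DescribeImages to kernel images (AKIs)
  ec2.images.with_owner('amazon').filter('image-type', 'kernel').each do |image|
    puts "#{image.id}  #{image.location}"
  end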

On Wed, May 20, 2015 at 8:37 PM, Dr Nic Williams <
drnicwilliams(a)gmail.com> wrote:

There are two issues. The second is that bosh-bootstrap uses a project called
"cyoi" (choose your own infrastructure), which underneath uses "fog"; it's
quite possible that either or both do not yet support China (it's harder to
get accounts to do testing).

The former is failing inside AWS SDK for Ruby.

BOSH calls into this library here:
https://github.com/cloudfoundry/bosh/blob/develop/bosh_aws_cpi/lib/cloud/aws/aki_picker.rb#L25

We are using aws-sdk (= 1.60.2)
https://github.com/cloudfoundry/bosh/blob/114b3cf107672cfebf444fe7db4703dd804c72cc/Gemfile.lock#L19

The latest version is 2.0.42
https://rubygems.org/gems/aws-sdk/versions/2.0.42

So perhaps China support was added more recently and we need to bump to a
newer aws-sdk version.

Try bumping this version in the Gemfile of bosh and using that.

Avoid bosh-bootstrap until you've at least confirmed you can get the
underlying bosh_cli to work.
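
A minimal sketch of the Gemfile bump suggested above (2.0.42 is the version linked from rubygems; whether bosh's CPI code runs unchanged against the 2.x API is not established in this thread):

  # in the bosh repo's Gemfile, replacing the existing 1.60.2 pin
  gem 'aws-sdk', '2.0.42'

Then run something like `bundle update aws-sdk` before retrying the bosh_cli commands.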


On Wed, May 20, 2015 at 8:17 PM, 支雷 <lzhi3937(a)gmail.com> wrote:

I have tried the full stemcell
bosh-stemcell-2972-aws-xen-ubuntu-trusty-go_agent.tgz, but it failed with the error
"create stemcell failed: unable to find AKI:" (please find
details in my first email). And when I tried the "bosh-bootstrap deploy"
command, I got `validate_aws_region': Unknown region: "cn-north-1"
(ArgumentError). It seems cn-north-1 is not supported by the bosh AWS plugin. Any
suggestions on this issue? Thanks!

2015-05-19 23:58 GMT+08:00 Wayne E. Seguin <
wayneeseguin(a)starkandwayne.com>:

The issue is that there appear to be no light stemcells in your
region; there is another recent question on the list to this effect. To
make progress, you might want to build your own stemcell to use for
now, or find and download a full AWS HVM stemcell image to upload.

On Mon, May 18, 2015 at 6:12 AM, 支雷 <lzhi3937(a)gmail.com> wrote:

Hello,

I tried to deploy micro bosh in AWS region cn-north-1 in several
ways, but all failed. Any suggestions on how to deploy micro bosh in AWS
region cn-north-1? Thanks!

I created an EC2 instance (Ubuntu) in the cn-north-1 region with a
public IP, ssh'd into it, and installed bosh_cli, bosh_cli_plugin_micro and
bosh_cli_plugin_aws. After that I downloaded the stemcell
bosh-stemcell-2972-aws-xen-ubuntu-trusty-go_agent.tgz and tried "bosh
micro deploy ./bosh-stemcell-2972-aws-xen-ubuntu-trusty-go_agent.tgz", which
resulted in "create stemcell failed: getaddrinfo: Name or service not
known:"

I checked the failing URL; it's "ec2.cn-north-1.amazonaws.com", which
is not accessible. I updated http.rb, changed the URL to
"ec2.cn-north-1.amazonaws.com.cn", bypassed the SSL validation, and
tried again; another error was thrown:

Stemcell info
-------------
Name: bosh-aws-xen-ubuntu-trusty-go_agent
Version: 2972

Started deploy micro bosh
Started deploy micro bosh > Unpacking stemcell. Done (00:00:08)
Started deploy micro bosh > Uploading stemcell"
create stemcell failed: unable to find AKI:
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/aki_picker.rb:15:in
`pick'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/stemcell_creator.rb:100:in
`image_params'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/stemcell_creator.rb:24:in
`create'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/cloud.rb:465:in
`block in create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_common-1.2972.0/lib/common/thread_formatter.rb:49:in
`with_thread_name'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/cloud.rb:445:in
`create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:228:in
`block (2 levels) in create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:85:in
`step'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:227:in
`block in create_stemcell'
/usr/lib/ruby/1.9.1/tmpdir.rb:83:in `mktmpdir'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:213:in
`create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:118:in
`create'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:98:in
`block in create_deployment'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:92:in
`with_lifecycle'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:98:in
`create_deployment'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/cli/commands/micro.rb:179:in
`perform'
/var/lib/gems/1.9.1/gems/bosh_cli-1.2972.0/lib/cli/command_handler.rb:57:in
`run'
/var/lib/gems/1.9.1/gems/bosh_cli-1.2972.0/lib/cli/runner.rb:56:in
`run'
/var/lib/gems/1.9.1/gems/bosh_cli-1.2972.0/bin/bosh:16:in `<top
(required)>'
/usr/local/bin/bosh:23:in `load'
/usr/local/bin/bosh:23:in `<main>'

After that I installed bosh-bootstrap and executed following command:

bosh-bootstrap deploy

and I selected AWS provider and region 10 (China (Beijing) Region
(cn-north-1)), an error was thrown :

Confirming: Using AWS EC2/cn-north-1
/var/lib/gems/1.9.1/gems/fog-aws-0.1.1/lib/fog/aws/region_methods.rb:6:in
`validate_aws_region': Unknown region: "cn-north-1" (ArgumentError)
from
/var/lib/gems/1.9.1/gems/fog-aws-0.1.1/lib/fog/aws/compute.rb:482:in
`initialize'
from
/var/lib/gems/1.9.1/gems/fog-core-1.30.0/lib/fog/core/service.rb:115:in
`new'
from
/var/lib/gems/1.9.1/gems/fog-core-1.30.0/lib/fog/core/service.rb:115:in
`new'
from
/var/lib/gems/1.9.1/gems/fog-core-1.30.0/lib/fog/compute.rb:60:in `new'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers/clients/aws_provider_client.rb:257:in
`setup_fog_connection'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers/clients/fog_provider_client.rb:13:in
`initialize'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers.rb:17:in `new'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers.rb:17:in
`provider_client'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/helpers/provider.rb:6:in
`provider_client'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/address.rb:41:in
`address_cli'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/address.rb:56:in
`valid_address?'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/address.rb:19:in
`execute!'
from
/var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/lib/bosh-bootstrap/cli/commands/deploy.rb:41:in
`select_or_provision_public_networking'
from
/var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/lib/bosh-bootstrap/cli/commands/deploy.rb:21:in
`perform'
from
/var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/lib/bosh-bootstrap/thor_cli.rb:11:in
`deploy'
from
/var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor/command.rb:27:in `run'
from
/var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor/invocation.rb:126:in
`invoke_command'
from /var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor.rb:359:in
`dispatch'
from
/var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor/base.rb:440:in `start'
from
/var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/bin/bosh-bootstrap:13:in
`<top (required)>'
from /usr/local/bin/bosh-bootstrap:23:in `load'
from /usr/local/bin/bosh-bootstrap:23:in `<main>'



--
Dr Nic Williams
Stark & Wayne LLC - consultancy for Cloud Foundry users
http://drnicwilliams.com
http://starkandwayne.com
cell +1 (415) 860-2185
twitter @drnic



cf-stub.yml example with minimum or required info

Ali
 

Hi All,


I'm running into manifest problems during deployment of CF on vSphere 5.5. Most of the errors are due to missing or incorrect properties in cf-stub.yml (which carry over into cf-deployment.yml). I'm using spiff to generate cf-deployment.yml, and as I understand it, editing cf-deployment.yml directly is not recommended. My question is:


Is there an example of “cf-stub.yml” which includes all the required information for a deployment?

I'm following the docs, but the cf-stub.yml here http://docs.cloudfoundry.org/deploying/cf-stub-vsphere.html is missing much of the required information. Looking online I did not find much, and most of what I found was outdated.


I'm totally new to CF and apologize if my question is basic.


Thank you
Ali


Re: Problem with missing routes due to recent DHCP -> static change

Aaron Huber
 

Yes, once the /etc/network/interfaces file is converted to "static" and it
does an ifdown/ifup, the route disappears because it is no longer being
added by the DHCP client. Technically I think the best solution would be to
add any routes that were configured via DHCP to the interfaces file (at
least on Ubuntu; see
http://askubuntu.com/questions/548940/add-static-route-in-ubuntu-14-04).

I was just poking around for the best place to find the info. The
/var/lib/dhcp/dhclient.eth0.leases file will contain an entry like the
following, which specifies the route information retrieved from DHCP:

option rfc3442-classless-static-routes 32,169,254,169,254,10,65,25,10;

That would be equivalent to:

post-up route add 169.254.169.254/32 gw 10.65.25.10
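
A minimal sketch (not from the original post) of that translation, assuming the option value follows the RFC 3442 encoding: per route, a prefix length, then only the significant destination octets, then the four gateway octets:

  # Decode dhclient's rfc3442-classless-static-routes value into
  # /etc/network/interfaces "post-up" lines like the one above.
  def rfc3442_to_routes(option)
    bytes  = option.split(',').map(&:to_i)
    routes = []
    until bytes.empty?
      prefix = bytes.shift
      dest   = (bytes.shift((prefix + 7) / 8) + [0, 0, 0, 0]).first(4).join('.')
      gw     = bytes.shift(4).join('.')
      routes << "post-up route add #{dest}/#{prefix} gw #{gw}"
    end
    routes
  end

  puts rfc3442_to_routes('32,169,254,169,254,10,65,25,10')
  # => post-up route add 169.254.169.254/32 gw 10.65.25.10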

Aaron


