
Re: Resuming UAA work

Alberto A. Flores
 

Filip,

Does the UAA have a CLI? It seems like uaac is a "Cloud Foundry" thing. It sounds like CLI interactions are expected through curl.

PS: I wasn't sure where to ask this question since the UAA is a project of its own. Maybe it's too early to have a mailing list for it. Do you know where we can post questions about it? The CF mailing list?

Alberto
Twitter: albertoaflores

On May 28, 2015, at 6:53 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:

The UAA doesn't depend on CF; it can be used as a standalone product.


On Thu, May 28, 2015 at 4:52 PM, Aristoteles Neto <aristoteles.neto(a)webdrive.co.nz> wrote:
From the perspective of using BOSH without CF, moving the users to the manifest is actually an improvement, as it allows you to list the actual users without logging in to the DB.

Are there any plans to split out UAA from Cloud Foundry? More specifically I’d love to be able to have groups / permissions scheme for deployments / commands without needing to install CF.

-- Neto



On 29/05/2015, at 10:33, Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote:

Hey all,

We have resumed BOSH & UAA integration work: https://www.pivotaltracker.com/n/projects/1285490 to be worked on by a single pair.

As part of this work we are going to provide two options for configuring Director auth:
- without UAA [default] (already exists, but we want to simplify it)
- with UAA (currently being worked on)

Currently the Director only works without UAA and has its own user management functionality. There is a users table in the DB, and the CLI provides create/delete user commands. I would like to simplify this functionality as much as possible: users would be configured statically in the Director's manifest so that we can delete the users table and the associated commands.

Here is what the Director manifest would look like for the 'Director without UAA' configuration:

properties:
  director:
    users:
    - {name: admin, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy}  # crypted 'password'
    - {name: admin2, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy} # crypted 'password'
...
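For reference, a hashed_password value like the ones above can be generated with crypt(3); a minimal Ruby sketch (the salt shown is illustrative):

```ruby
# Generate an MD5-crypt hash for the manifest's hashed_password field.
# String#crypt delegates to the platform's crypt(3); a "$1$<salt>$"
# prefix selects the MD5-crypt scheme seen in the example above.
# The salt value here is illustrative.
salt   = "$1$0497b6da$"
hashed = "password".crypt(salt)
puts hashed   # e.g. "$1$0497b6da$..."
```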

For more complex use cases, we will encourage people to use Director auth via UAA once that becomes available so that LDAP, password, lockout policies, etc. can be configured.

Thoughts?

Dmitriy
_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh



Re: Create bosh stemcell failed in AWS region cn-north-1

支雷 <lzhi3937 at gmail.com...>
 

I have been blocked by this issue for two weeks with no progress. I am
looking forward to a solution to this problem. Thanks a lot.

2015-05-27 9:11 GMT+08:00 Dmitriy Kalinin <dkalinin(a)pivotal.io>:

It seems like this method cannot find appropriate AKIs:
https://github.com/cloudfoundry/bosh/blob/master/bosh_aws_cpi/lib/cloud/aws/aki_picker.rb#L48-L59

I just requested an account from AWS to access the China region and
will try to reproduce the problem.

On Wed, May 20, 2015 at 8:37 PM, Dr Nic Williams <drnicwilliams(a)gmail.com>
wrote:

There are two issues. The second is that bosh-bootstrap uses a project
called "cyoi" (choose your own infrastructure), which in turn uses
"fog"; it's quite possible that either or both do not yet support China
(it's harder to get accounts to do testing).

The first is failing inside the AWS SDK for Ruby.

BOSH calls into this library here:
https://github.com/cloudfoundry/bosh/blob/develop/bosh_aws_cpi/lib/cloud/aws/aki_picker.rb#L25

We are using aws-sdk (= 1.60.2)
https://github.com/cloudfoundry/bosh/blob/114b3cf107672cfebf444fe7db4703dd804c72cc/Gemfile.lock#L19

The latest version is 2.0.42
https://rubygems.org/gems/aws-sdk/versions/2.0.42

So perhaps China support was added more recently and we need to bump to
a newer aws-sdk version.

Try bumping this version in the Gemfile of bosh and using that.
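A sketch of that bump (the '~> 1.64' constraint is illustrative; the exact release that added cn-north-1 support would need to be verified against the aws-sdk changelog):

```ruby
# Gemfile (bosh) -- illustrative pin; verify which aws-sdk release
# actually added cn-north-1 endpoints before committing to a version.
gem 'aws-sdk', '~> 1.64'   # was: aws-sdk (= 1.60.2)
```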

Avoid bosh-bootstrap until you've at least confirmed you can get the
underlying bosh_cli to work.


On Wed, May 20, 2015 at 8:17 PM, 支雷 <lzhi3937(a)gmail.com> wrote:

I have tried the full stemcell
bosh-stemcell-2972-aws-xen-ubuntu-trusty-go_agent.tgz, but it failed:
the error "create stemcell failed: unable to find AKI:" was thrown
(please find details in my first email). And when I tried the
"bosh-bootstrap deploy" command, I got `validate_aws_region': Unknown
region: "cn-north-1" (ArgumentError). It seems cn-north-1 is not
supported by the bosh aws plugin. Any suggestions on this issue? Thanks!
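For reference, fog's `validate_aws_region` fails because it checks the region string against a hardcoded list; a simplified, self-contained sketch of that behaviour (region list abridged and illustrative; this is not fog's actual code):

```ruby
# Simplified sketch of fog-aws 0.1.1's validate_aws_region behaviour:
# the region string is checked against a hardcoded list, so any region
# missing from that list (such as cn-north-1 at the time) raises an
# ArgumentError even when the credentials are valid.
KNOWN_REGIONS = %w[us-east-1 us-west-1 us-west-2 eu-west-1
                   ap-southeast-1 ap-northeast-1 sa-east-1] # abridged

def validate_aws_region(region)
  unless KNOWN_REGIONS.include?(region)
    raise ArgumentError, %(Unknown region: "#{region}")
  end
  region
end

validate_aws_region("us-east-1")    # passes
# validate_aws_region("cn-north-1") # raises ArgumentError, as in the trace
```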

2015-05-19 23:58 GMT+08:00 Wayne E. Seguin <
wayneeseguin(a)starkandwayne.com>:

The issue is that there appear to be no light stemcells in your region;
there is another recent question on the list to this effect. To make
progress, you may want to build your own stemcell for now, or find and
download a full AWS HVM stemcell image to upload.

On Mon, May 18, 2015 at 6:12 AM, 支雷 <lzhi3937(a)gmail.com> wrote:

Hello,

I tried to deploy micro bosh in AWS region cn-north-1 in several ways,
but all failed. Any suggestions on how to deploy micro bosh in AWS region
cn-north-1? Thanks!

I created an EC2 instance (Ubuntu) in the cn-north-1 region with a
public IP, ssh'd into it, and installed bosh-cli, bosh_cli_plugin_micro
and bosh_cli_plugin_aws. After that I downloaded the stemcell
bosh-stemcell-2972-aws-xen-ubuntu-trusty-go_agent.tgz and tried "bosh
micro deploy ./bosh-stemcell-2972-aws-xen-ubuntu-trusty-go_agent.tgz",
which resulted in "create stemcell failed: getaddrinfo: Name or service
not known:"

I checked the failing URL: it is "ec2.cn-north-1.amazonaws.com", which
is not accessible. I updated http.rb, changed the URL to
"ec2.cn-north-1.amazonaws.com.cn", skipped the SSL validation, and
tried again; another error was thrown:

Stemcell info
-------------
Name: bosh-aws-xen-ubuntu-trusty-go_agent
Version: 2972

Started deploy micro bosh
Started deploy micro bosh > Unpacking stemcell. Done (00:00:08)
Started deploy micro bosh > Uploading stemcell"
create stemcell failed: unable to find AKI:
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/aki_picker.rb:15:in
`pick'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/stemcell_creator.rb:100:in
`image_params'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/stemcell_creator.rb:24:in
`create'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/cloud.rb:465:in
`block in create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_common-1.2972.0/lib/common/thread_formatter.rb:49:in
`with_thread_name'
/var/lib/gems/1.9.1/gems/bosh_aws_cpi-1.2972.0/lib/cloud/aws/cloud.rb:445:in
`create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:228:in
`block (2 levels) in create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:85:in
`step'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:227:in
`block in create_stemcell'
/usr/lib/ruby/1.9.1/tmpdir.rb:83:in `mktmpdir'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:213:in
`create_stemcell'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:118:in
`create'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:98:in
`block in create_deployment'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:92:in
`with_lifecycle'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/deployer/instance_manager.rb:98:in
`create_deployment'
/var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.2972.0/lib/bosh/cli/commands/micro.rb:179:in
`perform'
/var/lib/gems/1.9.1/gems/bosh_cli-1.2972.0/lib/cli/command_handler.rb:57:in
`run'
/var/lib/gems/1.9.1/gems/bosh_cli-1.2972.0/lib/cli/runner.rb:56:in
`run'
/var/lib/gems/1.9.1/gems/bosh_cli-1.2972.0/bin/bosh:16:in `<top
(required)>'
/usr/local/bin/bosh:23:in `load'
/usr/local/bin/bosh:23:in `<main>'
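The endpoint mismatch patched around above can be sketched as follows: the AWS China partition lives under the amazonaws.com.cn domain rather than amazonaws.com (hypothetical helper, not actual bosh_aws_cpi code):

```ruby
# Sketch of the endpoint difference: China-partition regions ("cn-*")
# use the amazonaws.com.cn domain, which is why the default
# ec2.cn-north-1.amazonaws.com hostname failed to resolve.
def ec2_endpoint(region)
  suffix = region.start_with?("cn-") ? "amazonaws.com.cn" : "amazonaws.com"
  "ec2.#{region}.#{suffix}"
end

puts ec2_endpoint("us-east-1")   # prints "ec2.us-east-1.amazonaws.com"
puts ec2_endpoint("cn-north-1")  # prints "ec2.cn-north-1.amazonaws.com.cn"
```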

After that I installed bosh-bootstrap and executed the following command:

bosh-bootstrap deploy

I selected the AWS provider and region 10 (China (Beijing) Region
(cn-north-1)), and an error was thrown:

Confirming: Using AWS EC2/cn-north-1
/var/lib/gems/1.9.1/gems/fog-aws-0.1.1/lib/fog/aws/region_methods.rb:6:in
`validate_aws_region': Unknown region: "cn-north-1" (ArgumentError)
from
/var/lib/gems/1.9.1/gems/fog-aws-0.1.1/lib/fog/aws/compute.rb:482:in
`initialize'
from
/var/lib/gems/1.9.1/gems/fog-core-1.30.0/lib/fog/core/service.rb:115:in
`new'
from
/var/lib/gems/1.9.1/gems/fog-core-1.30.0/lib/fog/core/service.rb:115:in
`new'
from
/var/lib/gems/1.9.1/gems/fog-core-1.30.0/lib/fog/compute.rb:60:in `new'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers/clients/aws_provider_client.rb:257:in
`setup_fog_connection'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers/clients/fog_provider_client.rb:13:in
`initialize'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers.rb:17:in `new'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/providers.rb:17:in
`provider_client'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/helpers/provider.rb:6:in
`provider_client'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/address.rb:41:in
`address_cli'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/address.rb:56:in
`valid_address?'
from
/var/lib/gems/1.9.1/gems/cyoi-0.11.3/lib/cyoi/cli/address.rb:19:in
`execute!'
from
/var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/lib/bosh-bootstrap/cli/commands/deploy.rb:41:in
`select_or_provision_public_networking'
from
/var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/lib/bosh-bootstrap/cli/commands/deploy.rb:21:in
`perform'
from
/var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/lib/bosh-bootstrap/thor_cli.rb:11:in
`deploy'
from
/var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor/command.rb:27:in `run'
from
/var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor/invocation.rb:126:in
`invoke_command'
from /var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor.rb:359:in
`dispatch'
from /var/lib/gems/1.9.1/gems/thor-0.19.1/lib/thor/base.rb:440:in
`start'
from
/var/lib/gems/1.9.1/gems/bosh-bootstrap-0.17.0/bin/bosh-bootstrap:13:in
`<top (required)>'
from /usr/local/bin/bosh-bootstrap:23:in `load'
from /usr/local/bin/bosh-bootstrap:23:in `<main>'




--
Dr Nic Williams
Stark & Wayne LLC - consultancy for Cloud Foundry users
http://drnicwilliams.com
http://starkandwayne.com
cell +1 (415) 860-2185
twitter @drnic



Re: Resuming UAA work

Alberto A. Flores
 

Thanks Dmitriy!

Alberto
Twitter: albertoaflores

On May 28, 2015, at 8:32 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote:

Sure, feel free to leave comments on the issue (https://github.com/cloudfoundry/bosh-notes/issues/8) or file a PR against that document, and I will try to incorporate it.

On Thu, May 28, 2015 at 5:26 PM, Alberto A. Flores <aaflores(a)gmail.com> wrote:
Thanks for the response!

+1 on the "bosh-director.DIRECTOR-UUID.admin" scope. I assume this means that, in the event of multiple Directors, users will have to have multiple scopes associated with their credentials (either through UAA or locally). That would be a great start.

Is there any way I can follow/vote on the items regarding authz? I like the proposed scope schemes for creating some ACL control. I'm hoping to use BOSH as a viable tool to empower datacenter operators. As this is defined, the idea of different roles is essential. (Are pull requests welcome?)

Alberto
Twitter: albertoaflores

On May 28, 2015, at 7:20 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote:

bosh-director.DIRECTOR-UUID.admin


Re: Resuming UAA work

Dmitriy Kalinin
 

Sure, feel free to leave comments on the issue (
https://github.com/cloudfoundry/bosh-notes/issues/8) or file a PR
against that document, and I will try to incorporate it.

On Thu, May 28, 2015 at 5:26 PM, Alberto A. Flores <aaflores(a)gmail.com>
wrote:

Thanks for the response!

+1 on the "bosh-director.DIRECTOR-UUID.admin" scope. I assume this
means that, in the event of multiple Directors, users will have to have
multiple scopes associated with their credentials (either through UAA
or locally). That would be a great start.

Is there any way I can follow/vote on the items regarding authz? I like
the proposed scope schemes for creating some ACL control. I'm hoping to
use BOSH as a viable tool to empower datacenter operators. As this is
defined, the idea of different roles is essential. (Are pull requests
welcome?)

Alberto
Twitter: albertoaflores

On May 28, 2015, at 7:20 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io>
wrote:

bosh-director.DIRECTOR-UUID.admin


Re: Resuming UAA work

Alberto A. Flores
 

Thanks for the response!

+1 on the "bosh-director.DIRECTOR-UUID.admin" scope. I assume this means that, in the event of multiple Directors, users will have to have multiple scopes associated with their credentials (either through UAA or locally). That would be a great start.

Is there any way I can follow/vote on the items regarding authz? I like the proposed scope schemes for creating some ACL control. I'm hoping to use BOSH as a viable tool to empower datacenter operators. As this is defined, the idea of different roles is essential. (Are pull requests welcome?)

Alberto
Twitter: albertoaflores

On May 28, 2015, at 7:20 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote:

bosh-director.DIRECTOR-UUID.admin


Re: Resuming UAA work

Dmitriy Kalinin
 

I've added a bosh-notes page about UAA integration:
https://github.com/cloudfoundry/bosh-notes/blob/master/uaa.md (and
associated issue: https://github.com/cloudfoundry/bosh-notes/issues/8)

On Thu, May 28, 2015 at 4:20 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io>
wrote:

We will ask the UAA team to start producing uaa-release shortly, which
will be independent of cf-release. Eventually I expect people will
remove the uaa job from cf-release and just use uaa-release.

Regarding scope: our first goal is to use a single scope to determine
who can interact with the Director as an admin. We have not yet decided
whether it should be configurable or implied. Two options:
- the Director manifest says `admin_scope: some-scope-in-uaa`
- the Director assumes the `bosh-director.DIRECTOR-UUID.admin` scope
grants admin access

Regarding authz: eventually we will introduce a read-only user scope
(bosh-director.DIRECTOR-UUID.read-only), and after that
deployment-specific scopes (e.g.
bosh-director.DIRECTOR-UUID.deployment.DEPLOYMENT.admin/read-only).
This work is not scoped out yet.
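To make the convention concrete, a Director-side check on these scopes could look like the following minimal sketch (illustrative Ruby only; the method and symbol names are assumptions, not actual BOSH code):

```ruby
# Sketch: map UAA token scopes to a Director access level, following the
# proposed bosh-director.<director-uuid>.admin / .read-only convention.
def access_level(token_scopes, director_uuid)
  prefix = "bosh-director.#{director_uuid}"
  return :admin     if token_scopes.include?("#{prefix}.admin")
  return :read_only if token_scopes.include?("#{prefix}.read-only")
  :none
end

puts access_level(["openid", "bosh-director.abc-123.admin"], "abc-123") # prints "admin"
```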

Thoughts on the scope conventions?

On Thu, May 28, 2015 at 3:57 PM, Alberto A. Flores <aaflores(a)gmail.com>
wrote:

Love it!

I'm wondering what your thoughts are on authorization. Will the
Director introduce roles of any kind?

Alberto
Twitter: albertoaflores

On May 28, 2015, at 6:49 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Sounds great! I assume you'll use a UAA OAuth scope to determine
whether a given user actually has access to BOSH? bosh.admin?

On Thu, May 28, 2015 at 4:33 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io>
wrote:

Hey all,

We have resumed BOSH & UAA integration work:
https://www.pivotaltracker.com/n/projects/1285490 to be worked on by a
single pair.

As part of this work we are going to provide two options for
configuring Director auth:
- without UAA [default] (already exists, but we want to simplify it)
- with UAA (currently being worked on)

Currently the Director only works without UAA and has its own user
management functionality. There is a users table in the DB, and the CLI
provides create/delete user commands. I would like to simplify this
functionality as much as possible: users would be configured statically
in the Director's manifest so that we can delete the users table and
the associated commands.

Here is what the Director manifest would look like for the 'Director without UAA' configuration:

properties:
  director:
    users:
    - {name: admin, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy}  # crypted 'password'
    - {name: admin2, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy} # crypted 'password'
...

For more complex use cases, we will encourage people to use Director
auth via UAA once that becomes available so that LDAP, password, lockout
policies, etc. can be configured.

Thoughts?

Dmitriy





Re: Resuming UAA work

Dmitriy Kalinin
 

We will ask the UAA team to start producing uaa-release shortly, which
will be independent of cf-release. Eventually I expect people will
remove the uaa job from cf-release and just use uaa-release.

Regarding scope: our first goal is to use a single scope to determine
who can interact with the Director as an admin. We have not yet decided
whether it should be configurable or implied. Two options:
- the Director manifest says `admin_scope: some-scope-in-uaa`
- the Director assumes the `bosh-director.DIRECTOR-UUID.admin` scope
grants admin access

Regarding authz: eventually we will introduce a read-only user scope
(bosh-director.DIRECTOR-UUID.read-only), and after that
deployment-specific scopes (e.g.
bosh-director.DIRECTOR-UUID.deployment.DEPLOYMENT.admin/read-only).
This work is not scoped out yet.

Thoughts on the scope conventions?

On Thu, May 28, 2015 at 3:57 PM, Alberto A. Flores <aaflores(a)gmail.com>
wrote:

Love it!

I'm wondering what your thoughts are on authorization. Will the
Director introduce roles of any kind?

Alberto
Twitter: albertoaflores

On May 28, 2015, at 6:49 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Sounds great! I assume you'll use a UAA OAuth scope to determine
whether a given user actually has access to BOSH? bosh.admin?

On Thu, May 28, 2015 at 4:33 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io>
wrote:

Hey all,

We have resumed BOSH & UAA integration work:
https://www.pivotaltracker.com/n/projects/1285490 to be worked on by a
single pair.

As part of this work we are going to provide two options for
configuring Director auth:
- without UAA [default] (already exists, but we want to simplify it)
- with UAA (currently being worked on)

Currently the Director only works without UAA and has its own user
management functionality. There is a users table in the DB, and the CLI
provides create/delete user commands. I would like to simplify this
functionality as much as possible: users would be configured statically
in the Director's manifest so that we can delete the users table and
the associated commands.

Here is what the Director manifest would look like for the 'Director without UAA' configuration:

properties:
  director:
    users:
    - {name: admin, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy}  # crypted 'password'
    - {name: admin2, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy} # crypted 'password'
...

For more complex use cases, we will encourage people to use Director auth
via UAA once that becomes available so that LDAP, password, lockout
policies, etc. can be configured.

Thoughts?

Dmitriy





Re: Resuming UAA work

Alberto A. Flores
 

Love it!

I'm wondering what your thoughts are on authorization. Will the Director introduce roles of any kind?

Alberto
Twitter: albertoaflores

On May 28, 2015, at 6:49 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Sounds great! I assume you'll use a UAA OAuth scope to determine whether a given user actually has access to BOSH? bosh.admin?

On Thu, May 28, 2015 at 4:33 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote:
Hey all,

We have resumed BOSH & UAA integration work: https://www.pivotaltracker.com/n/projects/1285490 to be worked on by a single pair.

As part of this work we are going to provide two options for configuring Director auth:
- without UAA [default] (already exists, but we want to simplify it)
- with UAA (currently being worked on)

Currently the Director only works without UAA and has its own user management functionality. There is a users table in the DB, and the CLI provides create/delete user commands. I would like to simplify this functionality as much as possible: users would be configured statically in the Director's manifest so that we can delete the users table and the associated commands.

Here is what the Director manifest would look like for the 'Director without UAA' configuration:

properties:
  director:
    users:
    - {name: admin, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy}  # crypted 'password'
    - {name: admin2, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy} # crypted 'password'
...

For more complex use cases, we will encourage people to use Director auth via UAA once that becomes available so that LDAP, password, lockout policies, etc. can be configured.

Thoughts?

Dmitriy



Re: Resuming UAA work

Aristoteles Neto
 

True - silly me.

Sounds great :)

-- Neto

On 29/05/2015, at 10:53, Filip Hanik <fhanik(a)pivotal.io> wrote:

The UAA doesn't depend on CF; it can be used as a standalone product.


On Thu, May 28, 2015 at 4:52 PM, Aristoteles Neto <aristoteles.neto(a)webdrive.co.nz> wrote:
From the perspective of using BOSH without CF, moving the users to the manifest is actually an improvement, as it allows you to list the actual users without logging in to the DB.

Are there any plans to split out UAA from Cloud Foundry? More specifically I’d love to be able to have groups / permissions scheme for deployments / commands without needing to install CF.

-- Neto



On 29/05/2015, at 10:33, Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote:

Hey all,

We have resumed BOSH & UAA integration work: https://www.pivotaltracker.com/n/projects/1285490 to be worked on by a single pair.

As part of this work we are going to provide two options for configuring Director auth:
- without UAA [default] (already exists, but we want to simplify it)
- with UAA (currently being worked on)

Currently the Director only works without UAA and has its own user management functionality. There is a users table in the DB, and the CLI provides create/delete user commands. I would like to simplify this functionality as much as possible: users would be configured statically in the Director's manifest so that we can delete the users table and the associated commands.

Here is what the Director manifest would look like for the 'Director without UAA' configuration:

properties:
  director:
    users:
    - {name: admin, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy}  # crypted 'password'
    - {name: admin2, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy} # crypted 'password'
...

For more complex use cases, we will encourage people to use Director auth via UAA once that becomes available so that LDAP, password, lockout policies, etc. can be configured.

Thoughts?

Dmitriy




Re: Resuming UAA work

Filip Hanik
 

The UAA doesn't depend on CF; it can be used as a standalone product.


On Thu, May 28, 2015 at 4:52 PM, Aristoteles Neto <
aristoteles.neto(a)webdrive.co.nz> wrote:

From the perspective of using BOSH without CF, moving the users to the
manifest is actually an improvement, as it allows you to list the actual
users without logging in to the DB.

Are there any plans to split out UAA from Cloud Foundry? More specifically
I’d love to be able to have groups / permissions scheme for deployments /
commands without needing to install CF.

-- Neto



On 29/05/2015, at 10:33, Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote:

Hey all,

We have resumed BOSH & UAA integration work:
https://www.pivotaltracker.com/n/projects/1285490 to be worked on by a
single pair.

As part of this work we are going to provide two options for
configuring Director auth:
- without UAA [default] (already exists, but we want to simplify it)
- with UAA (currently being worked on)

Currently the Director only works without UAA and has its own user
management functionality. There is a users table in the DB, and the CLI
provides create/delete user commands. I would like to simplify this
functionality as much as possible: users would be configured statically
in the Director's manifest so that we can delete the users table and
the associated commands.

Here is what the Director manifest would look like for the 'Director without UAA' configuration:

properties:
  director:
    users:
    - {name: admin, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy}  # crypted 'password'
    - {name: admin2, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy} # crypted 'password'
...

For more complex use cases, we will encourage people to use Director auth
via UAA once that becomes available so that LDAP, password, lockout
policies, etc. can be configured.

Thoughts?

Dmitriy





Re: Resuming UAA work

Aristoteles Neto
 

From the perspective of using BOSH without CF, moving the users to the manifest is actually an improvement, as it allows you to list the actual users without logging in to the DB.

Are there any plans to split UAA out of Cloud Foundry? More specifically, I’d love to be able to have a groups/permissions scheme for deployments/commands without needing to install CF.

-- Neto

On 29/05/2015, at 10:33, Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote:

Hey all,

We have resumed BOSH & UAA integration work: https://www.pivotaltracker.com/n/projects/1285490 to be worked on by a single pair.

As part of this work we are going to provide two options for configuring Director auth:
- without UAA [default] (already exists, but we want to simplify it)
- with UAA (currently being worked on)

Currently the Director only works without UAA and has its own user management functionality. There is a users table in the DB, and the CLI provides create/delete user commands. I would like to simplify this functionality as much as possible: users would be configured statically in the Director's manifest so that we can delete the users table and the associated commands.

Here is what the Director manifest would look like for the 'Director without UAA' configuration:

properties:
  director:
    users:
    - {name: admin, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy}  # crypted 'password'
    - {name: admin2, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy} # crypted 'password'
...

For more complex use cases, we will encourage people to use Director auth via UAA once that becomes available so that LDAP, password, lockout policies, etc. can be configured.

Thoughts?

Dmitriy


Re: Resuming UAA work

Mike Youngstrom
 

Sounds great! I assume you'll use a UAA OAuth scope to determine
whether a given user actually has access to BOSH? bosh.admin?

On Thu, May 28, 2015 at 4:33 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io>
wrote:

Hey all,

We have resumed BOSH & UAA integration work:
https://www.pivotaltracker.com/n/projects/1285490 to be worked on by a
single pair.

As part of this work we are going to provide two options for
configuring Director auth:
- without UAA [default] (already exists, but we want to simplify it)
- with UAA (currently being worked on)

Currently the Director only works without UAA and has its own user
management functionality. There is a users table in the DB, and the CLI
provides create/delete user commands. I would like to simplify this
functionality as much as possible: users would be configured statically
in the Director's manifest so that we can delete the users table and
the associated commands.

Here is what the Director manifest would look like for the 'Director without UAA' configuration:

properties:
  director:
    users:
    - {name: admin, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy}  # crypted 'password'
    - {name: admin2, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy} # crypted 'password'
...

For more complex use cases, we will encourage people to use Director auth
via UAA once that becomes available so that LDAP, password, lockout
policies, etc. can be configured.

Thoughts?

Dmitriy



Resuming UAA work

Dmitriy Kalinin
 

Hey all,

We have resumed BOSH & UAA integration work:
https://www.pivotaltracker.com/n/projects/1285490 to be worked on by a
single pair.

As part of this work we are going to provide two options for
configuring Director auth:
- without UAA [default] (already exists, but we want to simplify it)
- with UAA (currently being worked on)

Currently the Director only works without UAA and has its own user
management functionality. There is a users table in the DB, and the CLI
provides create/delete user commands. I would like to simplify this
functionality as much as possible: users would be configured statically
in the Director's manifest so that we can delete the users table and
the associated commands.

Here is how the Director manifest would look for the 'Director without
UAA' configuration:

properties:
  director:
    users:
    - {name: admin, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy} # crypted 'password'
    - {name: admin2, hashed_password: $1$0497b6da$8/0owfq5zblA3o7kXQgGy} # crypted 'password'
    ...
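The hashed_password values above look like MD5-crypt strings (the $1$
prefix; the hash in this message appears truncated). One way to produce
such a value, assuming OpenSSL's passwd utility is available, using the
salt and plaintext from the comment above:

```shell
# Generate an MD5-crypt ($1$...) hash of the literal password "password"
# with the salt shown in the manifest example. The resulting string is
# what would go in the hashed_password field. (Sketch; any
# crypt(3)-compatible tool would work as well.)
openssl passwd -1 -salt 0497b6da password
```

The output always begins with `$1$0497b6da$`, followed by the 22-character hash.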

For more complex use cases, we will encourage people to use Director auth
via UAA once that becomes available so that LDAP, password, lockout
policies, etc. can be configured.

Thoughts?

Dmitriy


Re: bosh vms failing with http 500

Dmitriy Kalinin
 

Please see solution in this thread:
https://lists.cloudfoundry.org/pipermail/cf-bosh/2015-May/000112.html

We have a story at the top of the backlog to fix this:
https://www.pivotaltracker.com/story/show/95458780

On Thu, May 28, 2015 at 7:26 AM, <jaffar.yelavalli(a)accenture.com> wrote:

Hi All,

After a sudden restart of the BOSH Director (due to issues with the cloud
provider), the bosh vms command started returning HTTP 500.

[rest of the quoted message and logs snipped; see the original "bosh vms
failing with http 500" message below]

_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh


Re: deployment to OpenStack with Keystone v3 (domains)

Dmitriy Kalinin
 

Not to my knowledge.

On Wed, May 27, 2015 at 10:19 PM, Koper, Dies <diesk(a)fast.au.fujitsu.com>
wrote:

Fog recently added support for authentication with OpenStack’s Keystone
v3 APIs, allowing deployment to domains other than the default domain.
Is anyone working on enhancing the BOSH CPI for OpenStack to allow BOSH
deployment to an OpenStack domain?



Cheers,

Dies Koper

_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh


bosh vms failing with http 500

jaffar.yelavalli@...
 

Hi All,
After a sudden restart of the BOSH Director (due to issues with the cloud provider), the bosh vms command started returning HTTP 500.
Below are some observations and logs.
Any help on this is appreciated:

$ bosh vms magnolia-prod
Deployment `magnolia-prod'
HTTP 500:

/var/vcap/sys/log/director/error.log:
2015/05/28 13:41:12 [error] 15666#0: *6751 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /deployments HTTP/1.1", upstream: "http://127.0.0.1:25556/deployments", host: "127.0.0.1:25555"
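(The nginx error above means the front end on 25555 accepted the request
but the Director backend on 25556 refused the connection, i.e. the
backend process was not running after the restart. A quick check from the
Director VM — a sketch; the port is taken from the log line:)

```shell
# Prints the HTTP status code, or 000 when the connection is refused
# (backend down). 25556 is the Director backend port from the nginx log.
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:25556/ || true
```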

/var/vcap/sys/log/director/director.debug.log:
D, [2015-05-28T14:20:01.363745 #24401] [0x11d008c] DEBUG -- : (0.000160s) BEGIN
D, [2015-05-28T14:20:01.366036 #24401] [0x11d008c] DEBUG -- : (0.001031s) INSERT INTO "tasks" ("user_id", "type", "description", "state", "timestamp", "checkpoint_time") VALUES (1, 'vms', 'retrieve vm-stats', 'queued', '2015-05-28 14:20:01.362161+0000', '2015-05-28 14:20:01.362165+0000') RETURNING *
D, [2015-05-28T14:20:01.375214 #24401] [0x11d008c] DEBUG -- : (0.008387s) COMMIT
D, [2015-05-28T14:20:01.375420 #24401] [0x11d008c] DEBUG -- : Released connection: 23154800
D, [2015-05-28T14:20:01.376541 #24401] [0x11d008c] DEBUG -- : Acquired connection: 23154800
D, [2015-05-28T14:20:01.377447 #24401] [0x11d008c] DEBUG -- : (0.000772s) SELECT * FROM "tasks" WHERE (state NOT IN ('processing', 'queued')) ORDER BY "id" DESC LIMIT 2 OFFSET 500
D, [2015-05-28T14:20:01.378228 #24401] [0x11d008c] DEBUG -- : Released connection: 23154800
E, [2015-05-28T14:20:01.378543 #24401] [0x11d008c] ERROR -- : TypeError - can't convert nil into String:
/var/vcap/packages/ruby/lib/ruby/1.9.1/fileutils.rb:1508:in `path'
/var/vcap/packages/ruby/lib/ruby/1.9.1/fileutils.rb:1508:in `block in fu_list'
/var/vcap/packages/ruby/lib/ruby/1.9.1/fileutils.rb:1508:in `map'
/var/vcap/packages/ruby/lib/ruby/1.9.1/fileutils.rb:1508:in `fu_list'
/var/vcap/packages/ruby/lib/ruby/1.9.1/fileutils.rb:619:in `rm_r'
/var/vcap/packages/ruby/lib/ruby/1.9.1/fileutils.rb:648:in `rm_rf'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2559.0/lib/bosh/director/api/task_remover.rb:10:in `block in remove'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:152:in `block in each'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `block (2 levels) in fetch_rows'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:720:in `block in yield_hash_rows'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:714:in `times'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:714:in `yield_hash_rows'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `block in fetch_rows'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:134:in `execute'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:413:in `_execute'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `block (2 levels) in execute'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:425:in `check_database_errors'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `block in execute'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `block in synchronize'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/connection_pool/threaded.rb:104:in `hold'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `synchronize'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `execute'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:801:in `execute'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `fetch_rows'
/var/vcap/packages/director/gem_home/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:152:in `each'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2559.0/lib/bosh/director/api/task_remover.rb:9:in `remove'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2559.0/lib/bosh/director/api/task_helper.rb:24:in `create_task'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2559.0/lib/bosh/director/job_queue.rb:9:in `enqueue'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2559.0/lib/bosh/director/api/vm_state_manager.rb:5:in `fetch_vm_state'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2559.0/lib/bosh/director/api/controllers/deployments_controller.rb:166:in `block in <class:DeploymentsController>'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1541:in `call'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1541:in `block in compile!'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:950:in `[]'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:950:in `block (3 levels) in route!'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:966:in `route_eval'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:950:in `block (2 levels) in route!'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:987:in `block in process_route'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:985:in `catch'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:985:in `process_route'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:948:in `block in route!'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:947:in `each'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:947:in `route!'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1059:in `block in dispatch!'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1041:in `block in invoke'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1041:in `catch'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1041:in `invoke'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1056:in `dispatch!'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:882:in `block in call!'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1041:in `block in invoke'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1041:in `catch'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1041:in `invoke'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:882:in `call!'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:870:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/xss_header.rb:18:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/path_traversal.rb:16:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/json_csrf.rb:18:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/base.rb:49:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/base.rb:49:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/frame_options.rb:31:in `call'
/var/vcap/packages/director/gem_home/gems/rack-1.5.2/lib/rack/nulllogger.rb:9:in `call'
/var/vcap/packages/director/gem_home/gems/rack-1.5.2/lib/rack/head.rb:11:in `call'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:175:in `call'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1949:in `call'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:929:in `forward'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1000:in `route_missing'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:961:in `route!'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:957:in `route!'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:957:in `route!'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1059:in `block in dispatch!'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1041:in `block in invoke'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1041:in `catch'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1041:in `invoke'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1056:in `dispatch!'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:882:in `block in call!'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1041:in `block in invoke'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1041:in `catch'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1041:in `invoke'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:882:in `call!'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:870:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/xss_header.rb:18:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/path_traversal.rb:16:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/json_csrf.rb:18:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/base.rb:49:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/base.rb:49:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/frame_options.rb:31:in `call'
/var/vcap/packages/director/gem_home/gems/rack-1.5.2/lib/rack/nulllogger.rb:9:in `call'
/var/vcap/packages/director/gem_home/gems/rack-1.5.2/lib/rack/head.rb:11:in `call'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:175:in `call'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1949:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/xss_header.rb:18:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/path_traversal.rb:16:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/json_csrf.rb:18:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/base.rb:49:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/base.rb:49:in `call'
/var/vcap/packages/director/gem_home/gems/rack-protection-1.5.0/lib/rack/protection/frame_options.rb:31:in `call'
/var/vcap/packages/director/gem_home/gems/rack-1.5.2/lib/rack/nulllogger.rb:9:in `call'
/var/vcap/packages/director/gem_home/gems/rack-1.5.2/lib/rack/head.rb:11:in `call'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/showexceptions.rb:21:in `call'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:175:in `call'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:1949:in `call'
/var/vcap/packages/director/gem_home/gems/rack-1.5.2/lib/rack/builder.rb:138:in `call'
/var/vcap/packages/director/gem_home/gems/rack-1.5.2/lib/rack/urlmap.rb:65:in `block in call'
/var/vcap/packages/director/gem_home/gems/rack-1.5.2/lib/rack/urlmap.rb:50:in `each'
/var/vcap/packages/director/gem_home/gems/rack-1.5.2/lib/rack/urlmap.rb:50:in `call'
/var/vcap/packages/director/gem_home/gems/rack-1.5.2/lib/rack/commonlogger.rb:33:in `call'
/var/vcap/packages/director/gem_home/gems/sinatra-1.4.3/lib/sinatra/base.rb:212:in `call'
/var/vcap/packages/director/gem_home/gems/thin-1.5.1/lib/thin/connection.rb:81:in `block in pre_process'
/var/vcap/packages/director/gem_home/gems/thin-1.5.1/lib/thin/connection.rb:79:in `catch'
/var/vcap/packages/director/gem_home/gems/thin-1.5.1/lib/thin/connection.rb:79:in `pre_process'
/var/vcap/packages/director/gem_home/gems/thin-1.5.1/lib/thin/connection.rb:54:in `process'
/var/vcap/packages/director/gem_home/gems/thin-1.5.1/lib/thin/connection.rb:39:in `receive_data'
/var/vcap/packages/director/gem_home/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run_machine'
/var/vcap/packages/director/gem_home/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run'
/var/vcap/packages/director/gem_home/gems/thin-1.5.1/lib/thin/backends/base.rb:63:in `start'
/var/vcap/packages/director/gem_home/gems/thin-1.5.1/lib/thin/server.rb:159:in `start'
/var/vcap/packages/director/gem_home/gems/bosh-director-1.2559.0/bin/bosh-director:36:in `<top (required)>'
/var/vcap/packages/director/bin/bosh-director:23:in `load'
/var/vcap/packages/director/bin/bosh-director:23:in `<main>'
D, [2015-05-28T14:20:14.772521 #24401] [0x11d008c] DEBUG -- : Acquired connection: 23154800
D, [2015-05-28T14:20:14.773114 #24401] [0x11d008c] DEBUG -- : (0.000395s) SELECT COUNT(*) AS "count" FROM "users" LIMIT 1
D, [2015-05-28T14:20:14.773252 #24401] [0x11d008c] DEBUG -- : Released connection: 23154800
D, [2015-05-28T14:20:14.773602 #24401] [0x11d008c] DEBUG -- : Acquired connection: 23154800
D, [2015-05-28T14:20:14.773702 #24401] [0x11d008c] DEBUG -- : Released connection: 23154800
D, [2015-05-28T14:20:14.773807 #24401] [0x11d008c] DEBUG -- : Acquired connection: 23154800
D, [2015-05-28T14:20:14.774130 #24401] [0x11d008c] DEBUG -- : (0.000233s) SELECT * FROM "users" WHERE ("username" = 'admin') LIMIT 1
D, [2015-05-28T14:20:14.774246 #24401] [0x11d008c] DEBUG -- : Released connection: 23154800
D, [2015-05-28T14:21:14.799355 #24401] [0x11d008c] DEBUG -- : Acquired connection: 23154800
D, [2015-05-28T14:21:14.799962 #24401] [0x11d008c] DEBUG -- : (0.000394s) SELECT COUNT(*) AS "count" FROM "users" LIMIT 1
D, [2015-05-28T14:21:14.800088 #24401] [0x11d008c] DEBUG -- : Released connection: 23154800
D, [2015-05-28T14:21:14.800423 #24401] [0x11d008c] DEBUG -- : Acquired connection: 23154800
D, [2015-05-28T14:21:14.800507 #24401] [0x11d008c] DEBUG -- : Released connection: 23154800

Let me know if any further information is required.

Thanks,
Jaffar Yelavalli

________________________________

This message is for the designated recipient only and may contain privileged, proprietary, or otherwise confidential information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy.
______________________________________________________________________________________

www.accenture.com


Re: Regarding installation of bosh

Kamil Burzynski <nopik@...>
 

Isn't it wrong that the ping to google.com goes out via the 216.59.196.14
address, while the ping to rubygems.org goes out via 10.0.0.2? It looks like
the routing or default gateway settings are broken on that machine. Check
your network settings.
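Kamil's source-address observation can be checked programmatically. The sketch below is illustrative only (it is not from the thread): connecting a UDP socket sends no packets, but it does make the kernel perform route and source-address selection, so you can see which local address would be used for a given destination.

```ruby
require 'socket'

# Illustrative sketch: ask the kernel which local source address it would
# use for a destination. UDP connect() sends nothing on the wire; it only
# performs route/source-address selection.
def source_address_for(dest_ip, port = 53)
  UDPSocket.open do |sock|
    sock.connect(dest_ip, port)
    sock.addr[3] # the numeric local IP the kernel selected for this route
  end
end

# If one public destination unexpectedly goes out via a private address
# (e.g. 10.0.0.2), the routing table or default gateway is the place to look.
begin
  puts source_address_for('8.8.8.8')
rescue SystemCallError => e
  puts "no route: #{e.message}"
end
```

Comparing the result for two destinations (as in the pings above) shows immediately whether traffic is leaving through different interfaces.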

On 28/05/15 07:19 , Bharath Posa wrote:
Hi james ,

thanks for the reply. I am attaching my terminal output screen along
with this mail. You can also see that the ping to google.com
is successful but the ping to rubygems.org
is failing

regards
bharath

On Tue, May 26, 2015 at 8:35 PM, James Bayer <jbayer(a)pivotal.io
<mailto:jbayer(a)pivotal.io>> wrote:

can you post the actual terminal output to a gist or something?

On Tue, May 26, 2015 at 12:45 AM, Bharath Posa
<bharathp(a)vedams.com <mailto:bharathp(a)vedams.com>> wrote:

Hi guys ,


I am right now working on deploying cloudfoundry on openstack.
When i am running the gem install bosh_cli command it is
throwing error saying host not found. I was able to ping all
the other websites from my terminal like google , wikipedia .
In same terminal if i open rubygems.org <http://rubygems.org>
using a web browser it is working properly .


I am unable to work out what the real problem could be.

regards
bharath

_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
<mailto:cf-bosh(a)lists.cloudfoundry.org>
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh




--
Thank you,

James Bayer




_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh
--
Best regards from
Kamil Burzynski


Re: Most bosh director commands fail with a HTTP 500

Scott Taggart <staggart@...>
 

Thanks Dmitriy – this fixed our issue :)



From: Dmitriy Kalinin [mailto:dkalinin(a)pivotal.io]
Sent: 26 May 2015 19:54
To: Scott Taggart
Cc: CF BOSH Mailing List
Subject: Re: [cf-bosh] Most bosh director commands fail with a HTTP 500



There currently exists a problem in the Director during task cleanup: the
Director tries to clean up task logs for tasks that do not have an
associated directory on disk.
https://www.pivotaltracker.com/story/show/95458780 will fix this.



To fix the Director until we release a bug fix:

- ssh as vcap into the Director VM

- run `/var/vcap/jobs/director/bin/director_ctl console` (this opens up a
console to the Director DB)

- run `Bosh::Director::Models::Task.where(output: nil).update(output:
'/tmp/123')` (this updates tasks without task log directories to point at a
dummy destination; the Director will be happy to run `rm -rf /tmp/123` when
it cleans up tasks)



After that you should be able to run `bosh vms` and other tasks again.
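The failure mode behind this can be sketched in plain Ruby. The snippet below is illustrative only — the task records are simulated as hashes, not actual `Bosh::Director::Models::Task` rows: `FileUtils.rm_rf` raises a TypeError when handed a nil path (the error seen in the log), which is why pointing nil task outputs at a dummy path unblocks cleanup.

```ruby
require 'fileutils'

# Simulated task records (hypothetical stand-ins for the Director's task
# rows); task 2 has no task log directory recorded, i.e. output is nil.
tasks = [{ id: 1, output: '/tmp/task-1-logs' }, { id: 2, output: nil }]

def remove_task_logs(tasks)
  # Mirrors the cleanup step: rm_rf(nil) raises TypeError
  # ("no implicit conversion of nil into String").
  tasks.each { |t| FileUtils.rm_rf(t[:output]) }
end

begin
  remove_task_logs(tasks)
rescue TypeError => e
  puts "cleanup failed: #{e.message}"
end

# The workaround: point nil outputs at a dummy path, after which cleanup runs.
tasks.each { |t| t[:output] ||= '/tmp/123' }
remove_task_logs(tasks)
puts 'cleanup succeeded'
```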





On Mon, May 25, 2015 at 2:27 PM, Scott Taggart <staggart(a)skyscapecloud.com>
wrote:

Hi folks,



One of my three bosh directors has gotten itself stuck in a strange state
where most (but not all) operations fail. I have recreated the director with
a couple of different stemcells (but the same persistent disk) and the issue
persists. It looks like potentially a database issue on the director, but I
have done a very quick visual check of a few tables (e.g. vms, deployments)
and they seem fine at a glance... not sure what's going on.



Everything CF-related currently/previously under the director continues to
run fine in this AZ; it's just the director that's lost it:



$ bosh deployments

+---------------------+-----------------------+----------------------------------------------+--------------+
| Name                | Release(s)            | Stemcell(s)                                  | Cloud Config |
+---------------------+-----------------------+----------------------------------------------+--------------+
| cf-mysql            | cf-mysql/19           | bosh-vcloud-esxi-ubuntu-trusty-go_agent/2915 | none         |
+---------------------+-----------------------+----------------------------------------------+--------------+
| cf-services-contrib | cf-services-contrib/6 | bosh-vcloud-esxi-ubuntu-trusty-go_agent/2915 | none         |
+---------------------+-----------------------+----------------------------------------------+--------------+
| xxxxxxx_cf          | cf/208                | bosh-vcloud-esxi-ubuntu-trusty-go_agent/2915 | none         |
+---------------------+-----------------------+----------------------------------------------+--------------+

Deployments total: 3



$ bosh releases

+---------------------+----------+-------------+
| Name                | Versions | Commit Hash |
+---------------------+----------+-------------+
| cf                  | 208*     | 5d00be54+   |
| cf-mysql            | 19*      | dfab036b+   |
| cf-services-contrib | 6*       | 57fd2098+   |
+---------------------+----------+-------------+
(*) Currently deployed
(+) Uncommitted changes

Releases total: 3



$ bosh locks
No locks


$ bosh tasks
No running tasks



$ bosh vms
Deployment `cf-mysql'
HTTP 500:



$ bosh cloudcheck
Performing cloud check...

Processing deployment manifest
------------------------------
HTTP 500:



The relevant error I get from /var/vcap/sys/log/director/director.debug.log
on the director is:

E, [2015-05-25 21:20:15 #1010] [] ERROR -- Director: TypeError - no implicit conversion of nil into String:
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `path'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `block in fu_list'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `map'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:1572:in `fu_list'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:625:in `rm_r'
/var/vcap/packages/ruby/lib/ruby/2.1.0/fileutils.rb:654:in `rm_rf'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/task_remover.rb:9:in `block in remove'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:152:in `block in each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `block (2 levels) in fetch_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:720:in `block in yield_hash_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:714:in `times'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:714:in `yield_hash_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `block in fetch_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:134:in `execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:413:in `_execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `block (2 levels) in execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:425:in `check_database_errors'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `block in execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `block in synchronize'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/connection_pool/threaded.rb:104:in `hold'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/database/connecting.rb:236:in `synchronize'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:242:in `execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:801:in `execute'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/adapters/postgres.rb:525:in `fetch_rows'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sequel-3.43.0/lib/sequel/dataset/actions.rb:152:in `each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/task_remover.rb:8:in `remove'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/task_helper.rb:23:in `create_task'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/job_queue.rb:9:in `enqueue'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/vm_state_manager.rb:5:in `fetch_vm_state'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/lib/bosh/director/api/controllers/deployments_controller.rb:182:in `block in <class:DeploymentsController>'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1603:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1603:in `block in compile!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `[]'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `block (3 levels) in route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:985:in `route_eval'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:966:in `block (2 levels) in route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1006:in `block in process_route'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1004:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1004:in `process_route'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:964:in `block in route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:963:in `each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:963:in `route!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1076:in `block in dispatch!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `block in invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1073:in `dispatch!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:898:in `block in call!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `block in invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:1058:in `invoke'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:898:in `call!'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:886:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/xss_header.rb:18:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/path_traversal.rb:16:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/json_csrf.rb:18:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/base.rb:49:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-protection-1.5.3/lib/rack/protection/frame_options.rb:31:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/nulllogger.rb:9:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/head.rb:13:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:180:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:2014:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/urlmap.rb:66:in `block in call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/urlmap.rb:50:in `each'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/urlmap.rb:50:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/rack-1.6.0/lib/rack/commonlogger.rb:33:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/sinatra-1.4.5/lib/sinatra/base.rb:217:in `call'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:81:in `block in pre_process'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:79:in `catch'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:79:in `pre_process'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:54:in `process'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/connection.rb:39:in `receive_data'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:187:in `run_machine'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/eventmachine-1.0.3/lib/eventmachine.rb:187:in `run'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/backends/base.rb:63:in `start'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/thin-1.5.1/lib/thin/server.rb:159:in `start'
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2957.0/bin/bosh-director:37:in `<top (required)>'
/var/vcap/packages/director/bin/bosh-director:16:in `load'
/var/vcap/packages/director/bin/bosh-director:16:in `<main>'



I've wiped my local bosh config, re-targeted the director, and tried running
`bosh vms` without specifying a deployment manifest (i.e. to rule the
manifest out) - I still get the same 500.



Any tips appreciated!


Notice:
This message contains information that may be privileged or confidential and
is the property of Skyscape. It is intended only for the person to whom it
is addressed. If you are not the intended recipient, you are not authorised
to read, print, retain, copy, disseminate, distribute, or use this message
or any part thereof. If you receive this message in error, please notify the
sender immediately and delete all copies of this message. Skyscape reserves
the right to monitor all e-mail communications through its networks.
Skyscape Cloud Services Limited is registered in England and Wales: Company
No: 07619797. Registered office: Hartham Park, Hartham, Corsham, Wiltshire
SN13 0RP.

______________________________________________________________________
This email has been scanned by the Symantec Email Security.cloud service.
______________________________________________________________________


_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh








Re: Regarding installation of bosh

James Bayer
 

looks like you have a firewall or network connectivity issue between your
OS and the destination, in this case rubygems.org. I would talk to your
network administrator.

for example, this is my curl output showing a redirect is being issued.

$ curl -vvv http://rubygems.org/gems/semi_semantic-1.1.0.gem
* Hostname was NOT found in DNS cache
*   Trying 54.186.104.15...
* Connected to rubygems.org (54.186.104.15) port 80 (#0)
> GET /gems/semi_semantic-1.1.0.gem HTTP/1.1
> User-Agent: curl/7.37.1
> Host: rubygems.org
> Accept: */*
>
< HTTP/1.1 302 Moved Temporarily
* Server nginx is not blacklisted
< Server: nginx
< Date: Thu, 28 May 2015 05:52:08 GMT
< Content-Type: text/html
< Content-Length: 154
< Connection: keep-alive
< Location: http://rubygems.global.ssl.fastly.net/gems/semi_semantic-1.1.0.gem
<
<html>
<head><title>302 Found</title></head>
<body bgcolor="white">
<center><h1>302 Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host rubygems.org left intact
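Since `gem install` reports "host not found" while a browser loads the site fine, the mismatch is often between the shell's resolver and whatever the browser uses (a browser-only proxy, for instance). As a hedged illustration (not from the thread), a one-off Ruby check of what the shell-side resolver sees:

```ruby
require 'resolv'

# Illustrative sketch: resolve rubygems.org the way a command-line tool
# would, via the system resolver configuration (/etc/hosts, /etc/resolv.conf),
# to see whether it differs from what the browser is using.
begin
  addr = Resolv.getaddress('rubygems.org')
  puts "shell resolver sees rubygems.org -> #{addr}"
rescue Resolv::ResolvError, SocketError => e
  puts "shell-side DNS lookup failed: #{e.message}"
end
```

If this fails while the browser works, the problem is local DNS or proxy configuration rather than rubygems.org itself.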

On Wed, May 27, 2015 at 10:19 PM, Bharath Posa <bharathp(a)vedams.com> wrote:

Hi james ,

thanks for the reply. I am attaching my terminal output screen along with
this mail. You can also see that the ping to google.com is successful but
the ping to rubygems.org is failing

regards
bharath

On Tue, May 26, 2015 at 8:35 PM, James Bayer <jbayer(a)pivotal.io> wrote:

can you post the actual terminal output to a gist or something?

On Tue, May 26, 2015 at 12:45 AM, Bharath Posa <bharathp(a)vedams.com>
wrote:

Hi guys ,


I am currently working on deploying Cloud Foundry on OpenStack. When I run
the gem install bosh_cli command, it throws an error saying "host not
found". I am able to ping other websites from my terminal, such as google
and wikipedia. In the same terminal, if I open rubygems.org
in a web browser, it works properly.


I am unable to work out what the real problem could be.

regards
bharath

_______________________________________________
cf-bosh mailing list
cf-bosh(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-bosh


--
Thank you,

James Bayer

--
Thank you,

James Bayer


Re: Multi-AZ CF Deployment in Openstack

ryunata <ricky.yunata@...>
 

It solved the issue. Thank you very much!



--
View this message in context: http://cf-bosh.70367.x6.nabble.com/cf-bosh-Multi-AZ-CF-Deployment-in-Openstack-tp64p67.html
Sent from the CF BOSH mailing list archive at Nabble.com.