
Re: Need help for diego deployment

Kinjal Doshi
 

Hi

I have resolved the errors in generating the deployment manifest. On executing
bosh deploy, the following error is encountered while compiling packages:

Started compiling packages
Started compiling packages > rtr/2d7de4f6fc25938c21c5be87174f95583feb14b5
Started compiling packages >
syslog_drain_binder/3c9c0b02c11c8dba10d059fe07e6d2ee641ec053
Started compiling packages >
routing-api/b4a3e7034c4a925aa42d45419b46ad6b128d92b1
Started compiling packages >
collector/158398837665181c70bd786b46e6f4d772523017
Failed compiling packages >
routing-api/b4a3e7034c4a925aa42d45419b46ad6b128d92b1: Timed out pinging to
dc15da09-8086-4231-a5b4-15efafa27eaf after 600 seconds (00:11:03)
Failed compiling packages >
syslog_drain_binder/3c9c0b02c11c8dba10d059fe07e6d2ee641ec053: Timed out
pinging to d150aff4-095c-4d48-8c6d-f182fc3738c7 after 600 seconds (00:11:03)
Failed compiling packages >
collector/158398837665181c70bd786b46e6f4d772523017: Timed out pinging to
824b2de9-bb39-4b24-8491-4e26f79adb50 after 600 seconds (00:11:03)
Failed compiling packages >
rtr/2d7de4f6fc25938c21c5be87174f95583feb14b5: Timed out pinging to
4d636c66-690a-43e7-8481-71258732d066 after 600 seconds (00:11:35)

Error 450002: Timed out pinging to dc15da09-8086-4231-a5b4-15efafa27eaf
after 600 seconds

Task 255 error

It would be great if some pointers could be provided on how to proceed further. Please
let me know if the logs for this bosh task are required.
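
For reference, the full debug log for the failed task can be captured with the v1 BOSH CLI, for example:

$ bosh task 255 --debug > task-255-debug.log   # task ID as reported above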

Thanks in advance,
Kinjal

On Fri, Jan 29, 2016 at 10:45 PM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:

Hi Amit,

Please ignore the unresolved nodes error in the above email. I have been
able to correct it; I am now running into some more problems, which I am checking right now.

Please do let me know about my question on the dbs, though.

Thanks in advance,
Kinjal

On Fri, Jan 29, 2016 at 1:29 PM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:

Hi Amit,

Thanks a lot for your response on this.

I was trying to use the manifest generation scripts to redeploy cf but I
ran into errors during spiff merge as below:

ubuntu(a)ip-172-31-45-52:~/cf-deployment/cf-release$
scripts/generate_deployment_manifest aws ../cf-stub.yml > cf-deployment.yml
2016/01/29 07:49:05 error generating manifest: unresolved nodes:
(( static_ips(1) )) in
/home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml
jobs.[0].networks.[0].static_ips
(( static_ips(5, 6, 15, 16, 17, 18, 19, 20) )) in
/home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml
jobs.[1].networks.[0].static_ips
(( static_ips(27, 28, 29) )) in
/home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml
jobs.[5].networks.[0].static_ips
(( static_ips(10, 25) )) in
/home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml
jobs.[6].networks.[0].static_ips


The public gist pointing to the cf-stub created for this attempt is at:
https://gist.github.com/kinjaldoshi/b0dc004876d2a4615c65

I am not very sure, but I think this has something to do with the way I
configured the subnets. Could you please guide me on the corrections
required here? I know how (( static_ips(27, 28, 29) )) works, but I am not sure
why it is not resolving to the required values.
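
As I understand it, static_ips(n) picks the nth address from the static range of the job's network in the stub, so the network it resolves against needs a static block covering at least the largest index referenced (29 above). A purely illustrative fragment of what I expect the stub needs (network name and addresses are hypothetical, not from my actual stub):

networks:
  - name: cf1
    subnets:
      - range: 10.10.16.0/24
        gateway: 10.10.16.1
        static:
          - 10.10.16.10 - 10.10.16.50   # enough static addresses to cover the indices used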

Another question I have is about the editing instructions at:
http://docs.cloudfoundry.org/deploying/aws/cf-stub.html#editing

For the ccdb and uaadb, as per the comments, is it required for me to create
a service and host these DBs as mentioned in the 'Editing Instructions'
column? In that case, where can I find the DDL to create the databases and tables?


Thanks a lot in advance,
Kinjal


On Fri, Jan 29, 2016 at 10:31 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi Kinjal,

The minimal-aws manifest would be quite difficult to augment to get it
to work with Diego. You would need to add a static IP to your private
network, add a resource pool or increase the size of an existing one, add
the consul job, colocate the consul agent with some of the CF jobs, and add
a few configuration properties that aren't in the minimal one (e.g.
loggregator.tls.ca). It's probably simpler to use the manifest
generation scripts to redeploy CF (before deploying Diego).

Use:

* http://docs.cloudfoundry.org/deploying/common/create_a_manifest.html
* http://docs.cloudfoundry.org/deploying/common/deploy.html
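
Concretely, once your stubs are filled in, generating and deploying the CF manifest from the cf-release directory looks roughly like this (the stub path is just an example):

$ ./scripts/generate_deployment_manifest aws ../cf-stub.yml > cf-deployment.yml
$ bosh deployment cf-deployment.yml
$ bosh deploy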

Let us know if you run into any difficulties. These documents ask you
to define stubs, which require you to input data from your AWS IaaS setup,
and may not play nicely with the AWS setup described in the
minimal-aws doc; I'm not sure.

Best,
Amit



On Wed, Jan 27, 2016 at 3:17 AM, Kinjal Doshi <kindoshi(a)gmail.com>
wrote:

Hi Eric,

Thanks a lot for the detailed response to my query.

I used the minimal-aws.yml configuration (
https://github.com/cloudfoundry/cf-release/tree/v226/example_manifests) to
create my deployment manifest, which does not have the consul VMs set up. I
am guessing that the first step would be to change this.

In this case, should I use the script generators to generate the CF
deployment manifest and redeploy Cloud Foundry, or is there any other
technique/shorter path for doing this?

Thanks in advance,
Kinjal



On Mon, Jan 25, 2016 at 6:57 AM, Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Kinjal,

The stub I included in-line in my previous email may not have come
through so well for all mail clients, so I've also included it in a public
gist at https://gist.github.com/ematpl/149ac1bac691caae0722.

Thanks,
Eric

On Fri, Jan 22, 2016 at 6:32 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Kinjal,

Thanks for asking: this is an area in which the Diego team is looking
forward to improving documentation and tooling in the near term. For the
time being, here are some more manual instructions:

Assuming you have AWS infrastructure already provisioned for your CF
deployment (VPC, subnets, NAT box, ELBs, etc.), you should need only to add
one or more additional subnets for the VMs in the Diego deployment, and
optionally an ELB for the SSH proxy routing tier (you can also use the
HAproxy in the CF deployment to do the same load-balancing, but you'll need
to give it an Elastic IP). If you're brave, and can coordinate the reserved
sections in the CF and Diego deployment manifests' networking configs
correctly, you could even share the same subnet(s) between the two
deployments.
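
For example, one of the additional Diego subnets could be provisioned with the AWS CLI along these lines (the VPC ID is a placeholder; the CIDR matches the example settings at the end of this mail):

$ aws ec2 create-subnet --vpc-id <PLACEHOLDER-vpc-id> --cidr-block 10.10.5.0/24 --availability-zone us-east-1a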

Once you have those subnets provisioned, you'll need to translate
their properties into the iaas-settings.yml stub that you supply to the
generate-deployment-manifest script in diego-release. Since you're
deploying CF v226, we recommend you use Diego final version v0.1442.0 and
the associated manifest-generation script in that version of the release.
The other stubs should be independent of that iaas-settings one, and should
be pretty much the same as the ones for the BOSH-Lite deployment. You'll
likely want to provide different secrets and credentials in the
property-overrides stub, though, and perhaps different instance counts
depending on the availability needs of your deployment. I've included at
the end of this email a representative iaas-settings.yml file from one of
the Diego team's environments, with any specific identifiers for AWS
entities replaced by PLACEHOLDER values.

As a side note, if you don't already have the consul VMs deployed in
your CF deployment, you'll need to enable them so that the Diego components
can use them to communicate. We recommend you operate an odd number of consul
VMs: 1 if you don't need high availability, and 3 or 5 if you do (as in a
production environment). You can enable them by changing the instance counts
on the consul_z1 and consul_z2 jobs in the CF manifest, as sketched below.
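
A minimal sketch of that change in the CF manifest (the counts shown are illustrative; 1 total for a non-HA setup, or 3/5 spread across zones for HA):

jobs:
- name: consul_z1
  instances: 1   # example count
- name: consul_z2
  instances: 0   # example count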

After you've customized those stubs and adjusted your CF manifest if
necessary, you can generate the Diego manifest by running something like
the following from your diego-release directory:

$ ./scripts/generate-deployment-manifest \
PATH/TO/MY/CUSTOMIZED-PROPERTY-OVERRIDES.YML \
PATH/TO/MY/CUSTOMIZED-INSTANCE-COUNT-OVERRIDES.YML \
manifest-generation/bosh-lite-stubs/persistent-disk-overrides.yml \
PATH/TO/MY/CUSTOMIZED-IAAS-SETTINGS.YML \
manifest-generation/bosh-lite-stubs/additional-jobs.yml \
manifest-generation/bosh-lite-stubs/release-versions.yml \
PATH/TO/MY/MANIFEST/DIRECTORY \
> PATH/TO/MY/MANIFEST/DIRECTORY/diego.yml

'PATH/TO/MY/MANIFEST/DIRECTORY' should contain your CF manifest in a
file named 'cf.yml'. Also, please note that if you move to CF v227 or
later, which recommends Diego v0.1445.0 or later, the manifest-generation
script has changed to take its stub arguments via flags, instead of as
these positional arguments, and some of the stubs have changed slightly.

We also realize this is currently an obscure and potentially
error-prone process, and the Diego team does have a couple stories queued
up to do soon to provide more information about how to set up Diego on AWS:

- We plan in https://www.pivotaltracker.com/story/show/100909610 to
parametrize, document, and publish the tools and additional templates we
use to provision the AWS environments we use for CI and for our developers'
experiments and investigations, all the way from an empty account to a VPC
with BOSH, CF, and Diego.
- We plan in https://www.pivotaltracker.com/story/show/100909610 to
provide more manual instructions to set up a Diego environment compatible
with the 'minimal-aws' CF deployment manifest and infrastructure settings,
including provisioning any additional infrastructure such as subnets and
translating their information into the stubs for the diego-release
manifest-generation script.

We'll also be eager to adopt and to integrate with the tooling the CF
Infrastructure and CF Release Integration teams will produce at some point
to automate environment bootstrapping and CF manifest generation as much as
possible.

Please let me and the rest of the team know here if you need further
assistance or clarification.

Thanks again,
Eric, CF Runtime Diego PM

*****

Example iaas-settings.yml file, with PLACEHOLDER entries for your
environment's info:

iaas_settings:
  compilation_cloud_properties:
    availability_zone: us-east-1a
    instance_type: c3.large
  resource_pool_cloud_properties:
  - cloud_properties:
      availability_zone: us-east-1a
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z1
  - cloud_properties:
      availability_zone: us-east-1b
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z2
  - cloud_properties:
      availability_zone: us-east-1c
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: brain_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: brain_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: brain_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: cc_bridge_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: cc_bridge_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: cc_bridge_z3
  - cloud_properties:
      availability_zone: us-east-1a
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z1
  - cloud_properties:
      availability_zone: us-east-1b
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z2
  - cloud_properties:
      availability_zone: us-east-1c
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.large
    name: colocated_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.large
    name: colocated_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.large
    name: colocated_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.large
    name: database_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.large
    name: database_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.large
    name: database_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: route_emitter_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: route_emitter_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: route_emitter_z3
  stemcell:
    name: bosh-aws-xen-hvm-ubuntu-trusty-go_agent
    version: latest
  subnet_configs:
  - name: diego1
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-A
      dns:
      - 10.10.0.2
      gateway: 10.10.5.1
      range: 10.10.5.0/24
      reserved:
      - 10.10.5.2 - 10.10.5.9
      static:
      - 10.10.5.10 - 10.10.5.63
  - name: diego2
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-B
      dns:
      - 10.10.0.2
      gateway: 10.10.6.1
      range: 10.10.6.0/24
      reserved:
      - 10.10.6.2 - 10.10.6.9
      static:
      - 10.10.6.10 - 10.10.6.63
  - name: diego3
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-C
      dns:
      - 10.10.0.2
      gateway: 10.10.7.1
      range: 10.10.7.0/24
      reserved:
      - 10.10.7.2 - 10.10.7.9
      static:
      - 10.10.7.10 - 10.10.7.63


On Fri, Jan 22, 2016 at 4:28 AM, Kinjal Doshi <kindoshi(a)gmail.com>
wrote:

Hi,

After deploying CF version 226 on AWS using MicroBOSH, I am trying
to understand how to deploy Diego to work with this version of CF, but I
have not been able to figure out much yet. I was able to find steps for
deploying Diego on BOSH-Lite at
https://github.com/cloudfoundry-incubator/diego-release#deploying-diego-to-bosh-lite
but not for BOSH.

I would appreciate some pointers in this direction.

Thanks in advance,
Kinjal


Re: Need help for diego deployment

Amit Kumar Gupta
 

Hi Kinjal,

As per those instructions, you can use Amazon RDS for your database
service. You do not need to create the tables; migrations built into the
cf-release code will do that. You will need to create the databases within
the service, namely "ccdb" and "uaadb". For example, if you use RDS to
provision MySQL databases, then this section:

uaadb:
  db_scheme: UAADB_SCHEME
  roles:
  - tag: UAADB_USER
    name: UAADB_USER_NAME
    password: UAADB_USER_PASSWORD
  databases:
  - tag: uaa
    name: uaadb
  address: UAADB_ADDRESS
  port: UAADB_PORT

will become:

uaadb:
  db_scheme: mysql
  roles:
  - tag: you_pick_tag
    name: you_pick_user
    password: you_pick_password
  databases:
  - tag: uaa
    name: uaadb
  address: <AWS will tell you address>
  port: <AWS will tell you port>
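
Creating the two databases inside the RDS instance can be done with the mysql client, for example (the endpoint and master user are placeholders from your RDS setup):

$ mysql -h <AWS-RDS-endpoint> -u <master-user> -p
mysql> CREATE DATABASE ccdb;
mysql> CREATE DATABASE uaadb;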

On Fri, Jan 29, 2016 at 9:15 AM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:

Hi Amit,

Please ignore the unresolved nodes error in the above email. I have been
able to correct it; I am now running into some more problems, which I am checking right now.

Please do let me know about my question on the dbs, though.

Thanks in advance,
Kinjal

On Fri, Jan 29, 2016 at 1:29 PM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:

Hi Amit,

Thanks a lot for your response on this.

I was trying to use the manifest generation scripts to redeploy cf but I
ran into errors during spiff merge as below:

ubuntu(a)ip-172-31-45-52:~/cf-deployment/cf-release$
scripts/generate_deployment_manifest aws ../cf-stub.yml > cf-deployment.yml
2016/01/29 07:49:05 error generating manifest: unresolved nodes:
(( static_ips(1) )) in
/home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml
jobs.[0].networks.[0].static_ips
(( static_ips(5, 6, 15, 16, 17, 18, 19, 20) )) in
/home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml
jobs.[1].networks.[0].static_ips
(( static_ips(27, 28, 29) )) in
/home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml
jobs.[5].networks.[0].static_ips
(( static_ips(10, 25) )) in
/home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml
jobs.[6].networks.[0].static_ips


The public gist pointing to the cf-stub created for this attempt is at:
https://gist.github.com/kinjaldoshi/b0dc004876d2a4615c65

I am not very sure, but I think this has something to do with the way I
configured the subnets. Could you please guide me on the corrections
required here? I know how (( static_ips(27, 28, 29) )) works, but I am not sure
why it is not resolving to the required values.

Another question I have is about the editing instructions at:
http://docs.cloudfoundry.org/deploying/aws/cf-stub.html#editing

For the ccdb and uaadb, as per comments, is it required for me to create
a service and host these DBs as mentioned in the 'Editing Instructions'
column? In that case, where can I find the DDL to create the databases and tables?


Thanks a lot in advance,
Kinjal


On Fri, Jan 29, 2016 at 10:31 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi Kinjal,

The minimal-aws manifest would be quite difficult to augment to get it
to work with Diego. You would need to add a static IP to your private
network, add a resource pool or increase the size of an existing one, add
the consul job, colocate the consul agent with some of the CF jobs, and add
a few configuration properties that aren't in the minimal one (e.g.
loggregator.tls.ca). It's probably simpler to use the manifest
generation scripts to redeploy CF (before deploying Diego).

Use:

* http://docs.cloudfoundry.org/deploying/common/create_a_manifest.html
* http://docs.cloudfoundry.org/deploying/common/deploy.html

Let us know if you run into some difficulties. These documents ask you
to define stubs, which require you to input data from your AWS IaaS setup,
and may not exactly play nicely with the AWS setup described in the
minimal-aws doc, I'm not sure.

Best,
Amit



On Wed, Jan 27, 2016 at 3:17 AM, Kinjal Doshi <kindoshi(a)gmail.com>
wrote:

Hi Eric,

Thanks a lot for the detailed response to my query.

I used the minimal-aws.yml configuration (
https://github.com/cloudfoundry/cf-release/tree/v226/example_manifests) to
create my deployment manifest which does not have the consul VMs set up. I
am guessing that the first step would be to change this.

In this case should I use the script generators to generate the CF
deployment manifest and re-deploy cloud foundry, or are there any other
techniques/shorter path for doing this?

Thanks in advance,
Kinjal



On Mon, Jan 25, 2016 at 6:57 AM, Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Kinjal,

The stub I included in-line in my previous email may not have come
through so well for all mail clients, so I've also included it in a public
gist at https://gist.github.com/ematpl/149ac1bac691caae0722.

Thanks,
Eric

On Fri, Jan 22, 2016 at 6:32 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Kinjal,

Thanks for asking: this is an area in which the Diego team is looking
forward to improving documentation and tooling in the near term. For the
time being, here are some more manual instructions:

Assuming you have AWS infrastructure already provisioned for your CF
deployment (VPC, subnets, NAT box, ELBs, etc.), you should need only to add
one or more additional subnets for the VMs in the Diego deployment, and
optionally an ELB for the SSH proxy routing tier (you can also use the
HAproxy in the CF deployment to do the same load-balancing, but you'll need
to give it an Elastic IP). If you're brave, and can coordinate the reserved
sections in the CF and Diego deployment manifests' networking configs
correctly, you could even share the same subnet(s) between the two
deployments.

Once you have those subnets provisioned, you'll need to translate
their properties into the iaas-settings.yml stub that you supply to the
generate-deployment-manifest script in diego-release. Since you're
deploying CF v226, we recommend you use Diego final version v0.1442.0 and
the associated manifest-generation script in that version of the release.
The other stubs should be independent of that iaas-settings one, and should
be pretty much the same as the ones for the BOSH-Lite deployment. You'll
likely want to provide different secrets and credentials in the
property-overrides stub, though, and perhaps different instance counts
depending on the availability needs of your deployment. I've included at
the end of this email a representative iaas-settings.yml file from one of
the Diego team's environments, with any specific identifiers for AWS
entities replaced by PLACEHOLDER values.

As a side note, if you don't already have the consul VMs deployed in
your CF deployment, you'll need to enable them so that the Diego components
can use them to communicate. We recommend you operate an odd number of consul
VMs: 1 if you don't need high availability, and 3 or 5 if you do (as in a
production environment). You can enable them by changing the instance count
on the consul_z1 and consul_z2 jobs in the CF manifest.

After you've customized those stubs and adjusted your CF manifest if
necessary, you can generate the Diego manifest by running something like
the following from your diego-release directory:

$ ./scripts/generate-deployment-manifest \
PATH/TO/MY/CUSTOMIZED-PROPERTY-OVERRIDES.YML \
PATH/TO/MY/CUSTOMIZED-INSTANCE-COUNT-OVERRIDES.YML \
manifest-generation/bosh-lite-stubs/persistent-disk-overrides.yml \
PATH/TO/MY/CUSTOMIZED-IAAS-SETTINGS.YML \
manifest-generation/bosh-lite-stubs/additional-jobs.yml \
manifest-generation/bosh-lite-stubs/release-versions.yml \
PATH/TO/MY/MANIFEST/DIRECTORY \
> PATH/TO/MY/MANIFEST/DIRECTORY/diego.yml

'PATH/TO/MY/MANIFEST/DIRECTORY' should contain your CF manifest in a
file named 'cf.yml'. Also, please note that if you move to CF v227 or
later, which recommends Diego v0.1445.0 or later, the manifest-generation
script has changed to take its stub arguments via flags, instead of as
these positional arguments, and some of the stubs have changed slightly.

We also realize this is currently an obscure and potentially
error-prone process, and the Diego team does have a couple stories queued
up to do soon to provide more information about how to set up Diego on AWS:

- We plan in https://www.pivotaltracker.com/story/show/100909610 to
parametrize, document, and publish the tools and additional templates we
use to provision the AWS environments we use for CI and for our developers'
experiments and investigations, all the way from an empty account to a VPC
with BOSH, CF, and Diego.
- We plan in https://www.pivotaltracker.com/story/show/100909610 to
provide more manual instructions to set up a Diego environment compatible
with the 'minimal-aws' CF deployment manifest and infrastructure settings,
including provisioning any additional infrastructure such as subnets and
translating their information into the stubs for the diego-release
manifest-generation script.

We'll also be eager to adopt and to integrate with the tooling the CF
Infrastructure and CF Release Integration teams will produce at some point
to automate environment bootstrapping and CF manifest generation as much as
possible.

Please let me and the rest of the team know here if you need further
assistance or clarification.

Thanks again,
Eric, CF Runtime Diego PM

*****

Example iaas-settings.yml file, with PLACEHOLDER entries for your
environment's info:

iaas_settings:
  compilation_cloud_properties:
    availability_zone: us-east-1a
    instance_type: c3.large
  resource_pool_cloud_properties:
  - cloud_properties:
      availability_zone: us-east-1a
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z1
  - cloud_properties:
      availability_zone: us-east-1b
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z2
  - cloud_properties:
      availability_zone: us-east-1c
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: brain_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: brain_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: brain_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: cc_bridge_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: cc_bridge_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: cc_bridge_z3
  - cloud_properties:
      availability_zone: us-east-1a
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z1
  - cloud_properties:
      availability_zone: us-east-1b
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z2
  - cloud_properties:
      availability_zone: us-east-1c
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.large
    name: colocated_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.large
    name: colocated_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.large
    name: colocated_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.large
    name: database_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.large
    name: database_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.large
    name: database_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: route_emitter_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: route_emitter_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: route_emitter_z3
  stemcell:
    name: bosh-aws-xen-hvm-ubuntu-trusty-go_agent
    version: latest
  subnet_configs:
  - name: diego1
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-A
      dns:
      - 10.10.0.2
      gateway: 10.10.5.1
      range: 10.10.5.0/24
      reserved:
      - 10.10.5.2 - 10.10.5.9
      static:
      - 10.10.5.10 - 10.10.5.63
  - name: diego2
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-B
      dns:
      - 10.10.0.2
      gateway: 10.10.6.1
      range: 10.10.6.0/24
      reserved:
      - 10.10.6.2 - 10.10.6.9
      static:
      - 10.10.6.10 - 10.10.6.63
  - name: diego3
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-C
      dns:
      - 10.10.0.2
      gateway: 10.10.7.1
      range: 10.10.7.0/24
      reserved:
      - 10.10.7.2 - 10.10.7.9
      static:
      - 10.10.7.10 - 10.10.7.63


On Fri, Jan 22, 2016 at 4:28 AM, Kinjal Doshi <kindoshi(a)gmail.com>
wrote:

Hi,

After deploying CF version 226 on AWS using microbosh, I am trying
to understand how to deploy Diego now to work with this version of CF but
have not been able to figure out much yet. I was able to find steps for
deploying Diego on BOSH-Lite at
https://github.com/cloudfoundry-incubator/diego-release#deploying-diego-to-bosh-lite
but not for BOSH.

I would appreciate some pointers in this direction.

Thanks in advance,
Kinjal


Re: Need help for diego deployment

Kinjal Doshi
 

Hi Amit,

Please ignore the unresolved nodes error in the above email. I have been
able to correct it; I am now running into some more problems, which I am checking right now.

Please do let me know about my question on the dbs, though.

Thanks in advance,
Kinjal

On Fri, Jan 29, 2016 at 1:29 PM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:

Hi Amit,

Thanks a lot for your response on this.

I was trying to use the manifest generation scripts to redeploy cf but I
ran into errors during spiff merge as below:

ubuntu(a)ip-172-31-45-52:~/cf-deployment/cf-release$
scripts/generate_deployment_manifest aws ../cf-stub.yml > cf-deployment.yml
2016/01/29 07:49:05 error generating manifest: unresolved nodes:
(( static_ips(1) )) in
/home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml
jobs.[0].networks.[0].static_ips
(( static_ips(5, 6, 15, 16, 17, 18, 19, 20) )) in
/home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml
jobs.[1].networks.[0].static_ips
(( static_ips(27, 28, 29) )) in
/home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml
jobs.[5].networks.[0].static_ips
(( static_ips(10, 25) )) in
/home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml
jobs.[6].networks.[0].static_ips


The public gist pointing to the cf-stub created for this attempt is at:
https://gist.github.com/kinjaldoshi/b0dc004876d2a4615c65

I am not very sure, but I think this has something to do with the way I
configured the subnets. Could you please guide me on the corrections
required here? I know how (( static_ips(27, 28, 29) )) works, but I am not sure
why it is not resolving to the required values.

Another question I have is about the editing instructions at:
http://docs.cloudfoundry.org/deploying/aws/cf-stub.html#editing

For the ccdb and uaadb, as per comments, is it required for me to create a
service and host these DBs as mentioned in the 'Editing Instructions'
column? In that case, where can I find the DDL to create the databases and tables?


Thanks a lot in advance,
Kinjal


On Fri, Jan 29, 2016 at 10:31 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi Kinjal,

The minimal-aws manifest would be quite difficult to augment to get it to
work with Diego. You would need to add a static IP to your private network,
add a resource pool or increase the size of an existing one, add the consul
job, colocate the consul agent with some of the CF jobs, and add a few
configuration properties that aren't in the minimal one (e.g.
loggregator.tls.ca). It's probably simpler to use the manifest
generation scripts to redeploy CF (before deploying Diego).

Use:

* http://docs.cloudfoundry.org/deploying/common/create_a_manifest.html
* http://docs.cloudfoundry.org/deploying/common/deploy.html

Let us know if you run into some difficulties. These documents ask you
to define stubs, which require you to input data from your AWS IaaS setup,
and may not exactly play nicely with the AWS setup described in the
minimal-aws doc, I'm not sure.

Best,
Amit



On Wed, Jan 27, 2016 at 3:17 AM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:

Hi Eric,

Thanks a lot for the detailed response to my query.

I used the minimal-aws.yml configuration (
https://github.com/cloudfoundry/cf-release/tree/v226/example_manifests) to
create my deployment manifest which does not have the consul VMs set up. I
am guessing that the first step would be to change this.

In this case should I use the script generators to generate the CF
deployment manifest and re-deploy cloud foundry, or are there any other
techniques/shorter path for doing this?

Thanks in advance,
Kinjal



On Mon, Jan 25, 2016 at 6:57 AM, Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Kinjal,

The stub I included in-line in my previous email may not have come
through so well for all mail clients, so I've also included it in a public
gist at https://gist.github.com/ematpl/149ac1bac691caae0722.

Thanks,
Eric

On Fri, Jan 22, 2016 at 6:32 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Kinjal,

Thanks for asking: this is an area in which the Diego team is looking
forward to improving documentation and tooling in the near term. For the
time being, here are some more manual instructions:

Assuming you have AWS infrastructure already provisioned for your CF
deployment (VPC, subnets, NAT box, ELBs, etc.), you should need only to add
one or more additional subnets for the VMs in the Diego deployment, and
optionally an ELB for the SSH proxy routing tier (you can also use the
HAproxy in the CF deployment to do the same load-balancing, but you'll need
to give it an Elastic IP). If you're brave, and can coordinate the reserved
sections in the CF and Diego deployment manifests' networking configs
correctly, you could even share the same subnet(s) between the two
deployments.

Once you have those subnets provisioned, you'll need to translate
their properties into the iaas-settings.yml stub that you supply to the
generate-deployment-manifest script in diego-release. Since you're
deploying CF v226, we recommend you use Diego final version v0.1442.0 and
the associated manifest-generation script in that version of the release.
The other stubs should be independent of that iaas-settings one, and should
be pretty much the same as the ones for the BOSH-Lite deployment. You'll
likely want to provide different secrets and credentials in the
property-overrides stub, though, and perhaps different instance counts
depending on the availability needs of your deployment. I've included at
the end of this email a representative iaas-settings.yml file from one of
the Diego team's environments, with any specific identifiers for AWS
entities replaced by PLACEHOLDER values.

As a side note, if you don't already have the consul VMs deployed in
your CF deployment, you'll need to enable them so that the Diego components
can use them to communicate. We recommend you operate an odd number of consul
VMs: 1 if you don't need high availability, and 3 or 5 if you do (as in a
production environment). You can enable them by changing the instance count
on the consul_z1 and consul_z2 jobs in the CF manifest.

After you've customized those stubs and adjusted your CF manifest if
necessary, you can generate the Diego manifest by running something like
the following from your diego-release directory:

$ ./scripts/generate-deployment-manifest \
PATH/TO/MY/CUSTOMIZED-PROPERTY-OVERRIDES.YML \
PATH/TO/MY/CUSTOMIZED-INSTANCE-COUNT-OVERRIDES.YML \
manifest-generation/bosh-lite-stubs/persistent-disk-overrides.yml \
PATH/TO/MY/CUSTOMIZED-IAAS-SETTINGS.YML \
manifest-generation/bosh-lite-stubs/additional-jobs.yml \
manifest-generation/bosh-lite-stubs/release-versions.yml \
PATH/TO/MY/MANIFEST/DIRECTORY \
> PATH/TO/MY/MANIFEST/DIRECTORY/diego.yml

'PATH/TO/MY/MANIFEST/DIRECTORY' should contain your CF manifest in a
file named 'cf.yml'. Also, please note that if you move to CF v227 or
later, which recommends Diego v0.1445.0 or later, the manifest-generation
script has changed to take its stub arguments via flags, instead of as
these positional arguments, and some of the stubs have changed slightly.

We also realize this is currently an obscure and potentially
error-prone process, and the Diego team does have a couple stories queued
up to do soon to provide more information about how to set up Diego on AWS:

- We plan in https://www.pivotaltracker.com/story/show/100909610 to
parametrize, document, and publish the tools and additional templates we
use to provision the AWS environments we use for CI and for our developers'
experiments and investigations, all the way from an empty account to a VPC
with BOSH, CF, and Diego.
- We plan in https://www.pivotaltracker.com/story/show/100909610 to
provide more manual instructions to set up a Diego environment compatible
with the 'minimal-aws' CF deployment manifest and infrastructure settings,
including provisioning any additional infrastructure such as subnets and
translating their information into the stubs for the diego-release
manifest-generation script.

We'll also be eager to adopt and to integrate with the tooling the CF
Infrastructure and CF Release Integration teams will produce at some point
to automate environment bootstrapping and CF manifest generation as much as
possible.

Please let me and the rest of the team know here if you need further
assistance or clarification.

Thanks again,
Eric, CF Runtime Diego PM

*****

Example iaas-settings.yml file, with PLACEHOLDER entries for your
environment's info:

iaas_settings:
  compilation_cloud_properties:
    availability_zone: us-east-1a
    instance_type: c3.large
  resource_pool_cloud_properties:
  - cloud_properties:
      availability_zone: us-east-1a
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z1
  - cloud_properties:
      availability_zone: us-east-1b
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z2
  - cloud_properties:
      availability_zone: us-east-1c
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: brain_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: brain_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: brain_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: cc_bridge_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: cc_bridge_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: cc_bridge_z3
  - cloud_properties:
      availability_zone: us-east-1a
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z1
  - cloud_properties:
      availability_zone: us-east-1b
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z2
  - cloud_properties:
      availability_zone: us-east-1c
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.large
    name: colocated_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.large
    name: colocated_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.large
    name: colocated_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.large
    name: database_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.large
    name: database_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.large
    name: database_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: route_emitter_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: route_emitter_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: route_emitter_z3
  stemcell:
    name: bosh-aws-xen-hvm-ubuntu-trusty-go_agent
    version: latest
  subnet_configs:
  - name: diego1
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-A
      dns:
      - 10.10.0.2
      gateway: 10.10.5.1
      range: 10.10.5.0/24
      reserved:
      - 10.10.5.2 - 10.10.5.9
      static:
      - 10.10.5.10 - 10.10.5.63
  - name: diego2
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-B
      dns:
      - 10.10.0.2
      gateway: 10.10.6.1
      range: 10.10.6.0/24
      reserved:
      - 10.10.6.2 - 10.10.6.9
      static:
      - 10.10.6.10 - 10.10.6.63
  - name: diego3
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-C
      dns:
      - 10.10.0.2
      gateway: 10.10.7.1
      range: 10.10.7.0/24
      reserved:
      - 10.10.7.2 - 10.10.7.9
      static:
      - 10.10.7.10 - 10.10.7.63


On Fri, Jan 22, 2016 at 4:28 AM, Kinjal Doshi <kindoshi(a)gmail.com>
wrote:

Hi,

After deploying CF version 226 on AWS using microbosh, I am trying to
understand how to deploy Diego now to work with this version of CF but have
not been able to figure out much yet. I was able to find steps for
deploying Diego on BOSH-Lite at
https://github.com/cloudfoundry-incubator/diego-release#deploying-diego-to-bosh-lite
but not for BOSH.

I would appreciate some pointers in this direction.

Thanks in advance,
Kinjal


Java Buildpack v3.6

Christopher Frost
 

I'm pleased to announce the release of the java-buildpack, version 3.6. This
release contains improvements to the Luna HA and GemFire support and
updates to the dependencies.

For a more detailed look at the changes in 3.6, please see the commit
log <https://github.com/cloudfoundry/java-buildpack/compare/v3.5.1...v3.6>.
Packaged versions of the buildpack, suitable for use with create-buildpack
and update-buildpack, can be found attached to this release
<https://github.com/cloudfoundry/java-buildpack/releases/tag/v3.6>.
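
For example, the packaged zip can be uploaded to a Cloud Foundry deployment with the cf CLI roughly as follows (the buildpack name, file name, and position shown are illustrative):

$ cf create-buildpack java_buildpack java-buildpack-v3.6.zip 1
$ cf update-buildpack java_buildpack -p java-buildpack-v3.6.zip
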
*Packaged Dependencies*

- AppDynamics Agent: 4.1.8_5
- GemFire 8.2.0
- GemFire Modules 8.2.0
- GemFire Modules Tomcat7 8.2.0
- GemFire Security 8.2.0
- Groovy: 2.4.5
- JRebel 6.3.2
- Log4j API 2.1.0
- Log4j Core 2.1.0
- Log4j Jcl 2.1.0
- Log4j Jul 2.1.0
- Log4j Slf4j 2.1.0
- MariaDB JDBC: 1.3.4
- Memory Calculator (mountainlion): 2.0.1.RELEASE
- Memory Calculator (precise): 2.0.1.RELEASE
- Memory Calculator (trusty): 2.0.1.RELEASE
- New Relic Agent: 3.25.0
- OpenJDK JRE (mountainlion): 1.8.0_71
- OpenJDK JRE (precise): 1.8.0_71
- OpenJDK JRE (trusty): 1.8.0_71
- Play Framework JPA Plugin: 1.10.0.RELEASE
- PostgreSQL JDBC: 9.4.1207
- RedisStore: 1.2.0_RELEASE
- Slf4j API 1.7.7
- Slf4j Core 1.7.7
- Spring Auto-reconfiguration: 1.10.0_RELEASE
- Spring Boot CLI: 1.3.2_RELEASE
- Tomcat Access Logging Support: 2.5.0_RELEASE
- Tomcat Lifecycle Support: 2.5.0_RELEASE
- Tomcat Logging Support: 2.5.0_RELEASE
- Tomcat: 8.0.30
- YourKit: 2015.15086.0


Christopher Frost - Pivotal UK
Java Buildpack Team


Deploying BOSH : Spawning new deployment from child BOSH becomes unresponsive after some time

Subhankar Chattopadhyay <subho.atg@...>
 

Hi,

I have BOSH-Lite installed locally and I am trying to deploy BOSH on
BOSH-Lite. This may sound confusing, but I am just trying to create
another level of BOSH hierarchy. Attached is the manifest that
I use, and it deploys successfully.

Now I target this new child BOSH director and try to deploy a
sample release, for example the redis release. I am able to upload the
stemcell and deploy the redis cluster successfully. But after a few
minutes, the nodes of this deployment become unresponsive.

vcap(a)agent-id-bosh-0:~/i068838/microbosh$ bosh vms
Deployment `redis-warden'

Director task 27

Task 27 done

+-------------------+---------+---------------+------------+
| Job/index         | State   | Resource Pool | IPs        |
+-------------------+---------+---------------+------------+
| redis_leader_z1/0 | running | small_z1      | 10.244.2.2 |
| redis_z1/0        | running | small_z1      | 10.244.1.2 |
| redis_z1/1        | running | small_z1      | 10.244.1.6 |
+-------------------+---------+---------------+------------+

VMs total: 3
vcap(a)agent-id-bosh-0:~/i068838/microbosh$ bosh vms
Deployment `redis-warden'

Director task 28

Task 28 done

+-------------------+--------------------+---------------+-----+
| Job/index         | State              | Resource Pool | IPs |
+-------------------+--------------------+---------------+-----+
| redis_leader_z1/0 | unresponsive agent | small_z1      |     |
| redis_z1/0        | unresponsive agent | small_z1      |     |
| redis_z1/1        | unresponsive agent | small_z1      |     |
+-------------------+--------------------+---------------+-----+


I searched the logs and found the following in the health monitor log of
the child BOSH:

vi /var/vcap/sys/log/health_monitor/health_monitor.log

I, [2016-01-29T10:03:04.343261 #496] INFO : Analyzing agents...
I, [2016-01-29T10:03:04.343621 #496] INFO : Analyzed 0 agents, took
8.0871e-05 seconds
E, [2016-01-29T10:03:34.402357 #496] ERROR : Cannot get deployments
from director at https://10.244.9.2:25555/deployments: 401 Not
authorized: '/deployments'

E, [2016-01-29T10:03:34.402539 #496] ERROR :
/var/vcap/packages/health_monitor/gem_home/ruby/2.1.0/gems/bosh-monitor-1.3169.0/lib/bosh/monitor/director.rb:16:in
`get_deployments'
/var/vcap/packages/health_monitor/gem_home/ruby/2.1.0/gems/bosh-monitor-1.3169.0/lib/bosh/monitor/runner.rb:146:in
`fetch_deployments'
/var/vcap/packages/health_monitor/gem_home/ruby/2.1.0/gems/bosh-monitor-1.3169.0/lib/bosh/monitor/runner.rb:97:in
`block in poll_director'
I, [2016-01-29T10:03:34.402711 #496] INFO : [ALERT] Alert @
2016-01-29 10:03:34 UTC, severity 3: Cannot get deployments from
director at https://10.244.9.2:25555/deployments: 401 Not authorized:
'/deployments'
...................
................
I, [2016-01-29T11:41:28.543206 #26013] INFO : Found deployment `redis-warden'
I, [2016-01-29T11:41:28.587300 #26013] INFO : Adding agent
a613875d-cbd7-4450-bfde-39bdfe21f11f (redis_z1/0) to redis-warden...
I, [2016-01-29T11:41:28.587431 #26013] INFO : Adding agent
52bbce8a-fe26-47fc-9613-76a311949414 (redis_leader_z1/0) to
redis-warden...
I, [2016-01-29T11:41:28.587505 #26013] INFO : Adding agent
ee51f46c-9907-4293-b1cf-28f6be6ce87a (redis_z1/1) to redis-warden...
I, [2016-01-29T11:41:58.518624 #26013] INFO : Analyzing agents...
I, [2016-01-29T11:41:58.519463 #26013] INFO : Analyzed 3 agents, took
0.000134647 seconds
W, [2016-01-29T11:42:28.578004 #26013] WARN : Found stale deployment
redis-warden, removing...
I, [2016-01-29T11:42:36.454647 #26013] INFO : [ALERT] Alert @
2016-01-29 11:42:36 UTC, severity 4: Begin update deployment for
'redis-warden' against Director '4aa4c1d8-b5b1-4892-944d-d95d66f0529a'
W, [2016-01-29T11:42:36.454806 #26013] WARN : (Resurrector) event did
not have deployment, job and index: Alert @ 2016-01-29 11:42:36 UTC,
severity 4: Begin update deployment for 'redis-warden' against
Director '4aa4c1d8-b5b1-4892-944d-d95d66f0529a'
W, [2016-01-29T11:42:41.868459 #26013] WARN : Received heartbeat from
unmanaged agent: 0a509657-9f23-4f01-874e-5e98a53239e7
W, [2016-01-29T11:42:42.507345 #26013] WARN : Received heartbeat from
unmanaged agent: fbb147b8-ae89-47ba-ac50-5392a22930fd
W, [2016-01-29T11:42:42.521077 #26013] WARN : Received heartbeat from
unmanaged agent: 42243d80-2f80-47e3-9840-a58d66d0e784
I, [2016-01-29T11:42:51.304697 #26013] INFO : Agent
`42243d80-2f80-47e3-9840-a58d66d0e784' shutting down...
I, [2016-01-29T11:42:51.305346 #26013] INFO : Removing agent
42243d80-2f80-47e3-9840-a58d66d0e784 from all deployments...
I, [2016-01-29T11:42:58.520078 #26013] INFO : Analyzing agents...
W, [2016-01-29T11:42:58.520301 #26013] WARN : Agent
0a509657-9f23-4f01-874e-5e98a53239e7 is not a part of any deployment
W, [2016-01-29T11:42:58.520415 #26013] WARN : Agent
fbb147b8-ae89-47ba-ac50-5392a22930fd is not a part of any deployment
I, [2016-01-29T11:42:58.520508 #26013] INFO : Analyzed 2 agents, took
0.000273774 seconds
W, [2016-01-29T11:43:00.209452 #26013] WARN : Received alert from
unmanaged agent: 42243d80-2f80-47e3-9840-a58d66d0e784
I, [2016-01-29T11:43:00.209909 #26013] INFO : [ALERT] Alert @
2016-01-29 11:43:00 UTC, severity 1: process is not running
W, [2016-01-29T11:43:00.210027 #26013] WARN : (Resurrector) event did
not have deployment, job and index: Alert @ 2016-01-29 11:43:00 UTC,
severity 1: process is not running
I, [2016-01-29T11:43:07.190492 #26013] INFO : Agent
`0a509657-9f23-4f01-874e-5e98a53239e7' shutting down...
I, [2016-01-29T11:43:07.191313 #26013] INFO : Removing agent
0a509657-9f23-4f01-874e-5e98a53239e7 from all deployments...
I, [2016-01-29T11:43:07.222311 #26013] INFO : Agent
`fbb147b8-ae89-47ba-ac50-5392a22930fd' shutting down...
I, [2016-01-29T11:43:07.222733 #26013] INFO : Removing agent
fbb147b8-ae89-47ba-ac50-5392a22930fd from all deployments...
W, [2016-01-29T11:43:15.782356 #26013] WARN : Received alert from
unmanaged agent: fbb147b8-ae89-47ba-ac50-5392a22930fd
I, [2016-01-29T11:43:15.782627 #26013] INFO : [ALERT] Alert @
2016-01-29 11:43:15 UTC, severity 1: process is not running
W, [2016-01-29T11:43:15.782714 #26013] WARN : (Resurrector) event did
not have deployment, job and index: Alert @ 2016-01-29 11:43:15 UTC,
severity 1: process is not running
W, [2016-01-29T11:43:15.832729 #26013] WARN : Received alert from
unmanaged agent: 0a509657-9f23-4f01-874e-5e98a53239e7
I, [2016-01-29T11:43:15.833077 #26013] INFO : [ALERT] Alert @
2016-01-29 11:43:15 UTC, severity 1: process is not running
W, [2016-01-29T11:43:15.833184 #26013] WARN : (Resurrector) event did
not have deployment, job and index: Alert @ 2016-01-29 11:43:15 UTC,
severity 1: process is not running
I, [2016-01-29T11:43:21.924108 #26013] INFO : [ALERT] Alert @
2016-01-29 11:43:21 UTC, severity 4: Finish update deployment for
'redis-warden' against Director '4aa4c1d8-b5b1-4892-944d-d95d66f0529a'
W, [2016-01-29T11:43:21.924923 #26013] WARN : (Resurrector) event did
not have deployment, job and index: Alert @ 2016-01-29 11:43:21 UTC,
severity 4: Finish update deployment for 'redis-warden' against
Director '4aa4c1d8-b5b1-4892-944d-d95d66f0529a'


It looks like the health monitor is not working properly. Can someone
please help me with this?
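
For reference, the health monitor authenticates to the director with the credentials under hm.director_account in the manifest, and these need to match a director user; roughly (a sketch using the standard bosh release property names, with placeholder values):

properties:
  hm:
    director_account:
      user: hm-user          # placeholder
      password: hm-password  # placeholder; must match a director user
  director:
    user_management:
      provider: local
      local:
        users:
        - {name: hm-user, password: hm-password}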


Regards,
Subhankar


Bosh-init aws template not working

Sylvain Gibier
 

Hi,

Following the instructions on this page (http://bosh.io/docs/init-aws.html), I copied the current deployment template and filled in the information, but when I try to deploy it I keep getting:

"for aws_cpi/0 (line 17: #<TemplateEvaluationContext::UnknownProperty: Can't find property 'registry.username'>) (RuntimeError)"

If I check the registry job definition, the property is definitely there.
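
For reference, the registry properties that the aws_cpi template evaluates sit under the bosh job's properties block in the init-aws example, roughly like this (a sketch with placeholder values; other registry keys such as the db block are omitted):

properties:
  registry:
    address: <private-ip>
    host: <private-ip>
    username: admin
    password: <registry-password>
    port: 25777
    http:
      user: admin
      password: <registry-password>
      port: 25777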

So the question is: where can we download a working YAML file for a bosh-init AWS deployment?

bosh release: 250
bosh-aws-cpi: 41

Sylvain


Re: Need help for diego deployment

Kinjal Doshi
 

Hi Amit,

Thanks a lot for your response on this.

I was trying to use the manifest generation scripts to redeploy cf but I
ran into errors during spiff merge as below:

ubuntu(a)ip-172-31-45-52:~/cf-deployment/cf-release$
scripts/generate_deployment_manifest aws ../cf-stub.yml > cf-deployment.yml
2016/01/29 07:49:05 error generating manifest: unresolved nodes:
(( static_ips(1) )) in
/home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml
jobs.[0].networks.[0].static_ips
(( static_ips(5, 6, 15, 16, 17, 18, 19, 20) )) in
/home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml
jobs.[1].networks.[0].static_ips
(( static_ips(27, 28, 29) )) in
/home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml
jobs.[5].networks.[0].static_ips
(( static_ips(10, 25) )) in
/home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml
jobs.[6].networks.[0].static_ips


The public gist pointing to the cf-stub created for this attempt is at:
https://gist.github.com/kinjaldoshi/b0dc004876d2a4615c65

I am not very sure, but I think this has something to do with the way I
configured the subnets. Could you please guide me on the corrections
required here? I know how (( static_ips(27, 28, 29) )) works, but I am not sure
why it is not resolving to the required values.

Another question I have is about the editing instructions at:
http://docs.cloudfoundry.org/deploying/aws/cf-stub.html#editing

For the ccdb and uaadb, as per comments, is it required for me to create a
service and host these DBs as mentioned in the 'Editing Instructions'
column? In that case, where can I find the DDL to create the databases and tables?


Thanks a lot in advance,
Kinjal

On Fri, Jan 29, 2016 at 10:31 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi Kinjal,

The minimal-aws manifest would be quite difficult to augment to get it to
work with Diego. You would need to add a static IP to your private network,
add a resource pool or increase the size of an existing one, add the consul
job, colocate the consul agent with some of the CF jobs, and add a few
configuration properties that aren't in the minimal one (e.g.
loggregator.tls.ca). It's probably simpler to use the manifest
generation scripts to redeploy CF (before deploying Diego).

Use:

* http://docs.cloudfoundry.org/deploying/common/create_a_manifest.html
* http://docs.cloudfoundry.org/deploying/common/deploy.html

Let us know if you run into some difficulties. These documents ask you to
define stubs, which require you to input data from your AWS IaaS setup, and
may not exactly play nicely with the AWS setup described in the minimal-aws
doc, I'm not sure.

Best,
Amit



On Wed, Jan 27, 2016 at 3:17 AM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:

Hi Eric,

Thanks a lot for the detailed response to my query.

I used the minimal-aws.yml configuration (
https://github.com/cloudfoundry/cf-release/tree/v226/example_manifests) to
create my deployment manifest which does not have the consul VMs set up. I
am guessing that the first step would be to change this.

In this case should I use the script generators to generate the CF
deployment manifest and re-deploy cloud foundry, or are there any other
techniques/shorter path for doing this?

Thanks in advance,
Kinjal



On Mon, Jan 25, 2016 at 6:57 AM, Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Kinjal,

The stub I included in-line in my previous email may not have come
through so well for all mail clients, so I've also included it in a public
gist at https://gist.github.com/ematpl/149ac1bac691caae0722.

Thanks,
Eric

On Fri, Jan 22, 2016 at 6:32 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Kinjal,

Thanks for asking: this is an area in which the Diego team is looking
forward to improving documentation and tooling in the near term. For the
time being, here are some more manual instructions:

Assuming you have AWS infrastructure already provisioned for your CF
deployment (VPC, subnets, NAT box, ELBs, etc.), you should need only to add
one or more additional subnets for the VMs in the Diego deployment, and
optionally an ELB for the SSH proxy routing tier (you can also use the
HAproxy in the CF deployment to do the same load-balancing, but you'll need
to give it an Elastic IP). If you're brave, and can coordinate the reserved
sections in the CF and Diego deployment manifests' networking configs
correctly, you could even share the same subnet(s) between the two
deployments.

Once you have those subnets provisioned, you'll need to translate their
properties into the iaas-settings.yml stub that you supply to the
generate-deployment-manifest script in diego-release. Since you're
deploying CF v226, we recommend you use Diego final version v0.1442.0 and
the associated manifest-generation script in that version of the release.
The other stubs should be independent of that iaas-settings one, and should
be pretty much the same as the ones for the BOSH-Lite deployment. You'll
likely want to provide different secrets and credentials in the
property-overrides stub, though, and perhaps different instance counts
depending on the availability needs of your deployment. I've included at
the end of this email a representative iaas-settings.yml file from one of
the Diego team's environments, with any specific identifiers for AWS
entities replaced by PLACEHOLDER values.

As a side note, if you don't already have the consul VMs deployed in
your CF deployment, you'll need to enable them so that the Diego components
can use them to communicate. We recommend you operate an odd number of consul
VMs: 1 if you don't need high availability, and 3 or 5 if you do (as in a
production environment). You can enable them by changing the instance count
on the consul_z1 and consul_z2 jobs in the CF manifest.

After you've customized those stubs and adjusted your CF manifest if
necessary, you can generate the Diego manifest by running something like
the following from your diego-release directory:

$ ./scripts/generate-deployment-manifest \
PATH/TO/MY/CUSTOMIZED-PROPERTY-OVERRIDES.YML \
PATH/TO/MY/CUSTOMIZED-INSTANCE-COUNT-OVERRIDES.YML \
manifest-generation/bosh-lite-stubs/persistent-disk-overrides.yml \
PATH/TO/MY/CUSTOMIZED-IAAS-SETTINGS.YML \
manifest-generation/bosh-lite-stubs/additional-jobs.yml \
manifest-generation/bosh-lite-stubs/release-versions.yml \
PATH/TO/MY/MANIFEST/DIRECTORY \
> PATH/TO/MY/MANIFEST/DIRECTORY/diego.yml

'PATH/TO/MY/MANIFEST/DIRECTORY' should contain your CF manifest in a
file named 'cf.yml'. Also, please note that if you move to CF v227 or
later, which recommends Diego v0.1445.0 or later, the manifest-generation
script has changed to take its stub arguments via flags, instead of as
these positional arguments, and some of the stubs have changed slightly.

We also realize this is currently an obscure and potentially
error-prone process, and the Diego team does have a couple stories queued
up to do soon to provide more information about how to set up Diego on AWS:

- We plan in https://www.pivotaltracker.com/story/show/100909610 to
parametrize, document, and publish the tools and additional templates we
use to provision the AWS environments we use for CI and for our developers'
experiments and investigations, all the way from an empty account to a VPC
with BOSH, CF, and Diego.
- We plan in https://www.pivotaltracker.com/story/show/100909610 to
provide more manual instructions to set up a Diego environment compatible
with the 'minimal-aws' CF deployment manifest and infrastructure settings,
including provisioning any additional infrastructure such as subnets and
translating their information into the stubs for the diego-release
manifest-generation script.

We'll also be eager to adopt and to integrate with the tooling the CF
Infrastructure and CF Release Integration teams will produce at some point
to automate environment bootstrapping and CF manifest generation as much as
possible.

Please let me and the rest of the team know here if you need further
assistance or clarification.

Thanks again,
Eric, CF Runtime Diego PM

*****

Example iaas-settings.yml file, with PLACEHOLDER entries for your
environment's info:

iaas_settings:
  compilation_cloud_properties:
    availability_zone: us-east-1a
    instance_type: c3.large
  resource_pool_cloud_properties:
  - cloud_properties:
      availability_zone: us-east-1a
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z1
  - cloud_properties:
      availability_zone: us-east-1b
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z2
  - cloud_properties:
      availability_zone: us-east-1c
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: brain_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: brain_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: brain_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: cc_bridge_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: cc_bridge_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: cc_bridge_z3
  - cloud_properties:
      availability_zone: us-east-1a
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z1
  - cloud_properties:
      availability_zone: us-east-1b
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z2
  - cloud_properties:
      availability_zone: us-east-1c
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.large
    name: colocated_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.large
    name: colocated_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.large
    name: colocated_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.large
    name: database_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.large
    name: database_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.large
    name: database_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: route_emitter_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: route_emitter_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: route_emitter_z3
  stemcell:
    name: bosh-aws-xen-hvm-ubuntu-trusty-go_agent
    version: latest
  subnet_configs:
  - name: diego1
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-A
      dns:
      - 10.10.0.2
      gateway: 10.10.5.1
      range: 10.10.5.0/24
      reserved:
      - 10.10.5.2 - 10.10.5.9
      static:
      - 10.10.5.10 - 10.10.5.63
  - name: diego2
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-B
      dns:
      - 10.10.0.2
      gateway: 10.10.6.1
      range: 10.10.6.0/24
      reserved:
      - 10.10.6.2 - 10.10.6.9
      static:
      - 10.10.6.10 - 10.10.6.63
  - name: diego3
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-C
      dns:
      - 10.10.0.2
      gateway: 10.10.7.1
      range: 10.10.7.0/24
      reserved:
      - 10.10.7.2 - 10.10.7.9
      static:
      - 10.10.7.10 - 10.10.7.63


On Fri, Jan 22, 2016 at 4:28 AM, Kinjal Doshi <kindoshi(a)gmail.com>
wrote:

Hi,

After deploying CF version 226 on AWS using microbosh, I am trying to
understand how to deploy Diego now to work with this version of CF but have
not been able to figure out much yet. I was able to find steps for
deploying Diego on BOSH-Lite at
https://github.com/cloudfoundry-incubator/diego-release#deploying-diego-to-bosh-lite
but not for BOSH.

Would appreciate some pointers in this direction.

Thanks in advance,
Kinjal


Re: Need help for diego deployment

Amit Kumar Gupta
 

Hi Kinjal,

The minimal-aws manifest would be quite difficult to augment to get it to
work with Diego. You would need to add static IPs to your private network,
add a resource pool or increase the size of an existing one, add the consul
job, colocate the consul agent with some of the CF jobs, and add a few
configuration properties that aren't in the minimal manifest (e.g.
loggregator.tls.ca). It's probably simpler to use the manifest generation
scripts to redeploy CF (before deploying Diego).

Use:

* http://docs.cloudfoundry.org/deploying/common/create_a_manifest.html
* http://docs.cloudfoundry.org/deploying/common/deploy.html

Let us know if you run into any difficulties. These documents ask you to
define stubs, which require you to input data from your AWS IaaS setup, and
they may not play nicely with the AWS setup described in the minimal-aws
doc; I'm not sure.

Best,
Amit

On Wed, Jan 27, 2016 at 3:17 AM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:

Hi Eric,

Thanks a lot for the detailed response to my query.

I used the minimal-aws.yml configuration (
https://github.com/cloudfoundry/cf-release/tree/v226/example_manifests) to
create my deployment manifest which does not have the consul VMs set up. I
am guessing that the first step would be to change this.

In this case should I use the script generators to generate the CF
deployment manifest and re-deploy cloud foundry, or are there any other
techniques/shorter path for doing this?

Thanks in advance,
Kinjal



On Mon, Jan 25, 2016 at 6:57 AM, Eric Malm <emalm(a)pivotal.io> wrote:

Hi, Kinjal,

The stub I included in-line in my previous email may not have come
through so well for all mail clients, so I've also included it in a public
gist at https://gist.github.com/ematpl/149ac1bac691caae0722.

Thanks,
Eric



Re: ERR Failed to stage application: insufficient resources

Matthew Sykes <matthew.sykes@...>
 

There are three resources that you can run out of that would result in that
error: memory, disk, or containers. [1] These resources are validated
during the auction that determines where a task or LRP will land.

Advertised memory and disk resources are configurable [2] in diego and
container limits are defined in garden [3].

If you don't specify memory or disk for diego, the actual system capacity
is advertised for placement without any overcommit. That means if you're
running 8 containers with a 2048M limit on a 16GB cell, you're out of
memory capacity - regardless of how much memory is actually used.

Many deployments can get away with 2 to 3x overcommit on memory and disk
but it really depends on the kinds of apps that are being deployed.

[1]:
https://github.com/cloudfoundry-incubator/rep/blob/320f5e9ff0ba2a7bb1294927cbe31ce5af40c987/resources.go#L63-L68
[2]:
https://github.com/cloudfoundry-incubator/diego-release/blob/develop/jobs/rep/spec#L55-L60
[3]:
https://github.com/cloudfoundry-incubator/garden-linux-release/blob/develop/jobs/garden/spec#L65-L67
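
As a rough worked example: a cell with 15 GB of memory advertises roughly
15000M when no overcommit is configured, so once running containers have
reserved most of that, a 4072M task has nowhere to land even if actual usage
on the cell is low. If you do want to overcommit, the rep job in the
generated Diego manifest would carry properties along these lines (a sketch,
assuming the memory_capacity_mb/disk_capacity_mb properties linked in [2];
the values are illustrative):

properties:
  diego:
    executor:
      memory_capacity_mb: 30720   # advertise roughly 2x the physical 15 GB
      disk_capacity_mb: 100000    # advertise more ephemeral disk than is present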

On Thu, Jan 28, 2016 at 11:52 AM, Stanley Shen <meteorping(a)gmail.com> wrote:

When I push an application to CF with Diego deployed, the push fails with
the error message

ERR Failed to stage application: insufficient resources

The app asks for:
disk_quota: 2048M
memory: 4072M
instances: 1

The Runner VM is c3.2xlarge, which has 8vCPU and 15G memory.
The resource usage of runner VM is:

Filesystem Size Used Avail Use% Mounted on
udev 7.4G 4.0K 7.4G 1% /dev
tmpfs 1.5G 352K 1.5G 1% /run
/dev/xvda1 2.9G 1.3G 1.6G 45% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 7.4G 0 7.4G 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/xvdb2 65G 3.0G 58G 5% /var/vcap/data
tmpfs 1.0M 20K 1004K 2% /var/vcap/data/sys/run
/dev/loop0 120M 1.6M 115M 2% /tmp
none 7.4G 0 7.4G 0% /tmp/warden/cgroup

%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si,
0.0 st
KiB Mem: 15399972 total, 3539764 used, 11860208 free, 19664 buffers
KiB Swap: 15406328 total, 0 used, 15406328 free. 3106884 cached Mem

I didn't find any useful messages related to this issue under
/var/vcap/sys/log.

What could be the reason, and what should I change to fix it?
How does CF determine the resources?
--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: Auto Mysql Database Creation

Shannon Coen
 

Hello Raymond,

Information on the "automagic" service bindings behavior for Java can be
found here:
- http://docs.cloudfoundry.org/buildpacks/java/
- http://docs.cloudfoundry.org/buildpacks/java/spring-service-bindings.html

Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.

On Thu, Jan 28, 2016 at 5:33 PM, Raymond J Steele <raymondsteele(a)gmail.com>
wrote:

Thanks for the reply. Is there an example somewhere? We were under the
impression that CF would create and configure the tables and fields of the
DB automatically based on a config file. Is this not true?


Re: Auto Mysql Database Creation

Raymond J Steele
 

Thanks for the reply. Is there an example somewhere? We were under the impression that CF would create and configure the tables and fields of the DB automatically based on a config file. Is this not true?


Re: Auto Mysql Database Creation

Zach Brown
 

Hi Raymond,

If you've bound your app to the mysql service, then the db connection info
should be available in your application's environment variables.

Use `cf env <app-name>` to see the environment variables. You can then use
this connection info to access the database and create tables, load data, etc.

http://docs.cloudfoundry.org/devguide/deploy-apps/environment-variable.html#VCAP-SERVICES
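
For a MySQL binding, the relevant part of that output looks roughly like the
snippet below; the service label and the exact credential keys depend on
which MySQL broker your marketplace uses, so treat these as placeholders:

$ cf env my-app
...
 "VCAP_SERVICES": {
  "p-mysql": [
   {
    "name": "my-mysql-db",
    "credentials": {
     "hostname": "10.0.0.20",
     "port": 3306,
     "name": "cf_abc123",
     "username": "PLACEHOLDER",
     "password": "PLACEHOLDER",
     "uri": "mysql://PLACEHOLDER:PLACEHOLDER@10.0.0.20:3306/cf_abc123"
    }
   }
  ]
 }
...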

On Thu, Jan 28, 2016 at 3:17 PM, Raymond J Steele <raymondsteele(a)gmail.com>
wrote:

I am under the impression that cloud foundry can stand up and configure a
database for me. I've added the mysql service to my app, and performed a cf
push, but the database is not created. Is there a config file that I need
to place in order for cloud foundry to know my fields and db connection
info? We are using a standard java web app and the CF maven plugin.

Thanks


--

*Zach Brown* | Product Manager

650-954-0427 - mobile

zbrown(a)pivotal.io

<http://pivotal.io>


Re: etcd fails to start when trying to deploy diego with

Amit Kumar Gupta
 

Glad to hear you're back on track Martin!

Best,
Amit

On Thu, Jan 28, 2016 at 4:13 AM, Martin Jackson <martin(a)uncommonsense-uk.com>
wrote:
Thanks Amit,

You've put me back on track. I had confused the main CF etcd cluster with
the new Diego one.

Regards

Martin


Auto Mysql Database Creation

Raymond J Steele
 

I am under the impression that cloud foundry can stand up and configure a database for me. I've added the mysql service to my app, and performed a cf push, but the database is not created. Is there a config file that I need to place in order for cloud foundry to know my fields and db connection info? We are using a standard java web app and the CF maven plugin.

Thanks


Re: uaa saml to ping-federate broke when upgrading from cf-226 to cf-227

Sree Tummidi
 

Hi Rich,

This has been fixed in the CF release v229 & v230
It's broken in CF Release v227 & v228

-Sree

On Thu, Jan 28, 2016 at 8:52 AM, Rich Wohlstadter <lethwin(a)gmail.com> wrote:

Thanks Sree,

We tried to send the newly generated SP metadata over to Ping again and Ping
is complaining that it has an invalid signature. So, is the UAA release
that is in cf v228 the one with the mismatched key pairs? Just trying to
understand whether it's supposed to be fixed in UAA 2.7.3, whether we need to
wait for a newer release to get this fixed, or whether something else is going on.

-Rich


Re: uaa saml to ping-federate broke when upgrading from cf-226 to cf-227

Rich Wohlstadter
 

Thanks Sree,

We tried to send the newly generated SP metadata over to Ping again and Ping is complaining that it has an invalid signature. So, is the UAA release that is in cf v228 the one with the mismatched key pairs? Just trying to understand whether it's supposed to be fixed in UAA 2.7.3, whether we need to wait for a newer release to get this fixed, or whether something else is going on.

-Rich


ERR Failed to stage application: insufficient resources

Stanley Shen <meteorping@...>
 

When I push an application to CF with Diego deployed, the push fails with the error message

ERR Failed to stage application: insufficient resources

The app asks for:
disk_quota: 2048M
memory: 4072M
instances: 1

The Runner VM is c3.2xlarge, which has 8vCPU and 15G memory.
The resource usage of runner VM is:

Filesystem Size Used Avail Use% Mounted on
udev 7.4G 4.0K 7.4G 1% /dev
tmpfs 1.5G 352K 1.5G 1% /run
/dev/xvda1 2.9G 1.3G 1.6G 45% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 0 5.0M 0% /run/lock
none 7.4G 0 7.4G 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/xvdb2 65G 3.0G 58G 5% /var/vcap/data
tmpfs 1.0M 20K 1004K 2% /var/vcap/data/sys/run
/dev/loop0 120M 1.6M 115M 2% /tmp
none 7.4G 0 7.4G 0% /tmp/warden/cgroup

%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 15399972 total, 3539764 used, 11860208 free, 19664 buffers
KiB Swap: 15406328 total, 0 used, 15406328 free. 3106884 cached Mem

I didn't find any useful messages related to this issue under /var/vcap/sys/log.

What could be the reason, and what should I change to fix it?
How does CF determine the resources?


Re: uaa saml to ping-federate broke when upgrading from cf-226 to cf-227

Rich Wohlstadter
 

Thanks Sree,

We tried to send the new SP metadata over and Ping is complaining that it has an invalid signature. So is the current release of UAA in cf v228 the one that has the bad/invalid signature? Just trying to understand whether we need to wait for a newer release before the mismatched public/private key pair is fixed.

Rich


Re: Does standard service-registry service available in PWS?

Logan Lee
 

PWS is a commercial product offered by Pivotal. You can contact them via
their support site or you can connect with me directly.

This list is for open-source Cloud Foundry project-related topics.

On Jan 27, 2016, at 11:10 PM, Rajesh Bhojwani <rajesh.bhojwani(a)gmail.com>
wrote:

Hi,
Is the standard service-registry service available in the free version of
PWS?
I checked the marketplace and could not find it. Is there a way to install it
there?

Please help if you have any ideas. I want to try Spring Cloud Services in PWS.


Re: uaa saml to ping-federate broke when upgrading from cf-226 to cf-227

Sree Tummidi
 

Hi Rich,

Please see my comments inline

1. When using cf login --sso, prompt no longer points to proper url but
defaults to localhost: One Time Code ( Get one at
http://localhost:8080/uaa/passcode )

We are addressing this issue as part of
https://www.pivotaltracker.com/story/show/112592967

2. When comparing the cf IP metadata, it differs now in the SignatureValue
field

We fixed an issue with a mismatched public/private key pair which was
causing an invalid signature to be generated.
Now the key setup is valid. Yes, you would need to change the Ping side of
the configuration and update the SP metadata.
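
If it helps, you can re-download the UAA's current SP metadata to hand to the
Ping administrators from the login endpoint (at the /saml/metadata path,
assuming a default configuration), for example:

$ curl https://login.YOUR-SYSTEM-DOMAIN/saml/metadata > uaa-sp-metadata.xml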

Thanks,
Sree Tummidi
Sr. Product Manager
Identity - Pivotal Cloud Foundry

On Thu, Jan 28, 2016 at 6:41 AM, Rich Wohlstadter <lethwin(a)gmail.com> wrote:

Hi There,

We have Cloud Foundry UAA set up to authenticate users to our identity
provider, PingFederate. After we upgraded to cf-227 this functionality
broke. Are there any known issues with the SAML setup since you moved over to
the uaa-release GitHub repository? Some of the symptoms we see:

1. When using cf login --sso, prompt no longer points to proper url but
defaults to localhost: One Time Code ( Get one at
http://localhost:8080/uaa/passcode )
2. When comparing the cf IP metadata, it differs now in the SignatureValue
field

Wondering if we need to set the ping info back up due to a change with
this new release?

Here is the config we use for saml (stripped sensitive info):

saml:
entity_base_url: login.cf-np.threega.com
entityid: login
keystore_key: selfsigned
keystore_name: samlKeystore.jks
keystore_password: UGN9RbgNaMwp4Dnn
providers:
ping-federate:
assertionConsumerIndex: 0
idpMetadata: |+
<md:EntityDescriptor ID="qhotIfnybstUv02tsh8w2jvpJxF"
cacheDuration="PT1440M" entityID="company-t"
xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"><ds:Signature xmlns:ds="
http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="
http://www.w3.org/2001/10/xml-exc-c14n#"/>
<ds:SignatureMethod Algorithm="
http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>
<ds:Reference URI="#qhotIfnybstUv02tsh8w2jvpJxF">
<ds:Transforms>
<ds:Transform Algorithm="
http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<ds:Transform Algorithm="
http://www.w3.org/2001/10/xml-exc-c14n#"/>
</ds:Transforms>
<ds:DigestMethod Algorithm="
http://www.w3.org/2001/04/xmlenc#sha256"/>

<ds:DigestValue>zzTEqNenEtq85owsS83D+YhJ3cU0Qfgr1bOWxoLssRI=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>
our_signature
</ds:SignatureValue>
<ds:KeyInfo>
<ds:X509Data>
<ds:X509Certificate>
our_certificate
</ds:X509Certificate>
</ds:X509Data>
<ds:KeyValue>
<ds:RSAKeyValue>
<ds:Modulus>

lZ4ZUFzYXubIUKmMrw+maVTrPGikviTfsJWAiPhuSL6qGnRVLorTTeUr/ynS++TdLpVkBLz0hqD/

yQvd1V3sgK6X22NGikLcmIrHRX69DLqB7IdC9HFlpz3yVWK0lIChVlrqgLX7/wEQpYwWLnnLXjz4

J3ce0mQ4Y4kmiBvhciqNEoqPK/g9wrkZKzMhLk3/CMtR/hDVurG/s+bnmYhbNb3pmHYBu5KnqmrJ

xHzxsxnBRF6V8fEXlmI7pqu9SV21p7dEW1VYi5p99lnFPkL1ic+dF4iIIWtggbq4Ue3qdl1bUoc8
y+iG5fRPSQJIGkmiAfQdTdxe8zc384gmf6IenQ==
</ds:Modulus>
<ds:Exponent>AQAB</ds:Exponent>
</ds:RSAKeyValue>
</ds:KeyValue>
</ds:KeyInfo>
</ds:Signature><md:IDPSSODescriptor
protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol"
WantAuthnRequestsSigned="true"><md:KeyDescriptor use="signing"><ds:KeyInfo
xmlns:ds="http://www.w3.org/2000/09/xmldsig#
"><ds:X509Data><ds:X509Certificate>MIIDUjCCAjqgAwIBAgIGAU1EsJejMA0GCSqGSIb3DQEBBQUAMGoxCzAJBgNVBAYTAlVTMREwDwYDVQQIEwhNaXNzb3VyaTESMBAGA1UEBxMJU3QuIExvdWlzMREwDwYDVQQKEwhNb25zYW50bzEMMAoGA1UECxMDRUlTMRMwEQYDVQQDEwpNb25zYW50by10MB4XDTE1MDUxMTIwMzUzM1oXDTE3MDUxMDIwMzUzM1owajELMAkGA1UEBhMCVVMxETAPBgNVBAgTCE1pc3NvdXJpMRIwEAYDVQQHEwlTdC4gTG91aXMxETAPBgNVBAoTCE1vbnNhbnRvMQwwCgYDVQQLEwNFSVMxEzARBgNVBAMTCk1vbnNhbnRvLXQwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCVnhlQXNhe5shQqYyvD6ZpVOs8aKS+JN+wlYCI+G5IvqoadFUuitNN5Sv/KdL75N0ulWQEvPSGoP/JC93VXeyArpfbY0aKQtyYisdFfr0MuoHsh0L0cWWnPfJVYrSUgKFWWuqAtfv/ARCljBYuectePPgndx7SZDhjiSaIG+FyKo0Sio8r+D3CuRkrMyEuTf8Iy1H+ENW6sb+z5ueZiFs1vemYdgG7kqeqasnEfPGzGcFEXpXx8ReWYjumq71JXbWnt0RbVViLmn32WcU+QvWJz50XiIgha2CBurhR7ep2XV
tShzzL6I
bl9E9JAkgaSaIB9B1N3F7zNzfziCZ/oh6dAgMBAAEwDQYJKoZIhvcNAQEFBQADggEBAG/MyUQ05U8Liqq85+xTY7WcUGiUAXv+/cSS7OLasoblDQ0iBxcpSkWvkGTVqR73QTRssIfnokG9GGJsSdyIcZzWoCLg2iTaJjRFEuI5oP9sy3QPeK66MeIdkkSGeEuHfNKloSoApxxocuDZuGTHCuU7dqXZe49hf1qiSvLbZHGZuksu4jBPN2qWqwe+v2TFM3AraakAwPbcYqir7c3nWAWkr4h/6KlmZwEo9gAFsMliUM0h9+AHVLyjRQfMlPeOP1N7zpNnMYr0JKJ9B7Rs2ebtCoHLLsyOVmiDiVJDRHVv04GBDSMXIkGcKY7ULLR9WiqMKfnkamGs1QOrQTIJZhU=</ds:X509Certificate></ds:X509Data></ds:KeyInfo></md:KeyDescriptor><md:ArtifactResolutionService
index="0" Location="https://test.amp.company.com/idp/ARS.ssaml2"
Binding="urn:oasis:names:tc:SAML:2.0:bindings:SOAP"
isDefault="true"/><md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified</md:NameIDFormat><md:SingleSignOnService
Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="
https://test.amp.monsanto.com/idp/SSO.saml2"/><md:SingleSignOnService
Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="
https://test.amp.monsanto.com/idp/SSO.saml2"/><md:SingleSignOnService
Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Artifact" Location="
https://test.amp.monsanto.com/idp/SSO.saml2"/><md:SingleSignOnService
Binding="urn:oasis:names:tc:SAML:2.0:bindings:SOAP" Location="
https://test.amp.monsanto.com/idp/SSO.saml2"/></md:IDPSSODescriptor><md:ContactPerson
contactType="administrative"><md:Company>company</md:Company><md:GivenName>AMP</md:GivenName><md:SurName>Team</md:SurName><md:EmailAddress>
DL-AMPSUPPORT(a)company.com
</md:EmailAddress><md:TelephoneNumber>xxx-xxx-xxxx</md:TelephoneNumber></md:ContactPerson></md:EntityDescriptor>
linkText: Ping Identity
metadataTrustCheck: true
nameID: urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress
showSamlLoginLink: true
