
Re: Reg cant find template : metron_agent

Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM@Cisco) <ngnanase at cisco.com...>
 

Hi Rohit

I tried to generate the manifest, but it resulted in the following errors:
We are using floating IPs to assign to the VMs.
Most of the errors point to the static IPs.
Can you please help with how to fix this issue?

root(a)vms-inception-vm-2:/opt/cisco/vms-installer/cf-release/scripts# ./generate_deployment_manifest openstack /opt/cisco/vms-installer/tenant-vmsdev5control/cf-deploy/cf-230.yml > cf-deployment.yml
2016/02/17 17:08:19 error generating manifest: unresolved nodes:
(( merge )) in /opt/cisco/vms-installer/cf-release/templates/cf-infrastructure-openstack.yml properties.cc (properties.cc)
(( jobs.postgres_z1.networks.cf1.static_ips.[0] )) in dynaml properties.ccdb.address ()
(( merge )) in /opt/cisco/vms-installer/cf-release/templates/cf-infrastructure-openstack.yml properties.ccdb.roles.[0].password (properties.ccdb.roles.ccadmin.password)
(( jobs.postgres_z1.networks.cf1.static_ips.[0] )) in dynaml properties.databases.address ()
(( merge )) in /opt/cisco/vms-installer/cf-release/templates/cf-infrastructure-openstack.yml properties.databases.roles.[0].password (properties.databases.roles.ccadmin.password)
(( merge )) in /opt/cisco/vms-installer/cf-release/templates/cf-infrastructure-openstack.yml properties.databases.roles.[1].password (properties.databases.roles.uaaadmin.password)
(( merge )) in /opt/cisco/vms-installer/cf-release/templates/cf-infrastructure-openstack.yml properties.uaa.clients (properties.uaa.clients)
(( merge )) in /opt/cisco/vms-installer/cf-release/templates/cf-infrastructure-openstack.yml properties.uaadb.roles.[0].password (properties.uaadb.roles.uaaadmin.password)
(( jobs.postgres_z1.networks.cf1.static_ips.[0] )) in dynaml properties.uaadb.address ()
(( meta.floating_static_ips )) in dynaml jobs.[0].networks.[0].static_ips ()
(( static_ips(0) )) in /opt/cisco/vms-installer/cf-release/templates/cf-infrastructure-openstack.yml jobs.[0].networks.[1].static_ips ()
(( static_ips(2) )) in /opt/cisco/vms-installer/cf-release/templates/cf-infrastructure-openstack.yml jobs.[1].networks.[0].static_ips ()
(( static_ips(3) )) in /opt/cisco/vms-installer/cf-release/templates/cf-infrastructure-openstack.yml jobs.[2].networks.[0].static_ips ()
(( static_ips(4) )) in /opt/cisco/vms-installer/cf-release/templates/cf-infrastructure-openstack.yml jobs.[3].networks.[0].static_ips ()
(( static_ips(5) )) in /opt/cisco/vms-installer/cf-release/templates/cf-infrastructure-openstack.yml jobs.[4].networks.[0].static_ips ()
(( static_ips(8, 9, 10) )) in /opt/cisco/vms-installer/cf-release/templates/cf-infrastructure-openstack.yml jobs.[8].networks.[0].static_ips ()
(( static_ips(12, 13, 14) )) in /opt/cisco/vms-installer/cf-release/templates/cf-infrastructure-openstack.yml jobs.[9].networks.[0].static_ips ()
(( merge )) in /opt/cisco/vms-installer/cf-release/templates/cf-infrastructure-openstack.yml meta.floating_static_ips (meta.floating_static_ips)
(( merge )) in /opt/cisco/vms-installer/cf-release/templates/cf-infrastructure-openstack.yml networks (networks)
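
Each "(( merge ))" and "(( static_ips(...) ))" node listed above names a key that the stub file passed to generate_deployment_manifest is expected to supply. As a rough sketch only, a stub covering those keys might look like the fragment below; every name, IP, and password here is an assumed placeholder, not a value from this deployment:

meta:
  floating_static_ips:
  - 203.0.113.10                  # assumed floating IP reserved in OpenStack

networks:
- name: cf1
  subnets:
  - range: 10.0.0.0/24            # assumed internal subnet
    static:
    - 10.0.0.100 - 10.0.0.130     # assumed static range consumed by static_ips(...)

properties:
  cc: {}                          # assumed; real stubs put Cloud Controller overrides here
  ccdb:
    roles:
    - name: ccadmin
      password: REPLACE-ME        # assumed
  databases:
    roles:
    - name: ccadmin
      password: REPLACE-ME        # assumed
    - name: uaaadmin
      password: REPLACE-ME        # assumed
  uaa:
    clients: {}                   # assumed; real stubs define UAA client entries here
  uaadb:
    roles:
    - name: uaaadmin
      password: REPLACE-ME        # assumed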

From: Rohit Kumar [mailto:rokumar(a)pivotal.io]
Sent: Wednesday, February 17, 2016 8:31 AM
To: Jayarajan Ramapurath Kozhummal (jayark) <jayark(a)cisco.com>
Cc: Discussions about Cloud Foundry projects and the system overall. <cf-dev(a)lists.cloudfoundry.org>; Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM at Cisco) <ngnanase(a)cisco.com>
Subject: Re: [cf-dev] Reg cant find template : metron_agent

Hi Jayaraj,

Yes, you need to generate a new manifest for each upgrade of a CF version. Between versions, properties might be added, jobs may change, new resource pools may be required, older ones may be removed, and so on, so the manifest has to be regenerated for every version upgrade.

One way a lot of Cloud Foundry teams manage this is by having a stub file which contains properties and customizations specific to their CF installation. This stub gets merged with the other CF templates to generate the actual manifest YAML file.

To see a good example of this, look at the bosh-lite stub and the template generation code:

https://github.com/cloudfoundry/cf-release/tree/master/bosh-lite/stubs
https://github.com/cloudfoundry/cf-release/blob/master/scripts/generate-bosh-lite-dev-manifest

For other serious installations (AWS, OpenStack, vSphere, etc.) we have the generate_deployment_manifest script, which lets you specify the IaaS and then the path to your stub. This is the recommended way of generating your manifest, and you should re-generate the manifest for each new release version.
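
The invocation shape is, for example (file names assumed; the script takes the IaaS name followed by one or more stub files):

./scripts/generate_deployment_manifest openstack \
    stubs/cf-stub.yml \
    stubs/overrides.yml > cf-deployment.yml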

Rohit

On Tue, Feb 16, 2016 at 6:31 PM, Jayarajan Ramapurath Kozhummal (jayark) <jayark(a)cisco.com<mailto:jayark(a)cisco.com>> wrote:
Hi Rohit,

We did not generate a fresh manifest for CF version 230. We had a manifest file which was used for deploying CF version 205.
Do we need to generate the manifest file for each CF release? Can we generate the new manifest file for CF version 230 using scripts/generate_deployment_manifest and copy-paste the missing contents into our deployment manifest file?

Thanks
Jayaraj

From: Rohit Kumar <rokumar(a)pivotal.io<mailto:rokumar(a)pivotal.io>>
Date: Tuesday, February 16, 2016 at 5:25 PM

To: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com<mailto:jayark(a)cisco.com>>
Cc: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>>, "Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM at Cisco)" <ngnanase(a)cisco.com<mailto:ngnanase(a)cisco.com>>
Subject: Re: [cf-dev] Reg cant find template : metron_agent

Those properties are used to specify the etcd machines to loggregator. They typically get auto-filled by spiff, and you don't need to specify them explicitly in the properties section [1]. Did you not generate your manifest with the help of the `scripts/generate_deployment_manifest` script in cf-release?

Rohit

[1]: https://github.com/cloudfoundry/cf-release/blob/develop/templates/cf-jobs.yml#L768-L769
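
In a generated manifest that property ends up looking roughly like the fragment below; the address is an assumed placeholder for whatever static IP spiff resolves for the etcd job:

properties:
  loggregator:
    etcd:
      machines:
      - 10.0.0.105    # assumed etcd job static IP, filled in by spiff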

On Tue, Feb 16, 2016 at 5:55 PM, Jayarajan Ramapurath Kozhummal (jayark) <jayark(a)cisco.com<mailto:jayark(a)cisco.com>> wrote:
Hi Rohit,

I added the cloud_controller IP for the property below, and I am now running into the following exception:


Started preparing configuration > Binding configuration. Failed: Error filling in template `etcd_bosh_utils.sh.erb' for `cloud_controller/0' (line 31: Can't find property `["etcd.cluster"]') (00:00:01)



Error 100: Error filling in template `etcd_bosh_utils.sh.erb' for `cloud_controller/0' (line 31: Can't find property `["etcd.cluster"]')



Task 55 error



For a more detailed error report, run: bosh task 55 --debug





Is there a reference which I can use to find what values to be filled for the missing properties?



Thanks

Jayaraj




From: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com<mailto:jayark(a)cisco.com>>
Date: Tuesday, February 16, 2016 at 4:29 PM

To: Rohit Kumar <rokumar(a)pivotal.io<mailto:rokumar(a)pivotal.io>>
Cc: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>>, "Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM at Cisco)" <ngnanase(a)cisco.com<mailto:ngnanase(a)cisco.com>>
Subject: Re: [cf-dev] Reg cant find template : metron_agent

Thanks Rohit!

That worked for the metron_agent issue. Now running into the following issue after filling in the property you mentioned.


Started preparing configuration > Binding configuration. Failed: Error filling in template `metron_agent.json.erb' for `nfs/0' (line 7: Can't find property `["loggregator.etcd.machines"]') (00:00:00)



Error 100: Error filling in template `metron_agent.json.erb' for `nfs/0' (line 7: Can't find property `["loggregator.etcd.machines"]')



Task 54 error



For a more detailed error report, run: bosh task 54 --debug





Nithiya was also running into the same issues. The error posted is after applying some workarounds found on the web.



Regards

Jayaraj



From: Rohit Kumar <rokumar(a)pivotal.io<mailto:rokumar(a)pivotal.io>>
Date: Tuesday, February 16, 2016 at 3:53 PM
To: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com<mailto:jayark(a)cisco.com>>
Cc: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>>, "Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM at Cisco)" <ngnanase(a)cisco.com<mailto:ngnanase(a)cisco.com>>
Subject: Re: [cf-dev] Reg cant find template : metron_agent

OK cool. The error which you are getting now is different from what you had originally posted. You need to include the following property in your deployment and it should get fixed:

properties:
  metron_agent:
    deployment: <name of your deployment>


On Tue, Feb 16, 2016 at 3:41 PM, Jayarajan Ramapurath Kozhummal (jayark) <jayark(a)cisco.com<mailto:jayark(a)cisco.com>> wrote:
Hi Rohit,

Please see the output:-


root(a)vms-inception-vm-2:/opt/cisco/vms-installer/tenant-vmsdev5control/cf-deploy# bosh -n deploy

Acting as user 'admin' on deployment 'cf-vmsdev5control' on 'vms-installdev5-control-66380'

Getting deployment properties from director...

Unable to get properties list from director, trying without it...

Cannot get current deployment information from director, possibly a new deployment



Deploying

---------



Director task 51

Started unknown

Started unknown > Binding deployment. Done (00:00:00)



Started preparing deployment

Started preparing deployment > Binding releases. Done (00:00:00)

Started preparing deployment > Binding existing deployment. Done (00:00:00)

Started preparing deployment > Binding resource pools. Done (00:00:00)

Started preparing deployment > Binding stemcells. Done (00:00:00)

Started preparing deployment > Binding templates. Done (00:00:00)

Started preparing deployment > Binding properties. Done (00:00:00)

Started preparing deployment > Binding unallocated VMs. Done (00:00:01)

Started preparing deployment > Binding instance networks. Done (00:00:00)



Started preparing package compilation > Finding packages to compile. Done (00:00:00)



Started compiling packages

Started compiling packages > rootfs_cflinuxfs2/3232d35298f26bcfb153d964e329fcb42c77051f

Started compiling packages > haproxy/f5d89b125a66892628a8cd61d23be7f9b0d31171

Started compiling packages > uaa/0e15122de61644748d111b619aff4487726f8378

Started compiling packages > golang1.5/ef3267f8998cebcdc86a477126e79e465753aaf1

Started compiling packages > uaa_utils/8ee843cd3e50520398f28541c513ac0d16b00877

Started compiling packages > postgres-9.4.5/06a51985e0701707b27d45c7a5757171b5cefb8c

Started compiling packages > buildpack_binary/e0c8736b073d83c2459519851b5736c288311d92

Started compiling packages > buildpack_staticfile/47c22ec219ca96215c509572f7a59aae55e45535

Started compiling packages > buildpack_php/6dae2301648646cd8ed544af53ff34be0497efe0

Started compiling packages > buildpack_python/a5d5eeb5e255ceb3282424a28c74a4bccd3316e9

Done compiling packages > uaa/0e15122de61644748d111b619aff4487726f8378 (00:02:52)

Started compiling packages > buildpack_go/08a35c7097417bedf06812c7ac8931d950dfae60

Done compiling packages > buildpack_php/6dae2301648646cd8ed544af53ff34be0497efe0 (00:03:02)

Started compiling packages > buildpack_nodejs/da88c1de3e899a27d33c5a8d6e08e151b42a1aa8. Done (00:00:05)

Started compiling packages > buildpack_ruby/d37b44b37b7c95077fd9698879b78561ac0aaf52

Done compiling packages > buildpack_go/08a35c7097417bedf06812c7ac8931d950dfae60 (00:00:37)

Started compiling packages > buildpack_java_offline/f6b99f87508400e9d75926c1546e8d08177072ef

Done compiling packages > buildpack_ruby/d37b44b37b7c95077fd9698879b78561ac0aaf52 (00:00:26)

Started compiling packages > buildpack_java/0dd2a9074cdfee66f56d6a9e958c2b9e1fa9337c. Done (00:00:02)

Started compiling packages > nginx/bf3af6163e13887aacd230bbbc5eff90213ac6af

Done compiling packages > buildpack_java_offline/f6b99f87508400e9d75926c1546e8d08177072ef (00:00:22)

Started compiling packages > ruby-2.2.4/dd1b827e6ea0ca7e9fcb95d08ae81fb82f035261

Done compiling packages > nginx/bf3af6163e13887aacd230bbbc5eff90213ac6af (00:00:33)

Started compiling packages > libpq/14d0b1290ea238243d04dd46d1a9635e6e9812bb

Done compiling packages > buildpack_python/a5d5eeb5e255ceb3282424a28c74a4bccd3316e9 (00:04:10)

Started compiling packages > libmariadb/dcc142dd0798ae557193f08bc46e9bdd97e4c6f3. Done (00:00:02)

Started compiling packages > ruby-2.1.8/b5bf6af82bae947ad255e426001308acfc2244ee

Done compiling packages > uaa_utils/8ee843cd3e50520398f28541c513ac0d16b00877 (00:04:25)

Started compiling packages > etcd-common/a5492fb0ad41a80d2fa083172c0430073213a296

Done compiling packages > libpq/14d0b1290ea238243d04dd46d1a9635e6e9812bb (00:00:18)

Started compiling packages > golang1.4/714698bc352d2a1dbe321376f0676037568147bb

Done compiling packages > etcd-common/a5492fb0ad41a80d2fa083172c0430073213a296 (00:00:02)

Started compiling packages > loggregator_common/e401816a4748292163679fafcbd8f818ed8154a5

Done compiling packages > haproxy/f5d89b125a66892628a8cd61d23be7f9b0d31171 (00:04:28)

Started compiling packages > debian_nfs_server/aac05f22582b2f9faa6840da056084ed15772594

Done compiling packages > loggregator_common/e401816a4748292163679fafcbd8f818ed8154a5 (00:00:03)

Started compiling packages > common/e401816a4748292163679fafcbd8f818ed8154a5

Done compiling packages > debian_nfs_server/aac05f22582b2f9faa6840da056084ed15772594 (00:00:04)

Done compiling packages > common/e401816a4748292163679fafcbd8f818ed8154a5 (00:00:02)

Done compiling packages > golang1.4/714698bc352d2a1dbe321376f0676037568147bb (00:00:15)

Started compiling packages > dea_logging_agent/3179906f4e18fa39bf8baa60c92ee51fb7ce4e22

Started compiling packages > loggregator_trafficcontroller/612624b9a615310d1d87053101c0f64b87038ab5

Started compiling packages > doppler/4abad345222d75f714fc3b7524c87b1829dcd187

Done compiling packages > dea_logging_agent/3179906f4e18fa39bf8baa60c92ee51fb7ce4e22 (00:00:09)

Started compiling packages > gnatsd/0242557ff8fc93c42ff54aa642c524b17ce203eb

Done compiling packages > buildpack_staticfile/47c22ec219ca96215c509572f7a59aae55e45535 (00:04:51)

Started compiling packages > etcd_metrics_server/fc0f1835cd8e95ca86cf3851645486531ae4f12b

Done compiling packages > loggregator_trafficcontroller/612624b9a615310d1d87053101c0f64b87038ab5 (00:00:14)

Started compiling packages > etcd/d43feb5cdad0809d109df0afe6cd3c315dc94a61

Done compiling packages > doppler/4abad345222d75f714fc3b7524c87b1829dcd187 (00:00:17)

Started compiling packages > metron_agent/4dfd17660ea7654bcdfbb81a15cef3b86ac22aab

Done compiling packages > gnatsd/0242557ff8fc93c42ff54aa642c524b17ce203eb (00:00:08)

Done compiling packages > etcd_metrics_server/fc0f1835cd8e95ca86cf3851645486531ae4f12b (00:00:11)

Done compiling packages > metron_agent/4dfd17660ea7654bcdfbb81a15cef3b86ac22aab (00:00:14)

Done compiling packages > etcd/d43feb5cdad0809d109df0afe6cd3c315dc94a61 (00:00:30)

Done compiling packages > rootfs_cflinuxfs2/3232d35298f26bcfb153d964e329fcb42c77051f (00:05:47)

Done compiling packages > buildpack_binary/e0c8736b073d83c2459519851b5736c288311d92 (00:07:34)

Done compiling packages > golang1.5/ef3267f8998cebcdc86a477126e79e465753aaf1 (00:07:38)

Started compiling packages > gorouter/cbbf5f8f71a32cf205d910fe86ef3e5eaa1897f5

Started compiling packages > hm9000/082bbefc4bf586e9195ce94d21dfc4a1e7c6798f

Done compiling packages > ruby-2.2.4/dd1b827e6ea0ca7e9fcb95d08ae81fb82f035261 (00:03:58)

Started compiling packages > dea_next/6193e865f0a87f054d550f0e8c6ff3173e216e0e

Started compiling packages > warden/0fc9616fdc0263f6093a58d9d4da5bb47e337ec2

Started compiling packages > nginx_newrelic_plugin/3bf72c30bcda79a44863a2d1a6f932fe0a5486a5

Started compiling packages > cloud_controller_ng/9ca58fcb7c289431af16f161078d22ada352ff20

Done compiling packages > nginx_newrelic_plugin/3bf72c30bcda79a44863a2d1a6f932fe0a5486a5 (00:00:12)

Done compiling packages > gorouter/cbbf5f8f71a32cf205d910fe86ef3e5eaa1897f5 (00:00:29)

Done compiling packages > hm9000/082bbefc4bf586e9195ce94d21dfc4a1e7c6798f (00:00:29)

Done compiling packages > ruby-2.1.8/b5bf6af82bae947ad255e426001308acfc2244ee (00:04:00)

Started compiling packages > collector/9f8dfbcbcfffb124820327ad2ad4fee35e51d236

Started compiling packages > nats/2230720d1021af6c2c90cd7f3983264ab351043b

Done compiling packages > warden/0fc9616fdc0263f6093a58d9d4da5bb47e337ec2 (00:00:36)

Done compiling packages > nats/2230720d1021af6c2c90cd7f3983264ab351043b (00:00:23)

Done compiling packages > collector/9f8dfbcbcfffb124820327ad2ad4fee35e51d236 (00:00:39)

Done compiling packages > dea_next/6193e865f0a87f054d550f0e8c6ff3173e216e0e (00:01:51)

Done compiling packages > postgres-9.4.5/06a51985e0701707b27d45c7a5757171b5cefb8c (00:09:43)

Done compiling packages > cloud_controller_ng/9ca58fcb7c289431af16f161078d22ada352ff20 (00:03:09)

Done compiling packages (00:10:58)



Started preparing dns > Binding DNS. Done (00:00:00)



Started creating bound missing vms

Started creating bound missing vms > small/0

Started creating bound missing vms > small/1

Started creating bound missing vms > small/2

Started creating bound missing vms > medium/0

Started creating bound missing vms > medium/1

Started creating bound missing vms > medium/2

Started creating bound missing vms > medium/3

Started creating bound missing vms > medium/4

Started creating bound missing vms > medium/5

Started creating bound missing vms > large/0

Started creating bound missing vms > large/1

Started creating bound missing vms > large/2

Started creating bound missing vms > large/3

Started creating bound missing vms > large/4

Started creating bound missing vms > large/5

Started creating bound missing vms > large/6

Started creating bound missing vms > large/7

Started creating bound missing vms > xlarge/0

Started creating bound missing vms > xlarge/1

Done creating bound missing vms > medium/2 (00:01:53)

Done creating bound missing vms > large/4 (00:01:55)

Done creating bound missing vms > medium/3 (00:01:56)

Done creating bound missing vms > xlarge/0 (00:01:56)

Done creating bound missing vms > large/1 (00:01:59)

Done creating bound missing vms > medium/0 (00:02:02)

Done creating bound missing vms > medium/4 (00:02:05)

Done creating bound missing vms > large/6 (00:02:18)

Done creating bound missing vms > large/2 (00:02:20)

Done creating bound missing vms > large/3 (00:02:20)

Done creating bound missing vms > large/5 (00:02:20)

Done creating bound missing vms > medium/5 (00:02:25)

Done creating bound missing vms > xlarge/1 (00:02:29)

Done creating bound missing vms > large/7 (00:02:31)

Done creating bound missing vms > medium/1 (00:02:33)

Done creating bound missing vms > large/0 (00:02:43)

Done creating bound missing vms > small/2 (00:02:51)

Done creating bound missing vms > small/1 (00:03:25)

Done creating bound missing vms > small/0 (00:03:31)

Done creating bound missing vms (00:03:31)



Started binding instance vms

Started binding instance vms > nfs/0

Started binding instance vms > cloud_controller/0

Started binding instance vms > loggregator/0

Started binding instance vms > loggregator_trafficcontroller/0

Started binding instance vms > api_worker/0

Started binding instance vms > dea-spare/0

Started binding instance vms > dea-spare/1

Started binding instance vms > router/0

Started binding instance vms > router/1

Started binding instance vms > haproxy/0

Started binding instance vms > cassandra/0

Started binding instance vms > cassandra_seed/0

Started binding instance vms > zookeeper/0

Started binding instance vms > redis/0

Started binding instance vms > zookeeper/1

Started binding instance vms > zookeeper/2

Started binding instance vms > kafka/1

Started binding instance vms > kafka/0

Started binding instance vms > kafka/2

Done binding instance vms > loggregator_trafficcontroller/0 (00:00:00)

Done binding instance vms > haproxy/0 (00:00:01)

Done binding instance vms > router/1 (00:00:01)

Done binding instance vms > cassandra/0 (00:00:01)

Done binding instance vms > api_worker/0 (00:00:01)

Done binding instance vms > loggregator/0 (00:00:01)

Done binding instance vms > dea-spare/1 (00:00:01)

Done binding instance vms > zookeeper/0 (00:00:01)

Done binding instance vms > cassandra_seed/0 (00:00:01)

Done binding instance vms > kafka/1 (00:00:01)

Done binding instance vms > zookeeper/1 (00:00:01)

Done binding instance vms > cloud_controller/0 (00:00:01)

Done binding instance vms > nfs/0 (00:00:01)

Done binding instance vms > router/0 (00:00:02)

Done binding instance vms > zookeeper/2 (00:00:02)

Done binding instance vms > redis/0 (00:00:02)

Done binding instance vms > kafka/0 (00:00:02)

Done binding instance vms > dea-spare/0 (00:00:02)

Done binding instance vms > kafka/2 (00:00:02)

Done binding instance vms (00:00:02)



Started preparing configuration > Binding configuration. Failed: Error filling in template `metron_agent.json.erb' for `nfs/0' (line 5: Can't find property `["metron_agent.deployment"]') (00:00:00)



Error 100: Error filling in template `metron_agent.json.erb' for `nfs/0' (line 5: Can't find property `["metron_agent.deployment"]')



Task 51 error



For a more detailed error report, run: bosh task 51 --debug

Thanks
Jayaraj

From: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com<mailto:jayark(a)cisco.com>>
Date: Tuesday, February 16, 2016 at 2:29 PM
To: Rohit Kumar <rokumar(a)pivotal.io<mailto:rokumar(a)pivotal.io>>
Cc: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>>, "Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM at Cisco)" <ngnanase(a)cisco.com<mailto:ngnanase(a)cisco.com>>

Subject: Re: [cf-dev] Reg cant find template : metron_agent

Hi Rohit,

The following steps were followed:

* git clone https://github.com/cloudfoundry/cf-release.git

* root(a)vms-inception-vm-2:/opt/cisco/vms-installer/cf-release/scripts# ./update

* root(a)vms-inception-vm-2:/opt/cisco/vms-installer/cf-release# bosh create release releases/cf-230.yml --with-tarball

* root(a)vms-inception-vm-2:/opt/cisco/vms-installer/cf-release/releases# bosh upload release cf-230.tgz

* root(a)vms-inception-vm-2:/opt/cisco/vms-installer/tenant-vmsdev5control/cf-deploy# bosh -n deployment cf-vmsdev5control.yml

Deployment set to `/opt/cisco/vms-installer/tenant-vmsdev5control/cf-deploy/cf-vmsdev5control.yml'

* root(a)vms-inception-vm-2:/opt/cisco/vms-installer/tenant-vmsdev5control/cf-deploy# bosh -n deploy



I have shared the sample deployment yml file we are using for your reference.





Thanks

Jayaraj

From: Rohit Kumar <rokumar(a)pivotal.io<mailto:rokumar(a)pivotal.io>>
Date: Tuesday, February 16, 2016 at 2:15 PM
To: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com<mailto:jayark(a)cisco.com>>
Cc: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>>
Subject: Re: [cf-dev] Reg cant find template : metron_agent

Can you also list the commands on how you are creating, uploading and deploying the release?

On Tue, Feb 16, 2016 at 2:42 PM, Jayarajan Ramapurath Kozhummal (jayark) <jayark(a)cisco.com<mailto:jayark(a)cisco.com>> wrote:

Thanks a lot, Rohit, for replying so quickly!
I ran the scripts/update command after cloning the cf-release Git repo.
Please see the command output below:


root(a)automation-vm-jayark:/opt/cisco/vms-installer/cf-release/src/loggregator# find . -type d -maxdepth 2

find: warning: you have specified the -maxdepth option after a non-option argument -type, but options are not positional (-maxdepth affects tests specified before it as well as those specified after it). Please specify options before other arguments.



.

./src

./src/doppler

./src/loggregator

./src/trafficcontroller

./src/syslog_drain_binder

./src/bitbucket.org

./src/monitor

./src/matchers

./src/signalmanager

./src/deaagent

./src/tools

./src/truncatingbuffer

./src/profiler

./src/logger

./src/lats

./src/common

./src/integration_tests

./src/metron

./src/github.com

./packages

./packages/doppler

./packages/loggregator_trafficcontroller

./packages/syslog_drain_binder

./packages/dea_logging_agent

./packages/loggregator_common

./packages/loggregator-acceptance-tests

./packages/golang1.4

./packages/metron_agent

./docs

./config

./jobs

./jobs/doppler

./jobs/loggregator_trafficcontroller

./jobs/syslog_drain_binder

./jobs/dea_logging_agent

./jobs/loggregator-acceptance-tests

./jobs/metron_agent

./bin

./git-hooks

./samples

Thanks
Jayaraj

From: Rohit Kumar <rokumar(a)pivotal.io<mailto:rokumar(a)pivotal.io>>
Date: Tuesday, February 16, 2016 at 9:30 AM
To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>>
Cc: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com<mailto:jayark(a)cisco.com>>
Subject: Re: [cf-dev] Reg cant find template : metron_agent

Did you make sure to run "scripts/update" after cloning the cf-release repo? Can you run "find . -type d -maxdepth 2" from within the "src/loggregator" directory in cf-release and reply with what you get as output?

Rohit

On Tue, Feb 16, 2016 at 1:39 AM, Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM at Cisco) <ngnanase(a)cisco.com<mailto:ngnanase(a)cisco.com>> wrote:
Hi

I am working on Cloud Foundry and I was able to create a development BOSH release of Cloud Foundry using the following source:
git clone https://github.com/cloudfoundry/cf-release.git (added some rules in the haproxy.conf file)

When I tried to deploy with the dev release of Cloud Foundry, I got the following error:

Started preparing deployment
Started preparing deployment > Binding releases. Done (00:00:00)
Started preparing deployment > Binding existing deployment. Done (00:00:00)
Started preparing deployment > Binding resource pools. Done (00:00:00)
Started preparing deployment > Binding stemcells. Done (00:00:00)
Started preparing deployment > Binding templates. Failed: Can't find template `metron_agent' (00:00:00)

Error 190012: Can't find template `metron_agent'

Kindly help me figure out the cause of this error, as it is a show-stopper for us.

Regards
Nithiyasri


Set a floating IP for the routers in the CF manifest stub

Sylvain Goulmy <sygoulmy@...>
 

Hi all,

I have deployed a CF (release 248) instance on top of an OpenStack platform
and I'm currently configuring an F5 IP in front of my CF platform.


I now need to assign two floating IPs to my router components. In the CF
manifest stub there is a floating_static_ips input in the metadata section,
but the documentation only talks about haproxy.

Replace 173.1.1.1 with an existing static IP address for your OpenStack
floating network. This is assigned to the ha_proxy job to receive
incoming traffic.

Is there any possibility to also set floating IPs for the routers in the
stub?
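
(The kind of stub fragment being asked about would look something like the sketch below; the job and network names and the IPs are assumed for illustration, and whether the upstream templates actually honour a static_ips override for the router jobs is exactly the open question:)

jobs:
- name: router_z1
  networks:
  - name: floating          # assumed floating network name
    static_ips:
    - 203.0.113.21          # assumed floating IP for the first router
- name: router_z2
  networks:
  - name: floating
    static_ips:
    - 203.0.113.22          # assumed floating IP for the second router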

Thanks in advance for your support.
Sylvain.


unlimited services and routes in space quota

sukhil patil
 

Hi,
While defining a new space quota using the latest CC API version 2.48.0, we can set instance_memory_limit and app_instance_limit
to -1, which means unlimited. However, other resources such as total_services and total_routes also accept -1.
Does that also indicate that services and routes are unlimited?
It is only documented for instance_memory_limit and app_instance_limit, but not for the other resources.
Attaching the REST API reference: http://apidocs.cloudfoundry.org/230/space_quota_definitions/creating_a_space_quota_definition.html
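
For reference, the request body in question has this shape; the name, GUID, and limits below are assumed example values, with -1 used for the fields being asked about:

{
  "name": "dev-space-quota",
  "organization_guid": "ORG-GUID-HERE",
  "non_basic_services_allowed": true,
  "memory_limit": 2048,
  "instance_memory_limit": -1,
  "app_instance_limit": -1,
  "total_services": -1,
  "total_routes": -1
}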

thanks in advance
Sukhil Patil


Re: Diego: Permission denied when starting application with startup command

Matthew Sykes <matthew.sykes@...>
 

Looking again, it seems the cli *tries* to set the mode but, for some
reason, the package that gets sent up doesn't have the permissions.

https://github.com/cloudfoundry/cli/blob/master/cf/app_files/zipper.go#L294

Not sure where it's getting lost.

On Wed, Feb 17, 2016 at 8:48 AM, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

I can confirm that the problem is with how the cli handles zip entries in
archives. When the cli extracts the zip into a temporary directory, it does
not set the file mode on the files it creates. [1] It simply relies on the
behavior of os.Create() [2]

Looks like an issue should be raised against the cli but a simple
workaround is to simply expand the zip before pushing.

[1]:
https://github.com/cloudfoundry/cli/blob/master/cf/app_files/zipper.go#L283-L292
[2]: https://golang.org/pkg/os/#Create
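
A minimal standalone sketch of the two behaviours being described (this is not the cli's actual code; the archive name and flag are assumed):

package main

import (
	"archive/zip"
	"io"
	"log"
	"os"
)

// extractEntry writes a single zip entry to dest. With preserveMode=false it
// mirrors the behaviour described above: os.Create uses a default mode of
// 0666 (before umask), so an executable bit stored in the archive is lost.
// With preserveMode=true the mode recorded in the zip header is re-applied.
func extractEntry(f *zip.File, dest string, preserveMode bool) error {
	rc, err := f.Open()
	if err != nil {
		return err
	}
	defer rc.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	if _, err := io.Copy(out, rc); err != nil {
		return err
	}
	if preserveMode {
		return os.Chmod(dest, f.Mode()) // e.g. restores 0755 on bin/start.sh
	}
	return nil
}

func main() {
	r, err := zip.OpenReader("app.zip") // assumed archive; entries assumed to be flat
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()
	for _, f := range r.File {
		if f.FileInfo().IsDir() {
			continue
		}
		if err := extractEntry(f, f.Name, true); err != nil {
			log.Fatal(err)
		}
	}
}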

On Wed, Feb 17, 2016 at 8:18 AM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:

On Wed, Feb 17, 2016 at 4:30 AM, Meng, Xiangyi <xiangyi.meng(a)emc.com>
wrote:

Hi, Daniel



CF:230 Diego: 0.1441.0 Garden-linux: 0.327.0



I ran “chmod a+x *” under my application’s “bin” folder and then zip all
files. The command for starting application is “bin/start_ngis.sh -m dev”.




I downloaded the droplet from CC after staging on diego, the file
permission under “bin” folder was modified to “-rw-r--r--”.



How would diego decide which file’s permission should be preserved and
which one should be modified.
My understanding is that it doesn't. Diego just lets things pass through
unchanged, whereas the DEA would previously force a specific set of
permissions.

In this case, it might be the cf cli that is the problem. You're
creating the JAR/WAR with the proper permissions on your script (I'm
assuming you've unzipped to verify the permission is retained, if not do
that and confirm). The next step is for the cf cli to extract your files
and upload them. The cf cli doesn't upload your JAR / WAR file whole. It
unzips it and uploads files individually. This is how it can skip certain
parts of your application that have been already uploaded. This would be
the next phase where there could possibly be an issue.



I found a very strange thing. Part of jar files under “bundles” folder
were modified to “-rwxr--r--”. But part of them were not changed.
Interesting. What OS are you running locally? Linux, Mac, Windows,
Cygwin? Also, what is your version of cf? `cf -v`?

Dan





*From:* Daniel Mikusa [mailto:dmikusa(a)pivotal.io]
*Sent:* 2016年2月16日 23:26
*To:* Discussions about Cloud Foundry projects and the system overall.
*Subject:* [cf-dev] Re: Diego: Permission denied when starting
application with startup command



On Tue, Feb 16, 2016 at 4:21 AM, Meng, Xiangyi <xiangyi.meng(a)emc.com>
wrote:

Hi,



Our application is started by a shell script. So we pushed our
application with –c option. It works fine with dea. Application could be
started successfully. But when I pushed the application into diego, I got
“bash bin/start.sh permission denied”. I also found if I pushed and started
the application into dea first and enabled diego later, the error was
gone. I guess that is because dea will update file permission during
staging.



I suspect you're running into this:




https://github.com/cloudfoundry-incubator/diego-design-notes/blob/master/migrating-to-diego.md#file-permission-modes





I also tried to grant permissions when zipping the application, but Diego
totally ignores the setting. Could someone help me solve this problem?



What version of cf are you using? What is your OS? What do you mean by
"grant permission when zipping the application", what commands are you
running to do that?



Dan





--
Matthew Sykes
matthew.sykes(a)gmail.com


--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: Diego: Permission denied when starting application with startup command

Daniel Mikusa
 

Or set your start up command to `chmod 755 bin/start_ngis.sh &&
bin/start_ngis.sh -m dev`.

Dan


On Wed, Feb 17, 2016 at 8:48 AM, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

I can confirm that the problem is with how the cli handles zip entries in
archives. When the cli extracts the zip into a temporary directory, it does
not set the file mode on the files it creates. [1] It simply relies on the
behavior of os.Create() [2]

Looks like an issue should be raised against the cli but a simple
workaround is to simply expand the zip before pushing.

[1]:
https://github.com/cloudfoundry/cli/blob/master/cf/app_files/zipper.go#L283-L292
[2]: https://golang.org/pkg/os/#Create

On Wed, Feb 17, 2016 at 8:18 AM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:

On Wed, Feb 17, 2016 at 4:30 AM, Meng, Xiangyi <xiangyi.meng(a)emc.com>
wrote:

Hi, Daniel



CF:230 Diego: 0.1441.0 Garden-linux: 0.327.0



I ran “chmod a+x *” under my application’s “bin” folder and then zip all
files. The command for starting application is “bin/start_ngis.sh -m dev”.




I downloaded the droplet from CC after staging on diego, the file
permission under “bin” folder was modified to “-rw-r--r--”.



How would diego decide which file’s permission should be preserved and
which one should be modified.
My understanding is that it doesn't. Diego just lets things pass through
unchanged, whereas the DEA would previously force a specific set of
permissions.

In this case, it might be the cf cli that is the problem. You're
creating the JAR/WAR with the proper permissions on your script (I'm
assuming you've unzipped to verify the permission is retained, if not do
that and confirm). The next step is for the cf cli to extract your files
and upload them. The cf cli doesn't upload your JAR / WAR file whole. It
unzips it and uploads files individually. This is how it can skip certain
parts of your application that have been already uploaded. This would be
the next phase where there could possibly be an issue.



I found a very strange thing. Part of jar files under “bundles” folder
were modified to “-rwxr--r--”. But part of them were not changed.
Interesting. What OS are you running locally? Linux, Mac, Windows,
Cygwin? Also, what is your version of cf? `cf -v`?

Dan





*From:* Daniel Mikusa [mailto:dmikusa(a)pivotal.io]
*Sent:* 2016年2月16日 23:26
*To:* Discussions about Cloud Foundry projects and the system overall.
*Subject:* [cf-dev] Re: Diego: Permission denied when starting
application with startup command



On Tue, Feb 16, 2016 at 4:21 AM, Meng, Xiangyi <xiangyi.meng(a)emc.com>
wrote:

Hi,



Our application is started by a shell script. So we pushed our
application with –c option. It works fine with dea. Application could be
started successfully. But when I pushed the application into diego, I got
“bash bin/start.sh permission denied”. I also found if I pushed and started
the application into dea first and enabled diego later, the error was
gone. I guess that is because dea will update file permission during
staging.



I suspect you're running into this:




https://github.com/cloudfoundry-incubator/diego-design-notes/blob/master/migrating-to-diego.md#file-permission-modes





I also tried to grant permissions when zipping the application, but Diego
totally ignores the setting. Could someone help me solve this problem?



What version of cf are you using? What is your OS? What do you mean by
"grant permission when zipping the application", what commands are you
running to do that?



Dan





--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: Diego: Permission denied when starting application with startup command

Matthew Sykes <matthew.sykes@...>
 

I can confirm that the problem is with how the cli handles zip entries in
archives. When the cli extracts the zip into a temporary directory, it does
not set the file mode on the files it creates. [1] It simply relies on the
behavior of os.Create() [2]

Looks like an issue should be raised against the cli but a simple
workaround is to simply expand the zip before pushing.

[1]:
https://github.com/cloudfoundry/cli/blob/master/cf/app_files/zipper.go#L283-L292
[2]: https://golang.org/pkg/os/#Create

On Wed, Feb 17, 2016 at 8:18 AM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:

On Wed, Feb 17, 2016 at 4:30 AM, Meng, Xiangyi <xiangyi.meng(a)emc.com>
wrote:

Hi, Daniel



CF:230 Diego: 0.1441.0 Garden-linux: 0.327.0



I ran “chmod a+x *” under my application’s “bin” folder and then zip all
files. The command for starting application is “bin/start_ngis.sh -m dev”.




I downloaded the droplet from CC after staging on diego, the file
permission under “bin” folder was modified to “-rw-r--r--”.



How would diego decide which file’s permission should be preserved and
which one should be modified.
My understanding is that it doesn't. Diego just lets things pass through
unchanged, whereas the DEA would previously force a specific set of
permissions.

In this case, it might be the cf cli that is the problem. You're creating
the JAR/WAR with the proper permissions on your script (I'm assuming you've
unzipped to verify the permission is retained, if not do that and
confirm). The next step is for the cf cli to extract your files and upload
them. The cf cli doesn't upload your JAR / WAR file whole. It unzips it
and uploads files individually. This is how it can skip certain parts of
your application that have been already uploaded. This would be the next
phase where there could possibly be an issue.



I found a very strange thing. Part of jar files under “bundles” folder
were modified to “-rwxr--r--”. But part of them were not changed.
Interesting. What OS are you running locally? Linux, Mac, Windows,
Cygwin? Also, what is your version of cf? `cf -v`?

Dan





*From:* Daniel Mikusa [mailto:dmikusa(a)pivotal.io]
*Sent:* 2016年2月16日 23:26
*To:* Discussions about Cloud Foundry projects and the system overall.
*Subject:* [cf-dev] Re: Diego: Permission denied when starting
application with startup command



On Tue, Feb 16, 2016 at 4:21 AM, Meng, Xiangyi <xiangyi.meng(a)emc.com>
wrote:

Hi,



Our application is started by a shell script. So we pushed our
application with –c option. It works fine with dea. Application could be
started successfully. But when I pushed the application into diego, I got
“bash bin/start.sh permission denied”. I also found if I pushed and started
the application into dea first and enabled diego later, the error was
gone. I guess that is because dea will update file permission during
staging.



I suspect you're running into this:




https://github.com/cloudfoundry-incubator/diego-design-notes/blob/master/migrating-to-diego.md#file-permission-modes





I also tried to grant permissions when zipping the application, but Diego
totally ignores the setting. Could someone help me solve this problem?



What version of cf are you using? What is your OS? What do you mean by
"grant permission when zipping the application", what commands are you
running to do that?



Dan




--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: Diego: Permission denied when starting application with startup command

Daniel Mikusa
 

On Wed, Feb 17, 2016 at 4:30 AM, Meng, Xiangyi <xiangyi.meng(a)emc.com> wrote:

Hi, Daniel



CF:230 Diego: 0.1441.0 Garden-linux: 0.327.0



I ran “chmod a+x *” under my application’s “bin” folder and then zip all
files. The command for starting application is “bin/start_ngis.sh -m dev”.




I downloaded the droplet from CC after staging on diego, the file
permission under “bin” folder was modified to “-rw-r--r--”.



How would diego decide which file’s permission should be preserved and
which one should be modified.
My understanding is that it doesn't. Diego just lets things pass through
unchanged, whereas the DEA would previously force a specific set of
permissions.

In this case, it might be the cf cli that is the problem. You're creating
the JAR/WAR with the proper permissions on your script (I'm assuming you've
unzipped to verify the permission is retained, if not do that and
confirm). The next step is for the cf cli to extract your files and upload
them. The cf cli doesn't upload your JAR / WAR file whole. It unzips it
and uploads files individually. This is how it can skip certain parts of
your application that have been already uploaded. This would be the next
phase where there could possibly be an issue.



I found a very strange thing. Part of jar files under “bundles” folder
were modified to “-rwxr--r--”. But part of them were not changed.
Interesting. What OS are you running locally? Linux, Mac, Windows,
Cygwin? Also, what is your version of cf? `cf -v`?

Dan





*From:* Daniel Mikusa [mailto:dmikusa(a)pivotal.io]
*Sent:* 2016年2月16日 23:26
*To:* Discussions about Cloud Foundry projects and the system overall.
*Subject:* [cf-dev] Re: Diego: Permission denied when starting
application with startup command



On Tue, Feb 16, 2016 at 4:21 AM, Meng, Xiangyi <xiangyi.meng(a)emc.com>
wrote:

Hi,



Our application is started by a shell script. So we pushed our application
with –c option. It works fine with dea. Application could be started
successfully. But when I pushed the application into diego, I got “bash
bin/start.sh permission denied”. I also found if I pushed and started the
application into dea first and enabled diego later, the error was gone. I
guess that is because dea will update file permission during staging.



I suspect you're running into this:




https://github.com/cloudfoundry-incubator/diego-design-notes/blob/master/migrating-to-diego.md#file-permission-modes





I also tried to grant permissions when zipping the application, but Diego
totally ignores the setting. Could someone help me solve this problem?



What version of cf are you using? What is your OS? What do you mean by
"grant permission when zipping the application", what commands are you
running to do that?



Dan





Re: Deploying Dockerfiles in Cloud Foundry

Will Pragnell <wpragnell@...>
 

Hi Nanduni,

I just replied to your other email. To follow on from that - if you're
comfortable deploying CF to BOSH-Lite, it should be easy to run a custom
registry locally and add the address of it to the
`garden.insecure_docker_registry_list` property [1]. Then your CF
deployment should be able to pull docker images from your local registry.

Best,
Will

[1]:
https://github.com/cloudfoundry-incubator/garden-linux-release/blob/master/jobs/garden/spec#L89
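
As a sketch of what that ends up looking like (the registry address, image name, and app name are assumed):

# run a local registry and push your image into it
docker run -d -p 5000:5000 --name registry registry:2
docker tag my-image 10.244.0.34:5000/my-image
docker push 10.244.0.34:5000/my-image

# garden-linux property fragment in the BOSH manifest
properties:
  garden:
    insecure_docker_registry_list:
    - 10.244.0.34:5000

# then point cf push at the private image
cf push my-app -o 10.244.0.34:5000/my-image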

On 17 February 2016 at 09:51, Nanduni Nimalsiri <nandunibw(a)gmail.com> wrote:

Hi all,

Is there any way for pushing Dockerfiles to Cloud Foundry instead of using
publicly available Docker images in Docker Hub? Actually my requirement is
to run some artifacts[1] in MicroPCF or Diego in BOSH-Lite. The problem
seems that these Docker images are not publicly available in Docker Hub. So
how can I proceed with this. Please help.
[1] https://github.com/wso2/kubernetes-artifacts


Best regards,
Nanduni


Re: Error dialing loggregator server: unexpected EOF

ramonskie
 

My issue was a faulty DNS server:
there was one DNS server that did not respond, or responded really slowly.


Re: Pushing Docker images to MicroPCF

Will Pragnell <wpragnell@...>
 

Hi Nanduni,

Currently there's no way to push a Dockerfile that I'm aware of. Your best
bet (assuming you can't push your image to Docker Hub) is to run a Docker
registry locally. This can be done fairly easily using the Registry image
[1]. You may even be able to run the registry on MicroPCF temporarily
(remember containers on CF are ephemeral and can't persist state to disk),
which would be kinda neat!

There's one other thing, though. By default, CF can't pull docker images
from registries that are using self signed certificates. Garden-Linux (the
container running part of CF) has a list of registries that are allowed to
break this rule (configured using the `insecure_docker_registry_list`
property [2]) so you'll need to add the address of your registry to that.
I'm afraid I don't know how to set this for MicroPCF though. Hopefully
someone more familiar with MicroPCF can advise on that.

Good luck!
Will

[1]: https://hub.docker.com/_/registry
[2]:
https://github.com/cloudfoundry-incubator/garden-linux-release/blob/master/jobs/garden/spec#L89

On 17 February 2016 at 09:36, Nanduni Nimalsiri <nandunibw(a)gmail.com> wrote:

Hi,

Is there any way to push a Dockerfile and start an application instead of
pushing a publicly available docker image in Cloud Foundry. The problem is
that I want to run some Docker images on Cloud Foundry, but they are not
publicly available in Docker Hub. How can I proceed with this.

If I explain this scenario briefly, what I want is to deploy the
artifacts[1] in MicroPCF or in Diego in Bosh-Lite. How can I run them? Can
you please help.
[1]https://github.com/wso2/kubernetes-artifacts

Best regards,
Nanduni.


Deploying Dockerfiles in Cloud Foundry

Nanduni Nimalsiri
 

Hi all,

Is there any way to push Dockerfiles to Cloud Foundry instead of using publicly available Docker images from Docker Hub? Actually, my requirement is to run some artifacts [1] in MicroPCF or Diego in BOSH-Lite. The problem seems to be that these Docker images are not publicly available in Docker Hub. How can I proceed with this? Please help.
[1] https://github.com/wso2/kubernetes-artifacts


Best regards,
Nanduni


Pushing Docker images to MicroPCF

Nanduni Nimalsiri
 

Hi,

Is there any way to push a Dockerfile and start an application, instead of pushing a publicly available Docker image, in Cloud Foundry? The problem is that I want to run some Docker images on Cloud Foundry, but they are not publicly available in Docker Hub. How can I proceed with this?

If I explain this scenario briefly: what I want is to deploy the artifacts [1] in MicroPCF or in Diego in BOSH-Lite. How can I run them? Can you please help?
[1]https://github.com/wso2/kubernetes-artifacts

Best regards,
Nanduni.


Re: Diego: Permission denied when starting application with startup command

MaggieMeng
 

Hi, Daniel

CF:230 Diego: 0.1441.0 Garden-linux: 0.327.0

I ran “chmod a+x *” under my application’s “bin” folder and then zipped all the files. The command for starting the application is “bin/start_ngis.sh -m dev”.

I downloaded the droplet from CC after staging on Diego; the file permissions under the “bin” folder had been modified to “-rw-r--r--”.

How would Diego decide which files’ permissions should be preserved and which should be modified? I found a very strange thing: part of the jar files under the “bundles” folder were modified to “-rwxr--r--”, but part of them were not changed.

Thanks,
Maggie

From: Daniel Mikusa [mailto:dmikusa(a)pivotal.io]
Sent: 2016年2月16日 23:26
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Diego: Permission denied when starting application with startup command

On Tue, Feb 16, 2016 at 4:21 AM, Meng, Xiangyi <xiangyi.meng(a)emc.com<mailto:xiangyi.meng(a)emc.com>> wrote:
Hi,

Our application is started by a shell script, so we pushed our application with the -c option. It works fine with the DEA; the application could be started successfully. But when I pushed the application to Diego, I got “bash bin/start.sh permission denied”. I also found that if I pushed and started the application on the DEA first and enabled Diego later, the error was gone. I guess that is because the DEA updates file permissions during staging.

I suspect you're running into this:

https://github.com/cloudfoundry-incubator/diego-design-notes/blob/master/migrating-to-diego.md#file-permission-modes


I also tried to grant permissions when zipping the application, but Diego totally ignores the setting. Could someone help me solve this problem?

What version of cf are you using? What is your OS? What do you mean by "grant permission when zipping the application", what commands are you running to do that?

Dan


Re: Scaling Down etcd

Lingesh Mouleeshwaran
 

Thanks Amit,

to have odd number of cluster size , we have added 4 new members in the new
deployment. now the plan is to remove the old 3 member + 1 member in new
deployment. , but while doing this , cluster size is not reducing and break
the cluster when 4 machine down, which makes all apps to restage and there
is an significant down time.


Regards
Lingesh M,

On Wed, Feb 17, 2016 at 11:21 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Orchestrating the etcd cluster is fairly complex, and what you're
describing is not a recommended usage. I'm not sure why you need a new 4
node cluster (why not just use the existing 3-node cluster? why the number
4?), but if you do, the simplest thing is to delete the old cluster, deploy
the new cluster, and then regenerate and redeploy *all* of your small
manifests to reflect the updated properties.etcd.machines.

On Tue, Feb 16, 2016 at 9:44 PM, Lingesh Mouleeshwaran <
lingeshmouleeshwaran(a)gmail.com> wrote:

Hi Amit,

Thanks for taking time to respond. actually we are maintaining deployment
manifest templates and from there we are generating each components
manifest using spruce. So there by we are controlling the deviations.

Now coming to the etcd problem:

The old manifest is having 3 member and new manifest is having 4 member.
All 7 members joined together and formed a single etcd cluster. Now we need
to remove the existing 3 members from the cluster and delete the old
deployment.

The problem is, when we remove these 3 members from the
properties.etcd.machines in the new manifest and do a bosh deploy, the job
is failing during the update and not coming up. The exact error in the etcd
job logs is *'the member count is unequal'*

Regards
Lingesh M


On Wed, Feb 17, 2016 at 10:30 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi Lingesh,

I don't think easier deployment and maintenance is that simple. Each
manifest may become smaller, but now you have to maintain multiple small
manifests. And keep them in sync. And make sure that they are all
compatible. There are pros and cons to any sort of decomposition like this.

With regards to targeting specific components for change, I think what
will really solve your problem is having a single CF deployment composed of
multiple releases. E.g. uaa as its own separate release within a single CF
deployment. If you wanted to, you could update the uaa release itself
instead of having to update all the jobs. You still have the problem of,
if you only update one component, how you know it's compatible with all the
things you don't upgrade, but it sounds like you're already willing to take
on that complexity.

This decomposition of cf-release into multiple releases (composed into a
single deployment) is currently underway.

With regards to scaling down etcd, I wasn't able to understand the
problem you're hitting. Can you provide more details about exactly what
you did, in what order?

Best,
Amit

On Tue, Feb 16, 2016 at 8:50 PM, Lingesh Mouleeshwaran <
lingeshmouleeshwaran(a)gmail.com> wrote:

Hi Amit
The main advantage that we are targeting is to reduce deployment time
for any changes in the cloud foundry. The advantages include but not
limited to
* Target specific components for changes
* Deployment time
* Addressing specific components for patch updates
* Easier deployment
* Easier maintenance etc

Regards
Lingesh M

On Wed, Feb 17, 2016 at 4:03 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi Surendhar,

May I ask why you want to split the deployment into multiple
deployments? What problem are you having that you're trying to solve by
doing this?

Best,
Amit

On Mon, Feb 15, 2016 at 9:34 AM, Suren R <suren.devices(a)gmail.com>
wrote:

Hi Cloud Foundry!
We are trying to split the Cloud Foundry deployment into multiple
deployments. Each CF component will have its own deployment manifest.
We are doing this activity in an existing CF. We moved all components
except nats and etcd, into the new deployments. The original single
deployment is now having just these two jobs.

Of these, the existing deployment has 3 etcd machines. The
migration idea is to bring 4 new etcd machines into the cluster through the new
deployment, point all the other components to these four etcd machines, and
delete the existing 3 nodes.

However, if we delete the existing 3 nodes and do an update to form a
4 node cluster, the cluster breaks and as a result all running apps are
going down. (Because the canary job brings one node down for the update, as
a result tolerance is breached.)

We also tried to remove these three nodes from the cluster using the
etcdctl command and then tried to push the deletion to the new deployment through
BOSH. This also makes the BOSH deployment fail (the etcd job fails,
saying "unequal number of nodes").

In this situation, what would be the best way to reduce the nodes in
the etcd cluster?

regards,
Surendhar



Re: Scaling Down etcd

Amit Kumar Gupta
 

Orchestrating the etcd cluster is fairly complex, and what you're
describing is not a recommended usage. I'm not sure why you need a new 4
node cluster (why not just use the existing 3-node cluster? why the number
4?), but if you do, the simplest thing is to delete the old cluster, deploy
the new cluster, and then regenerate and redeploy *all* of your small
manifests to reflect the updated properties.etcd.machines.
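
That property is just a flat list of the etcd addresses each component should talk to, so after the move it would reference only the new cluster; the IPs below are assumed placeholders:

properties:
  etcd:
    machines:
    - 10.0.16.21
    - 10.0.16.22
    - 10.0.16.23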

On Tue, Feb 16, 2016 at 9:44 PM, Lingesh Mouleeshwaran <
lingeshmouleeshwaran(a)gmail.com> wrote:

Hi Amit,

Thanks for taking time to respond. actually we are maintaining deployment
manifest templates and from there we are generating each components
manifest using spruce. So there by we are controlling the deviations.

Now coming to the etcd problem:

The old manifest is having 3 member and new manifest is having 4 member.
All 7 members joined together and formed a single etcd cluster. Now we need
to remove the existing 3 members from the cluster and delete the old
deployment.

The problem is, when we remove these 3 members from the
properties.etcd.machines in the new manifest and do a bosh deploy, the job
is failing during the update and not coming up. The exact error in the etcd
job logs is *'the member count is unequal'*

Regards
Lingesh M


On Wed, Feb 17, 2016 at 10:30 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi Lingesh,

I don't think easier deployment and maintenance is that simple. Each
manifest may become smaller, but now you have to maintain multiple small
manifests. And keep them in sync. And make sure that they are all
compatible. There are pros and cons to any sort of decomposition like this.

With regards to targeting specific components for change, I think what
will really solve your problem is having a single CF deployment composed of
multiple releases. E.g. uaa as its own separate release within a single CF
deployment. If you wanted to, you could update the uaa release itself
instead of having to update all the jobs. You still have the problem of,
if you only update one component, how you know it's compatible with all the
things you don't upgrade, but it sounds like you're already willing to take
on that complexity.

This decomposition of cf-release into multiple releases (composed into a
single deployment) is currently underway.

With regards to scaling down etcd, I wasn't able to understand the
problem you're hitting. Can you provide more details about exactly what
you did, in what order?

Best,
Amit

On Tue, Feb 16, 2016 at 8:50 PM, Lingesh Mouleeshwaran <
lingeshmouleeshwaran(a)gmail.com> wrote:

Hi Amit
The main advantage that we are targeting is to reduce deployment time
for any changes in the cloud foundry. The advantages include but not
limited to
* Target specific components for changes
* Deployment time
* Addressing specific components for patch updates
* Easier deployment
* Easier maintenance etc

Regards
Lingesh M

On Wed, Feb 17, 2016 at 4:03 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi Surendhar,

May I ask why you want to split the deployment into multiple
deployments? What problem are you having that you're trying to solve by
doing this?

Best,
Amit

On Mon, Feb 15, 2016 at 9:34 AM, Suren R <suren.devices(a)gmail.com>
wrote:

Hi Cloud Foundry!
We are trying to split the cloud foundry deployment in to multiple
deployments. Each CF component will have its own deployment manifest.
We are doing this activity in an existing CF. We moved all components
except nats and etcd, into the new deployments. The original single
deployment is now having just these two jobs.

Of which, existing deployment is having 3 etcd machines. The migration
idea is to bring 4 new etcd machines in the cluster through new deployment.
Point all other components to these four etcd machines and delete the
existing 3 nodes.

However, if we delete the existing 3 nodes and do an update to form a
4 node cluster, the cluster breaks and as a result all running apps are
going down. (Because the canary job brings one node down for the update, as
a result tolerance is breached.)

We also tried removing these three nodes from the cluster using the
etcdctl command and then updating the new deployment through bosh to
reflect the deletion. This also makes the bosh deployment fail (the etcd
job fails saying "unequal number of nodes").

In this situation, what would be the best way to reduce the nodes in
the etcd cluster?

regards,
Surendhar



Re: Scaling Down etcd

Lingesh Mouleeshwaran
 

Hi Amit,

Thanks for taking the time to respond. Actually, we are maintaining
deployment manifest templates, and from there we generate each component's
manifest using spruce. That is how we keep the deviations under control.

Now coming to the etcd problem:

The old manifest has 3 members and the new manifest has 4 members. All 7
members joined together and formed a single etcd cluster. Now we need to
remove the existing 3 members from the cluster and delete the old
deployment.

The problem is that when we remove these 3 members from
properties.etcd.machines in the new manifest and do a bosh deploy, the job
fails during the update and does not come up. The exact error in the etcd
job logs is 'the member count is unequal'.

Regards
Lingesh M

On Wed, Feb 17, 2016 at 10:30 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi Lingesh,

I don't think easier deployment and maintenance is that simple. Each
manifest may become smaller, but now you have to maintain multiple small
manifests. And keep them in sync. And make sure that they are all
compatible. There are pros and cons to any sort of decomposition like this.

With regards to targeting specific components for change, I think what
will really solve your problem is having a single CF deployment composed of
multiple releases, e.g. uaa as its own separate release within a single CF
deployment. If you wanted to, you could update the uaa release itself
instead of having to update all the jobs. You still have the problem that,
if you only update one component, you need to know it's compatible with all
the things you don't upgrade, but it sounds like you're already willing to
take on that complexity.

This decomposition of cf-release into multiple releases (composed into a
single deployment) is currently underway.

With regards to scaling down etcd, I wasn't able to understand the problem
you're hitting. Can you provide more details about exactly what you did,
in what order?

Best,
Amit

On Tue, Feb 16, 2016 at 8:50 PM, Lingesh Mouleeshwaran <
lingeshmouleeshwaran(a)gmail.com> wrote:

Hi Amit
The main advantage we are targeting is reduced deployment time for any
change in Cloud Foundry. The advantages include, but are not limited to:
* Targeting specific components for changes
* Shorter deployment time
* Addressing specific components for patch updates
* Easier deployment
* Easier maintenance, etc.

Regards
Lingesh M

On Wed, Feb 17, 2016 at 4:03 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi Surendhar,

May I ask why you want to split the deployment into multiple
deployments? What problem are you having that you're trying to solve by
doing this?

Best,
Amit

On Mon, Feb 15, 2016 at 9:34 AM, Suren R <suren.devices(a)gmail.com>
wrote:

Hi Cloud Foundry!
We are trying to split the Cloud Foundry deployment into multiple
deployments, with each CF component having its own deployment manifest.
We are doing this in an existing CF installation. We have moved all
components except nats and etcd into the new deployments, so the original
single deployment now has just these two jobs.

The existing deployment has 3 etcd machines. The migration idea is to
bring 4 new etcd machines into the cluster through the new deployment,
point all other components to these four machines, and delete the
existing 3 nodes.

However, if we delete the existing 3 nodes and do an update to form a
4-node cluster, the cluster breaks and as a result all running apps go
down. (The canary brings one node down for the update, so the failure
tolerance is breached.)

We also tried removing these three nodes from the cluster using the
etcdctl command and then updating the new deployment through bosh to
reflect the deletion. This also makes the bosh deployment fail (the etcd
job fails saying "unequal number of nodes").

In this situation, what would be the best way to reduce the nodes in
the etcd cluster?

regards,
Surendhar



Re: Scaling Down etcd

Amit Kumar Gupta
 

Hi Lingesh,

I don't think easier deployment and maintenance is that simple. Each
manifest may become smaller, but now you have to maintain multiple small
manifests. And keep them in sync. And make sure that they are all
compatible. There are pros and cons to any sort of decomposition like this.

With regards to targeting specific components for change, I think what
will really solve your problem is having a single CF deployment composed of
multiple releases, e.g. uaa as its own separate release within a single CF
deployment. If you wanted to, you could update the uaa release itself
instead of having to update all the jobs. You still have the problem that,
if you only update one component, you need to know it's compatible with all
the things you don't upgrade, but it sounds like you're already willing to
take on that complexity.

This decomposition of cf-release into multiple releases (composed into a
single deployment) is currently underway.

With regards to scaling down etcd, I wasn't able to understand the problem
you're hitting. Can you provide more details about exactly what you did,
in what order?

Best,
Amit

On Tue, Feb 16, 2016 at 8:50 PM, Lingesh Mouleeshwaran <
lingeshmouleeshwaran(a)gmail.com> wrote:

Hi Amit
The main advantage we are targeting is reduced deployment time for any
change in Cloud Foundry. The advantages include, but are not limited to:
* Targeting specific components for changes
* Shorter deployment time
* Addressing specific components for patch updates
* Easier deployment
* Easier maintenance, etc.

Regards
Lingesh M

On Wed, Feb 17, 2016 at 4:03 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi Surendhar,

May I ask why you want to split the deployment into multiple deployments?
What problem are you having that you're trying to solve by doing this?

Best,
Amit

On Mon, Feb 15, 2016 at 9:34 AM, Suren R <suren.devices(a)gmail.com> wrote:

Hi Cloud Foundry!
We are trying to split the Cloud Foundry deployment into multiple
deployments, with each CF component having its own deployment manifest.
We are doing this in an existing CF installation. We have moved all
components except nats and etcd into the new deployments, so the original
single deployment now has just these two jobs.

The existing deployment has 3 etcd machines. The migration idea is to
bring 4 new etcd machines into the cluster through the new deployment,
point all other components to these four machines, and delete the
existing 3 nodes.

However, if we delete the existing 3 nodes and do an update to form a
4-node cluster, the cluster breaks and as a result all running apps go
down. (The canary brings one node down for the update, so the failure
tolerance is breached.)

We also tried removing these three nodes from the cluster using the
etcdctl command and then updating the new deployment through bosh to
reflect the deletion. This also makes the bosh deployment fail (the etcd
job fails saying "unequal number of nodes").

In this situation, what would be the best way to reduce the nodes in the
etcd cluster?

regards,
Surendhar



Re: Scaling Down etcd

Lingesh Mouleeshwaran
 

Hi Amit
The main advantage we are targeting is reduced deployment time for any
change in Cloud Foundry. The advantages include, but are not limited to:
* Targeting specific components for changes
* Shorter deployment time
* Addressing specific components for patch updates
* Easier deployment
* Easier maintenance, etc.

Regards
Lingesh M

On Wed, Feb 17, 2016 at 4:03 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi Surendhar,

May I ask why you want to split the deployment into multiple deployments?
What problem are you having that you're trying to solve by doing this?

Best,
Amit

On Mon, Feb 15, 2016 at 9:34 AM, Suren R <suren.devices(a)gmail.com> wrote:

Hi Cloud Foundry!
We are trying to split the Cloud Foundry deployment into multiple
deployments, with each CF component having its own deployment manifest.
We are doing this in an existing CF installation. We have moved all
components except nats and etcd into the new deployments, so the original
single deployment now has just these two jobs.

The existing deployment has 3 etcd machines. The migration idea is to
bring 4 new etcd machines into the cluster through the new deployment,
point all other components to these four machines, and delete the
existing 3 nodes.

However, if we delete the existing 3 nodes and do an update to form a
4-node cluster, the cluster breaks and as a result all running apps go
down. (The canary brings one node down for the update, so the failure
tolerance is breached.)

We also tried removing these three nodes from the cluster using the
etcdctl command and then updating the new deployment through bosh to
reflect the deletion. This also makes the bosh deployment fail (the etcd
job fails saying "unequal number of nodes").

In this situation, what would be the best way to reduce the nodes in the
etcd cluster?

regards,
Surendhar



Re: Reg cant find template : metron_agent

Rohit Kumar
 

Hi Jayaraj,

Yes, you need to generate a new manifest for each upgrade of a CF version.
Properties might be added, jobs may change, new resource pools may be
required, older ones may be removed, etc between versions. So for each
version upgrade you need to regenerate the manifest.

One way a lot of Cloud Foundry teams manage this is by having a *stub* file
which contains properties and customizations specific to their CF
installation. This stub gets *merged* with the other CF templates to
generate the actual manifest YAML file.

To see a good example of this look at the bosh-lite stub and the template
generation code:

https://github.com/cloudfoundry/cf-release/tree/master/bosh-lite/stubs
https://github.com/cloudfoundry/cf-release/blob/master/scripts/generate-bosh-lite-dev-manifest

For other serious installations (AWS, OpenStack, vSphere, etc) we have the
generate_deployment_manifest script which lets you specify the IaaS and
then the path to your stub. This is the recommended way of generating your
manifest and you should re-generate the manifest for each new release
version.
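
For example, on OpenStack the flow looks roughly like this (the stub path
below is a placeholder for your own environment-specific stub):

cd /opt/cisco/vms-installer/cf-release
./scripts/generate_deployment_manifest openstack /path/to/your-cf-stub.yml > cf-deployment.yml
bosh deployment cf-deployment.yml
bosh -n deploy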

Rohit

On Tue, Feb 16, 2016 at 6:31 PM, Jayarajan Ramapurath Kozhummal (jayark) <
jayark(a)cisco.com> wrote:

Hi Rohit,

We did not generate a fresh manifest for CF version 230. We had a
manifest file which was used for deploying CF version 205.
Do we need to generate the manifest file for each CF release? Can we
generate the new manifest file for CF version 230 using
scripts/generate_deployment_manifest and copy-paste the missing contents
into our deployment manifest file?

Thanks
Jayaraj

From: Rohit Kumar <rokumar(a)pivotal.io>
Date: Tuesday, February 16, 2016 at 5:25 PM

To: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com>
Cc: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>, "Nithiyasri Gnanasekaran -X (ngnanase -
TECH MAHINDRA LIM at Cisco)" <ngnanase(a)cisco.com>
Subject: Re: [cf-dev] Reg cant find template : metron_agent

Those properties are used to specify the etcd machines to loggregator.
Those typically get auto-filled by spiff and you don't need to specify them
explicitly in the properties section [1]. Did you not generate your
manifest with the help of `scripts/generate_deployment_manifest` script in
cf-release?
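
If you do end up setting it by hand, the shape of the property is roughly
as follows (the IPs are placeholders for your etcd machines):

properties:
  loggregator:
    etcd:
      machines:
      - 192.0.2.31
      - 192.0.2.32
      - 192.0.2.33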

Rohit

[1]:
https://github.com/cloudfoundry/cf-release/blob/develop/templates/cf-jobs.yml#L768-L769

On Tue, Feb 16, 2016 at 5:55 PM, Jayarajan Ramapurath Kozhummal (jayark) <
jayark(a)cisco.com> wrote:

Hi Rohit,

I added the cloud_controller IP for the below property and I am running
into the below exception now:-

Started preparing configuration > Binding configuration. Failed: Error
filling in template `etcd_bosh_utils.sh.erb' for `cloud_controller/0' (line
31: Can't find property `["etcd.cluster"]') (00:00:01)


Error 100: Error filling in template `etcd_bosh_utils.sh.erb' for
`cloud_controller/0' (line 31: Can't find property `["etcd.cluster"]')


Task 55 error


For a more detailed error report, run: bosh task 55 --debug



Is there a reference I can use to find what values should be filled in for
the missing properties?


Thanks

Jayaraj



From: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com>
Date: Tuesday, February 16, 2016 at 4:29 PM

To: Rohit Kumar <rokumar(a)pivotal.io>
Cc: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>, "Nithiyasri Gnanasekaran -X (ngnanase -
TECH MAHINDRA LIM at Cisco)" <ngnanase(a)cisco.com>
Subject: Re: [cf-dev] Reg cant find template : metron_agent

Thanks Rohit!

That worked for the metron_agent issue. Now I am running into the
following issue after filling in the property you mentioned.

Started preparing configuration > Binding configuration. Failed: Error
filling in template `metron_agent.json.erb' for `nfs/0' (line 7: Can't find
property `["loggregator.etcd.machines"]') (00:00:00)


Error 100: Error filling in template `metron_agent.json.erb' for `nfs/0'
(line 7: Can't find property `["loggregator.etcd.machines"]')


Task 54 error


For a more detailed error report, run: bosh task 54 --debug



Nithiya was also running into the same issues. The error posted is after
applying some workarounds found on the web.


Regards

Jayaraj



From: Rohit Kumar <rokumar(a)pivotal.io>
Date: Tuesday, February 16, 2016 at 3:53 PM
To: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com>
Cc: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>, "Nithiyasri Gnanasekaran -X (ngnanase -
TECH MAHINDRA LIM at Cisco)" <ngnanase(a)cisco.com>
Subject: Re: [cf-dev] Reg cant find template : metron_agent

OK cool. The error which you are getting now is different from what you
had originally posted. You need to include the following property in your
deployment and it should get fixed:

properties:
  metron_agent:
    deployment: <name of your deployment>
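
In your case that would typically just be the BOSH deployment name, for
example:

properties:
  metron_agent:
    deployment: cf-vmsdev5control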


On Tue, Feb 16, 2016 at 3:41 PM, Jayarajan Ramapurath Kozhummal (jayark)
<jayark(a)cisco.com> wrote:

Hi Rohit,

Please see the output:-

root(a)vms-inception-vm-2:/opt/cisco/vms-installer/tenant-vmsdev5control/cf-deploy#
bosh -n deploy

Acting as user 'admin' on deployment 'cf-vmsdev5control' on '
vms-installdev5-control-66380'

Getting deployment properties from director...

Unable to get properties list from director, trying without it...

Cannot get current deployment information from director, possibly a new
deployment


Deploying

---------


Director task 51

Started unknown

Started unknown > Binding deployment. Done (00:00:00)


Started preparing deployment

Started preparing deployment > Binding releases. Done (00:00:00)

Started preparing deployment > Binding existing deployment. Done
(00:00:00)

Started preparing deployment > Binding resource pools. Done (00:00:00)

Started preparing deployment > Binding stemcells. Done (00:00:00)

Started preparing deployment > Binding templates. Done (00:00:00)

Started preparing deployment > Binding properties. Done (00:00:00)

Started preparing deployment > Binding unallocated VMs. Done
(00:00:01)

Started preparing deployment > Binding instance networks. Done
(00:00:00)


Started preparing package compilation > Finding packages to compile.
Done (00:00:00)


Started compiling packages

Started compiling packages >
rootfs_cflinuxfs2/3232d35298f26bcfb153d964e329fcb42c77051f

Started compiling packages >
haproxy/f5d89b125a66892628a8cd61d23be7f9b0d31171

Started compiling packages >
uaa/0e15122de61644748d111b619aff4487726f8378

Started compiling packages >
golang1.5/ef3267f8998cebcdc86a477126e79e465753aaf1

Started compiling packages >
uaa_utils/8ee843cd3e50520398f28541c513ac0d16b00877

Started compiling packages >
postgres-9.4.5/06a51985e0701707b27d45c7a5757171b5cefb8c

Started compiling packages >
buildpack_binary/e0c8736b073d83c2459519851b5736c288311d92

Started compiling packages >
buildpack_staticfile/47c22ec219ca96215c509572f7a59aae55e45535

Started compiling packages >
buildpack_php/6dae2301648646cd8ed544af53ff34be0497efe0

Started compiling packages >
buildpack_python/a5d5eeb5e255ceb3282424a28c74a4bccd3316e9

Done compiling packages >
uaa/0e15122de61644748d111b619aff4487726f8378 (00:02:52)

Started compiling packages >
buildpack_go/08a35c7097417bedf06812c7ac8931d950dfae60

Done compiling packages >
buildpack_php/6dae2301648646cd8ed544af53ff34be0497efe0 (00:03:02)

Started compiling packages >
buildpack_nodejs/da88c1de3e899a27d33c5a8d6e08e151b42a1aa8. Done
(00:00:05)

Started compiling packages >
buildpack_ruby/d37b44b37b7c95077fd9698879b78561ac0aaf52

Done compiling packages >
buildpack_go/08a35c7097417bedf06812c7ac8931d950dfae60 (00:00:37)

Started compiling packages >
buildpack_java_offline/f6b99f87508400e9d75926c1546e8d08177072ef

Done compiling packages >
buildpack_ruby/d37b44b37b7c95077fd9698879b78561ac0aaf52 (00:00:26)

Started compiling packages >
buildpack_java/0dd2a9074cdfee66f56d6a9e958c2b9e1fa9337c. Done (00:00:02)

Started compiling packages >
nginx/bf3af6163e13887aacd230bbbc5eff90213ac6af

Done compiling packages >
buildpack_java_offline/f6b99f87508400e9d75926c1546e8d08177072ef
(00:00:22)

Started compiling packages >
ruby-2.2.4/dd1b827e6ea0ca7e9fcb95d08ae81fb82f035261

Done compiling packages >
nginx/bf3af6163e13887aacd230bbbc5eff90213ac6af (00:00:33)

Started compiling packages >
libpq/14d0b1290ea238243d04dd46d1a9635e6e9812bb

Done compiling packages >
buildpack_python/a5d5eeb5e255ceb3282424a28c74a4bccd3316e9 (00:04:10)

Started compiling packages >
libmariadb/dcc142dd0798ae557193f08bc46e9bdd97e4c6f3. Done (00:00:02)

Started compiling packages >
ruby-2.1.8/b5bf6af82bae947ad255e426001308acfc2244ee

Done compiling packages >
uaa_utils/8ee843cd3e50520398f28541c513ac0d16b00877 (00:04:25)

Started compiling packages >
etcd-common/a5492fb0ad41a80d2fa083172c0430073213a296

Done compiling packages >
libpq/14d0b1290ea238243d04dd46d1a9635e6e9812bb (00:00:18)

Started compiling packages >
golang1.4/714698bc352d2a1dbe321376f0676037568147bb

Done compiling packages >
etcd-common/a5492fb0ad41a80d2fa083172c0430073213a296 (00:00:02)

Started compiling packages >
loggregator_common/e401816a4748292163679fafcbd8f818ed8154a5

Done compiling packages >
haproxy/f5d89b125a66892628a8cd61d23be7f9b0d31171 (00:04:28)

Started compiling packages >
debian_nfs_server/aac05f22582b2f9faa6840da056084ed15772594

Done compiling packages >
loggregator_common/e401816a4748292163679fafcbd8f818ed8154a5 (00:00:03)

Started compiling packages >
common/e401816a4748292163679fafcbd8f818ed8154a5

Done compiling packages >
debian_nfs_server/aac05f22582b2f9faa6840da056084ed15772594 (00:00:04)

Done compiling packages >
common/e401816a4748292163679fafcbd8f818ed8154a5 (00:00:02)

Done compiling packages >
golang1.4/714698bc352d2a1dbe321376f0676037568147bb (00:00:15)

Started compiling packages >
dea_logging_agent/3179906f4e18fa39bf8baa60c92ee51fb7ce4e22

Started compiling packages >
loggregator_trafficcontroller/612624b9a615310d1d87053101c0f64b87038ab5

Started compiling packages >
doppler/4abad345222d75f714fc3b7524c87b1829dcd187

Done compiling packages >
dea_logging_agent/3179906f4e18fa39bf8baa60c92ee51fb7ce4e22 (00:00:09)

Started compiling packages >
gnatsd/0242557ff8fc93c42ff54aa642c524b17ce203eb

Done compiling packages >
buildpack_staticfile/47c22ec219ca96215c509572f7a59aae55e45535 (00:04:51)

Started compiling packages >
etcd_metrics_server/fc0f1835cd8e95ca86cf3851645486531ae4f12b

Done compiling packages >
loggregator_trafficcontroller/612624b9a615310d1d87053101c0f64b87038ab5
(00:00:14)

Started compiling packages >
etcd/d43feb5cdad0809d109df0afe6cd3c315dc94a61

Done compiling packages >
doppler/4abad345222d75f714fc3b7524c87b1829dcd187 (00:00:17)

Started compiling packages >
metron_agent/4dfd17660ea7654bcdfbb81a15cef3b86ac22aab

Done compiling packages >
gnatsd/0242557ff8fc93c42ff54aa642c524b17ce203eb (00:00:08)

Done compiling packages >
etcd_metrics_server/fc0f1835cd8e95ca86cf3851645486531ae4f12b (00:00:11)

Done compiling packages >
metron_agent/4dfd17660ea7654bcdfbb81a15cef3b86ac22aab (00:00:14)

Done compiling packages >
etcd/d43feb5cdad0809d109df0afe6cd3c315dc94a61 (00:00:30)

Done compiling packages >
rootfs_cflinuxfs2/3232d35298f26bcfb153d964e329fcb42c77051f (00:05:47)

Done compiling packages >
buildpack_binary/e0c8736b073d83c2459519851b5736c288311d92 (00:07:34)

Done compiling packages >
golang1.5/ef3267f8998cebcdc86a477126e79e465753aaf1 (00:07:38)

Started compiling packages >
gorouter/cbbf5f8f71a32cf205d910fe86ef3e5eaa1897f5

Started compiling packages >
hm9000/082bbefc4bf586e9195ce94d21dfc4a1e7c6798f

Done compiling packages >
ruby-2.2.4/dd1b827e6ea0ca7e9fcb95d08ae81fb82f035261 (00:03:58)

Started compiling packages >
dea_next/6193e865f0a87f054d550f0e8c6ff3173e216e0e

Started compiling packages >
warden/0fc9616fdc0263f6093a58d9d4da5bb47e337ec2

Started compiling packages >
nginx_newrelic_plugin/3bf72c30bcda79a44863a2d1a6f932fe0a5486a5

Started compiling packages >
cloud_controller_ng/9ca58fcb7c289431af16f161078d22ada352ff20

Done compiling packages >
nginx_newrelic_plugin/3bf72c30bcda79a44863a2d1a6f932fe0a5486a5
(00:00:12)

Done compiling packages >
gorouter/cbbf5f8f71a32cf205d910fe86ef3e5eaa1897f5 (00:00:29)

Done compiling packages >
hm9000/082bbefc4bf586e9195ce94d21dfc4a1e7c6798f (00:00:29)

Done compiling packages >
ruby-2.1.8/b5bf6af82bae947ad255e426001308acfc2244ee (00:04:00)

Started compiling packages >
collector/9f8dfbcbcfffb124820327ad2ad4fee35e51d236

Started compiling packages >
nats/2230720d1021af6c2c90cd7f3983264ab351043b

Done compiling packages >
warden/0fc9616fdc0263f6093a58d9d4da5bb47e337ec2 (00:00:36)

Done compiling packages >
nats/2230720d1021af6c2c90cd7f3983264ab351043b (00:00:23)

Done compiling packages >
collector/9f8dfbcbcfffb124820327ad2ad4fee35e51d236 (00:00:39)

Done compiling packages >
dea_next/6193e865f0a87f054d550f0e8c6ff3173e216e0e (00:01:51)

Done compiling packages >
postgres-9.4.5/06a51985e0701707b27d45c7a5757171b5cefb8c (00:09:43)

Done compiling packages >
cloud_controller_ng/9ca58fcb7c289431af16f161078d22ada352ff20 (00:03:09)

Done compiling packages (00:10:58)


Started preparing dns > Binding DNS. Done (00:00:00)


Started creating bound missing vms

Started creating bound missing vms > small/0

Started creating bound missing vms > small/1

Started creating bound missing vms > small/2

Started creating bound missing vms > medium/0

Started creating bound missing vms > medium/1

Started creating bound missing vms > medium/2

Started creating bound missing vms > medium/3

Started creating bound missing vms > medium/4

Started creating bound missing vms > medium/5

Started creating bound missing vms > large/0

Started creating bound missing vms > large/1

Started creating bound missing vms > large/2

Started creating bound missing vms > large/3

Started creating bound missing vms > large/4

Started creating bound missing vms > large/5

Started creating bound missing vms > large/6

Started creating bound missing vms > large/7

Started creating bound missing vms > xlarge/0

Started creating bound missing vms > xlarge/1

Done creating bound missing vms > medium/2 (00:01:53)

Done creating bound missing vms > large/4 (00:01:55)

Done creating bound missing vms > medium/3 (00:01:56)

Done creating bound missing vms > xlarge/0 (00:01:56)

Done creating bound missing vms > large/1 (00:01:59)

Done creating bound missing vms > medium/0 (00:02:02)

Done creating bound missing vms > medium/4 (00:02:05)

Done creating bound missing vms > large/6 (00:02:18)

Done creating bound missing vms > large/2 (00:02:20)

Done creating bound missing vms > large/3 (00:02:20)

Done creating bound missing vms > large/5 (00:02:20)

Done creating bound missing vms > medium/5 (00:02:25)

Done creating bound missing vms > xlarge/1 (00:02:29)

Done creating bound missing vms > large/7 (00:02:31)

Done creating bound missing vms > medium/1 (00:02:33)

Done creating bound missing vms > large/0 (00:02:43)

Done creating bound missing vms > small/2 (00:02:51)

Done creating bound missing vms > small/1 (00:03:25)

Done creating bound missing vms > small/0 (00:03:31)

Done creating bound missing vms (00:03:31)


Started binding instance vms

Started binding instance vms > nfs/0

Started binding instance vms > cloud_controller/0

Started binding instance vms > loggregator/0

Started binding instance vms > loggregator_trafficcontroller/0

Started binding instance vms > api_worker/0

Started binding instance vms > dea-spare/0

Started binding instance vms > dea-spare/1

Started binding instance vms > router/0

Started binding instance vms > router/1

Started binding instance vms > haproxy/0

Started binding instance vms > cassandra/0

Started binding instance vms > cassandra_seed/0

Started binding instance vms > zookeeper/0

Started binding instance vms > redis/0

Started binding instance vms > zookeeper/1

Started binding instance vms > zookeeper/2

Started binding instance vms > kafka/1

Started binding instance vms > kafka/0

Started binding instance vms > kafka/2

Done binding instance vms > loggregator_trafficcontroller/0
(00:00:00)

Done binding instance vms > haproxy/0 (00:00:01)

Done binding instance vms > router/1 (00:00:01)

Done binding instance vms > cassandra/0 (00:00:01)

Done binding instance vms > api_worker/0 (00:00:01)

Done binding instance vms > loggregator/0 (00:00:01)

Done binding instance vms > dea-spare/1 (00:00:01)

Done binding instance vms > zookeeper/0 (00:00:01)

Done binding instance vms > cassandra_seed/0 (00:00:01)

Done binding instance vms > kafka/1 (00:00:01)

Done binding instance vms > zookeeper/1 (00:00:01)

Done binding instance vms > cloud_controller/0 (00:00:01)

Done binding instance vms > nfs/0 (00:00:01)

Done binding instance vms > router/0 (00:00:02)

Done binding instance vms > zookeeper/2 (00:00:02)

Done binding instance vms > redis/0 (00:00:02)

Done binding instance vms > kafka/0 (00:00:02)

Done binding instance vms > dea-spare/0 (00:00:02)

Done binding instance vms > kafka/2 (00:00:02)

Done binding instance vms (00:00:02)


Started preparing configuration > Binding configuration. Failed: Error
filling in template `metron_agent.json.erb' for `nfs/0' (line 5: Can't find
property `["metron_agent.deployment"]') (00:00:00)


Error 100: Error filling in template `metron_agent.json.erb' for `nfs/0'
(line 5: Can't find property `["metron_agent.deployment"]')


Task 51 error


For a more detailed error report, run: bosh task 51 --debug

Thanks
Jayaraj

From: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com>
Date: Tuesday, February 16, 2016 at 2:29 PM
To: Rohit Kumar <rokumar(a)pivotal.io>
Cc: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>, "Nithiyasri Gnanasekaran -X (ngnanase -
TECH MAHINDRA LIM at Cisco)" <ngnanase(a)cisco.com>

Subject: Re: [cf-dev] Reg cant find template : metron_agent

Hi Rohit,

The following steps were followed:

- git clone https://github.com/cloudfoundry/cf-release.git
- root(a)vms-inception-vm-2:/opt/cisco/vms-installer/cf-release/scripts# ./update
- root(a)vms-inception-vm-2:/opt/cisco/vms-installer/cf-release# bosh create release releases/cf-230.yml --with-tarball
- root(a)vms-inception-vm-2:/opt/cisco/vms-installer/cf-release/releases# bosh upload release cf-230.tgz
- root(a)vms-inception-vm-2:/opt/cisco/vms-installer/tenant-vmsdev5control/cf-deploy# bosh -n deployment cf-vmsdev5control.yml

  Deployment set to `/opt/cisco/vms-installer/tenant-vmsdev5control/cf-deploy/cf-vmsdev5control.yml'

- root(a)vms-inception-vm-2:/opt/cisco/vms-installer/tenant-vmsdev5control/cf-deploy# bosh -n deploy


I have shared the sample deployment yml file we are using for your
reference.



Thanks

Jayaraj

From: Rohit Kumar <rokumar(a)pivotal.io>
Date: Tuesday, February 16, 2016 at 2:15 PM
To: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com>
Cc: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>
Subject: Re: [cf-dev] Reg cant find template : metron_agent

Can you also list the commands on how you are creating, uploading and
deploying the release?

On Tue, Feb 16, 2016 at 2:42 PM, Jayarajan Ramapurath Kozhummal (jayark)
<jayark(a)cisco.com> wrote:


Thanks a lot, Rohit, for replying so quickly!
I have run the scripts/update command after cloning the cf-release Git
repo.
Please see the command output below:-

root(a)automation-vm-jayark:/opt/cisco/vms-installer/cf-release/src/loggregator#
find . -type d -maxdepth 2

find: warning: you have specified the -maxdepth option after a
non-option argument -type, but options are not positional (-maxdepth
affects tests specified before it as well as those specified after it).
Please specify options before other arguments.


.

./src

./src/doppler

./src/loggregator

./src/trafficcontroller

./src/syslog_drain_binder

./src/bitbucket.org

./src/monitor

./src/matchers

./src/signalmanager

./src/deaagent

./src/tools

./src/truncatingbuffer

./src/profiler

./src/logger

./src/lats

./src/common

./src/integration_tests

./src/metron

./src/github.com

./packages

./packages/doppler

./packages/loggregator_trafficcontroller

./packages/syslog_drain_binder

./packages/dea_logging_agent

./packages/loggregator_common

./packages/loggregator-acceptance-tests

./packages/golang1.4

./packages/metron_agent

./docs

./config

./jobs

./jobs/doppler

./jobs/loggregator_trafficcontroller

./jobs/syslog_drain_binder

./jobs/dea_logging_agent

./jobs/loggregator-acceptance-tests

./jobs/metron_agent

./bin

./git-hooks

./samples

Thanks
Jayaraj

From: Rohit Kumar <rokumar(a)pivotal.io>
Date: Tuesday, February 16, 2016 at 9:30 AM
To: "Discussions about Cloud Foundry projects and the system overall."
<cf-dev(a)lists.cloudfoundry.org>
Cc: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com>
Subject: Re: [cf-dev] Reg cant find template : metron_agent

Did you make sure to run "scripts/update" after cloning the cf-release
repo? Can you run "find . -type d -maxdepth 2" from within the
"src/loggregator" directory in cf-release and reply with what you get as
output?

Rohit

On Tue, Feb 16, 2016 at 1:39 AM, Nithiyasri Gnanasekaran -X (ngnanase -
TECH MAHINDRA LIM at Cisco) <ngnanase(a)cisco.com> wrote:

Hi



I am working on Cloud Foundry and was able to create a development BOSH
release of Cloud Foundry from the following source:

git clone https://github.com/cloudfoundry/cf-release.git (with some
rules added in the haproxy.conf file)



When I tried to deploy the dev release of Cloud Foundry, I got
the following error:



Started preparing deployment

Started preparing deployment > Binding releases. Done (00:00:00)

Started preparing deployment > Binding existing deployment. Done
(00:00:00)

Started preparing deployment > Binding resource pools. Done
(00:00:00)

Started preparing deployment > Binding stemcells. Done (00:00:00)

Started preparing deployment > Binding templates. Failed: Can't
find template `metron_agent' (00:00:00)



Error 190012: Can't find template `metron_agent'



Kindly help me figure out the cause of this error, as it is a
show-stopper for us.



Regards

Nithiyasri





Re: Reg cant find template : metron_agent

Jayarajan Ramapurath Kozhummal (jayark) <jayark@...>
 

Hi Rohit,

We did not generate a fresh manifest for CF version 230. We had a manifest file which was used for deploying CF version 205.
Do we need to generate the manifest file for each CF release? Can we generate the new manifest file for CF version 230 using scripts/generate_deployment_manifest and copy-paste the missing contents into our deployment manifest file?

Thanks
Jayaraj

From: Rohit Kumar <rokumar(a)pivotal.io<mailto:rokumar(a)pivotal.io>>
Date: Tuesday, February 16, 2016 at 5:25 PM
To: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com<mailto:jayark(a)cisco.com>>
Cc: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>>, "Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM at Cisco)" <ngnanase(a)cisco.com<mailto:ngnanase(a)cisco.com>>
Subject: Re: [cf-dev] Reg cant find template : metron_agent

Those properties are used to specify the etcd machines to loggregator. Those typically get auto-filled by spiff and you don't need to specify them explicitly in the properties section [1]. Did you not generate your manifest with the help of `scripts/generate_deployment_manifest` script in cf-release?

Rohit

[1]: https://github.com/cloudfoundry/cf-release/blob/develop/templates/cf-jobs.yml#L768-L769

On Tue, Feb 16, 2016 at 5:55 PM, Jayarajan Ramapurath Kozhummal (jayark) <jayark(a)cisco.com<mailto:jayark(a)cisco.com>> wrote:
Hi Rohit,

I added the cloud_controller IP for the below property and I am running into the below exception now:-


Started preparing configuration > Binding configuration. Failed: Error filling in template `etcd_bosh_utils.sh.erb' for `cloud_controller/0' (line 31: Can't find property `["etcd.cluster"]') (00:00:01)


Error 100: Error filling in template `etcd_bosh_utils.sh.erb' for `cloud_controller/0' (line 31: Can't find property `["etcd.cluster"]')


Task 55 error


For a more detailed error report, run: bosh task 55 --debug



Is there a reference I can use to find what values should be filled in for the missing properties?


Thanks

Jayaraj



From: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com<mailto:jayark(a)cisco.com>>
Date: Tuesday, February 16, 2016 at 4:29 PM

To: Rohit Kumar <rokumar(a)pivotal.io<mailto:rokumar(a)pivotal.io>>
Cc: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>>, "Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM at Cisco)" <ngnanase(a)cisco.com<mailto:ngnanase(a)cisco.com>>
Subject: Re: [cf-dev] Reg cant find template : metron_agent

Thanks Rohit!

That worked for the metron_agent issue. Now I am running into the following issue after filling in the property you mentioned.


Started preparing configuration > Binding configuration. Failed: Error filling in template `metron_agent.json.erb' for `nfs/0' (line 7: Can't find property `["loggregator.etcd.machines"]') (00:00:00)


Error 100: Error filling in template `metron_agent.json.erb' for `nfs/0' (line 7: Can't find property `["loggregator.etcd.machines"]')


Task 54 error


For a more detailed error report, run: bosh task 54 --debug



Nithiya was also running into the same issues. The error posted is after applying some workarounds found on the web.


Regards

Jayaraj


From: Rohit Kumar <rokumar(a)pivotal.io<mailto:rokumar(a)pivotal.io>>
Date: Tuesday, February 16, 2016 at 3:53 PM
To: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com<mailto:jayark(a)cisco.com>>
Cc: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>>, "Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM at Cisco)" <ngnanase(a)cisco.com<mailto:ngnanase(a)cisco.com>>
Subject: Re: [cf-dev] Reg cant find template : metron_agent

OK cool. The error which you are getting now is different from what you had originally posted. You need to include the following property in your deployment and it should get fixed:

properties:
  metron_agent:
    deployment: <name of your deployment>


On Tue, Feb 16, 2016 at 3:41 PM, Jayarajan Ramapurath Kozhummal (jayark) <jayark(a)cisco.com<mailto:jayark(a)cisco.com>> wrote:
Hi Rohit,

Please see the output:-


root(a)vms-inception-vm-2:/opt/cisco/vms-installer/tenant-vmsdev5control/cf-deploy# bosh -n deploy

Acting as user 'admin' on deployment 'cf-vmsdev5control' on 'vms-installdev5-control-66380'

Getting deployment properties from director...

Unable to get properties list from director, trying without it...

Cannot get current deployment information from director, possibly a new deployment


Deploying

---------


Director task 51

Started unknown

Started unknown > Binding deployment. Done (00:00:00)


Started preparing deployment

Started preparing deployment > Binding releases. Done (00:00:00)

Started preparing deployment > Binding existing deployment. Done (00:00:00)

Started preparing deployment > Binding resource pools. Done (00:00:00)

Started preparing deployment > Binding stemcells. Done (00:00:00)

Started preparing deployment > Binding templates. Done (00:00:00)

Started preparing deployment > Binding properties. Done (00:00:00)

Started preparing deployment > Binding unallocated VMs. Done (00:00:01)

Started preparing deployment > Binding instance networks. Done (00:00:00)


Started preparing package compilation > Finding packages to compile. Done (00:00:00)


Started compiling packages

Started compiling packages > rootfs_cflinuxfs2/3232d35298f26bcfb153d964e329fcb42c77051f

Started compiling packages > haproxy/f5d89b125a66892628a8cd61d23be7f9b0d31171

Started compiling packages > uaa/0e15122de61644748d111b619aff4487726f8378

Started compiling packages > golang1.5/ef3267f8998cebcdc86a477126e79e465753aaf1

Started compiling packages > uaa_utils/8ee843cd3e50520398f28541c513ac0d16b00877

Started compiling packages > postgres-9.4.5/06a51985e0701707b27d45c7a5757171b5cefb8c

Started compiling packages > buildpack_binary/e0c8736b073d83c2459519851b5736c288311d92

Started compiling packages > buildpack_staticfile/47c22ec219ca96215c509572f7a59aae55e45535

Started compiling packages > buildpack_php/6dae2301648646cd8ed544af53ff34be0497efe0

Started compiling packages > buildpack_python/a5d5eeb5e255ceb3282424a28c74a4bccd3316e9

Done compiling packages > uaa/0e15122de61644748d111b619aff4487726f8378 (00:02:52)

Started compiling packages > buildpack_go/08a35c7097417bedf06812c7ac8931d950dfae60

Done compiling packages > buildpack_php/6dae2301648646cd8ed544af53ff34be0497efe0 (00:03:02)

Started compiling packages > buildpack_nodejs/da88c1de3e899a27d33c5a8d6e08e151b42a1aa8. Done (00:00:05)

Started compiling packages > buildpack_ruby/d37b44b37b7c95077fd9698879b78561ac0aaf52

Done compiling packages > buildpack_go/08a35c7097417bedf06812c7ac8931d950dfae60 (00:00:37)

Started compiling packages > buildpack_java_offline/f6b99f87508400e9d75926c1546e8d08177072ef

Done compiling packages > buildpack_ruby/d37b44b37b7c95077fd9698879b78561ac0aaf52 (00:00:26)

Started compiling packages > buildpack_java/0dd2a9074cdfee66f56d6a9e958c2b9e1fa9337c. Done (00:00:02)

Started compiling packages > nginx/bf3af6163e13887aacd230bbbc5eff90213ac6af

Done compiling packages > buildpack_java_offline/f6b99f87508400e9d75926c1546e8d08177072ef (00:00:22)

Started compiling packages > ruby-2.2.4/dd1b827e6ea0ca7e9fcb95d08ae81fb82f035261

Done compiling packages > nginx/bf3af6163e13887aacd230bbbc5eff90213ac6af (00:00:33)

Started compiling packages > libpq/14d0b1290ea238243d04dd46d1a9635e6e9812bb

Done compiling packages > buildpack_python/a5d5eeb5e255ceb3282424a28c74a4bccd3316e9 (00:04:10)

Started compiling packages > libmariadb/dcc142dd0798ae557193f08bc46e9bdd97e4c6f3. Done (00:00:02)

Started compiling packages > ruby-2.1.8/b5bf6af82bae947ad255e426001308acfc2244ee

Done compiling packages > uaa_utils/8ee843cd3e50520398f28541c513ac0d16b00877 (00:04:25)

Started compiling packages > etcd-common/a5492fb0ad41a80d2fa083172c0430073213a296

Done compiling packages > libpq/14d0b1290ea238243d04dd46d1a9635e6e9812bb (00:00:18)

Started compiling packages > golang1.4/714698bc352d2a1dbe321376f0676037568147bb

Done compiling packages > etcd-common/a5492fb0ad41a80d2fa083172c0430073213a296 (00:00:02)

Started compiling packages > loggregator_common/e401816a4748292163679fafcbd8f818ed8154a5

Done compiling packages > haproxy/f5d89b125a66892628a8cd61d23be7f9b0d31171 (00:04:28)

Started compiling packages > debian_nfs_server/aac05f22582b2f9faa6840da056084ed15772594

Done compiling packages > loggregator_common/e401816a4748292163679fafcbd8f818ed8154a5 (00:00:03)

Started compiling packages > common/e401816a4748292163679fafcbd8f818ed8154a5

Done compiling packages > debian_nfs_server/aac05f22582b2f9faa6840da056084ed15772594 (00:00:04)

Done compiling packages > common/e401816a4748292163679fafcbd8f818ed8154a5 (00:00:02)

Done compiling packages > golang1.4/714698bc352d2a1dbe321376f0676037568147bb (00:00:15)

Started compiling packages > dea_logging_agent/3179906f4e18fa39bf8baa60c92ee51fb7ce4e22

Started compiling packages > loggregator_trafficcontroller/612624b9a615310d1d87053101c0f64b87038ab5

Started compiling packages > doppler/4abad345222d75f714fc3b7524c87b1829dcd187

Done compiling packages > dea_logging_agent/3179906f4e18fa39bf8baa60c92ee51fb7ce4e22 (00:00:09)

Started compiling packages > gnatsd/0242557ff8fc93c42ff54aa642c524b17ce203eb

Done compiling packages > buildpack_staticfile/47c22ec219ca96215c509572f7a59aae55e45535 (00:04:51)

Started compiling packages > etcd_metrics_server/fc0f1835cd8e95ca86cf3851645486531ae4f12b

Done compiling packages > loggregator_trafficcontroller/612624b9a615310d1d87053101c0f64b87038ab5 (00:00:14)

Started compiling packages > etcd/d43feb5cdad0809d109df0afe6cd3c315dc94a61

Done compiling packages > doppler/4abad345222d75f714fc3b7524c87b1829dcd187 (00:00:17)

Started compiling packages > metron_agent/4dfd17660ea7654bcdfbb81a15cef3b86ac22aab

Done compiling packages > gnatsd/0242557ff8fc93c42ff54aa642c524b17ce203eb (00:00:08)

Done compiling packages > etcd_metrics_server/fc0f1835cd8e95ca86cf3851645486531ae4f12b (00:00:11)

Done compiling packages > metron_agent/4dfd17660ea7654bcdfbb81a15cef3b86ac22aab (00:00:14)

Done compiling packages > etcd/d43feb5cdad0809d109df0afe6cd3c315dc94a61 (00:00:30)

Done compiling packages > rootfs_cflinuxfs2/3232d35298f26bcfb153d964e329fcb42c77051f (00:05:47)

Done compiling packages > buildpack_binary/e0c8736b073d83c2459519851b5736c288311d92 (00:07:34)

Done compiling packages > golang1.5/ef3267f8998cebcdc86a477126e79e465753aaf1 (00:07:38)

Started compiling packages > gorouter/cbbf5f8f71a32cf205d910fe86ef3e5eaa1897f5

Started compiling packages > hm9000/082bbefc4bf586e9195ce94d21dfc4a1e7c6798f

Done compiling packages > ruby-2.2.4/dd1b827e6ea0ca7e9fcb95d08ae81fb82f035261 (00:03:58)

Started compiling packages > dea_next/6193e865f0a87f054d550f0e8c6ff3173e216e0e

Started compiling packages > warden/0fc9616fdc0263f6093a58d9d4da5bb47e337ec2

Started compiling packages > nginx_newrelic_plugin/3bf72c30bcda79a44863a2d1a6f932fe0a5486a5

Started compiling packages > cloud_controller_ng/9ca58fcb7c289431af16f161078d22ada352ff20

Done compiling packages > nginx_newrelic_plugin/3bf72c30bcda79a44863a2d1a6f932fe0a5486a5 (00:00:12)

Done compiling packages > gorouter/cbbf5f8f71a32cf205d910fe86ef3e5eaa1897f5 (00:00:29)

Done compiling packages > hm9000/082bbefc4bf586e9195ce94d21dfc4a1e7c6798f (00:00:29)

Done compiling packages > ruby-2.1.8/b5bf6af82bae947ad255e426001308acfc2244ee (00:04:00)

Started compiling packages > collector/9f8dfbcbcfffb124820327ad2ad4fee35e51d236

Started compiling packages > nats/2230720d1021af6c2c90cd7f3983264ab351043b

Done compiling packages > warden/0fc9616fdc0263f6093a58d9d4da5bb47e337ec2 (00:00:36)

Done compiling packages > nats/2230720d1021af6c2c90cd7f3983264ab351043b (00:00:23)

Done compiling packages > collector/9f8dfbcbcfffb124820327ad2ad4fee35e51d236 (00:00:39)

Done compiling packages > dea_next/6193e865f0a87f054d550f0e8c6ff3173e216e0e (00:01:51)

Done compiling packages > postgres-9.4.5/06a51985e0701707b27d45c7a5757171b5cefb8c (00:09:43)

Done compiling packages > cloud_controller_ng/9ca58fcb7c289431af16f161078d22ada352ff20 (00:03:09)

Done compiling packages (00:10:58)


Started preparing dns > Binding DNS. Done (00:00:00)


Started creating bound missing vms

Started creating bound missing vms > small/0

Started creating bound missing vms > small/1

Started creating bound missing vms > small/2

Started creating bound missing vms > medium/0

Started creating bound missing vms > medium/1

Started creating bound missing vms > medium/2

Started creating bound missing vms > medium/3

Started creating bound missing vms > medium/4

Started creating bound missing vms > medium/5

Started creating bound missing vms > large/0

Started creating bound missing vms > large/1

Started creating bound missing vms > large/2

Started creating bound missing vms > large/3

Started creating bound missing vms > large/4

Started creating bound missing vms > large/5

Started creating bound missing vms > large/6

Started creating bound missing vms > large/7

Started creating bound missing vms > xlarge/0

Started creating bound missing vms > xlarge/1

Done creating bound missing vms > medium/2 (00:01:53)

Done creating bound missing vms > large/4 (00:01:55)

Done creating bound missing vms > medium/3 (00:01:56)

Done creating bound missing vms > xlarge/0 (00:01:56)

Done creating bound missing vms > large/1 (00:01:59)

Done creating bound missing vms > medium/0 (00:02:02)

Done creating bound missing vms > medium/4 (00:02:05)

Done creating bound missing vms > large/6 (00:02:18)

Done creating bound missing vms > large/2 (00:02:20)

Done creating bound missing vms > large/3 (00:02:20)

Done creating bound missing vms > large/5 (00:02:20)

Done creating bound missing vms > medium/5 (00:02:25)

Done creating bound missing vms > xlarge/1 (00:02:29)

Done creating bound missing vms > large/7 (00:02:31)

Done creating bound missing vms > medium/1 (00:02:33)

Done creating bound missing vms > large/0 (00:02:43)

Done creating bound missing vms > small/2 (00:02:51)

Done creating bound missing vms > small/1 (00:03:25)

Done creating bound missing vms > small/0 (00:03:31)

Done creating bound missing vms (00:03:31)


Started binding instance vms

Started binding instance vms > nfs/0

Started binding instance vms > cloud_controller/0

Started binding instance vms > loggregator/0

Started binding instance vms > loggregator_trafficcontroller/0

Started binding instance vms > api_worker/0

Started binding instance vms > dea-spare/0

Started binding instance vms > dea-spare/1

Started binding instance vms > router/0

Started binding instance vms > router/1

Started binding instance vms > haproxy/0

Started binding instance vms > cassandra/0

Started binding instance vms > cassandra_seed/0

Started binding instance vms > zookeeper/0

Started binding instance vms > redis/0

Started binding instance vms > zookeeper/1

Started binding instance vms > zookeeper/2

Started binding instance vms > kafka/1

Started binding instance vms > kafka/0

Started binding instance vms > kafka/2

Done binding instance vms > loggregator_trafficcontroller/0 (00:00:00)

Done binding instance vms > haproxy/0 (00:00:01)

Done binding instance vms > router/1 (00:00:01)

Done binding instance vms > cassandra/0 (00:00:01)

Done binding instance vms > api_worker/0 (00:00:01)

Done binding instance vms > loggregator/0 (00:00:01)

Done binding instance vms > dea-spare/1 (00:00:01)

Done binding instance vms > zookeeper/0 (00:00:01)

Done binding instance vms > cassandra_seed/0 (00:00:01)

Done binding instance vms > kafka/1 (00:00:01)

Done binding instance vms > zookeeper/1 (00:00:01)

Done binding instance vms > cloud_controller/0 (00:00:01)

Done binding instance vms > nfs/0 (00:00:01)

Done binding instance vms > router/0 (00:00:02)

Done binding instance vms > zookeeper/2 (00:00:02)

Done binding instance vms > redis/0 (00:00:02)

Done binding instance vms > kafka/0 (00:00:02)

Done binding instance vms > dea-spare/0 (00:00:02)

Done binding instance vms > kafka/2 (00:00:02)

Done binding instance vms (00:00:02)


Started preparing configuration > Binding configuration. Failed: Error filling in template `metron_agent.json.erb' for `nfs/0' (line 5: Can't find property `["metron_agent.deployment"]') (00:00:00)


Error 100: Error filling in template `metron_agent.json.erb' for `nfs/0' (line 5: Can't find property `["metron_agent.deployment"]')


Task 51 error


For a more detailed error report, run: bosh task 51 --debug

Thanks
Jayaraj

From: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com<mailto:jayark(a)cisco.com>>
Date: Tuesday, February 16, 2016 at 2:29 PM
To: Rohit Kumar <rokumar(a)pivotal.io<mailto:rokumar(a)pivotal.io>>
Cc: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>>, "Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM at Cisco)" <ngnanase(a)cisco.com<mailto:ngnanase(a)cisco.com>>

Subject: Re: [cf-dev] Reg cant find template : metron_agent

Hi Rohit,

The following steps were followed:

* git clone https://github.com/cloudfoundry/cf-release.git
* root(a)vms-inception-vm-2:/opt/cisco/vms-installer/cf-release/scripts# ./update
* root(a)vms-inception-vm-2:/opt/cisco/vms-installer/cf-release# bosh create release releases/cf-230.yml --with-tarball
* root(a)vms-inception-vm-2:/opt/cisco/vms-installer/cf-release/releases# bosh upload release cf-230.tgz
* root(a)vms-inception-vm-2:/opt/cisco/vms-installer/tenant-vmsdev5control/cf-deploy# bosh -n deployment cf-vmsdev5control.yml

  Deployment set to `/opt/cisco/vms-installer/tenant-vmsdev5control/cf-deploy/cf-vmsdev5control.yml'

* root(a)vms-inception-vm-2:/opt/cisco/vms-installer/tenant-vmsdev5control/cf-deploy# bosh -n deploy


I have shared the sample deployment yml file we are using for your reference.



Thanks

Jayaraj

From: Rohit Kumar <rokumar(a)pivotal.io<mailto:rokumar(a)pivotal.io>>
Date: Tuesday, February 16, 2016 at 2:15 PM
To: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com<mailto:jayark(a)cisco.com>>
Cc: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>>
Subject: Re: [cf-dev] Reg cant find template : metron_agent

Can you also list the commands on how you are creating, uploading and deploying the release?

On Tue, Feb 16, 2016 at 2:42 PM, Jayarajan Ramapurath Kozhummal (jayark) <jayark(a)cisco.com<mailto:jayark(a)cisco.com>> wrote:

Thanks a lot, Rohit, for replying so quickly!
I have run the scripts/update command after cloning the cf-release Git repo.
Please see the command output below:-


root(a)automation-vm-jayark:/opt/cisco/vms-installer/cf-release/src/loggregator# find . -type d -maxdepth 2

find: warning: you have specified the -maxdepth option after a non-option argument -type, but options are not positional (-maxdepth affects tests specified before it as well as those specified after it). Please specify options before other arguments.


.

./src

./src/doppler

./src/loggregator

./src/trafficcontroller

./src/syslog_drain_binder

./src/bitbucket.org<http://bitbucket.org>

./src/monitor

./src/matchers

./src/signalmanager

./src/deaagent

./src/tools

./src/truncatingbuffer

./src/profiler

./src/logger

./src/lats

./src/common

./src/integration_tests

./src/metron

./src/github.com<http://github.com>

./packages

./packages/doppler

./packages/loggregator_trafficcontroller

./packages/syslog_drain_binder

./packages/dea_logging_agent

./packages/loggregator_common

./packages/loggregator-acceptance-tests

./packages/golang1.4

./packages/metron_agent

./docs

./config

./jobs

./jobs/doppler

./jobs/loggregator_trafficcontroller

./jobs/syslog_drain_binder

./jobs/dea_logging_agent

./jobs/loggregator-acceptance-tests

./jobs/metron_agent

./bin

./git-hooks

./samples

Thanks
Jayaraj

From: Rohit Kumar <rokumar(a)pivotal.io<mailto:rokumar(a)pivotal.io>>
Date: Tuesday, February 16, 2016 at 9:30 AM
To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>>
Cc: Jayarajan Ramapurath Kozhummal <jayark(a)cisco.com<mailto:jayark(a)cisco.com>>
Subject: Re: [cf-dev] Reg cant find template : metron_agent

Did you make sure to run "scripts/update" after cloning the cf-release repo? Can you run "find . -type d -maxdepth 2" from within the "src/loggregator" directory in cf-release and reply with what you get as output?

Rohit

On Tue, Feb 16, 2016 at 1:39 AM, Nithiyasri Gnanasekaran -X (ngnanase - TECH MAHINDRA LIM at Cisco) <ngnanase(a)cisco.com<mailto:ngnanase(a)cisco.com>> wrote:
Hi

I am working on Cloud Foundry and was able to create a development BOSH release of Cloud Foundry from the following source:
git clone https://github.com/cloudfoundry/cf-release.git (with some rules added in the haproxy.conf file)

When I tried to deploy the dev release of Cloud Foundry, I got the following error:

Started preparing deployment
Started preparing deployment > Binding releases. Done (00:00:00)
Started preparing deployment > Binding existing deployment. Done (00:00:00)
Started preparing deployment > Binding resource pools. Done (00:00:00)
Started preparing deployment > Binding stemcells. Done (00:00:00)
Started preparing deployment > Binding templates. Failed: Can't find template `metron_agent' (00:00:00)

Error 190012: Can't find template `metron_agent'

Kindly help me figure out the cause of this error, as it is a show-stopper for us.

Regards
Nithiyasri
