Multi-zone, multi-subnet deployment across multiple instances of the same job in different zones


Oded Gold
 

Hi All

I am trying to do a multi-zone deployment. I have divided my VPC into different subnets, each in a different zone, but I am not sure of the syntax or how to apply the subnets to each instance of the same job.
Has anyone done this before and knows how to set it up? Thank you in advance.

This is what I have. I am not sure if I have even divided the networks and resource pools correctly. Please help, and thank you again.

Subnet-A: 10.3.1.128/27
Subnet-C: 10.3.1.160/27
Subnet-D: 10.3.1.192/27
Subnet-E: 10.3.1.224/27

networks:
- name: subnet-A
  type: manual
  subnets:
  - range: 10.3.1.128/27
    gateway: 10.3.1.1
    dns: [10.3.1.10, 10.3.1.2]
    reserved: [10.3.1.2-10.3.1.10]
    cloud_properties:
      subnet: subnet-A
      security_groups:
      - sg-1xxx67

- name: subnet-C
  type: manual
  subnets:
  - range: 10.3.1.160/27
    gateway: 10.3.1.1
    dns: [10.3.1.10, 10.3.1.2]
    reserved: [10.3.1.2-10.3.1.10]
    cloud_properties:
      subnet: subnet-C
      security_groups:
      - sg-1xxx67

- name: subnet-D
  type: manual
  subnets:
  - range: 10.3.1.192/27
    gateway: 10.3.1.1
    dns: [10.3.1.10, 10.3.1.2]
    reserved: [10.3.1.2-10.3.1.10]
    cloud_properties:
      subnet: subnet-D
      security_groups:
      - sg-1xxx67

- name: subnet-E
  type: manual
  subnets:
  - range: 10.3.1.224/27
    gateway: 10.3.1.1
    dns: [10.3.1.10, 10.3.1.2]
    reserved: [10.3.1.2-10.3.1.10]
    cloud_properties:
      subnet: subnet-E
      security_groups:
      - sg-1xxx67

- name: public
  type: vip

- name: infra_network
  type: vip
  cloud_properties:
    security_groups:
    - sg-1xxxx67
    - sg-fxxxx884

resource_pools:
- name: c3-large-1a
  network: subnet-A
  size: 5
  stemcell:
    name: bosh-aws-xen-ubuntu-trusty-go_agent
    version: '0000'
  cloud_properties:
    instance_type: c3.large
    availability_zone: us-east-1a

- name: c3-large-1c
  network: subnet-C
  size: 5
  stemcell:
    name: bosh-aws-xen-ubuntu-trusty-go_agent
    version: '0000'
  cloud_properties:
    instance_type: c3.large
    availability_zone: us-east-1c

- name: c3-large-1d
  network: subnet-D
  size: 5
  stemcell:
    name: bosh-aws-xen-ubuntu-trusty-go_agent
    version: '0000'
  cloud_properties:
    instance_type: c3.large
    availability_zone: us-east-1d

- name: c3-large-1e
  network: subnet-E
  size: 5
  stemcell:
    name: bosh-aws-xen-ubuntu-trusty-go_agent
    version: '0000'
  cloud_properties:
    instance_type: c3.large
    availability_zone: us-east-1e

jobs:
- name: infra
  templates:
  - {name: bss_redis, release: bss}
  - {name: activemq, release: bss}
  - {name: router, release: bss}
  - {name: uaa, release: bss}
  instances: 1
  resource_pool: c3-large
  persistent_disk: 20000
  networks:
  - name: default
    default: [dns, gateway]
  - name: infra_network
    static_ips:
    - 52.XXX.XXX.8
  properties:
    db: databases

- name: backend
  templates:
  - {name: account_manager, release: bss}
  - {name: communication_service, release: bss}
  - {name: service_manager, release: bss}
  - {name: audit_service, release: bss}
  instances: 2 # Want to add the second instance to a diff zone
  resource_pool: c3-large-1a
  persistent_disk: 8192
  networks:
  - name: subnet-A # I assume the second instance also needs to be in a different resource pool
    default: [dns, gateway]
  - name: infra_network
    static_ips:
    - 52.XXX.XXX74
    - 52.XXX.XXX.31


Aitken, Neil S
 

We ran into a similar problem and have not found a concise way to solve this in config.

We basically have duplicate job config by zone, e.g.

jobs:
- name: infra_zone1
  <dupe config>
  resource_pool: zone1

- name: infra_zone2
  <dupe config>
  resource_pool: zone2




James Hunt <james@...>
 

On Dec 30, 2015, at 3:00 PM, Aitken, Neil S <neil.s.aitken(a)jpmchase.com> wrote:

We ran into a similar problem and have not found a concise way to solve this in config.

We basically have duplicate job config by zone, e.g.

jobs:
- name: infra_zone1
  <dupe config>
  resource_pool: zone1

- name: infra_zone2
  <dupe config>
  resource_pool: zone2

If you use a tool like Spruce (https://github.com/geofffranks/spruce),
you can use the `(( inject ... ))` operator to remove the explicit
duplication and DRY up your manifest.

Here's an example using Oded's template below:

http://play.spruce.cf/#cb6499ae2083cfb0dd6f
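
In case the playground link goes away, the rough shape of the idea is below. This is a reconstruction from memory of the Spruce docs, not Oded's actual manifest: the meta block, job names, and the carrier key holding the (( inject )) call are all my own illustration, and the README is the authority on the exact semantics. As I recall it, Spruce merges the referenced map's keys into the enclosing map, locally defined keys win, and the carrier key itself is dropped:

meta:
  backend_job:                               # shared job config, written once
    templates:
    - {name: account_manager, release: bss}
    - {name: service_manager, release: bss}
    instances: 1
    persistent_disk: 8192

jobs:
- name: backend_z1
  from_base: (( inject meta.backend_job ))   # carrier key name is illustrative
  resource_pool: c3-large-1a
  networks:
  - {name: subnet-A, default: [dns, gateway]}

- name: backend_z2
  from_base: (( inject meta.backend_job ))
  resource_pool: c3-large-1c
  networks:
  - {name: subnet-C, default: [dns, gateway]}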

---
jrh


Amit Kumar Gupta
 

Yes, multi-zone deployments are standard. BOSH is working on making multi-AZ
anti-affinity striping of jobs expressible in terms of first-class
primitives, which you can read about here:
https://github.com/cloudfoundry/bosh-notes/blob/master/availability-zones.md.
The goal is that instead of having a duplicate job for each zone, you have
one job definition and configuration, and BOSH will know to distribute
instances across zones.
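
For a sense of where that is headed: under the proposal, AZs and AZ-aware
subnets move into a shared cloud config, and a single job lists the AZs it
should be striped across. The sketch below reuses Oded's subnets; the key
names follow the proposal and may change before it ships, and the gateway
addresses are my assumption of the usual AWS first-address-in-the-subnet
convention:

# cloud config: AZs and subnets defined once
azs:
- name: z1
  cloud_properties: {availability_zone: us-east-1a}
- name: z2
  cloud_properties: {availability_zone: us-east-1c}

networks:
- name: default
  type: manual
  subnets:
  - range: 10.3.1.128/27
    gateway: 10.3.1.129   # assumed: AWS reserves the first host address as gateway
    az: z1
    dns: [10.3.1.10, 10.3.1.2]
    cloud_properties: {subnet: subnet-A}
  - range: 10.3.1.160/27
    gateway: 10.3.1.161
    az: z2
    dns: [10.3.1.10, 10.3.1.2]
    cloud_properties: {subnet: subnet-C}

# deployment manifest: one definition, BOSH stripes the instances across z1/z2
instance_groups:
- name: backend
  instances: 2
  azs: [z1, z2]
  vm_type: c3.large        # vm_types/stemcells would also live in the cloud config
  stemcell: default
  networks: [{name: default}]
  jobs:
  - {name: account_manager, release: bss}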

Currently, a common pattern is something like the example at the bottom of
this email (AWS); you can compare against it to see whether you have the
syntax correct. Full details on the manifest syntax/schema are documented on
bosh.io (http://bosh.io/docs/deployment-manifest.html).

As for current solutions to avoid duplication, you have several options.
First, YAML itself supports references via alias nodes (
http://www.yaml.org/spec/1.2/spec.html#id2786196) and node anchors (
http://www.yaml.org/spec/1.2/spec.html#id2785586). Another option would be
to write some simple scripts in a language like Ruby or Python that has
good dynamic support for parsing and writing YAML. Finally, there are
tools like spruce (https://github.com/geofffranks/spruce) and spiff (
https://github.com/cloudfoundry-incubator/spiff/) which allow you to merge
YAML "templates" with YAML "stubs", and the templates contain templating
language syntax which can take values from your stubs and populate them in
multiple different places within the template.
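
To illustrate the first option: plain YAML anchors and merge keys can already
factor out the shared parts of a per-zone job pair. The job names below are
illustrative rather than taken from Oded's manifest; merge keys (<<) are a
YAML 1.1 feature supported by the common parsers, including Ruby's:

jobs:
- name: backend_z1
  <<: &backend_defaults               # anchor the shared keys on the first copy
    templates:
    - {name: account_manager, release: bss}
    - {name: service_manager, release: bss}
    instances: 1
    persistent_disk: 8192
  resource_pool: c3-large-1a
  networks:
  - {name: subnet-A, default: [dns, gateway]}

- name: backend_z2
  <<: *backend_defaults               # alias re-merges them into the second copy
  resource_pool: c3-large-1c
  networks:
  - {name: subnet-C, default: [dns, gateway]}

The AWS multi-zone example mentioned above follows: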

networks:
- name: nw_z1
  subnets:
  - range: 10.10.16.0/20
    dns: [10.10.0.2]
    gateway: 10.10.16.1
    reserved: [10.10.16.2 - 10.10.16.9]
    static: [10.10.16.10 - 10.10.16.255]
    cloud_properties:
      security_groups: [MY_SECURITY_GROUP]
      subnet: SUBNET_ID_1
- name: nw_z2
  subnets:
  - range: 10.10.80.0/20
    dns: [10.10.0.2]
    gateway: 10.10.80.1
    reserved: [10.10.80.2 - 10.10.80.9]
    static: [10.10.80.10 - 10.10.80.255]
    cloud_properties:
      security_groups: [MY_SECURITY_GROUP]
      subnet: SUBNET_ID_2

resource_pools:
- name: small_z1
  network: nw_z1
  stemcell:
    name: bosh-aws-xen-hvm-ubuntu-trusty-go_agent
    version: latest
  cloud_properties:
    availability_zone: ZONE_1
    instance_type: m3.medium
- name: small_z2
  network: nw_z2
  stemcell:
    name: bosh-aws-xen-hvm-ubuntu-trusty-go_agent
    version: latest
  cloud_properties:
    availability_zone: ZONE_2
    instance_type: m3.medium

jobs:
- name: my_job_z1
  instances: 1
  networks:
  - name: nw_z1
    static_ips:
    - 10.10.16.11
  resource_pool: small_z1
  templates:
  - name: my_job
    release: my_release
- name: my_job_z2
  instances: 1
  networks:
  - name: nw_z2
    static_ips:
    - 10.10.80.11
  resource_pool: small_z2
  templates:
  - name: my_job
    release: my_release

Best,
Amit
