automatically registering gorouters with OpenStack LB

Koper, Dies <diesk@...>
 

Hi,

I'm aware of the bosh AWS CPI's `elbs` option for listing the ELBs that gorouters should be registered with.
I'm looking for a similar option for OpenStack.
How do people ensure that when gorouters are e.g. rolled during an upgrade, the new VMs are automatically registered with the load balancer?

We use an LB we developed ourselves, which is hooked into OpenStack using its pluggable LB infrastructure.

Cheers,
Dies Koper


Re: bosh-init deploy to AWS problems

Dmitriy Kalinin
 

Make sure to set the `region` configuration (see
http://bosh.io/docs/init-aws.html for an example manifest with the region set).
It defaults to us-east-1; yours should be us-west-2.

Also, if this is a new environment, make sure to run `bosh-init delete` before
setting the region in the manifest, since bosh-init has already imported the
stemcell into us-east-1 -- "Creating vm with stemcell cid 'ami-5728e73c light'" (
http://thecloudmarket.com/image/ami-5728e73c--bosh-d5ed0258-80f7-4514-a195-eff206e90c41#/definition
).
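
For reference, the region lives under the CPI's aws properties in the bosh-init manifest; a minimal sketch (key names follow the bosh.io/docs/init-aws.html example, placeholder values are illustrative):

cloud_provider:
  template: {name: aws_cpi, release: bosh-aws-cpi}
  properties:
    aws:
      access_key_id: ACCESS-KEY-ID         # placeholder
      secret_access_key: SECRET-ACCESS-KEY # placeholder
      default_key_name: bosh
      default_security_groups: [bosh]
      region: us-west-2                    # must match the region that contains subnet-135e0b76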

On Mon, Sep 14, 2015 at 4:04 PM, sean d <ssdowd(a)gmail.com> wrote:

We are trying to do a bosh-init deploy bosh.yml onto AWS (using
http://bosh.io/docs/init-aws.html) and keep getting this as a result:

Command 'deploy' failed:
Deploying:
Creating instance 'bosh/0':
Creating VM:
Creating vm with stemcell cid 'ami-5728e73c light':
CPI 'create_vm' method responded with error:
CmdError{"type":"Unknown","message":"The subnet ID 'subnet-135e0b76' does
not exist","ok_to_retry":false}

But subnet-135e0b76 clearly exists (it's visible via the AWS console and
via 'aws ec2 describe-subnets'). The availability zone (us-west-2a) also
matches bosh.yml.

"Subnets": [
{
"VpcId": "vpc-92cb8ff7",
"Tags": [
...
],
"CidrBlock": "10.0.0.0/24",
"MapPublicIpOnLaunch": false,
"DefaultForAz": false,
"State": "available",
"AvailabilityZone": "us-west-2a",
"SubnetId": "subnet-135e0b76",
"AvailableIpAddressCount": 251
},

Any suggestions?


bosh-init deploy to AWS problems

Sean Dowd <ssdowd@...>
 

We are trying to do a bosh-init deploy bosh.yml onto AWS (using http://bosh.io/docs/init-aws.html) and keep getting this as a result:

Command 'deploy' failed:
Deploying:
Creating instance 'bosh/0':
Creating VM:
Creating vm with stemcell cid 'ami-5728e73c light':
CPI 'create_vm' method responded with error: CmdError{"type":"Unknown","message":"The subnet ID 'subnet-135e0b76' does not exist","ok_to_retry":false}

But subnet-135e0b76 clearly exists (it's visible via the AWS console and via 'aws ec2 describe-subnets'). The availability zone (us-west-2a) also matches bosh.yml.

"Subnets": [
{
"VpcId": "vpc-92cb8ff7",
"Tags": [
...
],
"CidrBlock": "10.0.0.0/24",
"MapPublicIpOnLaunch": false,
"DefaultForAz": false,
"State": "available",
"AvailabilityZone": "us-west-2a",
"SubnetId": "subnet-135e0b76",
"AvailableIpAddressCount": 251
},

Any suggestions?


Re: Cannot start app in new vsphere cf deployment

Ramesh Sambandan
 

awesome!
I changed the property and redeployed.
It works!

Thank you very much.

-Ramesh


Re: Cannot start app in new vsphere cf deployment

CF Runtime
 

Hi Ramesh,

Looking at the original logs you provided, we see the following error when
the CC tries to talk to HM9000:

"SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B:
certificate verify failed"

We're assuming that the certificate used by ha_proxy is a self-signed
cert? Setting "ssl.skip_cert_verify" to true in your manifest should
make the CC ignore the fact that it is self-signed.
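
A minimal sketch of where that property typically sits in a cf-release style manifest (assuming the global properties block; adjust to wherever your templates put it):

properties:
  ssl:
    skip_cert_verify: true   # allow components (e.g. CC -> HM9000) to accept self-signed certs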

Joseph & Dan
OSS Release Integration team

On Mon, Sep 14, 2015 at 9:03 AM, Ramesh Sambandan <rsamban(a)gmail.com> wrote:

Amit,

Below are my cf trace output.
guests-MacBook-Pro:apps rsamban$ cf logs gs --recent

REQUEST: [2015-09-14T09:39:28-06:00]
GET
/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/apps?q=name%3Ags&inline-relations-depth=1
HTTP/1.1
Host: api.192.168.1.103.xip.io
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: go-cli 6.12.3-c0c9a03 / darwin



RESPONSE: [2015-09-14T09:39:28-06:00]
HTTP/1.1 200 OK
Content-Length: 5311
Content-Type: application/json;charset=utf-8
Date: Mon, 14 Sep 2015 15:39:28 GMT
Server: nginx
X-Cf-Requestid: 3d46e09a-5ef4-4f0d-5793-8d145a1b91a2
X-Content-Type-Options: nosniff
X-Vcap-Request-Id:
eab0cd51-7f04-4745-6ebf-0461df755620::4395680b-f74f-4325-a952-6d9fb974e3ce

{
"total_results": 1,
"total_pages": 1,
"prev_url": null,
"next_url": null,
"resources": [
{
"metadata": {
"guid": "48d31982-7e60-4de2-9d33-fdbf291ec779",
"url": "/v2/apps/48d31982-7e60-4de2-9d33-fdbf291ec779",
"created_at": "2015-09-14T13:15:50Z",
"updated_at": "2015-09-14T14:02:52Z"
},
"entity": {
"name": "gs",
"production": false,
"space_guid": "6ecadc21-314d-4f93-8afb-b646cd384826",
"stack_guid": "4d62cffe-549a-4952-8fe7-977cc2d562d8",
"buildpack": null,
"detected_buildpack": "java-buildpack=v3.1.1-
https://github.com/cloudfoundry/java-buildpack#7a538fb java-main
open-jdk-like-jre=1.8.0_60 open-jdk-like-memory-calculator=1.1.1_RELEASE
spring-auto-reconfiguration=1.10.0_RELEASE",
"environment_json": {

},
"memory": 1024,
"instances": 1,
"disk_quota": 1024,
"state": "STARTED",
"version": "40963450-8b7d-4478-bb84-682dee103a08",
"command": null,
"console": false,
"debug": null,
"staging_task_id": "0bd355ee625b4f1e814b7e1bc4cd0157",
"package_state": "STAGED",
"health_check_type": "port",
"health_check_timeout": null,
"staging_failed_reason": "StagingTimeExpired",
"staging_failed_description": null,
"diego": false,
"docker_image": null,
"package_updated_at": "2015-09-14T13:16:04Z",
"detected_start_command":
"CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-1.1.1_RELEASE
-memorySizes=metaspace:64m..
-memoryWeights=heap:75,metaspace:10,stack:5,native:10
-totMemory=$MEMORY_LIMIT) && SERVER_PORT=$PORT
$PWD/.java-buildpack/open_jdk_jre/bin/java -cp
$PWD/.:$PWD/.java-buildpack/spring_auto_reconfiguration/spring_auto_reconfiguration-1.10.0_RELEASE.jar
-Djava.io.tmpdir=$TMPDIR
-XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh
$CALCULATED_MEMORY org.springframework.boot.loader.JarLauncher",
"enable_ssh": true,
"docker_credentials_json": {
"redacted_message": "[PRIVATE DATA HIDDEN]"
},
"space_url": "/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826",
"space": {
"metadata": {
"guid": "6ecadc21-314d-4f93-8afb-b646cd384826",
"url": "/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826",
"created_at": "2015-09-14T13:14:52Z",
"updated_at": null
},
"entity": {
"name": "dev",
"organization_guid": "6ee4b17e-ca40-47f2-b949-c7692324112a",
"space_quota_definition_guid": null,
"allow_ssh": true,
"organization_url":
"/v2/organizations/6ee4b17e-ca40-47f2-b949-c7692324112a",
"developers_url":
"/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/developers",
"managers_url":
"/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/managers",
"auditors_url":
"/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/auditors",
"apps_url":
"/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/apps",
"routes_url":
"/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/routes",
"domains_url":
"/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/domains",
"service_instances_url":
"/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/service_instances",
"app_events_url":
"/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/app_events",
"events_url":
"/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/events",
"security_groups_url":
"/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/security_groups"
}
},
"stack_url": "/v2/stacks/4d62cffe-549a-4952-8fe7-977cc2d562d8",
"stack": {
"metadata": {
"guid": "4d62cffe-549a-4952-8fe7-977cc2d562d8",
"url": "/v2/stacks/4d62cffe-549a-4952-8fe7-977cc2d562d8",
"created_at": "2015-09-14T05:55:54Z",
"updated_at": null
},
"entity": {
"name": "cflinuxfs2",
"description": "Cloud Foundry Linux-based filesystem"
}
},
"events_url":
"/v2/apps/48d31982-7e60-4de2-9d33-fdbf291ec779/events",
"service_bindings_url":
"/v2/apps/48d31982-7e60-4de2-9d33-fdbf291ec779/service_bindings",
"service_bindings": [

],
"routes_url":
"/v2/apps/48d31982-7e60-4de2-9d33-fdbf291ec779/routes",
"routes": [
{
"metadata": {
"guid": "4b3968ca-8821-496b-b4e9-47b54303f92f",
"url": "/v2/routes/4b3968ca-8821-496b-b4e9-47b54303f92f",
"created_at": "2015-09-14T13:15:50Z",
"updated_at": null
},
"entity": {
"host": "gs",
"path": "",
"domain_guid": "a6cc2662-2771-4737-89de-992a2d606997",
"space_guid": "6ecadc21-314d-4f93-8afb-b646cd384826",
"domain_url":
"/v2/domains/a6cc2662-2771-4737-89de-992a2d606997",
"space_url":
"/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826",
"apps_url":
"/v2/routes/4b3968ca-8821-496b-b4e9-47b54303f92f/apps"
}
}
]
}
}
]
}
Connected, dumping recent logs for app gs in org yesVin / space dev as
admin...


REQUEST: [2015-09-14T09:39:29-06:00]
POST /oauth/token HTTP/1.1
Host: login.192.168.1.103.xip.io
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/x-www-form-urlencoded
User-Agent: go-cli 6.12.3-c0c9a03 / darwin


grant_type=refresh_token&refresh_token=eyJhbGciOiJSUzI1NiJ9.eyJqdGkiOiJiOWQzMTE5NC1iM2ZhLTQ4MmQtYmE0Ni1hZjRjMjRhYWUwMWYiLCJzdWIiOiIxY2UwZWQ4Yi1hMTdiLTQyZGEtYTE2Ni04MmZlYWQ1OTQyMGMiLCJzY29wZSI6WyJvcGVuaWQiLCJzY2ltLnJlYWQiLCJjbG91ZF9jb250cm9sbGVyLmFkbWluIiwiY2xvdWRfY29udHJvbGxlci5yZWFkIiwicGFzc3dvcmQud3JpdGUiLCJjbG91ZF9jb250cm9sbGVyLndyaXRlIiwiZG9wcGxlci5maXJlaG9zZSIsInNjaW0ud3JpdGUiXSwiaWF0IjoxNDQyMjM2NDU2LCJleHAiOjE0NDQ4Mjg0NTYsImNpZCI6ImNmIiwiY2xpZW50X2lkIjoiY2YiLCJpc3MiOiJodHRwczovL3VhYS4xOTIuMTY4LjEuMTAzLnhpcC5pby9vYXV0aC90b2tlbiIsInppZCI6InVhYSIsImdyYW50X3R5cGUiOiJwYXNzd29yZCIsInVzZXJfbmFtZSI6ImFkbWluIiwidXNlcl9pZCI6IjFjZTBlZDhiLWExN2ItNDJkYS1hMTY2LTgyZmVhZDU5NDIwYyIsInJldl9zaWciOiI1NGEzZjhkYyIsImF1ZCI6WyJjZiIsIm9wZW5pZCIsInNjaW0iLCJjbG91ZF9jb250cm9sbGVyIiwicGFzc3dvcmQiLCJkb3BwbGVyIl19.GLBJ3Px81ccR76QppqDF0ilyvxeXQO21j-XwOUVbQTllg3h8nLgPHShMDOuy7eOecBhJUSLbesahJME196LHYZYV1iOad9WhNASO11gdfqJ_0rTEGwZQEzwY2q9ggaoAv_YUjAmZOYCM8052K6LdKtROOqFd67CdTdrC0L8K1eo&scope=

RESPONSE: [2015-09-14T09:39:31-06:00]
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Access-Control-Allow-Origin: *
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Cache-Control: no-store
Content-Type: application/json;charset=UTF-8
Date: Mon, 14 Sep 2015 15:39:31 GMT
Expires: 0
Pragma: no-cache
Pragma: no-cache
Server: Apache-Coyote/1.1
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
X-Cf-Requestid: 8c1688dd-2a9e-4b44-4075-a84e88f5d6cd
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Xss-Protection: 1; mode=block

872
{"access_token":"[PRIVATE DATA
HIDDEN]","token_type":"bearer","refresh_token":"[PRIVATE DATA
HIDDEN]","expires_in":599,"scope":"cloud_controller.read password.write
cloud_controller.write openid doppler.firehose scim.write scim.read
cloud_controller.admin","jti":"33c5626e-eb4c-459a-9a2c-f38295a80a22"}
0


FAILED
Unauthorized error: You are not authorized. Error: Invalid authorization
FAILED
Unauthorized error: You are not authorized. Error: Invalid authorization

Let me know

thanks
-Ramesh


Re: VMWare Affinity Rules

Cory Jett
 

Thanks Dmitriy. Would it be possible to put this in an attachment so I can see how you have things spaced out?


Re: VMWare Affinity Rules

Dmitriy Kalinin
 

What you want is something like this (combination of two):

resource_pools:
- name: runner_z1
  network: cf1
  stemcell:
    name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
    version: latest
  cloud_properties:
    cpu: 2
    disk: 32768
    ram: 16384
    datacenters:
    - name: my-dc
      clusters:
      - my-vsphere-cluster:
          drs_rules:
          - name: separate-hadoop-datanodes-rule
            type: separate_vms
  env:
    bosh:
      password: REDACTED

On Mon, Sep 14, 2015 at 9:11 AM, Cory Jett <cory.jett(a)gmail.com> wrote:

Looks like my YAML format was broken. Here are the YAML configs on GitHub:

https://github.com/coryjett/Cloud-Foundry-DRS/blob/master/README.md


Re: VMWare Affinity Rules

Cory Jett
 

Looks like my YAML format was broken. Here are the YAML configs on GitHub:

https://github.com/coryjett/Cloud-Foundry-DRS/blob/master/README.md


Re: Cannot start app in new vsphere cf deployment

Ramesh Sambandan
 

Amit,

Below are my cf trace output.
guests-MacBook-Pro:apps rsamban$ cf logs gs --recent

REQUEST: [2015-09-14T09:39:28-06:00]
GET /v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/apps?q=name%3Ags&inline-relations-depth=1 HTTP/1.1
Host: api.192.168.1.103.xip.io
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: go-cli 6.12.3-c0c9a03 / darwin



RESPONSE: [2015-09-14T09:39:28-06:00]
HTTP/1.1 200 OK
Content-Length: 5311
Content-Type: application/json;charset=utf-8
Date: Mon, 14 Sep 2015 15:39:28 GMT
Server: nginx
X-Cf-Requestid: 3d46e09a-5ef4-4f0d-5793-8d145a1b91a2
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: eab0cd51-7f04-4745-6ebf-0461df755620::4395680b-f74f-4325-a952-6d9fb974e3ce

{
"total_results": 1,
"total_pages": 1,
"prev_url": null,
"next_url": null,
"resources": [
{
"metadata": {
"guid": "48d31982-7e60-4de2-9d33-fdbf291ec779",
"url": "/v2/apps/48d31982-7e60-4de2-9d33-fdbf291ec779",
"created_at": "2015-09-14T13:15:50Z",
"updated_at": "2015-09-14T14:02:52Z"
},
"entity": {
"name": "gs",
"production": false,
"space_guid": "6ecadc21-314d-4f93-8afb-b646cd384826",
"stack_guid": "4d62cffe-549a-4952-8fe7-977cc2d562d8",
"buildpack": null,
"detected_buildpack": "java-buildpack=v3.1.1-https://github.com/cloudfoundry/java-buildpack#7a538fb java-main open-jdk-like-jre=1.8.0_60 open-jdk-like-memory-calculator=1.1.1_RELEASE spring-auto-reconfiguration=1.10.0_RELEASE",
"environment_json": {

},
"memory": 1024,
"instances": 1,
"disk_quota": 1024,
"state": "STARTED",
"version": "40963450-8b7d-4478-bb84-682dee103a08",
"command": null,
"console": false,
"debug": null,
"staging_task_id": "0bd355ee625b4f1e814b7e1bc4cd0157",
"package_state": "STAGED",
"health_check_type": "port",
"health_check_timeout": null,
"staging_failed_reason": "StagingTimeExpired",
"staging_failed_description": null,
"diego": false,
"docker_image": null,
"package_updated_at": "2015-09-14T13:16:04Z",
"detected_start_command": "CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-1.1.1_RELEASE -memorySizes=metaspace:64m.. -memoryWeights=heap:75,metaspace:10,stack:5,native:10 -totMemory=$MEMORY_LIMIT) && SERVER_PORT=$PORT $PWD/.java-buildpack/open_jdk_jre/bin/java -cp $PWD/.:$PWD/.java-buildpack/spring_auto_reconfiguration/spring_auto_reconfiguration-1.10.0_RELEASE.jar -Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY org.springframework.boot.loader.JarLauncher",
"enable_ssh": true,
"docker_credentials_json": {
"redacted_message": "[PRIVATE DATA HIDDEN]"
},
"space_url": "/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826",
"space": {
"metadata": {
"guid": "6ecadc21-314d-4f93-8afb-b646cd384826",
"url": "/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826",
"created_at": "2015-09-14T13:14:52Z",
"updated_at": null
},
"entity": {
"name": "dev",
"organization_guid": "6ee4b17e-ca40-47f2-b949-c7692324112a",
"space_quota_definition_guid": null,
"allow_ssh": true,
"organization_url": "/v2/organizations/6ee4b17e-ca40-47f2-b949-c7692324112a",
"developers_url": "/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/developers",
"managers_url": "/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/managers",
"auditors_url": "/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/auditors",
"apps_url": "/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/apps",
"routes_url": "/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/routes",
"domains_url": "/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/domains",
"service_instances_url": "/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/service_instances",
"app_events_url": "/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/app_events",
"events_url": "/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/events",
"security_groups_url": "/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826/security_groups"
}
},
"stack_url": "/v2/stacks/4d62cffe-549a-4952-8fe7-977cc2d562d8",
"stack": {
"metadata": {
"guid": "4d62cffe-549a-4952-8fe7-977cc2d562d8",
"url": "/v2/stacks/4d62cffe-549a-4952-8fe7-977cc2d562d8",
"created_at": "2015-09-14T05:55:54Z",
"updated_at": null
},
"entity": {
"name": "cflinuxfs2",
"description": "Cloud Foundry Linux-based filesystem"
}
},
"events_url": "/v2/apps/48d31982-7e60-4de2-9d33-fdbf291ec779/events",
"service_bindings_url": "/v2/apps/48d31982-7e60-4de2-9d33-fdbf291ec779/service_bindings",
"service_bindings": [

],
"routes_url": "/v2/apps/48d31982-7e60-4de2-9d33-fdbf291ec779/routes",
"routes": [
{
"metadata": {
"guid": "4b3968ca-8821-496b-b4e9-47b54303f92f",
"url": "/v2/routes/4b3968ca-8821-496b-b4e9-47b54303f92f",
"created_at": "2015-09-14T13:15:50Z",
"updated_at": null
},
"entity": {
"host": "gs",
"path": "",
"domain_guid": "a6cc2662-2771-4737-89de-992a2d606997",
"space_guid": "6ecadc21-314d-4f93-8afb-b646cd384826",
"domain_url": "/v2/domains/a6cc2662-2771-4737-89de-992a2d606997",
"space_url": "/v2/spaces/6ecadc21-314d-4f93-8afb-b646cd384826",
"apps_url": "/v2/routes/4b3968ca-8821-496b-b4e9-47b54303f92f/apps"
}
}
]
}
}
]
}
Connected, dumping recent logs for app gs in org yesVin / space dev as admin...


REQUEST: [2015-09-14T09:39:29-06:00]
POST /oauth/token HTTP/1.1
Host: login.192.168.1.103.xip.io
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/x-www-form-urlencoded
User-Agent: go-cli 6.12.3-c0c9a03 / darwin

grant_type=refresh_token&refresh_token=eyJhbGciOiJSUzI1NiJ9.eyJqdGkiOiJiOWQzMTE5NC1iM2ZhLTQ4MmQtYmE0Ni1hZjRjMjRhYWUwMWYiLCJzdWIiOiIxY2UwZWQ4Yi1hMTdiLTQyZGEtYTE2Ni04MmZlYWQ1OTQyMGMiLCJzY29wZSI6WyJvcGVuaWQiLCJzY2ltLnJlYWQiLCJjbG91ZF9jb250cm9sbGVyLmFkbWluIiwiY2xvdWRfY29udHJvbGxlci5yZWFkIiwicGFzc3dvcmQud3JpdGUiLCJjbG91ZF9jb250cm9sbGVyLndyaXRlIiwiZG9wcGxlci5maXJlaG9zZSIsInNjaW0ud3JpdGUiXSwiaWF0IjoxNDQyMjM2NDU2LCJleHAiOjE0NDQ4Mjg0NTYsImNpZCI6ImNmIiwiY2xpZW50X2lkIjoiY2YiLCJpc3MiOiJodHRwczovL3VhYS4xOTIuMTY4LjEuMTAzLnhpcC5pby9vYXV0aC90b2tlbiIsInppZCI6InVhYSIsImdyYW50X3R5cGUiOiJwYXNzd29yZCIsInVzZXJfbmFtZSI6ImFkbWluIiwidXNlcl9pZCI6IjFjZTBlZDhiLWExN2ItNDJkYS1hMTY2LTgyZmVhZDU5NDIwYyIsInJldl9zaWciOiI1NGEzZjhkYyIsImF1ZCI6WyJjZiIsIm9wZW5pZCIsInNjaW0iLCJjbG91ZF9jb250cm9sbGVyIiwicGFzc3dvcmQiLCJkb3BwbGVyIl19.GLBJ3Px81ccR76QppqDF0ilyvxeXQO21j-XwOUVbQTllg3h8nLgPHShMDOuy7eOecBhJUSLbesahJME196LHYZYV1iOad9WhNASO11gdfqJ_0rTEGwZQEzwY2q9ggaoAv_YUjAmZOYCM8052K6LdKtROOqFd67CdTdrC0L8K1eo&scope=

RESPONSE: [2015-09-14T09:39:31-06:00]
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Access-Control-Allow-Origin: *
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Cache-Control: no-store
Content-Type: application/json;charset=UTF-8
Date: Mon, 14 Sep 2015 15:39:31 GMT
Expires: 0
Pragma: no-cache
Pragma: no-cache
Server: Apache-Coyote/1.1
Strict-Transport-Security: max-age=31536000 ; includeSubDomains
X-Cf-Requestid: 8c1688dd-2a9e-4b44-4075-a84e88f5d6cd
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Xss-Protection: 1; mode=block

872
{"access_token":"[PRIVATE DATA HIDDEN]","token_type":"bearer","refresh_token":"[PRIVATE DATA HIDDEN]","expires_in":599,"scope":"cloud_controller.read password.write cloud_controller.write openid doppler.firehose scim.write scim.read cloud_controller.admin","jti":"33c5626e-eb4c-459a-9a2c-f38295a80a22"}
0


FAILED
Unauthorized error: You are not authorized. Error: Invalid authorization
FAILED
Unauthorized error: You are not authorized. Error: Invalid authorization

Let me know

thanks
-Ramesh


Re: How to upgrade my bosh?

Amit Kumar Gupta
 

When you say "local bosh", do you mean BOSH-Lite?

On Wed, Jun 3, 2015 at 4:03 AM, daxiezhi <daxiezhi(a)gmail.com> wrote:

Hi, all.
My local bosh version is 1.2124.0. As of cf v208, there is a change in the
templates to no longer include resource pool sizes, which requires a minimum
bosh director version of v149. So I guess the solution is to upgrade bosh,
but I couldn't find a doc for it.


VMWare Affinity Rules

Cory Jett
 

Hello. We are running Cloud Foundry v215 deployed to vSphere and are looking to implement drs_rules to improve fault tolerance per this document: https://bosh.io/docs/vm-anti-affinity.html. The current YAML structure of our resource pools looks like this (generated using spiff):

resource_pools:
- cloud_properties:
    cpu: 2
    disk: 32768
    ram: 16384
  env:
    bosh:
      password: REDACTED
  name: runner_z1
  network: cf1
  stemcell:
    name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
    version: latest

But the YAML structure in the referenced document looks like this:

resource_pools:
- name: runner_z1
  cloud_properties:
    datacenters:
    - name: my-dc
      clusters:
      - my-vsphere-cluster:
          drs_rules:
          - name: separate-hadoop-datanodes-rule
            type: separate_vms


I am having trouble restructuring the YAML to implement drs_rules and getting a successful deployment: BOSH accepts the format, but the deployment fails. Can someone point me in the right direction?


Re: Cannot start app in new vsphere cf deployment

Amit Kumar Gupta
 

Hi Ramesh,

You mention Diego, but I doubt your app is actually trying to run on
diego. You are not defaulting to the Diego backend, and I don't see
anything about how you're pushing this specific app that would make it run
on Diego.

My guess is there is something misconfigured about your loggregator
credentials. It appears to say you're not authorized to see logs, and "cf
app" may be failing because it needs to talk to the loggregator system to
determine health metrics for the app.

Can you share the output of "CF_TRACE=true cf logs gs"?

Best,
Amit

On Mon, Sep 14, 2015 at 10:54 AM, Ramesh Sambandan <rsamban(a)gmail.com>
wrote:

I deployed the diego release and still have issues.

1. Uploaded the app with the no-start flag.
2. cf apps works
3. Started the app, got the failed status (the app itself is working; I can
get into the app from the browser)
4. cf apps and cf logs are failing (irrespective of how many times I try
and how long I wait)


Following are my command logs and the logs from all the bosh vms is in
https://gist.github.com/6c81bed0d8eb888da970

1. diego_cf-deployment.yml - diego cf deployment
2. diego_diego-deployment.yml - diego diego deployment
3. jobsLogDiego.tgz - logs from all bosh vms

*******************Uploading app********
guests-MacBook-Pro:apps rsamban$ cf push gs -p
gs-serving-web-content-0.1.0.jar --no-start
Creating app gs in org yesVin / space dev as admin...
OK

Creating route gs.192.168.1.103.xip.io...
OK

Binding gs.192.168.1.103.xip.io to gs...
OK

Uploading gs...
Uploading app files from: gs-serving-web-content-0.1.0.jar
Uploading 12.9M, 112 files
Done uploading
OK
*******************cf apps SUCCESS********
guests-MacBook-Pro:apps rsamban$ cf apps
Getting apps in org yesVin / space dev as admin...
OK

name requested state instances memory disk urls
gs stopped 0/1 1G 1G gs.192.168.1.103.xip.io
******************Starting app FAILED********
guests-MacBook-Pro:apps rsamban$ cf start gs
Warning: error tailing logs
Unauthorized error: You are not authorized. Error: Invalid authorization
Starting app gs in org yesVin / space dev as admin...

FAILED
Start app timeout

TIP: use 'cf logs gs --recent' for more information
*********************cf logs FAILED***********
guests-MacBook-Pro:apps rsamban$ cf logs gs --recent
Connected, dumping recent logs for app gs in org yesVin / space dev as
admin...

FAILED
Unauthorized error: You are not authorized. Error: Invalid authorization
*********************cf apps fails***********
guests-MacBook-Pro:apps rsamban$ cf apps
Getting apps in org yesVin / space dev as admin...
FAILED
Server error, status code: 500, error code: 10001, message: An unknown
error occurred.
guests-MacBook-Pro:apps rsamban$

The logs for all the bosh vms are in jobsLogDiego.tgz
Following is the bosh vms output
ramesh(a)ubuntu:~/cloudFoundry/bosh-workplace/logs/6c81bed0d8eb888da970$
bosh vms
Acting as user 'admin' on 'bosh2'
Deployment `yesVinCloudFoundry'

Director task 421

Task 421 done


+------------------------------------+---------+---------------+---------------+
| Job/index | State | Resource Pool | IPs |
+------------------------------------+---------+---------------+---------------+
| api_worker_z1/0 | running | small_z1 | 192.168.1.122 |
| api_worker_z2/0 | running | small_z2 | 192.168.1.222 |
| api_z1/0 | running | large_z1 | 192.168.1.120 |
| api_z2/0 | running | large_z2 | 192.168.1.220 |
| clock_global/0 | running | medium_z1 | 192.168.1.121 |
| consul_z1/0 | running | medium_z1 | 192.168.1.117 |
| doppler_z1/0 | running | medium_z1 | 192.168.1.107 |
| doppler_z2/0 | running | medium_z2 | 192.168.1.207 |
| etcd_z1/0 | running | medium_z1 | 192.168.1.114 |
| etcd_z1/1 | running | medium_z1 | 192.168.1.115 |
| etcd_z2/0 | running | medium_z2 | 192.168.1.213 |
| ha_proxy_z1/0 | running | router_z1 | 192.168.1.103 |
| hm9000_z1/0 | running | medium_z1 | 192.168.1.123 |
| hm9000_z2/0 | running | medium_z2 | 192.168.1.223 |
| loggregator_trafficcontroller_z1/0 | running | small_z1 | 192.168.1.108 |
| loggregator_trafficcontroller_z2/0 | running | small_z2 | 192.168.1.208 |
| nats_z1/0 | running | medium_z1 | 192.168.1.101 |
| nats_z2/0 | running | medium_z2 | 192.168.1.201 |
| nfs_z1/0 | running | medium_z1 | 192.168.1.102 |
| postgres_z1/0 | running | medium_z1 | 192.168.1.112 |
| router_z1/0 | running | router_z1 | 192.168.1.105 |
| router_z2/0 | running | router_z2 | 192.168.1.205 |
| runner_z1/0 | running | runner_z1 | 192.168.1.124 |
| runner_z2/0 | running | runner_z2 | 192.168.1.224 |
| stats_z1/0 | running | small_z1 | 192.168.1.110 |
| uaa_z1/0 | running | medium_z1 | 192.168.1.111 |
| uaa_z2/0 | running | medium_z2 | 192.168.1.211 |
+------------------------------------+---------+---------------+---------------+

VMs total: 27
Deployment `yesVinCloudFoundry-diego'

Director task 422

Task 422 done

+--------------------+---------+------------------+---------------+
| Job/index | State | Resource Pool | IPs |
+--------------------+---------+------------------+---------------+
| access_z1/0 | running | access_z1 | 192.168.1.152 |
| brain_z1/0 | running | brain_z1 | 192.168.1.27 |
| cc_bridge_z1/0 | running | cc_bridge_z1 | 192.168.1.29 |
| cell_z1/0 | running | cell_z1 | 192.168.1.28 |
| database_z1/0 | running | database_z1 | 192.168.1.26 |
| route_emitter_z1/0 | running | route_emitter_z1 | 192.168.1.30 |
+--------------------+---------+------------------+---------------+

VMs total: 6

It appears that once I start the app, things go awry for some reason. Even
though starting the app reports failure, the app is actually started.

I would really appreciate any help

thanks
-Ramesh


Re: Cannot start app in new vsphere cf deployment

Ramesh Sambandan
 

I deployed the diego release and still have issues.

1. Uploaded the app with the no-start flag.
2. cf apps works
3. Started the app, got the failed status (the app itself is working; I can get into the app from the browser)
4. cf apps and cf logs are failing (irrespective of how many times I try and how long I wait)


Following are my command logs and the logs from all the bosh vms is in https://gist.github.com/6c81bed0d8eb888da970

1. diego_cf-deployment.yml - diego cf deployment
2. diego_diego-deployment.yml - diego diego deployment
3. jobsLogDiego.tgz - logs from all bosh vms

*******************Uploading app********
guests-MacBook-Pro:apps rsamban$ cf push gs -p gs-serving-web-content-0.1.0.jar --no-start
Creating app gs in org yesVin / space dev as admin...
OK

Creating route gs.192.168.1.103.xip.io...
OK

Binding gs.192.168.1.103.xip.io to gs...
OK

Uploading gs...
Uploading app files from: gs-serving-web-content-0.1.0.jar
Uploading 12.9M, 112 files
Done uploading
OK
*******************cf apps SUCCESS********
guests-MacBook-Pro:apps rsamban$ cf apps
Getting apps in org yesVin / space dev as admin...
OK

name requested state instances memory disk urls
gs stopped 0/1 1G 1G gs.192.168.1.103.xip.io
******************Starting app FAILED********
guests-MacBook-Pro:apps rsamban$ cf start gs
Warning: error tailing logs
Unauthorized error: You are not authorized. Error: Invalid authorization
Starting app gs in org yesVin / space dev as admin...

FAILED
Start app timeout

TIP: use 'cf logs gs --recent' for more information
*********************cf logs FAILED***********
guests-MacBook-Pro:apps rsamban$ cf logs gs --recent
Connected, dumping recent logs for app gs in org yesVin / space dev as admin...

FAILED
Unauthorized error: You are not authorized. Error: Invalid authorization
*********************cf apps fails***********
guests-MacBook-Pro:apps rsamban$ cf apps
Getting apps in org yesVin / space dev as admin...
FAILED
Server error, status code: 500, error code: 10001, message: An unknown error occurred.
guests-MacBook-Pro:apps rsamban$

The logs for all the bosh vms are in jobsLogDiego.tgz
Following is the bosh vms output
ramesh(a)ubuntu:~/cloudFoundry/bosh-workplace/logs/6c81bed0d8eb888da970$ bosh vms
Acting as user 'admin' on 'bosh2'
Deployment `yesVinCloudFoundry'

Director task 421

Task 421 done

+------------------------------------+---------+---------------+---------------+
| Job/index | State | Resource Pool | IPs |
+------------------------------------+---------+---------------+---------------+
| api_worker_z1/0 | running | small_z1 | 192.168.1.122 |
| api_worker_z2/0 | running | small_z2 | 192.168.1.222 |
| api_z1/0 | running | large_z1 | 192.168.1.120 |
| api_z2/0 | running | large_z2 | 192.168.1.220 |
| clock_global/0 | running | medium_z1 | 192.168.1.121 |
| consul_z1/0 | running | medium_z1 | 192.168.1.117 |
| doppler_z1/0 | running | medium_z1 | 192.168.1.107 |
| doppler_z2/0 | running | medium_z2 | 192.168.1.207 |
| etcd_z1/0 | running | medium_z1 | 192.168.1.114 |
| etcd_z1/1 | running | medium_z1 | 192.168.1.115 |
| etcd_z2/0 | running | medium_z2 | 192.168.1.213 |
| ha_proxy_z1/0 | running | router_z1 | 192.168.1.103 |
| hm9000_z1/0 | running | medium_z1 | 192.168.1.123 |
| hm9000_z2/0 | running | medium_z2 | 192.168.1.223 |
| loggregator_trafficcontroller_z1/0 | running | small_z1 | 192.168.1.108 |
| loggregator_trafficcontroller_z2/0 | running | small_z2 | 192.168.1.208 |
| nats_z1/0 | running | medium_z1 | 192.168.1.101 |
| nats_z2/0 | running | medium_z2 | 192.168.1.201 |
| nfs_z1/0 | running | medium_z1 | 192.168.1.102 |
| postgres_z1/0 | running | medium_z1 | 192.168.1.112 |
| router_z1/0 | running | router_z1 | 192.168.1.105 |
| router_z2/0 | running | router_z2 | 192.168.1.205 |
| runner_z1/0 | running | runner_z1 | 192.168.1.124 |
| runner_z2/0 | running | runner_z2 | 192.168.1.224 |
| stats_z1/0 | running | small_z1 | 192.168.1.110 |
| uaa_z1/0 | running | medium_z1 | 192.168.1.111 |
| uaa_z2/0 | running | medium_z2 | 192.168.1.211 |
+------------------------------------+---------+---------------+---------------+

VMs total: 27
Deployment `yesVinCloudFoundry-diego'

Director task 422

Task 422 done

+--------------------+---------+------------------+---------------+
| Job/index | State | Resource Pool | IPs |
+--------------------+---------+------------------+---------------+
| access_z1/0 | running | access_z1 | 192.168.1.152 |
| brain_z1/0 | running | brain_z1 | 192.168.1.27 |
| cc_bridge_z1/0 | running | cc_bridge_z1 | 192.168.1.29 |
| cell_z1/0 | running | cell_z1 | 192.168.1.28 |
| database_z1/0 | running | database_z1 | 192.168.1.26 |
| route_emitter_z1/0 | running | route_emitter_z1 | 192.168.1.30 |
+--------------------+---------+------------------+---------------+

VMs total: 6

It appears that once I start the app, things go awry for some reason. Even though starting the app reports failure, the app is actually started.

I would really appreciate any help

thanks
-Ramesh


How to upgrade my bosh?

Zhi Xie
 

Hi, all.
My local bosh version is 1.2124.0. As of cf v208, there is a change in the templates to no longer include resource pool sizes, which requires a minimum bosh director version of v149. So I guess the solution is to upgrade bosh, but I couldn't find a doc for it.
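
For context, the template change being referred to looks roughly like this (a sketch: pre-v208 manifests declared an explicit resource pool size, while the newer templates omit it so the director, v149 or later, sizes the pool from the jobs' instance counts):

# pre-v208 style
resource_pools:
- name: runner_z1
  size: 4              # explicit pool size
  ...

# v208+ style (needs director >= v149)
resource_pools:
- name: runner_z1      # no size key; derived from job instances
  ...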


Re: Adding security groups in resource_pools instead of networks

Marco Voelz
 

Dear Dmitriy,

thanks for the quick reply. See my response inline.

On 10/09/15 05:07, "Dmitriy Kalinin" <dkalinin(a)pivotal.io<mailto:dkalinin(a)pivotal.io>> wrote:

It's a bit unclear to me if you are proposing to add the security groups feature in the Director or in the OpenStack CPI specifically.

If in the OpenStack CPI, then I think it makes sense to pull the security groups config into the resource pool's cloud_properties section. For example:

resource_pools:
- name: my-fancy-web
  stemcell: { ... }
  cloud_properties:
    instance_type: m3.xlarge
    security_groups: [web]

- name: my-fancy-worker
  stemcell: { ... }
  cloud_properties:
    instance_type: m3.xlarge
    security_groups: [worker]

This is exactly what I’m proposing to have. Add the feature to the Openstack CPI to specify security groups in resource_pools like in your example above.

This will assign web to VMs of type my-fancy-web and worker to VMs of type my-fancy-worker. When a resource pool's cloud_properties are changed, VMs will be recreated with the new config. As you point out, there is the configure_networks case; imho we can just raise an error in create_vm if both the networks' cloud_properties and the resource pool's cloud_properties specify security groups.
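
To make the error case concrete, a hypothetical OpenStack manifest fragment that specifies security groups in both places (and which create_vm would reject under this proposal) might look like:

networks:
- name: private
  subnets:
  - range: 10.0.0.0/24
    cloud_properties:
      net_id: NET-UUID             # placeholder
      security_groups: [default]   # set at the network level
resource_pools:
- name: my-fancy-web
  cloud_properties:
    instance_type: m1.large        # placeholder flavor
    security_groups: [web]         # also set at the resource pool level -> error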

Throwing an error when security groups are specified in resource_pools as well as in networks seems like a great pragmatic solution, thanks. The problems with configure_networks still remain, though – see below.

If you are proposing changes in the Director, then I think it gets a bit more complicated. I'm not sure we have enough good usage patterns to figure out a good abstraction *yet* (e.g. Azure has two different types of security groups: network and vm, which imho makes a lot of sense).

I was just mentioning some changes in the director coding to resolve the configure_networks problem: This method needs some knowledge about the security groups defined by the resource_pool. Otherwise it will just remove those when it is called. Therefore, I wanted to add another parameter for the resource_pool security groups. If it makes sense to pull them out further across CPIs, we could do that. But from your comments about the Azure CPI I gather that this might be more work than I thought. So I will stick to adding a parameter for the Openstack CPI API then. Or is there another option I’m missing here?

Also, if it's becoming a first-class feature, maybe BOSH at this point should be creating security groups automatically with certain rules, and maybe even generating these rules dynamically with links, etc.

This is actually true for a lot of things. If you take it to the extreme, bosh should be able to take templates e.g. from Heat or CloudFormation and set up my landscape before deploying things. But that might take it too far for now :)

Btw we are currently implementing first class support for AZs + links in the Director, so all of the bosh-director core classes are going through significant changes on the global-net branch. Related to that, I am actually thinking we split up the resource pool config into two pieces: vm type and stemcell. With the recent cloud config changes it would make more sense to keep the stemcell os/version in the deployment manifest (just like releases) and the vm type with iaas specifics in the cloud config file.
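
A rough sketch of that split, assuming the vm_types/stemcells layout that cloud config eventually settled on (names here are illustrative):

# cloud config: iaas-specific VM sizing
vm_types:
- name: runner
  cloud_properties:
    instance_type: m3.xlarge

# deployment manifest: stemcell pinned next to releases
stemcells:
- alias: default
  os: ubuntu-trusty
  version: latest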

I think it makes a lot of sense to keep the stemcell information in the manifest – the different things which get deployed might have their own lifecycle of adoption – although I would wish otherwise. The security group information would definitely be something which is specific to a deployment, so at least something specific to the resource_pools should remain in the release’s manifest.

Warm regards
Marco


Re: [cf-dev] bosh-lite 9000.50.0+ uses official garden-linux release -> easier net configuration

Cyrille Le Clerc
 

Thanks for this simplification!

Cyrille

On Sat, Sep 12, 2015 at 1:52 AM, Dmitriy Kalinin <dkalinin(a)pivotal.io>
wrote:

Hey all,

As of version 9000.50.0, bosh-lite uses the official garden-linux release. If
everything has gone right, you will not notice any difference in how things
work.

One benefit of using the newer garden server in bosh-lite is easier networking
configuration in your bosh-lite manifests.

For example, before you would do something like this:

networks:
- name: cf1
  subnets:
  - cloud_properties: {name: random}
    range: 10.244.0.0/30
    reserved:
    - 10.244.0.1
    static:
    - 10.244.0.2
  - cloud_properties: {name: random}
    range: 10.244.0.4/30
    reserved:
    - 10.244.0.5
    static:
    - 10.244.0.6
...

and now you can just do something like this, since the garden server allows
placing multiple containers on the same subnet:

networks:
- name: cf1
  subnets:
  - range: 10.244.0.0/24
    gateway: 10.244.0.1
    static: [10.244.0.2, 10.244.0.3]
...

Dmitriy


--
Cyrille Le Clerc
email & gtalk : cleclerc(a)cloudbees.com / mob: +33-6.61.33.69.86 / skype:
cyrille.leclerc
CloudBees, Inc
www.cloudbees.com


Re: bosh-lite 9000.50.0+ uses official garden-linux release -> easier net configuration

Ed
 

I'm having some difficulty with the new networking configuration.
I've defined a 'default' network as follows:

networks:
- name: default
  subnets:
  - range: 10.244.11.0/24
    gateway: 10.244.11.1
    static: [10.244.11.2]

And have assigned the static IP '10.244.11.2' to one of my jobs:

jobs:
- name: graphite
  ...
  networks:
  - name: default
    static_ips:
    - 10.244.11.2

However when I try to deploy I get the following error:

"Error 130009: `graphite/0' asked for a static IP 10.244.11.2 but it's in the dynamic pool"

Am I missing something? Testing this with bosh-lite 9000.50.0.


Re: bosh-lite 9000.50.0+ uses official garden-linux release -> easier net configuration

Dmitriy Kalinin
 

Everything should still work as is.

On Fri, Sep 11, 2015 at 4:59 PM, Dr Nic Williams <drnicwilliams(a)gmail.com>
wrote:

I am glad for future simpler networking for bosh-lite manifests; I just
know we have lots of infrastructure-warden.yml spiff templates in lots of
bosh releases.

Also, will bosh-lite stemcells keep *warden* in their name? I assume
there's no functional difference; they are just rootfs.

There's probably automation around that checks whether the current stemcell
name has already been uploaded and, if not, tries to upload it.





On Fri, Sep 11, 2015 at 4:53 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io>
wrote:

Hey all,

As of version 9000.50.0, bosh-lite uses the official garden-linux release. If
everything has gone right, you will not notice any difference in how things
work.

One benefit of using the newer garden server in bosh-lite is easier networking
configuration in your bosh-lite manifests.

For example, before you would do something like this:

networks:
- name: cf1
  subnets:
  - cloud_properties: {name: random}
    range: 10.244.0.0/30
    reserved:
    - 10.244.0.1
    static:
    - 10.244.0.2
  - cloud_properties: {name: random}
    range: 10.244.0.4/30
    reserved:
    - 10.244.0.5
    static:
    - 10.244.0.6
...

and now you can just do something like this, since the garden server allows
placing multiple containers on the same subnet:

networks:
- name: cf1
  subnets:
  - range: 10.244.0.0/24
    gateway: 10.244.0.1
    static: [10.244.0.2, 10.244.0.3]
...

Dmitriy


Re: bosh-lite 9000.50.0+ uses official garden-linux release -> easier net configuration

Dr Nic Williams
 

I am glad for future simpler networking for bosh-lite manifests; I just know we have lots of infrastructure-warden.yml spiff templates in lots of bosh releases.




Also, will bosh-lite stemcells keep *warden* in their name? I assume there's no functional difference; they are just rootfs.




There's probably automation around that checks whether the current stemcell name has already been uploaded and, if not, tries to upload it.

On Fri, Sep 11, 2015 at 4:53 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io>
wrote:

Hey all,
As of version 9000.50.0, bosh-lite uses the official garden-linux release. If
everything has gone right, you will not notice any difference in how things
work.

One benefit of using the newer garden server in bosh-lite is easier networking
configuration in your bosh-lite manifests.

For example, before you would do something like this:

networks:
- name: cf1
  subnets:
  - cloud_properties: {name: random}
    range: 10.244.0.0/30
    reserved:
    - 10.244.0.1
    static:
    - 10.244.0.2
  - cloud_properties: {name: random}
    range: 10.244.0.4/30
    reserved:
    - 10.244.0.5
    static:
    - 10.244.0.6
...

and now you can just do something like this, since the garden server allows
placing multiple containers on the same subnet:

networks:
- name: cf1
  subnets:
  - range: 10.244.0.0/24
    gateway: 10.244.0.1
    static: [10.244.0.2, 10.244.0.3]
...
Dmitriy


Re: bosh-lite 9000.50.0+ uses official garden-linux release -> easier net configuration

Dr Nic Williams
 

I know versioning is somewhat a lost cause for bosh-lite. 




I guess my actual question is: will old manifests still work for new bosh-lite? I assume so, as you said "you will not notice any difference".

On Fri, Sep 11, 2015 at 4:53 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io>
wrote:

Hey all,
As of version 9000.50.0, bosh-lite uses the official garden-linux release. If
everything has gone right, you will not notice any difference in how things
work.

One benefit of using the newer garden server in bosh-lite is easier networking
configuration in your bosh-lite manifests.

For example, before you would do something like this:

networks:
- name: cf1
  subnets:
  - cloud_properties: {name: random}
    range: 10.244.0.0/30
    reserved:
    - 10.244.0.1
    static:
    - 10.244.0.2
  - cloud_properties: {name: random}
    range: 10.244.0.4/30
    reserved:
    - 10.244.0.5
    static:
    - 10.244.0.6
...

and now you can just do something like this, since the garden server allows
placing multiple containers on the same subnet:

networks:
- name: cf1
  subnets:
  - range: 10.244.0.0/24
    gateway: 10.244.0.1
    static: [10.244.0.2, 10.244.0.3]
...
Dmitriy