This forces us to spread all clusterable nodes across 2 deploys and certain jobs, like CC, use the job_name+index to uniquely identify a node
I believe they're planning on switching to GUIDs for BOSH job identifiers. I saw in another thread that you and Dmitriy discussed this. Are there any other reasons for having unique job names we should know about?
How would you feel about the interface allowing for specifying additional releases, jobs, and templates to be colocated on existing jobs, along with property configuration for these things?
I don't quite follow what you are proposing here. Can you clarify?
What I mean is that the tools we build for generating manifests will support specifying inputs (probably in the form of a YAML file) that declare what additional releases you want to add to the deployment, what additional jobs you may want to add, what additional job templates you may want to colocate with an existing job, and property configuration for those additional jobs or colocated job templates. A common example is wanting to colocate some monitoring agent on all the jobs, and providing some credential configuration so it can pump metrics into some third-party service. This would be for things not already covered by the LAMB architecture.
Something like that would work for me as long as we were still able to take advantage of the scripts/tooling in cf-deployment to manage the config and templates we manage in lds-deployment.
Yes, that'd be the plan.
Cheers, Amit
On Mon, Sep 21, 2015 at 2:41 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:
Thanks for the response. See comments below:
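The YAML input file Amit describes might look something like the following sketch. Every key and name here is an illustrative assumption, not a confirmed cf-deployment schema:

```yaml
# Hypothetical add-ons file for cf-deployment manifest generation.
# All keys, release names, and job names below are illustrative.
releases:
  - name: monitoring-agent      # additional release to add to the deployment
    version: 2.0.0
additional_jobs:
  - name: lds-worker            # an extra job, with its own templates
    instances: 2
    templates:
      - { name: worker, release: lds-internal }
colocated_templates:
  - job: all                    # colocate a monitoring agent on every job
    template: { name: metrics-agent, release: monitoring-agent }
properties:
  metrics-agent:
    api_key: SOME_CREDENTIAL    # credentials for the third-party service
```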
Sensitive property management as part of manifest generation (encrypted or acquired from an outside source)
How do you currently get these encrypted or external values into your manifests? At manifest generation time, would you be able to generate a stub on the fly from this source, and pass it into the manifest generation script?
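A stub generated on the fly from such an external source might look like the following (a hypothetical fragment; the property names are just examples of the kinds of secrets typically involved):

```yaml
# Hypothetical secrets stub, produced at manifest generation time by
# decrypting an encrypted file or querying an external secret store,
# then passed to the manifest generation script with the other stubs.
properties:
  uaa:
    admin:
      client_secret: FILLED_IN_AT_GENERATION_TIME
  cc:
    db_encryption_key: FILLED_IN_AT_GENERATION_TIME
  nats:
    password: FILLED_IN_AT_GENERATION_TIME
```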
Yes, that would work fine. Just thought I'd call it out as something our current solution does that we'd have to augment in cf-deployment.
If for some reason we are forced to fork a stock release, we'd like to be able to use the forked release we are building instead of the publicly available one for manifest generation, release uploads, etc.
Yes, using the stock release will be the default option, but we will support several other ways of specifying a release, including providing a URL to a remote tarball, a path to a local release directory, a path to a local tarball, and maybe a git URL and SHA.
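As a sketch, the release-source options Amit lists could be declared along these lines (the keys are hypothetical, not a confirmed interface):

```yaml
# Hypothetical ways of pointing manifest generation at a release.
releases:
  - name: etcd                 # default: the stock final release
    version: 34
  - name: diego                # URL to a remote tarball
    url: https://example.com/releases/diego-3333.tgz
  - name: cf                   # path to a local release directory (a fork)
    path: ~/workspace/cf-release
  - name: lds-internal         # path to a local tarball
    path: /tmp/lds-internal-1.2.3.tgz
  - name: experimental         # git URL and SHA (possibly supported)
    git: https://github.com/example/experimental-release.git
    sha: abc1234
```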
Great!
The job names in each deployment must be unique across the installation.
Why do the job names need to be unique across deployments?
This is because a single BOSH director cannot connect to multiple datacenters, which for us represent different availability zones. This forces us to spread all clusterable nodes across 2 deploys, and certain jobs, like CC, use job_name+index to uniquely identify a node [0]. Therefore, if we have 2 CCs deployed across 2 AZs, we must have one job named cloud_controller_az1 and the other named cloud_controller_az2. Does that make sense? I recognize this is mostly the fault of a limitation in BOSH, but until BOSH supports connecting to multiple vSphere datacenters with a single director, we will need to account for it in our templating.
[0] https://github.com/cloudfoundry/cloud_controller_ng/blob/5257a8af6990e71cd1e34ae8978dfe4773b32826/bosh-templates/cloud_controller_worker_ctl.erb#L48
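The uniqueness constraint can be illustrated with a small sketch. The `job_name/index` identifier format is an assumption based on the ctl script linked above:

```python
def node_id(job_name: str, index: int) -> str:
    """BOSH-style node identifier built from job name and index."""
    return f"{job_name}/{index}"

# Two deployments (one per AZ) that reuse the same job name produce
# colliding identifiers across the installation:
az1 = [node_id("cloud_controller", i) for i in range(2)]
az2 = [node_id("cloud_controller", i) for i in range(2)]
assert set(az1) & set(az2)  # overlap: the same IDs appear in both AZs

# Suffixing the job name per AZ restores installation-wide uniqueness:
az1 = [node_id("cloud_controller_az1", i) for i in range(2)]
az2 = [node_id("cloud_controller_az2", i) for i in range(2)]
assert not (set(az1) & set(az2))
```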
Occasionally we may wish to use some config from a stock release not currently exposed in a cf-deployment template. I'd like to be sure there is a way we can add that config, in a not hacky way, without waiting for a PR to be accepted and subsequent release.
This would be ideal. Currently, a lot of the complexity in manifest generation comes from dependencies between values: if you specify a certain value X, then you need to make sure you specify values Y, Z, etc. in a compatible way. E.g. if you have 3 etcd instances, then the value of the etcd.machines property needs to list those 3 IPs. If you specify the domain as "mydomain.com", then you need to specify in other places that the UAA URL is "https://uaa.mydomain.com". The hope is that most of this complexity goes away with BOSH Links (https://github.com/cloudfoundry/bosh-notes/blob/master/links.md). My hope is that, as the complexity goes away, we will have to maintain less logic and will be able to comfortably expose more, if not all, of the properties.
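The kind of cross-property consistency logic described above can be sketched as follows. The property names come from the examples in the thread; the function itself is hypothetical, standing in for logic the manifest generation tooling must maintain today:

```python
def derive_properties(domain: str, etcd_ips: list[str]) -> dict:
    """Derive dependent manifest properties from a few inputs.

    Each derived value here is something an operator must otherwise
    keep consistent by hand (and which BOSH Links aims to eliminate).
    """
    return {
        "etcd": {"machines": etcd_ips},           # must match etcd instance IPs
        "uaa": {"url": f"https://uaa.{domain}"},  # must match the system domain
    }

props = derive_properties("mydomain.com", ["10.0.0.1", "10.0.0.2", "10.0.0.3"])
assert props["uaa"]["url"] == "https://uaa.mydomain.com"
assert len(props["etcd"]["machines"]) == 3
```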
Great
We have our own internal bosh releases and config that we'll need to merge in with the things cf-deployment is doing.
How would you feel about the interface allowing for specifying additional releases, jobs, and templates to be colocated on existing jobs, along with property configuration for these things?
I don't quite follow what you are proposing here. Can you clarify?
we'd like to augment this with our own release jobs and config that we know to work with cf-deployment 250's and perhaps tag it as v250.lds
Would a workflow like this work for you: maintain an lds-deployment repo, which includes cf-deployment as a submodule, and you can version lds-deployment and update your submodule pointer to cf-deployment as you see fit? lds-deployment will probably just need the cf-deployment submodule, and a config file describing the "blessed" versions of the non-stock releases you wish to add on. I know this is lacking details, but does something along those lines sound like a reasonable workflow?
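The "blessed versions" config file in that workflow might look like the following (purely illustrative; only the cf-deployment submodule idea comes from the thread):

```yaml
# Hypothetical lds-deployment config pinning "blessed" versions.
# cf-deployment itself is tracked as a git submodule; this file records
# which submodule tag the add-on releases were validated against.
cf-deployment: v250
additional_releases:
  - name: lds-internal
    version: 1.2.3
  - name: lds-monitoring
    version: 0.9.0
tag: v250.lds
```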
Something like that would work for me as long as we were still able to take advantage of the scripts/tooling in cf-deployment to manage the config and templates we manage in lds-deployment.
Thanks, Mike
On Wed, Sep 16, 2015 at 3:06 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:
Another situation we have that you may want to keep in mind while developing cf-deployment:
* We are using vSphere, and currently we have a CF installation with 2 AZs using 2 separate vSphere "Datacenters" (more details: https://github.com/cloudfoundry/bosh-notes/issues/7). This means our CF installation is actually made up of 2 deployments, so we need to generate a manifest for az1 and another for az2. The job names in each deployment must be unique across the installation (e.g. cloud_controller_az1 and cloud_controller_az2 would be the CC job names in the two deployments).
Mike
On Wed, Sep 16, 2015 at 3:38 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:
Here are some of the examples:
* Sensitive property management as part of manifest generation (encrypted or acquired from an outside source)
* We have our own internal bosh releases and config that we'll need to merge in with the things cf-deployment is doing. For example, if cf-deployment tags v250 as including Diego 3333 and etcd 34 with given templates perhaps we'd like to augment this with our own release jobs and config that we know to work with cf-deployment 250's and perhaps tag it as v250.lds and that becomes what we use to generate our manifests and upload releases.
* Occasionally we may wish to use some config from a stock release not currently exposed in a cf-deployment template. I'd like to be sure there is a way we can add that config, in a not hacky way, without waiting for a PR to be accepted and subsequent release.
* If for some reason we are forced to fork a stock release, we'd like to be able to use the forked release we are building instead of the publicly available one for manifest generation, release uploads, etc.
Does that help?
Mike
On Tue, Sep 15, 2015 at 9:50 PM, Amit Gupta <agupta(a)pivotal.io> wrote:
Thanks for the feedback Mike!
Can you tell us more specifically what sort of extensions you need? It would be great if cf-deployment provided an interface that could serve the needs of essentially all operators of CF.
Thanks, Amit
On Tue, Sep 15, 2015 at 4:02 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:
This is great stuff! My organization currently maintains its own custom ways to generate manifests, include secure properties, and manage release versions.
We would love to base the next generation of our solution on cf-deployment. Have you put any thought into how others might customize or extend cf-deployment? Our needs are very similar to yours, just sometimes a little different.
Perhaps a private fork periodically merged with a known good release combination (tag) might be appropriate? Or some way to include the same tools into a wholly private repo?
Mike
On Tue, Sep 8, 2015 at 1:22 PM, Amit Gupta <agupta(a)pivotal.io> wrote:
Hi all,
The CF OSS Release Integration team (casually referred to as the "MEGA team") is trying to solve a lot of tightly interrelated problems, and make many of said problems less interrelated. It is difficult to address just one issue without touching the others, so the following proposal addresses several issues, but the most important ones are:
* decompose cf-release into many independently manageable, independently testable, independently usable releases
* separate manifest generation strategies from the release source, paving the way for Diego to be part of the standard deployment
This proposal will outline a picture of how manifest generation will work in a unified manner in development, test, and integration environments. It will also outline a picture of what each release’s test pipelines will look like, how they will feed into a common integration environment, and how feedback from the integration environment will feed back into the test environments. Finally, it will propose a picture for what the integration environment will look like, and how we get from the current integration environment to where we want to be.
For further details, please feel free to view and comment here:
https://docs.google.com/document/d/1Viga_TzUB2nLxN_ILqksmUiILM1hGhq7MBXxgLaUOkY
Thanks, Amit, CF OSS Release Integration team