Re: Scaling Down etcd

Amit Kumar Gupta

Hi Lingesh,

I don't think easier deployment and maintenance is that simple. Each
manifest may become smaller, but now you have to maintain multiple small
manifests, keep them in sync, and make sure they are all compatible. There
are pros and cons to any sort of decomposition like this.

With regard to targeting specific components for change, I think what
will really solve your problem is having a single CF deployment composed of
multiple releases, e.g. uaa as its own separate release within a single CF
deployment. If you wanted to, you could update the uaa release itself
instead of having to update all of the jobs. You still have the problem
that, if you only update one component, you need to know it's compatible
with all the things you don't upgrade, but it sounds like you're already
willing to take on that complexity.

This decomposition of cf-release into multiple releases (composed into a
single deployment) is currently underway.
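
For illustration, once the releases are decomposed, updating a single
component could look roughly like the following sketch, assuming the BOSH
v1 CLI and hypothetical release and manifest names:

    # Sketch only: upload a new version of just the uaa release, then
    # redeploy; the manifest pins the other releases at their current versions.
    bosh upload release uaa-release-3.0.0.tgz   # version is illustrative
    bosh deployment cf-composed.yml             # manifest name is illustrative
    bosh deploy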

With regard to scaling down etcd, I wasn't able to understand the problem
you're hitting. Can you provide more details about exactly what you did,
in what order?


On Tue, Feb 16, 2016 at 8:50 PM, Lingesh Mouleeshwaran <
lingeshmouleeshwaran(a)> wrote:

Hi Amit
The main advantage that we are targeting is to reduce deployment time for
any changes in Cloud Foundry. The advantages include, but are not limited to:
* Targeting specific components for changes
* Deployment time
* Addressing specific components for patch updates
* Easier deployment
* Easier maintenance, etc.

Lingesh M

On Wed, Feb 17, 2016 at 4:03 AM, Amit Gupta <agupta(a)> wrote:

Hi Surendhar,

May I ask why you want to split the deployment into multiple deployments?
What problem are you having that you're trying to solve by doing this?


On Mon, Feb 15, 2016 at 9:34 AM, Suren R <suren.devices(a)> wrote:

Hi Cloud Foundry!
We are trying to split the Cloud Foundry deployment into multiple
deployments, where each CF component will have its own deployment manifest.
We are doing this in an existing CF. We have moved all components except
nats and etcd into the new deployments; the original single deployment now
has just these two jobs.
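
For example, each component would then be deployed on its own, roughly
like this (manifest names are hypothetical; BOSH v1 CLI assumed):

    # Sketch only: one manifest and one deployment per component
    bosh deployment etcd.yml && bosh deploy
    bosh deployment nats.yml && bosh deploy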

The existing deployment has 3 etcd machines. The migration idea is to
bring 4 new etcd machines into the cluster through the new deployment,
point all other components to these four etcd machines, and delete the
existing 3 nodes.
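
As a rough sketch, assuming etcdctl 2.x and illustrative endpoints, the
intended end state could be checked with:

    # Sketch only: confirm the four new members are present and healthy
    etcdctl --peers http://new-etcd-0.example.com:4001 member list
    etcdctl --peers http://new-etcd-0.example.com:4001 cluster-health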

However, if we delete the existing 3 nodes and do an update to form a
4-node cluster, the cluster breaks and as a result all running apps go
down. (The canary job brings one node down for the update, so the cluster's
failure tolerance is breached.)

We also tried removing these three nodes from the cluster using the
etcdctl command, and then applying the deletion through a bosh update to
the new deployment. This also makes the bosh deployment fail (the etcd job
fails, saying "unequal number of nodes").

In this situation, what would be the best way to reduce the nodes in the
etcd cluster?

