Hello all, For the first time this year, we'll be having a CF Hackathon at Cloud Foundry Summit Silicon Valley (Jun 13-15). It's free, and will have dedicated time from 9am-3pm Tuesday, with projects due Wednesday at 3pm. Winners will be announced on stage during Thursday's keynotes, and the top three teams (max 4 people) will receive various awesome robots as prizes. You can sign up for this while registering for Summit, but walk-ins will be welcome Tuesday morning. More details here <https://www.cloudfoundry.org/event_subpages/events-sv-2017/>. Please reach out if you have any questions and/or suggestions. Hope to see you there! Chris Clark Technical Operations Manager Cloud Foundry Foundation
|
|
Building stemcell on ESX/vSphere
|
|
Re: Update deployment with new packages leaves old packages on deployed machine
A copy of the old package is kept around so that reverting to the previous version requires less downtime; this minimizes downtime in the case of a failed deploy. It is an intended feature, and there is currently no way to disable this behavior.
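For anyone who wants to see these leftover copies for themselves, a quick check on an affected VM looks something like this (the job, deployment, and package names below are placeholders; ssh syntax varies slightly between CLI v1 and v2):
  bosh ssh <job-name>/0        # or: bosh -d <deployment> ssh <instance-group>/0 with the v2 CLI
  ls -l /var/vcap/data/packages/<package-name>/   # one folder per kept version
  du -sh /var/vcap/data/packages/*/*              # disk used by each kept copy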
--Tyler
On Sun, May 7, 2017 at 2:17 PM Benjamin Gandon <benjamin(a)gandon.org> wrote: Sounds like a workaround to me.
I'm not aware of any feature in BOSH that would need old packages to stay around on BOSH instances.
But it could be due to BOSH delegating process restarts to Monit and not knowing exactly when the old binaries are no longer needed. Sure, Monit has specs about how to restart things, but it also has its own agenda.
This is only a guess, though. The truth is in the code. Or when Dmitry speaks. :)
One other observation: if the downtime implied by '--recreate' is an issue for you, then you might be operating *Pets* instead of *Cattle*, and my advice is to pay attention to any bigger reliability problems there.
Using '--recreate' is something that should be doable harmlessly on a regular basis, like the 'repave' part of the rotate-repave-repair 3 R's of security.
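To put it concretely, the recreate variant is just the regular deploy with one extra flag (environment, deployment, and manifest names below are placeholders):
  # CLI v1, as used elsewhere in this thread
  bosh deployment my-manifest.yml
  bosh deploy --recreate
  # CLI v2 equivalent
  bosh -e my-env -d my-deployment deploy my-manifest.yml --recreate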
On 3 May 2017 at 16:58, Alexander Claus <alexander.claus(a)sap.com> wrote:
Hey, first: sorry if this question is already here somewhere in the list. Unfortunately the search is not working; I always get "Sorry, no email could be found for this query", no matter which search term I enter. Am I doing something wrong, or is the search broken?
My real issue: when updating an existing deployment with a new release, artifacts from the former release still reside on the deployment VM after the update has finished. In our case we have multiple packages which are not marginal in size, each around 100MB; in sum that is 500-600MB for the packages which form the release.
When we update an existing deployment, one can see via 'bosh deployments' that during the update the deployment has both the former and the new release, and after the 'bosh deploy' command for the update has finished the same view shows only the new release. So far so good. When going to the machine via 'bosh ssh', one can see that multiple folders exist in /var/vcap/data/packages/<package-name>: one for the current version of the package and one for the former version. I checked with multiple successive updates (versions 1>2>3>4, ...) that this mechanism only keeps the last version in parallel, but not more.
Furthermore, I found out that the naming of the folders containing the different versions of the packages apparently depends on the CPI. On bosh-lite one can see a relation to the package fingerprints and the SHA1s of the compiled packages. On Bosh(a)AWS I cannot recognize any relation to any fingerprint, SHA1, blobstore id, or whatever.
Here comes my question: Is it a bug or a feature that the old packages remain on the deployed vm after a finished update? I could not find anything about this in the bosh documentation.
Since in our case the disk space used by these duplicate, unused packages is significant: is there any way to deactivate this "feature", i.e. to keep only the current version of a package? I know that I could do the update via 'bosh deploy --recreate' and the result would be a machine with only the needed package version, but then the complete VM is destroyed and recreated, which takes a significant amount of time depending on the cloud provider. So I'd favour an option to update without '--recreate'.
Hope that someone can help. Alexander
|
|
Re: redis release with links
Dr Nic. This is super useful thanks. On Sun, May 7, 2017 at 5:30 PM Dr Nic Williams <drnicwilliams(a)gmail.com> wrote: I've upgraded https://github.com/cloudfoundry-community/redis-boshrelease to provide/consume bosh links.
Also there is a 2-node manifest for cloud-config/bosh2 users:
bosh2 deploy manifests/redis.yml -d redis
The `redis-password` variable will be generated into credhub/config-server.
Or without credhub:
bosh2 deploy manifests/redis.yml -d redis --vars-store tmp/creds.yml
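If you want to see the generated value afterwards, it can be read back out of the vars store with 'bosh2 int' (the path matches the 'redis-password' variable mentioned above):
  bosh2 deploy manifests/redis.yml -d redis --vars-store tmp/creds.yml
  bosh2 int tmp/creds.yml --path /redis-password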
To backup/restore redis, there's an open PR for shield to make this easy/fun https://github.com/starkandwayne/shield-boshrelease/pull/76
Cheers Dr Nic
-- Dr Nic Williams Stark & Wayne LLC http://starkandwayne.com +61 437 276 076 twitter @drnic
-- Duncan Winn Cloud Foundry PCF Services
|
|
|
|
Re: Update deployment with new packages leaves old packages on deployed machine

Benjamin Gandon
Sounds like a workaround to me.
I'm not aware of any feature in BOSH that would need old packages to stay around on BOSH instances.
But it could be due to BOSH delegating process restarts to Monit and not knowing exactly when the old binaries are no longer needed. Sure, Monit has specs about how to restart things, but it also has its own agenda.
This is only a guess, though. The truth is in the code. Or when Dmitry speaks. :)
One other observation: if the downtime implied by '--recreate' is an issue for you, then you might be operating Pets instead of Cattle, and my advice is to pay attention to any bigger reliability problems there.
Using '--recreate' is something that should be doable harmlessly on a regular basis, like the 'repave' part of the rotate-repave-repair 3 R's of security.
On 3 May 2017 at 16:58, Alexander Claus <alexander.claus(a)sap.com> wrote:
Hey, first: sorry if this question is already here somewhere in the list. Unfortunately the search is not working; I always get "Sorry, no email could be found for this query", no matter which search term I enter. Am I doing something wrong, or is the search broken?
My real issue: when updating an existing deployment with a new release, artifacts from the former release still reside on the deployment VM after the update has finished. In our case we have multiple packages which are not marginal in size, each around 100MB; in sum that is 500-600MB for the packages which form the release.
When we update an existing deployment, one can see via 'bosh deployments' that during the update the deployment has both the former and the new release, and after the 'bosh deploy' command for the update has finished the same view shows only the new release. So far so good. When going to the machine via 'bosh ssh', one can see that multiple folders exist in /var/vcap/data/packages/<package-name>: one for the current version of the package and one for the former version. I checked with multiple successive updates (versions 1>2>3>4, ...) that this mechanism only keeps the last version in parallel, but not more.
Furthermore, I found out that the naming of the folders containing the different versions of the packages apparently depends on the CPI. On bosh-lite one can see a relation to the package fingerprints and the SHA1s of the compiled packages. On Bosh(a)AWS I cannot recognize any relation to any fingerprint, SHA1, blobstore id, or whatever.
Here comes my question: Is it a bug or a feature that the old packages remain on the deployed vm after a finished update? I could not find anything about this in the bosh documentation.
Since in our case the disk space used by these duplicate, unused packages is significant: is there any way to deactivate this "feature", i.e. to keep only the current version of a package? I know that I could do the update via 'bosh deploy --recreate' and the result would be a machine with only the needed package version, but then the complete VM is destroyed and recreated, which takes a significant amount of time depending on the cloud provider. So I'd favour an option to update without '--recreate'.
Hope that someone can help. Alexander
|
|
disabling ipv6 at the kernel level in stemcells
hey all,
some time ago we disabled ipv6 via ipv6.disable=1 in grub in 3363.19+ linux stemcells. in previous stemcells it was disabled via sysctl at bootup.
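roughly speaking, the two mechanisms look like this (a sketch, not the exact stemcell-builder code):
  # 3363.19+ stemcells: kernel-level disable on the grub cmdline
  GRUB_CMDLINE_LINUX="... ipv6.disable=1"
  # previous stemcells: disabled at bootup via sysctl instead
  sysctl -w net.ipv6.conf.all.disable_ipv6=1
  sysctl -w net.ipv6.conf.default.disable_ipv6=1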
we are now aware that a small number of releases may be affected (one release, for example, was disabling a portion of ipv6 functionality itself but can no longer do so since the /proc/... entry is gone and that code was not checking for its existence). we also had a report that some java processes may be affected if they were using particular libraries that for some reason try to obtain a local ipv6 address (even though ipv6 was disabled before their startup).
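release authors who toggle ipv6 settings themselves can avoid this failure by checking that the /proc entry exists first, e.g.:
  # illustrative guard; the write is simply skipped when ipv6 is disabled at the kernel level
  f=/proc/sys/net/ipv6/conf/all/disable_ipv6
  if [ -e "$f" ]; then
    echo 1 > "$f"
  fi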
we try very hard to avoid making any breaking changes in minor stemcell versions; however, this change turned out to be more disruptive than we expected. given that it affects only a small number of releases we have decided to keep it in (hoping that it should be easy for release authors to issue a patch if necessary).
(for folks thinking about the future ipv6 support, bosh-agent will automatically turn it on at runtime if necessary.)
as usual feel free to reach out to us on #bosh slack if you need any help.
dmitriy
|
|
Update deployment with new packages leaves old packages on deployed machine
Hey, first: sorry if this question is already here somewhere in the list. Unfortunately the search is not working; I always get "Sorry, no email could be found for this query", no matter which search term I enter. Am I doing something wrong, or is the search broken?
My real issue: when updating an existing deployment with a new release, artifacts from the former release still reside on the deployment VM after the update has finished. In our case we have multiple packages which are not marginal in size, each around 100MB; in sum that is 500-600MB for the packages which form the release.
When we update an existing deployment, one can see via 'bosh deployments' that during the update the deployment has both the former and the new release, and after the 'bosh deploy' command for the update has finished the same view shows only the new release. So far so good. When going to the machine via 'bosh ssh', one can see that multiple folders exist in /var/vcap/data/packages/<package-name>: one for the current version of the package and one for the former version. I checked with multiple successive updates (versions 1>2>3>4, ...) that this mechanism only keeps the last version in parallel, but not more.
Furthermore, I found out that the naming of the folders containing the different versions of the packages apparently depends on the CPI. On bosh-lite one can see a relation to the package fingerprints and the SHA1s of the compiled packages. On Bosh(a)AWS I cannot recognize any relation to any fingerprint, SHA1, blobstore id, or whatever.
Here comes my question: Is it a bug or a feature that the old packages remain on the deployed vm after a finished update? I could not find anything about this in the bosh documentation.
Since in our case the disk space used by these duplicate, unused packages is significant: is there any way to deactivate this "feature", i.e. to keep only the current version of a package? I know that I could do the update via 'bosh deploy --recreate' and the result would be a machine with only the needed package version, but then the complete VM is destroyed and recreated, which takes a significant amount of time depending on the cloud provider. So I'd favour an option to update without '--recreate'.
Hope that someone can help. Alexander
|
|
I can't wait to try out the new log-in com-mand
[image: Gemoji image for :trollface:]
(Srsly great work bosh folks! :))
On Tue, 2 May 2017 at 06:39 Sean Keery <skeery(a)pivotal.io> wrote: Excellent. Thanks to the team for all the hard work.
On Mon, May 1, 2017 at 7:39 PM Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote:
Hey all,
I am happy to announce BOSH CLI v2 is now generally available. CLI v2 incorporates tons of feedback received over the past few years. Some features have been redesigned, some removed, and some hopefully much improved.
You will find docs available on https://bosh.io/docs. Let us know if you find any missing material (I'm sure there is some). Here are some documentation pages worth mentioning:
- https://bosh.io/docs#basic-deploy - cli v2 section on the index page
- https://bosh.io/docs/cli-v2 - all commands
- https://bosh.io/docs/cli-v2-diff - some notable cli v1 vs v2 differences
CLI binary also links directly to the command specific documentation section from its command help output (-h), so more information is just a command+click away.
CLI v1 will continue to work and be supported for some time; however, new Director features will not be exposed in v1.
Feel free to drop by #bosh slack channel if you have any questions,
BOSH team
-- *Sean Keery | Minister of Chaos | Pivotal Cloud Foundry Solutions* Mobile: 970.274.1285 | skeery(a)pivotal.io LinkedIn: @zgrinch <http://www.linkedin.com/in/zgrinch> | Twitter: @zgrinch <https://twitter.com/zgrinch> | Github: @skibum55 <https://github.com/skibum55>
Adopt the Silicon Valley state of mind
|
|
Excellent. Thanks to the team for all the hard work. On Mon, May 1, 2017 at 7:39 PM Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote: Hey all,
I am happy to announce BOSH CLI v2 is now generally available. CLI v2 incorporates tons of feedback received over the past few years. Some features have been redesigned, some removed, and some hopefully much improved.
You will find docs available on https://bosh.io/docs. Let us know if you find any missing material (I'm sure there is some). Here are some documentation pages worth mentioning:
- https://bosh.io/docs#basic-deploy - cli v2 section on the index page
- https://bosh.io/docs/cli-v2 - all commands
- https://bosh.io/docs/cli-v2-diff - some notable cli v1 vs v2 differences
CLI binary also links directly to the command specific documentation section from its command help output (-h), so more information is just a command+click away.
CLI v1 will continue to work and be supported for some time; however, new Director features will not be exposed in v1.
Feel free to drop by #bosh slack channel if you have any questions,
BOSH team
-- *Sean Keery | Minister of Chaos | Pivotal Cloud Foundry Solutions* Mobile: 970.274.1285 | skeery(a)pivotal.io LinkedIn: @zgrinch < http://www.linkedin.com/in/zgrinch> | Twitter: @zgrinch < https://twitter.com/zgrinch> | Github: @skibum55 < https://github.com/skibum55> Adopt the Silicon Valley state of mind
|
|
This is really great. I've already been using it for a week. The "-" (hyphen) plays a crucial role. :)
On Mon, May 1, 2017 at 8:55 PM, Duncan Winn <dwinn(a)pivotal.io> wrote: Congrats - this is really good to see!!!
#bosh-all-the-things
On Mon, May 1, 2017 at 8:52 PM Michael Maximilien <mmaximilien(a)gmail.com> wrote:
Yeah!!! Wonderful news DK. Well done (even if a few hours late ;)
But seriously, this is a momentous accomplishment by the BOSH team and you leading it. Proud to be an extended member of this team.
Cheers,
max ibm cloud lab silicon valley, ca maximilien.org
On Mon, May 1, 2017 at 7:38 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote:
Hey all,
I am happy to announce BOSH CLI v2 is now generally available. CLI v2 incorporates tons of feedback received over the past few years. Some features have been redesigned, some removed, and some hopefully much improved.
You will find docs available on https://bosh.io/docs. Let us know if you find any missing material (I'm sure there is some). Here are some documentation pages worth mentioning:
- https://bosh.io/docs#basic-deploy - cli v2 section on the index page
- https://bosh.io/docs/cli-v2 - all commands
- https://bosh.io/docs/cli-v2-diff - some notable cli v1 vs v2 differences
CLI binary also links directly to the command specific documentation section from its command help output (-h), so more information is just a command+click away.
CLI v1 will continue to work and be supported for some time; however, new Director features will not be exposed in v1.
Feel free to drop by #bosh slack channel if you have any questions,
BOSH team
-- max http://maximilien.org http://blog.maximilien.com
-- Duncan Winn Cloud Foundry PCF Services
|
|
Congrats - this is really good to see!!! #bosh-all-the-things On Mon, May 1, 2017 at 8:52 PM Michael Maximilien <mmaximilien(a)gmail.com> wrote: Yeah!!! Wonderful news DK. Well done (even if a few hours late ;)
But seriously, this is a momentous accomplishment by the BOSH team and you leading it. Proud to be an extended member of this team.
Cheers,
max ibm cloud lab silicon valley, ca maximilien.org
On Mon, May 1, 2017 at 7:38 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote:
Hey all,
I am happy to announce BOSH CLI v2 is now generally available. CLI v2 incorporates tons of feedback received over the past few years. Some features have been redesigned, some removed, and some hopefully much improved.
You will find docs available on https://bosh.io/docs. Let us know if you find any missing material (I'm sure there is some). Here are some documentation pages worth mentioning:
- https://bosh.io/docs#basic-deploy - cli v2 section on the index page
- https://bosh.io/docs/cli-v2 - all commands
- https://bosh.io/docs/cli-v2-diff - some notable cli v1 vs v2 differences
CLI binary also links directly to the command specific documentation section from its command help output (-h), so more information is just a command+click away.
CLI v1 will continue to work and be supported for some time; however, new Director features will not be exposed in v1.
Feel free to drop by #bosh slack channel if you have any questions,
BOSH team
-- max http://maximilien.org http://blog.maximilien.com
-- Duncan Winn Cloud Foundry PCF Services
|
|
Yeah!!! Wonderful news DK. Well done (even if a few hours late ;)
But seriously, this is a momentous accomplishment by the BOSH team and you leading it. Proud to be an extended member of this team.
Cheers,
max ibm cloud lab silicon valley, ca maximilien.org
On Mon, May 1, 2017 at 7:38 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote: Hey all,
I am happy to announce BOSH CLI v2 is now generally available. CLI v2 incorporates tons of feedback received over the past few years. Some features have been redesigned, some removed, and some hopefully much improved.
You will find docs available on https://bosh.io/docs. Let us know if you find any missing material (I'm sure there is some). Here are some documentation pages worth mentioning:
- https://bosh.io/docs#basic-deploy - cli v2 section on the index page
- https://bosh.io/docs/cli-v2 - all commands
- https://bosh.io/docs/cli-v2-diff - some notable cli v1 vs v2 differences
CLI binary also links directly to the command specific documentation section from its command help output (-h), so more information is just a command+click away.
CLI v1 will continue to work and be supported for some time; however, new Director features will not be exposed in v1.
Feel free to drop by #bosh slack channel if you have any questions,
BOSH team
|
|
Hey all,
I am happy to announce BOSH CLI v2 is now generally available. CLI v2 incorporates tons of feedback received over the past few years. Some features have been redesigned, some removed, and some hopefully much improved.
You will find docs available on https://bosh.io/docs. Let us know if you find any missing material (I'm sure there is some). Here are some documentation pages worth mentioning:
- https://bosh.io/docs#basic-deploy - cli v2 section on the index page
- https://bosh.io/docs/cli-v2 - all commands
- https://bosh.io/docs/cli-v2-diff - some notable cli v1 vs v2 differences
CLI binary also links directly to the command specific documentation section from its command help output (-h), so more information is just a command+click away.
CLI v1 will continue to work and be supported for some time; however, new Director features will not be exposed in v1.
Feel free to drop by #bosh slack channel if you have any questions,
BOSH team
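For a quick feel of the new workflow, a minimal v2 session looks roughly like this (the environment alias, address, certificate path, and deployment name are placeholders; see the docs pages above for the authoritative reference):
  # alias a director and log in, then deploy with the v2 command layout
  bosh alias-env my-env -e 10.0.0.6 --ca-cert ./director_ca.pem
  bosh -e my-env log-in
  bosh -e my-env -d my-dep deploy manifest.yml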
|
|
Waiting for the agent on VM - failure clarification
I'm using the bosh v2 create-env command to deploy a bosh instance. The deploy was failing, and it took me a bit to discover that the problem was not that the agent on the new VM was failing, but that the bosh cli could not communicate from the host I was running it on to the previous or new VM. Would it make sense to have more detail on the 'waiting for agent' task output?
Started deploying
  Waiting for the agent on VM 'vm-877cff87-2451-4052-a98f-7a9ab9070108'... Failed (00:00:30)
  Deleting VM 'vm-877cff87-2451-4052-a98f-7a9ab9070108'... Finished (00:00:12)
  Creating VM for instance 'bosh/0' from stemcell 'sc-ae3823ea-7e05-4be2-814c-c85948266009'... Finished (00:00:24)
  Waiting for the agent on VM 'vm-d3c1b9b7-d348-4fc0-a33a-851ca83c8ad8' to be ready... Failed (00:10:09)
Failed deploying (00:11:23)
Stopping registry... Finished (00:00:00)
Cleaning up rendered CPI jobs... Finished (00:00:00)
Deploying:
  Creating instance 'bosh/0':
    Waiting until instance is ready:
      Sending ping to the agent:
        Performing request to agent endpoint 'https://mbus:<redacted>@10.10.10.10:6868/agent':
          Performing POST request:
            Post https://mbus:<redacted>@10.10.10.10:6868/agent: dial tcp 10.10.10.10:6868: i/o timeout
Exit code 1
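Before suspecting the agent itself, it's worth confirming that the host running create-env can reach the mbus endpoint from the error above at all (the IP and port below are just the ones from that log):
  nc -vz -w 5 10.10.10.10 6868
  # or, if nc is not available, a bare HTTPS probe (certificate checks skipped on purpose)
  curl -vk --max-time 5 https://10.10.10.10:6868/agent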
|
|
Re: Elixir for bosh director?
Leandro David Cacciagioni
To be honest, I like Go... For CLIs or clients, I don't know any other language as easy to compile or as easy to get up and running. But in the server field, Go has the same major problem as any imperative language... shared mutable state... which makes it not the best fit for a highly concurrent, distributed BOSH. Plus Elixir has OTP, which makes your life extremely easy, and the Elixir syntax is similar to Ruby, which is the current language of choice for the BOSH director; that's why I chose it over pure Erlang. Once again, my plan is just to replace the server side of the equation, because for CLIs Elixir is a no-go when you compare it with Go. Anyway, this is my point of view, which I'll try to prove with some coding... Then, if it picks up in the community, great!!! If not, tough luck.
Thanks, Leandro.-
On Apr 28, 2017 02:43, "Gwenn Etourneau" <getourneau(a)pivotal.io> wrote: Leandro,
To be honest if I have to choose, I will prefer Go over Elixir / Erlang.
Most of the tooling around CF is written in Go, and the community (I think) has already spent time learning Go and is now pretty good with it.
I'm not sure introducing another language just for the beauty of it (yak shaving) is a good idea; I like the path that the bosh cli took by rewriting everything in Go.
Thanks.
On Fri, Apr 28, 2017 at 6:15 AM, Leandro David Cacciagioni < leandro.21.2008(a)gmail.com> wrote:
Eric, thanks a lot. I really appreciate your point of view, and I would say that yes, my idea of involving Elixir / Erlang is to have a proper multi-VM deployment to create a fully redundant, highly available BOSH deployment. How you deploy and update the director and its components could change a little bit, and maybe the CLI would change in the future ;). Anyway, I know that work like this can take a lot of time and will have to involve more people over time, if the day comes. Let's see if I can get some minimal POC together over the next months, with at least the basic features.
Thanks, Leandro.-
2017-04-27 21:57 GMT+02:00 Eric Malm <emalm(a)pivotal.io>:
Leandro,
If you intend your project eventually to be considered for the CFF to adopt, please license it as Apache 2.0. That license is used uniformly across other Foundation projects. Please see https://www.cloudfoundry.org/governance/cff_ip_policy/ for more details.
I understand the technical benefit of the hot-reloading feature that Erlang brings, but I view it as incompatible with the realities of how BOSH itself is deployed. It's typically bootstrapped from some other tool in the BOSH ecosystem, whether that be another BOSH instance, or the new v2 BOSH CLI, or even the ancient bosh micro CLI plugin. Those tools all follow the BOSH update pattern of stopping services on a VM, replacing the software bits and configuration (and, in the CLI cases, even the VM itself!), and restarting the services. Unless you go out of your way with the BOSH release itself to violate the expectations of the BOSH job lifecycle, there's no opportunity to take advantage of that hot-reloading feature, and it wouldn't work at all anyway if the VM is replaced.
I think a more effective solution regarding downtime would be to make BOSH deployable in a fully HA mode, which would address both availability during upgrades and tolerance to a wider variety of failure modes (component, VM, availability zone). I've heard Dmitriy mention that as a potential direction for BOSH in the past, but taking a quick look at the BOSH project tracker I don't currently see work related to that effort. Even then, for almost everyone, BOSH is a means to the end of deploying the software you really care about in a way that allows you to evolve it over time. So it's typically not a substantial issue in practice for BOSH to have only a few 9s of availability, so long as the state it retains about the deployments it manages can always be restored successfully to a new BOSH director within a suitable period of time.
Finally, having been deeply involved in a rewrite of another major CF subsystem (DEAs to Diego), +1000000 on Jonathan's observation about rewrites always being harder to execute and taking longer than you expect, even when you try to account for those expected delays. (This can be viewed as one manifestation of the more general Hofstadter's Law <https://en.wikipedia.org/wiki/Hofstadter%27s_law>.) If you do perceive some benefits to simplifying the BOSH architecture and think that can be achieved through a rewrite in a different language, look for seams and interfaces to keep that change as small as possible while still being impactful.
Thanks, Eric, CF Diego PM
On Thu, Apr 27, 2017 at 12:00 PM, Leandro David Cacciagioni < leandro.21.2008(a)gmail.com> wrote:
Guys, I'm not saying that the director is bad or wrong; actually, what I want is maybe to improve it a little bit without touching the logic or the API. My final goal is maybe to create a drop-in replacement while keeping the agent and the logic in place. I know it can be hard work, but OTP solves a lot of "edge cases" of the classic languages out of the box.
Geoff, by downtime I mean that, no matter what, in languages like ruby/python/go or any "classical" language you need to stop and restart the server to pick up new code, while in erlang / elixir there is no need for this, since they have a feature called "hot code reloading" (you can read about it here <http://learnyousomeerlang.com/designing-a-concurrent-application#hot-code-loving>, here <http://erlang.org/doc/man/code.html> and here <http://www.unstablebuild.com/2016/03/18/hot-code-reload-in-elixir.html>). It is one of the mottos of Erlang: 99.9999999% (nine nines of availability); you can read more here <https://pragprog.com/articles/erlang>.
Marco good catch and thanks for the suggestion for the license, maybe I'll evaluate some others like Apache or LGPL.
Thanks, Leandro.-
2017-04-27 20:26 GMT+02:00 Voelz, Marco <marco.voelz(a)sap.com>:
Dear Leandro,
I'd love to see your experiment grow – keep in mind that the Director has been around for quite a while and has some pretty complicated corner cases. Just like with any rewrite: it is pretty simple to get 80% right, but then you'll spend much time on getting the remaining 20%.
A word on the license: if your target audience is really companies like IBM, don't go with GPL. I know that, for example, GPL is a no-go for us at SAP. I would assume a similar policy is in place in pretty much every big enterprise.
Warm regards
Marco
*From: *Leandro David Cacciagioni <leandro.21.2008(a)gmail.com> *Reply-To: *"Discussions about the Cloud Foundry BOSH project." < cf-bosh(a)lists.cloudfoundry.org> *Date: *Thursday, 27. April 2017 at 19:57 *To: *"Discussions about the Cloud Foundry BOSH project." < cf-bosh(a)lists.cloudfoundry.org> *Subject: *[cf-bosh] Re: Re: Elixir for bosh director?
OK, what you quote is certainly amazing; anyway, that only tackles the scalability part (I know for sure that elixir/erlang can handle the same a lot better), but it doesn't solve the fault-tolerance part or true no-downtime deployments (I know that people like IBM would love to update BOSH with true / zero downtime). Plus all the simplification in the Director logic that can come from using the proper tool for the right job. Anyway, I think I'll start a POC under a GPL license to make a compatible "BOSH director" using elixir. Anyone who would like to help is more than welcome.
2017-04-27 17:59 GMT+02:00 Geoff Franks <geoff(a)starkandwayne.com>:
FWIW, we've managed BOSHes with many deployments, some of which consist of ~1000 VMs, and not seen any direct performance issues of the BOSH director. Just lengthy deploys due to having so many VMs to iterate through.
I've also seen a significant uptick in responsiveness from the bosh cli when using the v2 cli, since ruby isn't parsing for tons of gemfiles every time I start the CLI up.
On Apr 27, 2017, at 9:01 AM, Leandro David Cacciagioni < leandro.21.2008(a)gmail.com> wrote:
After more than 6 months working with elixir in prod, it crossed my mind that maybe it deserves some time to experiment and think about the possibility of a *TOTAL REWRITE OF THE BOSH DIRECTOR USING ELIXIR*.
Some of the pros that I can list out of the box (without digging too much into the technical side) are:
- Ruby-like syntax (I know, I know... this means a lot for people who don't like Erlang syntax) (I'm used to both so far)
- Ease of development thanks to OTP & FP:
  - Scalability (ex: http://www.phoenixframework.org/blog/the-road-to-2-million-websocket-connections)
  - Fault-tolerance
  - True no-downtime updates.
- Simplification:
  - nats can be deprecated.
  - All the other jobs (Director, Registry, Blobstore, HM & CPI) can be OTP apps (Mix powered) under the same umbrella project.
  - Clustering out of the box
- Performance wins: given the nature of elixir/erlang/OTP, it is easy to guess that a single bosh instance will be able to manage more and bigger deployments than it does now.
This is a suggestion and I would like to know if you agree or don't and why.
Thanks,
Leandro.-
|
|
Re: Elixir for bosh director?
Leandro,
To be honest, if I have to choose, I will prefer Go over Elixir / Erlang.
Most of the tooling around CF is written in Go, and the community (I think) has already spent time learning Go and is now pretty good with it.
I'm not sure introducing another language just for the beauty of it (yak shaving) is a good idea; I like the path that the bosh cli took by rewriting everything in Go.
Thanks.
On Fri, Apr 28, 2017 at 6:15 AM, Leandro David Cacciagioni <leandro.21.2008(a)gmail.com> wrote: Eric, thanks a lot. I really appreciate your point of view, and I would say that yes, my idea of involving Elixir / Erlang is to have a proper multi-VM deployment to create a fully redundant, highly available BOSH deployment. How you deploy and update the director and its components could change a little bit, and maybe the CLI would change in the future ;). Anyway, I know that work like this can take a lot of time and will have to involve more people over time, if the day comes. Let's see if I can get some minimal POC together over the next months, with at least the basic features.
Thanks, Leandro.-
2017-04-27 21:57 GMT+02:00 Eric Malm <emalm(a)pivotal.io>:
Leandro,
If you intend your project eventually to be considered for the CFF to adopt, please license it as Apache 2.0. That license is used uniformly across other Foundation projects. Please see https://www.cloudfoundry.org/governance/cff_ip_policy/ for more details.
I understand the technical benefit of the hot-reloading feature that Erlang brings, but I view it as incompatible with the realities of how BOSH itself is deployed. It's typically bootstrapped from some other tool in the BOSH ecosystem, whether that be another BOSH instance, or the new v2 BOSH CLI, or even the ancient bosh micro CLI plugin. Those tools all follow the BOSH update pattern of stopping services on a VM, replacing the software bits and configuration (and, in the CLI cases, even the VM itself!), and restarting the services. Unless you go out of your way with the BOSH release itself to violate the expectations of the BOSH job lifecycle, there's no opportunity to take advantage of that hot-reloading feature, and it wouldn't work at all anyway if the VM is replaced.
I think a more effective solution regarding downtime would be to make BOSH deployable in a fully HA mode, which would address both availability during upgrades and tolerance to a wider variety of failure modes (component, VM, availability zone). I've heard Dmitriy mention that as a potential direction for BOSH in the past, but taking a quick look at the BOSH project tracker I don't currently see work related to that effort. Even then, for almost everyone, BOSH is a means to the end of deploying the software you really care about in a way that allows you to evolve it over time. So it's typically not a substantial issue in practice for BOSH to have only a few 9s of availability, so long as the state it retains about the deployments it manages can always be restored successfully to a new BOSH director within a suitable period of time.
Finally, having been deeply involved in a rewrite of another major CF subsystem (DEAs to Diego), +1000000 on Jonathan's observation about rewrites always being harder to execute and taking longer than you expect, even when you try to account for those expected delays. (This can be viewed as one manifestation of the more general Hofstadter's Law <https://en.wikipedia.org/wiki/Hofstadter%27s_law>.) If you do perceive some benefits to simplifying the BOSH architecture and think that can be achieved through a rewrite in a different language, look for seams and interfaces to keep that change as small as possible while still being impactful.
Thanks, Eric, CF Diego PM
On Thu, Apr 27, 2017 at 12:00 PM, Leandro David Cacciagioni < leandro.21.2008(a)gmail.com> wrote:
Guys, I'm not saying that the director is bad or wrong; actually, what I want is maybe to improve it a little bit without touching the logic or the API. My final goal is maybe to create a drop-in replacement while keeping the agent and the logic in place. I know it can be hard work, but OTP solves a lot of "edge cases" of the classic languages out of the box.
Geoff, by downtime I mean that, no matter what, in languages like ruby/python/go or any "classical" language you need to stop and restart the server to pick up new code, while in erlang / elixir there is no need for this, since they have a feature called "hot code reloading" (you can read about it here <http://learnyousomeerlang.com/designing-a-concurrent-application#hot-code-loving>, here <http://erlang.org/doc/man/code.html> and here <http://www.unstablebuild.com/2016/03/18/hot-code-reload-in-elixir.html>). It is one of the mottos of Erlang: 99.9999999% (nine nines of availability); you can read more here <https://pragprog.com/articles/erlang>.
Marco good catch and thanks for the suggestion for the license, maybe I'll evaluate some others like Apache or LGPL.
Thanks, Leandro.-
2017-04-27 20:26 GMT+02:00 Voelz, Marco <marco.voelz(a)sap.com>:
Dear Leandro,
I'd love to see your experiment grow – keep in mind that the Director has been around for quite a while and has some pretty complicated corner cases. Just like with any rewrite: it is pretty simple to get 80% right, but then you'll spend much time on getting the remaining 20%.
A word on the license: if your target audience is really companies like IBM, don't go with GPL. I know that, for example, GPL is a no-go for us at SAP. I would assume a similar policy is in place in pretty much every big enterprise.
Warm regards
Marco
*From: *Leandro David Cacciagioni <leandro.21.2008(a)gmail.com> *Reply-To: *"Discussions about the Cloud Foundry BOSH project." < cf-bosh(a)lists.cloudfoundry.org> *Date: *Thursday, 27. April 2017 at 19:57 *To: *"Discussions about the Cloud Foundry BOSH project." < cf-bosh(a)lists.cloudfoundry.org> *Subject: *[cf-bosh] Re: Re: Elixir for bosh director?
OK, what you quote is certainly amazing; anyway, that only tackles the scalability part (I know for sure that elixir/erlang can handle the same a lot better), but it doesn't solve the fault-tolerance part or true no-downtime deployments (I know that people like IBM would love to update BOSH with true / zero downtime). Plus all the simplification in the Director logic that can come from using the proper tool for the right job. Anyway, I think I'll start a POC under a GPL license to make a compatible "BOSH director" using elixir. Anyone who would like to help is more than welcome.
2017-04-27 17:59 GMT+02:00 Geoff Franks <geoff(a)starkandwayne.com>:
FWIW, we've managed BOSHes with many deployments, some of which consist of ~1000 VMs, and not seen any direct performance issues of the BOSH director. Just lengthy deploys due to having so many VMs to iterate through.
I've also seen a significant uptick in responsiveness from the bosh cli when using the v2 cli, since ruby isn't parsing for tons of gemfiles every time I start the CLI up.
On Apr 27, 2017, at 9:01 AM, Leandro David Cacciagioni < leandro.21.2008(a)gmail.com> wrote:
After more than 6 months working with elixir in prod, it crossed my mind that maybe it deserves some time to experiment and think about the possibility of a *TOTAL REWRITE OF THE BOSH DIRECTOR USING ELIXIR*.
Some of the pros that I can list out of the box (without digging too much into the technical side) are:
- Ruby-like syntax (I know, I know... this means a lot for people who don't like Erlang syntax) (I'm used to both so far)
- Ease of development thanks to OTP & FP:
  - Scalability (ex: http://www.phoenixframework.org/blog/the-road-to-2-million-websocket-connections)
  - Fault-tolerance
  - True no-downtime updates.
- Simplification:
  - nats can be deprecated.
  - All the other jobs (Director, Registry, Blobstore, HM & CPI) can be OTP apps (Mix powered) under the same umbrella project.
  - Clustering out of the box
- Performance wins: given the nature of elixir/erlang/OTP, it is easy to guess that a single bosh instance will be able to manage more and bigger deployments than it does now.
This is a suggestion and I would like to know if you agree or don't and why.
Thanks,
Leandro.-
|
|
Re: Elixir for bosh director?
Leandro David Cacciagioni
Eric, thanks a lot. I really appreciate your point of view, and I would say that yes, my idea of involving Elixir / Erlang is to have a proper multi-VM deployment to create a fully redundant, highly available BOSH deployment. How you deploy and update the director and its components could change a little bit, and maybe the CLI would change in the future ;). Anyway, I know that work like this can take a lot of time and will have to involve more people over time, if the day comes. Let's see if I can get some minimal POC together over the next months, with at least the basic features.
Thanks, Leandro.-
2017-04-27 21:57 GMT+02:00 Eric Malm <emalm(a)pivotal.io>:
Leandro,
If you intend your project eventually to be considered for the CFF to adopt, please license it as Apache 2.0. That license is used uniformly across other Foundation projects. Please see https://www.cloudfoundry.org/governance/cff_ip_policy/ for more details.
I understand the technical benefit of the hot-reloading feature that Erlang brings, but I view it as incompatible with the realities of how BOSH itself is deployed. It's typically bootstrapped from some other tool in the BOSH ecosystem, whether that be another BOSH instance, or the new v2 BOSH CLI, or even the ancient bosh micro CLI plugin. Those tools all follow the BOSH update pattern of stopping services on a VM, replacing the software bits and configuration (and, in the CLI cases, even the VM itself!), and restarting the services. Unless you go out of your way with the BOSH release itself to violate the expectations of the BOSH job lifecycle, there's no opportunity to take advantage of that hot-reloading feature, and it wouldn't work at all anyway if the VM is replaced.
I think a more effective solution regarding downtime would be to make BOSH deployable in a fully HA mode, which would address both availability during upgrades and tolerance to a wider variety of failure modes (component, VM, availability zone). I've heard Dmitriy mention that as a potential direction for BOSH in the past, but taking a quick look at the BOSH project tracker I don't currently see work related to that effort. Even then, for almost everyone, BOSH is a means to the end of deploying the software you really care about in a way that allows you to evolve it over time. So it's typically not a substantial issue in practice for BOSH to have only a few 9s of availability, so long as the state it retains about the deployments it manages can always be restored successfully to a new BOSH director within a suitable period of time.
Finally, having been deeply involved in a rewrite of another major CF subsystem (DEAs to Diego), +1000000 on Jonathan's observation about rewrites always being harder to execute and taking longer than you expect, even when you try to account for those expected delays. (This can be viewed as one manifestation of the more general Hofstadter's Law <https://en.wikipedia.org/wiki/Hofstadter%27s_law>.) If you do perceive some benefits to simplifying the BOSH architecture and think that can be achieved through a rewrite in a different language, look for seams and interfaces to keep that change as small as possible while still being impactful.
Thanks, Eric, CF Diego PM
On Thu, Apr 27, 2017 at 12:00 PM, Leandro David Cacciagioni < leandro.21.2008(a)gmail.com> wrote:
Guys, I'm not saying that the director is bad or wrong; actually, what I want is maybe to improve it a little bit without touching the logic or the API. My final goal is maybe to create a drop-in replacement while keeping the agent and the logic in place. I know it can be hard work, but OTP solves a lot of "edge cases" of the classic languages out of the box.
Geoff, by downtime I mean that, no matter what, in languages like ruby/python/go or any "classical" language you need to stop and restart the server to pick up new code, while in erlang / elixir there is no need for this, since they have a feature called "hot code reloading" (you can read about it here <http://learnyousomeerlang.com/designing-a-concurrent-application#hot-code-loving>, here <http://erlang.org/doc/man/code.html> and here <http://www.unstablebuild.com/2016/03/18/hot-code-reload-in-elixir.html>). It is one of the mottos of Erlang: 99.9999999% (nine nines of availability); you can read more here <https://pragprog.com/articles/erlang>.
Marco good catch and thanks for the suggestion for the license, maybe I'll evaluate some others like Apache or LGPL.
Thanks, Leandro.-
2017-04-27 20:26 GMT+02:00 Voelz, Marco <marco.voelz(a)sap.com>:
Dear Leandro,
I'd love to see your experiment grow – keep in mind that the Director has been around for quite a while and has some pretty complicated corner cases. Just like with any rewrite: it is pretty simple to get 80% right, but then you'll spend much time on getting the remaining 20%.
A word on the license: if your target audience is really companies like IBM, don't go with GPL. I know that, for example, GPL is a no-go for us at SAP. I would assume a similar policy is in place in pretty much every big enterprise.
Warm regards
Marco
*From: *Leandro David Cacciagioni <leandro.21.2008(a)gmail.com> *Reply-To: *"Discussions about the Cloud Foundry BOSH project." < cf-bosh(a)lists.cloudfoundry.org> *Date: *Thursday, 27. April 2017 at 19:57 *To: *"Discussions about the Cloud Foundry BOSH project." < cf-bosh(a)lists.cloudfoundry.org> *Subject: *[cf-bosh] Re: Re: Elixir for bosh director?
OK, what you quote is certainly amazing; anyway, that only tackles the scalability part (I know for sure that elixir/erlang can handle the same a lot better), but it doesn't solve the fault-tolerance part or true no-downtime deployments (I know that people like IBM would love to update BOSH with true / zero downtime). Plus all the simplification in the Director logic that can come from using the proper tool for the right job. Anyway, I think I'll start a POC under a GPL license to make a compatible "BOSH director" using elixir. Anyone who would like to help is more than welcome.
2017-04-27 17:59 GMT+02:00 Geoff Franks <geoff(a)starkandwayne.com>:
FWIW, we've managed BOSHes with many deployments, some of which consist of ~1000 VMs, and not seen any direct performance issues of the BOSH director. Just lengthy deploys due to having so many VMs to iterate through.
I've also seen a significant uptick in responsiveness from the bosh cli when using the v2 cli, since ruby isn't parsing for tons of gemfiles every time I start the CLI up.
On Apr 27, 2017, at 9:01 AM, Leandro David Cacciagioni < leandro.21.2008(a)gmail.com> wrote:
After more than 6 months working with elixir in prod, it crossed my mind that maybe it deserves some time to experiment and think about the possibility of a *TOTAL REWRITE OF THE BOSH DIRECTOR USING ELIXIR*.
Some of the pros that I can list out of the box (without digging too much into the technical side) are:
- Ruby-like syntax (I know, I know... this means a lot for people who don't like Erlang syntax) (I'm used to both so far)
- Ease of development thanks to OTP & FP:
  - Scalability (ex: http://www.phoenixframework.org/blog/the-road-to-2-million-websocket-connections)
  - Fault-tolerance
  - True no-downtime updates.
- Simplification:
  - nats can be deprecated.
  - All the other jobs (Director, Registry, Blobstore, HM & CPI) can be OTP apps (Mix powered) under the same umbrella project.
  - Clustering out of the box
- Performance wins: given the nature of elixir/erlang/OTP, it is easy to guess that a single bosh instance will be able to manage more and bigger deployments than it does now.
This is a suggestion and I would like to know if you agree or don't and why.
Thanks,
Leandro.-
|
|
Re: Elixir for bosh director?
Leandro,
If you intend your project eventually to be considered for the CFF to adopt, please license it as Apache 2.0. That license is used uniformly across other Foundation projects. Please see https://www.cloudfoundry.org/governance/cff_ip_policy/ for more details.
I understand the technical benefit of the hot-reloading feature that Erlang brings, but I view it as incompatible with the realities of how BOSH itself is deployed. It's typically bootstrapped from some other tool in the BOSH ecosystem, whether that be another BOSH instance, or the new v2 BOSH CLI, or even the ancient bosh micro CLI plugin. Those tools all follow the BOSH update pattern of stopping services on a VM, replacing the software bits and configuration (and, in the CLI cases, even the VM itself!), and restarting the services. Unless you go out of your way with the BOSH release itself to violate the expectations of the BOSH job lifecycle, there's no opportunity to take advantage of that hot-reloading feature, and it wouldn't work at all anyway if the VM is replaced.
I think a more effective solution regarding downtime would be to make BOSH deployable in a fully HA mode, which would address both availability during upgrades and tolerance to a wider variety of failure modes (component, VM, availability zone). I've heard Dmitriy mention that as a potential direction for BOSH in the past, but taking a quick look at the BOSH project tracker I don't currently see work related to that effort. Even then, for almost everyone, BOSH is a means to the end of deploying the software you really care about in a way that allows you to evolve it over time. So it's typically not a substantial issue in practice for BOSH to have only a few 9s of availability, so long as the state it retains about the deployments it manages can always be restored successfully to a new BOSH director within a suitable period of time.
Finally, having been deeply involved in a rewrite of another major CF subsystem (DEAs to Diego), +1000000 on Jonathan's observation about rewrites always being harder to execute and taking longer than you expect, even when you try to account for those expected delays. (This can be viewed as one manifestation of the more general Hofstadter's Law <https://en.wikipedia.org/wiki/Hofstadter%27s_law>.) If you do perceive some benefits to simplifying the BOSH architecture and think that can be achieved through a rewrite in a different language, look for seams and interfaces to keep that change as small as possible while still being impactful.
Thanks, Eric, CF Diego PM
On Thu, Apr 27, 2017 at 12:00 PM, Leandro David Cacciagioni <leandro.21.2008(a)gmail.com> wrote: Guys, I'm not saying that the director is bad or wrong; actually, what I want is maybe to improve it a little bit without touching the logic or the API. My final goal is maybe to create a drop-in replacement while keeping the agent and the logic in place. I know it can be hard work, but OTP solves a lot of "edge cases" of the classic languages out of the box.
Geoff, by downtime I mean that, no matter what, in languages like ruby/python/go or any "classical" language you need to stop and restart the server to pick up new code, while in erlang / elixir there is no need for this, since they have a feature called "hot code reloading" (you can read about it here <http://learnyousomeerlang.com/designing-a-concurrent-application#hot-code-loving>, here <http://erlang.org/doc/man/code.html> and here <http://www.unstablebuild.com/2016/03/18/hot-code-reload-in-elixir.html>). It is one of the mottos of Erlang: 99.9999999% (nine nines of availability); you can read more here <https://pragprog.com/articles/erlang>.
Marco good catch and thanks for the suggestion for the license, maybe I'll evaluate some others like Apache or LGPL.
Thanks, Leandro.-
2017-04-27 20:26 GMT+02:00 Voelz, Marco <marco.voelz(a)sap.com>:
Dear Leandro,
I'd love to see your experiment grow – keep in mind that the Director has been around for quite a while and has some pretty complicated corner cases. Just like with any rewrite: it is pretty simple to get 80% right, but then you'll spend much time on getting the remaining 20%.
A word on the license: if your target audience is really companies like IBM, don't go with GPL. I know that, for example, GPL is a no-go for us at SAP. I would assume a similar policy is in place in pretty much every big enterprise.
Warm regards
Marco
*From: *Leandro David Cacciagioni <leandro.21.2008(a)gmail.com> *Reply-To: *"Discussions about the Cloud Foundry BOSH project." < cf-bosh(a)lists.cloudfoundry.org> *Date: *Thursday, 27. April 2017 at 19:57 *To: *"Discussions about the Cloud Foundry BOSH project." < cf-bosh(a)lists.cloudfoundry.org> *Subject: *[cf-bosh] Re: Re: Elixir for bosh director?
OK, what you quote is certainly amazing; anyway, that only tackles the scalability part (I know for sure that elixir/erlang can handle the same a lot better), but it doesn't solve the fault-tolerance part or true no-downtime deployments (I know that people like IBM would love to update BOSH with true / zero downtime). Plus all the simplification in the Director logic that can come from using the proper tool for the right job. Anyway, I think I'll start a POC under a GPL license to make a compatible "BOSH director" using elixir. Anyone who would like to help is more than welcome.
2017-04-27 17:59 GMT+02:00 Geoff Franks <geoff(a)starkandwayne.com>:
FWIW, we've managed BOSHes with many deployments, some of which consist of ~1000 VMs, and not seen any direct performance issues of the BOSH director. Just lengthy deploys due to having so many VMs to iterate through.
I've also seen a significant uptick in responsiveness from the bosh cli when using the v2 cli, since ruby isn't parsing for tons of gemfiles every time I start the CLI up.
On Apr 27, 2017, at 9:01 AM, Leandro David Cacciagioni < leandro.21.2008(a)gmail.com> wrote:
After more than 6 months working with elixir in prod, it crossed my mind that maybe it deserves some time to experiment and think about the possibility of a *TOTAL REWRITE OF THE BOSH DIRECTOR USING ELIXIR*.
Some of the pros that I can list out of the box (without digging too much into the technical side) are:
- Ruby-like syntax (I know, I know... this means a lot for people who don't like Erlang syntax) (I'm used to both so far)
- Ease of development thanks to OTP & FP:
  - Scalability (ex: http://www.phoenixframework.org/blog/the-road-to-2-million-websocket-connections)
  - Fault-tolerance
  - True no-downtime updates.
- Simplification:
  - nats can be deprecated.
  - All the other jobs (Director, Registry, Blobstore, HM & CPI) can be OTP apps (Mix powered) under the same umbrella project.
  - Clustering out of the box
- Performance wins: given the nature of elixir/erlang/OTP, it is easy to guess that a single bosh instance will be able to manage more and bigger deployments than it does now.
This is a suggestion and I would like to know if you agree or don't and why.
Thanks,
Leandro.-
|
|
Re: Elixir for bosh director?
few small comments on the thread...
"FWIW, we've managed BOSHes with many deployments, some of which consist of ~1000 VMs, and not seen any direct performance issues of the BOSH director. Just lengthy deploys due to having so many VMs to iterate through."
exactly what i was thinking.
"Scalability, Fault-tolerance, True no downtime updates."
that's all easy when there is no shared state to manage. the director carries a lot of state (as it has to), and that state has to be migrated over time for backwards compatibility etc. you can technically deploy as many directors as you want today (horizontally scalable), but in the end they all have to connect to some shared state (a database).
"I know that people like IBM will love to update BOSH with true / zero downtime"
downtime-less deployments of the director will be achievable soon enough when we expand the agent's connectivity options to allow for connecting to multiple directors. this will make rolling directors just a standard procedure (like rolling cloud controllers in cf, for example).
"Performance wins: given the nature of elixir/erlang/OTP, it is easy to guess that a single bosh instance will be able to manage more and bigger deployments than it does now."
if you take a look at where the majority of the time is spent, it's not in the director itself but in all the other components the director orchestrates (cpi, installing jobs, startup, etc.). optimizing the director would be focusing on 5%, and most likely the language choice isn't going to yield any noticeable change.
"Plus all the simplification in the Director logic that can come from using the proper tool for the right job"
not sure which parts you think can be simplified. the director is a pretty vanilla application that uses a db, etc.
On Thu, Apr 27, 2017 at 11:26 AM, Voelz, Marco <marco.voelz(a)sap.com> wrote: Dear Leandro,
I'd love to see your experiment grow – keep in mind that the Director has been around for quite a while and has some pretty complicated corner cases. Just like with any rewrite: it is pretty simple to get 80% right, but then you'll spend much time on getting the remaining 20%.
A word on the license: if your target audience is really companies like IBM, don't go with GPL. I know that, for example, GPL is a no-go for us at SAP. I would assume a similar policy is in place in pretty much every big enterprise.
Warm regards
Marco
*From: *Leandro David Cacciagioni <leandro.21.2008(a)gmail.com> *Reply-To: *"Discussions about the Cloud Foundry BOSH project." < cf-bosh(a)lists.cloudfoundry.org> *Date: *Thursday, 27. April 2017 at 19:57 *To: *"Discussions about the Cloud Foundry BOSH project." < cf-bosh(a)lists.cloudfoundry.org> *Subject: *[cf-bosh] Re: Re: Elixir for bosh director?
OK, what you quote is certainly amazing; anyway, that only tackles the scalability part (I know for sure that elixir/erlang can handle the same a lot better), but it doesn't solve the fault-tolerance part or true no-downtime deployments (I know that people like IBM would love to update BOSH with true / zero downtime). Plus all the simplification in the Director logic that can come from using the proper tool for the right job. Anyway, I think I'll start a POC under a GPL license to make a compatible "BOSH director" using elixir. Anyone who would like to help is more than welcome.
2017-04-27 17:59 GMT+02:00 Geoff Franks <geoff(a)starkandwayne.com>:
FWIW, we've managed BOSHes with many deployments, some of which consist of ~1000 VMs, and not seen any direct performance issues of the BOSH director. Just lengthy deploys due to having so many VMs to iterate through.
I've also seen a significant uptick in responsiveness from the bosh cli when using the v2 cli, since ruby isn't parsing for tons of gemfiles every time I start the CLI up.
On Apr 27, 2017, at 9:01 AM, Leandro David Cacciagioni < leandro.21.2008(a)gmail.com> wrote:
After more than 6 months working with elixir in prod, it crossed my mind that maybe it deserves some time to experiment and think about the possibility of a *TOTAL REWRITE OF THE BOSH DIRECTOR USING ELIXIR*.
Some of the pros that I can list out of the box (without digging too much into the technical side) are:
- Ruby-like syntax (I know, I know... this means a lot for people who don't like Erlang syntax) (I'm used to both so far)
- Ease of development thanks to OTP & FP:
  - Scalability (ex: http://www.phoenixframework.org/blog/the-road-to-2-million-websocket-connections)
  - Fault-tolerance
  - True no-downtime updates.
- Simplification:
  - nats can be deprecated.
  - All the other jobs (Director, Registry, Blobstore, HM & CPI) can be OTP apps (Mix powered) under the same umbrella project.
  - Clustering out of the box
- Performance wins: given the nature of elixir/erlang/OTP, it is easy to guess that a single bosh instance will be able to manage more and bigger deployments than it does now.
This is a suggestion and I would like to know if you agree or don't and why.
Thanks,
Leandro.-
|
|