
Re: BOSH Director Performance on Jammy Stemcells FAQ

 

FYI, if anyone wants to comment, there's a Google Doc which has the contents of the top email in this thread ("BOSH Director Performance on Jammy Stemcells FAQ") updated to reflect Benjamin's points: https://docs.google.com/document/d/1umE4SwaE0wxXNsQs3D4VdFufoBn06KjnHrzNltKIba8/edit?usp=sharing

On Fri, Oct 28, 2022 at 2:06 PM Brian Cunnie via lists.cloudfoundry.org <brian.cunnie=gmail.com@...> wrote:
Hi Benjamin,

On Fri, Oct 28, 2022 at 1:32 PM Benjamin Gandon <benjamin@...> wrote:
Since you intend the BOSH Agent to remove the Clang compiler on non-compilation VMs, I wonder how consistent this would be with the env.bosh.remove_dev_tools option [1] in instance groups (defaulting to false), which already controls the removal of GCC on runtime VMs.

I believe the mechanism we would use is the env.bosh.remove_dev_tools property that you describe; if you removed GCC, you would also remove Clang.
 
With the design you propose, we would end up with GCC kept on runtime VMs by default, with the option to remove it, whereas Clang would always be removed.

Good point! All the more reason to incorporate Clang-removal under the env.bosh.remove_dev_tools umbrella.
 
To get consistent behavior around compiler removal, what about these two alternative solutions:

1. Add a new env.bosh.remove_clang option that would default to true.

2. Have Clang removal be controlled by the env.bosh.remove_dev_tools option, and make it true by default. After all, there are no BOSH releases that need the compilers to be kept on runtime VMs.

I love that idea! I'd like to remove the dev_tools by default. I'll bring it up with the team to see if there's any reason why it shouldn't default to true.
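For readers unfamiliar with the option being discussed, env.bosh.remove_dev_tools is set per instance group in the deployment manifest. A minimal sketch (only the env.bosh property name is real; the instance-group name and surrounding manifest are illustrative):

```yaml
# Sketch of the existing knob discussed above.
instance_groups:
- name: some-instance-group   # illustrative name
  env:
    bosh:
      remove_dev_tools: true  # currently defaults to false
```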
 
Second remark, there hasn’t been any public CentOS 7 Stemcell generated for ≈3 years, as the latest v3763.61 tag [2] was set in Dec 2019 [3]. I can’t reasonably think that any organization would actually run software on such an old OS.

That's good news for us.
 
It seems the pipeline was stopped back then and has not been maintained since.
Could a deprecation warning for CentOS 7 Stemcells therefore be added to the Stemcells page [4]?

We are always willing to review pull requests! 😉

--
Brian Cunnie, 650.968.6262



--
Brian Cunnie, 650.968.6262


Re: BOSH Director Performance on Jammy Stemcells FAQ

Benjamin Gandon
 

Thank you for the info, Brian.

Since you intend the BOSH Agent to remove the Clang compiler on non-compilation VMs, I wonder how consistent this would be with the env.bosh.remove_dev_tools option [1] in instance groups (defaulting to false), which already controls the removal of GCC on runtime VMs.

With the design you propose, we would end up with GCC kept on runtime VMs by default, with the option to remove it, whereas Clang would always be removed.

To get consistent behavior around compiler removal, what about these two alternative solutions:

1. Add a new env.bosh.remove_clang option that would default to true.

2. Have Clang removal be controlled by the env.bosh.remove_dev_tools option, and make it true by default. After all, there are no BOSH releases that need the compilers to be kept on runtime VMs.


Second remark, there hasn’t been any public CentOS 7 Stemcell generated for ≈3 years, as the latest v3763.61 tag [2] was set in Dec 2019 [3]. I can’t reasonably think that any organization would actually run software on such an old OS.

It seems the pipeline was stopped back then and has not been maintained since.
Could a deprecation warning for CentOS 7 Stemcells therefore be added to the Stemcells page [4]?

Best,
Benjamin


Le 28 oct. 2022 à 21:46, Brian Cunnie <brian.cunnie@...> a écrit :

BOSH Director Performance on Jammy Stemcells FAQ

What’s the problem?

Programs run under a Ruby interpreter compiled with GCC (GNU Compiler Collection) on Jammy stemcells have a much larger RSS (Resident Set Size) memory footprint, which can cause memory pressure. This affects Ruby-based programs such as the BOSH Director and the BOSH Azure, AWS, and vSphere CPIs, and can cause BOSH operations such as “bosh deploy” to take much longer or even time out.

What’s the fix?

We plan to compile the Ruby interpreter on Jammy with Clang (an Apple-sponsored GCC-compatible compiler). Ruby interpreters compiled with Clang don’t appear to have the same memory bloat when running the BOSH Director or the Ruby-based CPIs.

How will we accomplish that?

We plan to include the Clang compiler on the Jammy stemcells. We also plan to modify the Ruby BOSH package to use Clang if it’s available, otherwise fall back to GCC.
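A sketch of that compiler-selection logic, assuming a typical bash packaging script (the real Ruby BOSH package may implement it differently):

```shell
#!/usr/bin/env bash
set -eu

# Prefer Clang when the stemcell provides it; otherwise fall back to GCC.
if command -v clang >/dev/null 2>&1; then
  CC=clang
else
  CC=gcc
fi
echo "building Ruby with CC=${CC}"

# A packaging script would then pass the choice to Ruby's build, e.g.:
#   CC="${CC}" ./configure --prefix="${BOSH_INSTALL_TARGET}" && make && make install
```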

Doesn’t the Clang compiler take up a lot of disk space?

Yes, the Clang compiler takes up 700-800 MB of disk space; however, we plan to instruct the BOSH agent to remove the Clang compiler on boot unless the VM is a compilation VM. In other words, the Clang compiler won’t take up precious space on the root disk for the typically deployed VM.

What about the Xenial, Bionic, and CentOS stemcells?

We don’t plan to install Clang on Xenial and Bionic stemcells; they don’t exhibit the performance problem, so Clang has little to offer.

We’re not sure whether the CentOS stemcell is affected.

Other than memory, is there a performance impact of using Clang-based Ruby?

In our testing, it appears that a Jammy-based, Clang-built Ruby Director with a Clang-built vSphere CPI offers a 5-25% performance boost over a Xenial-based, GCC-built Ruby Director.

What is the root cause of the problem?

We’re not sure; the problem appears to be related to the version of GCC used to compile Ruby. We noticed that the memory footprint of Ruby’s threads grew from Ubuntu Disco to Ubuntu Eoan: 204 kiB → 10400 kiB.
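For anyone who wants to reproduce such measurements, per-process RSS figures like the ones above can be read on Linux from /proc (a generic technique, not BOSH-specific):

```shell
# Print the resident set size (RSS) of a process -- here, the grep process
# itself. On a BOSH VM you would substitute the PID of the Director or CPI
# process for "self".
grep VmRSS /proc/self/status
```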

--
Brian Cunnie, 650.968.6262



Re: bosh acronym

daveankin+cf@...
 

That makes sense, thanks!


Re: bosh acronym

Rifa Achrinza
 

It's derived from Borg, Google's internal cluster management system.
BOSH was also created by 2 former Google engineers, which would explain
the influence :-)

BOSH: borg++ (r+1=s, g+1=h)

Taken from:
https://tanzu.vmware.com/content/blog/comparing-bosh-ansible-chef-part-1
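The letter shift is easy to verify with a one-liner (r→s and g→h are each a one-letter increment):

```shell
# tr maps r->s and g->h, turning "borg" into "bosh".
echo borg | tr 'rg' 'sh'   # prints "bosh"
```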

On Sat, May 14, 2022 at 05:51:59AM -0700, daveankin+spam@... wrote:
hi everyone,
what does bosh stand for?
thanks!





Removal of registry component from the bosh director

jpalermo@...
 

We've started tracking the work to finish up the removal of the registry from the bosh director: https://github.com/cloudfoundry/bosh/discussions/2355


Warning for release authors about AWS IMDSv2 support

jpalermo@...
 

The AWS CPI can now be configured to require tokens for IMDSv2 communication: https://bosh.io/jobs/aws_cpi?source=github.com/cloudfoundry/bosh-aws-cpi-release&version=91#p%3daws.metadata_options

The AWS CPI and the BOSH agent have been updated to support IMDSv2 tokens, but it's possible that some BOSH releases also need updates to support this configuration.

If a BOSH release uses instance profiles to communicate with AWS, it may need to be updated to support this new configuration.
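For release authors auditing their own code, the token flow is AWS's standard IMDSv2 API. It is shown via cat rather than executed, since the 169.254.169.254 metadata endpoint only exists on an EC2 instance:

```shell
# The two-step IMDSv2 flow a release must use once tokens are required:
# first obtain a session token, then pass it on every metadata request.
cat <<'EOF'
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
EOF
```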


Foundational Infrastructure Working Group is live!

Ruben Koster (VMware)
 

Hello community,

 

This message marks the official start of the Foundational Infrastructure Working Group. Below are a few links to common resources y’all might find useful:

  • Our weekly Office hours call, where we discuss PRs which are stuck in the review process and any other business. Meeting notes are kept here.
  • The GitHub project boards, one per working group area.

 

The formation of the working groups is still being actively worked on. For ideas or general feedback, feel free to drop by during the office hours call, or leave a response below.

 

Best regards,

 

Ruben Koster (VMware) & Beyhan Veli (SAP)


CFF Working Groups: feedback and participants requested!

Eric Malm
 

Hi, everyone,

As we mentioned at CF Summit last week, the newly formed Technical Oversight Committee (TOC) is currently carrying out more and more of the details of the transition from the previous project-team and project management committee (PMC) structure to the working groups and community roles.

To that end, we have proposed an initial set of working groups on PR #140 on the community repo, together with how they will subsume the existing project teams and components. As a community, we now need to confirm the scope of each of these working groups, to staff them with leads and approvers, and to get them up and running with transparent development and roadmap processes. Consequently, we need a few things from the community now:
  • Please provide feedback on the overall working-group organizational structure.
  • Please provide suggestions for and feedback on the goals, scope, and technical assets for each proposed working group.
  • Please identify people in the community as candidates for approvers/leads in each working group, including yourselves if you already work in this area.
We're currently working out these details for the App Runtime Deployments working group as a pilot case, so if you care about cf-deployment, KubeCF, cf-for-k8s, or the overall community mandate to provide reference deployments of the App Runtime, now is the right time to get involved! The TOC would also like to complete this process for the complete set of working groups over the next few weeks, so if you're more focused on another area of the CF community, we need your involvement as well.

We invite everyone to join the discussion of these working-group proposals at the public TOC meetings every Tuesday at 10:30 am ET (details on the CF community calendar), and to provide feedback asynchronously; the place to comment on a particular proposed working group is the draft PR for its charter. Here is the full list of those charter PRs:
Please don't worry if you don't see a particular project listed here: it should be present in the PR content itself, and if it's not, the TOC wants to know about it!

Thanks,
Eric Malm, TOC member


CFF TOC Meetings

Chip Childers
 

All,

The newly elected CFF TOC will initially be meeting weekly, on Tuesdays at 14:30 UTC (7:30 AM US Pacific Time, 4:30 PM Central European Time). 

These meetings are open to the entire community to attend, although I ask that you respect that the TOC will be setting the agenda and running the meeting.

The meetings will be recorded and posted to the foundation's YouTube channel.

Chip Childers
Executive Director
Cloud Foundry Foundation


Re: [cf-dev] Update regarding Bionic Stemcells: Version 1.10 released from Foundation owned infrastructure

Benjamin Gandon
 

Congrats Felix and SAP team, and all other CFF contributors for this great achievement!

Benjamin


Le 17 juin 2021 à 17:16, Riegger, Felix via lists.cloudfoundry.org <felix.riegger=sap.com@...> a écrit :

Dear Cloud Foundry community,
 
The migration of the Stemcell release infrastructure and pipelines from VMware to Cloud Foundry Foundation-owned accounts is mostly done. With Bionic Stemcell 1.10, the first Stemcell has been created and released with the new setup.
 
This is another huge step forward in bringing the Stemcells into the hands of the community. A big thank you to everyone involved in this journey.
 
What has changed?
 
The new Stemcells are now published in Google Cloud Storage instead of AWS S3. This change uncovered a bug in the BOSH CLI, which has been fixed in the latest version. Please update to https://github.com/cloudfoundry/bosh-cli/releases/tag/v6.4.4 to consume the latest Stemcells.
 
In the previous setup, the BOSH Acceptance Tests (BATS) and the IPv4 and IPv6 tests were executed against a vSphere environment owned by VMware. In the new setup these tests run in a GCP environment. Unfortunately, it was not possible to replicate the IPv6 tests there. While we don't expect big changes in this area, this implies that IPv6 might stop working at some point with newer Stemcells without us noticing. If this is important to you and you would like to contribute, please reach out to discuss options.
 
Where can I access the new infrastructure?
 
The Concourse instance is hosted at https://bosh.ci.cloudfoundry.org/ and runs in a GCP account of the Cloud Foundry Foundation.
 
Where is the code?
 
The repositories for the pipeline definitions and the stemcell code itself have not changed. They can be found in https://github.com/cloudfoundry/bosh-stemcells-ci and https://github.com/cloudfoundry/bosh-linux-stemcell-builder.
 
Any assets regarding the Concourse setup live in https://github.com/cloudfoundry/bosh-community-stemcell-ci-infra.
 
Who can access the pipelines and contribute?
 
Any member of https://github.com/orgs/cloudfoundry/teams/bosh-stemcell can access and work with the pipeline as well as the relevant repositories. If you would like to become a contributor, please reach out.
 
Where can I follow?
 
Work can be tracked in https://github.com/orgs/cloudfoundry/projects/4.
Anything else?
 
As was announced by the CFF Security Working Group in https://www.cloudfoundry.org/blog/security-advisory-update/ CFF security advisories are now in place for Bionic Stemcells and the release of version 1.10 triggered the first advisories.
 
Kind regards,
Felix


Update regarding Bionic Stemcells: Version 1.10 released from Foundation owned infrastructure

Riegger, Felix
 

Dear Cloud Foundry community,

 

The migration of the Stemcell release infrastructure and pipelines from VMware to Cloud Foundry Foundation-owned accounts is mostly done. With Bionic Stemcell 1.10, the first Stemcell has been created and released with the new setup.

 

This is another huge step forward in bringing the Stemcells into the hands of the community. A big thank you to everyone involved in this journey.

 

What has changed?

 

The new Stemcells are now published in Google Cloud Storage instead of AWS S3. This change uncovered a bug in the BOSH CLI, which has been fixed in the latest version. Please update to https://github.com/cloudfoundry/bosh-cli/releases/tag/v6.4.4 to consume the latest Stemcells.

 

In the previous setup, the BOSH Acceptance Tests (BATS) and the IPv4 and IPv6 tests were executed against a vSphere environment owned by VMware. In the new setup these tests run in a GCP environment. Unfortunately, it was not possible to replicate the IPv6 tests there. While we don't expect big changes in this area, this implies that IPv6 might stop working at some point with newer Stemcells without us noticing. If this is important to you and you would like to contribute, please reach out to discuss options.

 

Where can I access the new infrastructure?

 

The Concourse instance is hosted at https://bosh.ci.cloudfoundry.org/ and runs in a GCP account of the Cloud Foundry Foundation.

 

Where is the code?

 

The repositories for the pipeline definitions and the stemcell code itself have not changed. They can be found in https://github.com/cloudfoundry/bosh-stemcells-ci and https://github.com/cloudfoundry/bosh-linux-stemcell-builder.

 

Any assets regarding the Concourse setup live in https://github.com/cloudfoundry/bosh-community-stemcell-ci-infra.

 

Who can access the pipelines and contribute?

 

Any member of https://github.com/orgs/cloudfoundry/teams/bosh-stemcell can access and work with the pipeline as well as the relevant repositories. If you would like to become a contributor, please reach out.

 

Where can I follow?

 

Work can be tracked in https://github.com/orgs/cloudfoundry/projects/4.

 

Anything else?

 

As was announced by the CFF Security Working Group in https://www.cloudfoundry.org/blog/security-advisory-update/ CFF security advisories are now in place for Bionic Stemcells and the release of version 1.10 triggered the first advisories.

 

Kind regards,

Felix


Re: 2021 CFF Technical Oversight Committee Election Results

Lee Porte <lee.porte@...>
 

Wow! Thank you to the community.

I'm looking forward to getting started.

Cheers

L

On Thu, 17 Jun 2021 at 13:53, Chip Childers <cchilders@...> wrote:

Many thanks to everyone who took the time to register and vote in the first Cloud Foundry Foundation annual election for our newly forming Technical Oversight Committee (TOC). I also extend my deep thanks to all of the nominees who agreed to run in the election. Overall, our community’s participation in this election process was outstanding (especially given that this was our first time doing this!)


And with that preamble, I’m pleased to announce our first CFF Technical Oversight Committee will be:

* Eric Malm (VMware)

* David Stevenson (VMware)

* Jan von Loewenstein (SAP)

* Stephan Merker (SAP)

* Lee Porte (GOV.UK)


Congratulations to Eric, David, Jan, Stephan and Lee! I’m looking forward to working with you to get the TOC up and running.


For transparency, more details have been published here: https://github.com/cloudfoundry/community/blob/main/toc/elections/2021/results.md


Chip Childers
Executive Director
Cloud Foundry Foundation


--
Lee Porte
Reliability Engineer 
GOV.UK PaaS Team
‪020 3920 6036‬
07785 449292



LAST CALL for CFF TOC election - Polling closes tomorrow

Chip Childers
 

Hi all!

A reminder for the community that this is the last call for the TOC election. We've had a solid response rate for qualified voters voting, but there are 37 incomplete ballots (people that authorized their email address, but have not yet actually voted).

I'll be closing the poll tomorrow, Tuesday June 15, at the end of the work day Pacific US time (around 5 PM US Pacific).

If you are a qualified voter and have any questions or concerns, please let me know ASAP!

Happy voting!

Chip Childers
Executive Director
Cloud Foundry Foundation


Re: [cf-dev] CFF TOC Election Starting

Chip Childers
 

Not until the 15th... ;)

Chip Childers
Executive Director
Cloud Foundry Foundation


On Tue, Jun 8, 2021 at 8:40 AM 'Thun, Philipp' via open-service-broker-api <open-service-broker-api@...> wrote:

Count the votes!

 

SCNR,

Philipp

 

 

From: <cf-dev@...> on behalf of Chip Childers <cchilders@...>
Reply-To: "cf-dev@..." <cf-dev@...>
Date: Tuesday, 8. June 2021 at 14:26
To: CF Developers Mailing List <cf-dev@...>, "Discussions about the Cloud Foundry BOSH project." <cf-bosh@...>, open-service-broker-api <open-service-broker-api@...>
Subject: Re: [cf-dev] CFF TOC Election Starting

 

The CIVS site has an expired HTTPS certificate. It's a hosted, free, service... so there's nothing I can do about it at the moment. I've reached out to the service's owner at Cornell University.

 

Chip Childers

Executive Director

Cloud Foundry Foundation

 

 

On Tue, Jun 1, 2021 at 9:48 PM Chip Childers <cchilders@...> wrote:

All,

 

The list of eligible voters for our TOC election has been finalized, and an initial instruction has been sent to the best available email address that we have in our systems for all eligible voters. If you are listed in the voters.md file, but did not receive your initial registration instructions, please reach out to me privately.

 

Unless we experience major technical or logistical issues, the election will run from now, until June 15th. That said, I ask that the community bear with us as we go through this first election cycle and sort out the processes. :)

 

The following individuals are listed as eligible voters, but I have been unable to determine their email addresses in our various systems. If anyone is able to provide me with their email addresses, please reply to me privately at cchilders@....

 

"Mo Sahihi Benis","SAP"
"Sebastian Vollath","SUSE"
"Shannon Coen","VMware"
"Sven Krieger","SAP"
"Urse Searle","VMware"
"Valentin Velkov","SAP"
"Yaron Parasol","VMware"
"Visarg Soneji","SAP"

 

Chip Childers

Executive Director

Cloud Foundry Foundation

--
You received this message because you are subscribed to the Google Groups "open-service-broker-api" group.
To unsubscribe from this group and stop receiving emails from it, send an email to open-service-broker-api+unsubscribe@....
To view this discussion on the web visit https://groups.google.com/d/msgid/open-service-broker-api/39DE6D85-9B4B-4E0C-8918-E090FC1C7549%40sap.com.


