CF on K8s Update
Hi Jan,
It’s great to hear your thoughts on this topic. I definitely agree that the core of what makes CF so powerful is the simplicity for the end user.
Just to clarify, would the CLI you’re referring to remain backward compatible with existing CLI commands, or would it introduce new commands and workflows?
> In contrast, building container images on the server side and threading a call to cf push through a cascade of custom resources and corresponding controllers seems to contribute a lot of complexity while providing only marginal value, both for the application developers/system operators and for the CF community developing and maintaining these controllers.
I agree that having custom resources/controllers doesn’t provide additional benefit to the end user experience since it is an implementation detail. However, I would argue that this architecture does provide value to the CF community.
Primarily, by having this layer of indirection (versus a direct CLI -> end resource architecture), we have a defined interface (the CF custom resources) that enables the introduction of alternative technologies. Different Kubernetes ecosystem projects can then be plugged in behind this interface depending on the user’s needs, which was called out as a major guiding principle in the Vision for CF on K8s.
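To make that interface idea concrete, here is a minimal sketch of what one of these custom resources could look like as a Go type, in the controller-runtime style common to Kubernetes operators. The CFApp name and every field below are illustrative assumptions on my part, not the actual CRDs in the open PR:

// A minimal, hypothetical sketch of a CF custom resource type. The field
// names are assumptions for illustration, not the proposed CRDs.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// CFAppSpec captures the desired state a developer expresses via cf push.
type CFAppSpec struct {
	// Name is the CF application name.
	Name string `json:"name"`
	// DesiredState is "STARTED" or "STOPPED", mirroring the CF v3 API.
	DesiredState string `json:"desiredState"`
	// Lifecycle selects how the app is built, e.g. a buildpack-based build.
	Lifecycle Lifecycle `json:"lifecycle"`
}

// Lifecycle is the pluggable seam: any build system that can satisfy this
// contract can reconcile it.
type Lifecycle struct {
	Type string `json:"type"` // e.g. "buildpack" or "docker"
}

// CFAppStatus is written back by whichever controller reconciles the app.
type CFAppStatus struct {
	Conditions []metav1.Condition `json:"conditions,omitempty"`
}

// CFApp is the top-level custom resource created on cf push.
type CFApp struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   CFAppSpec   `json:"spec,omitempty"`
	Status CFAppStatus `json:"status,omitempty"`
}

The point is that the CLI only ever talks to resources like this; everything behind them is swappable.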
One thing that we struggled with in CF for VMs was how tightly coupled all of the components were with one another. The architecture did not lend itself to extensibility. For example, there was an attempt to replace the gorouter with Istio; while this ultimately failed for several reasons, the tight integration of gorouter into the rest of the system played a major role.
Another benefit of this architecture is the ability to support both users who require backward compatibility with existing CF workflows and users who want more cutting-edge features. Depending on an organization’s requirements, different build or runtime solutions could be plugged in and used.
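As a hedged illustration of that pluggability (the interface and type names below are hypothetical, not taken from the actual proposal), the seam between the CF resources and a concrete build system could be as small as a single Go interface, with kpack as one implementation among several:

// An illustrative sketch of pluggable build backends, assuming a
// hypothetical ImageBuilder interface; the real CF on K8s controllers
// may be structured differently.
package build

import "context"

// BuildRequest describes a staging request derived from a CF build resource.
type BuildRequest struct {
	AppName    string   // CF application name
	SourceURL  string   // location of the uploaded app source package
	Buildpacks []string // buildpacks to apply, if any
}

// ImageBuilder is the seam between CF's resources and a concrete build system.
type ImageBuilder interface {
	// Build stages the source and returns a runnable image reference.
	Build(ctx context.Context, req BuildRequest) (imageRef string, err error)
}

// KpackBuilder would satisfy ImageBuilder by creating kpack resources; an
// organization needing a different build system could register another
// implementation without changing the CF-facing resources at all.
type KpackBuilder struct{}

func (KpackBuilder) Build(ctx context.Context, req BuildRequest) (string, error) {
	// ... create a kpack Image/Build and wait for its status (omitted) ...
	return "registry.example.com/" + req.AppName + ":latest", nil
}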
If we truly want to evolve CF on Kubernetes, I believe we should attempt not only to meet the existing requirements of CF but also to look for places where we can improve the system further. I think extensibility is a major such area, which is why I believe this architecture is the right level of abstraction.
To your point on the number of resources and controllers, though, we are looking into possibly consolidating the App and Process objects, as well as Build and Droplet. We’ve prioritized explorations for each on our GitHub project.
In terms of next steps, if we want to discuss the tradeoffs of this architecture and what you proposed, I’d suggest we revisit the Vision for CF on K8s document and the goals and principles it lays out :)
- Angela
Sent: Thursday, August 26, 2021 11:52 PM
To: cf-dev@... <cf-dev@...>
Subject: Re: [cf-dev] CF on K8s Update
Here is my code
Run it in the cloud for me
I do not care how

[...] pack on a developer machine and some integration into CI/CD - be it Jenkins, Concourse, Tekton, or something else. pack works nicely for the development flow and the production flow anyhow runs in a CI/CD system.

In contrast, building container images on the server side and threading a call to cf push through a cascade of custom resources and corresponding controllers seems to contribute a lot of complexity while providing only marginal value, both for the application developers/system operators and for the CF community developing and maintaining these controllers.

Sent: 23 August 2021 19:24
To: cf-dev@... <cf-dev@...>
Subject: [cf-dev] CF on K8s Update
Hi cf-dev,
Earlier this year, leadership from IBM, SAP, and VMware shared a new Vision for CF on K8s document and discussed it with the community. Since then, our group of engineers at VMware and SAP have been exploring what it would take to support this vision. We have been engaging in the CF on K8s SIG meetings to discuss future technical direction and spiking out a proof-of-concept to support the core cf push workflow using Kubernetes-native concepts, which you can view here.
We are excited to share that we have finished the proof-of-concept and have recorded a demo video to illustrate it. Additionally, we have written a high-level summary of the current architecture and have opened a PR for an initial set of CF CRDs. All of this work is being actively tracked in the CF on K8s GitHub project. We are looking to engage a larger swath of the community and would appreciate any and all contributions to this effort :) If you're interested in contributing, please join us in the new #cf-k8s-dev channel in the CF Slack!
As a result of devoting development resources to accelerating this new vision and technical architecture for CF on K8s, we have decided to pause our contributions to the cf-for-k8s project. It is now stable and demonstrates the promise of the CF developer experience on top of Kubernetes. We anticipate future development to consist of only a small amount of regular maintenance to keep up with the latest versions of some of the dependencies it incorporates, such as Istio and kpack. We recently updated to the latest version of Istio, but would appreciate additional community assistance in maintaining cf-for-k8s as we focus on bootstrapping the new CF on K8s architecture and reference deployments. We expect this activity to happen in the App Runtime Deployments working group that is forming under the new CFF technical governance structure.
- Angela Chin, on behalf of the cf-for-k8s and Eirini maintainers