IMPORTANT NOTICE: End of support for cflinuxfs2 buildpacks after 2019-08-31

Elliott Shanks
 

This notice is regarding the end of support for cflinuxfs2 buildpacks after 2019-08-31. As previously mentioned, Ubuntu has ended support for 14.04, and with it, support for cflinuxfs2 has also ended. Because all applications needed to be migrated from cflinuxfs2 to cflinuxfs3, we continued to release cflinuxfs2 buildpacks while those migrations were completed. After August 31, 2019, all buildpack releases will be for cflinuxfs3 only.


If you still need to migrate your applications to cflinuxfs3, the Stack Auditor tool can help.


Thanks,

Elliott Shanks

PM, Buildpacks


Re: CF Application Runtime PMC: Diego Project Lead Call for Nominations

Benjamin Gandon
 

Congratulations Eric for the great work done as Diego project lead!

Benjamin


On 14 June 2019 at 00:50, Eric Malm <emalm@...> wrote:

Hi, everyone,

I am stepping down as the lead for the Diego project within the Application Runtime PMC, although I will remain in my role as the lead for the PMC itself.

The Diego team, located primarily in San Francisco with remote members in New York and Virginia, now has an opening for its project lead. Project leads must be nominated by a Cloud Foundry Foundation member. Please send nominations directly to me or in reply to this message no later than 11:59 PM PDT on Monday, June 24, 2019.

Also, if you have any questions about the role or the nomination process, as described in the CFF governance documents (https://www.cloudfoundry.org/governance/cff_development_operations_policy/), please let me know.

Thanks,
Eric Malm, CF Application Runtime PMC Lead

Re: CF Application Runtime PMC: Diego Project Lead Call for Nominations

Nima
 

Thank you Eric for all the great work on Diego. It definitely was a great pleasure working with you.
 

----- Original message -----
From: "Eric Malm" <emalm@...>
Sent by: cf-dev@...
To: cf-dev <cf-dev@...>
Cc:
Subject: [EXTERNAL] [cf-dev] CF Application Runtime PMC: Diego Project Lead Call for Nominations
Date: Thu, Jun 13, 2019 3:51 PM
 
Hi, everyone,

I am stepping down as the lead for the Diego project within the Application Runtime PMC, although I will remain in my role as the lead for the PMC itself.

The Diego team, located primarily in San Francisco with remote members in New York and Virginia, now has an opening for its project lead. Project leads must be nominated by a Cloud Foundry Foundation member. Please send nominations directly to me or in reply to this message no later than 11:59 PM PDT on Monday, June 24, 2019.

Also, if you have any questions about the role or the nomination process, as described in the CFF governance documents (https://www.cloudfoundry.org/governance/cff_development_operations_policy/), please let me know.

Thanks,
Eric Malm, CF Application Runtime PMC Lead
 

cf-networking-release & silk-release v2.23.0 published

Aidan Obley
 

The Networking Program has released silk-release 2.23.0 and cf-networking-release 2.23.0.

cf-networking-release

Release Highlights

  • golang version bumped to 1.12.6 [story]
  • golang version is now discoverable in docs [story]
  • Default values for database connections have been updated to reduce pressure on the database [story]
  • Dynamic Egress ping tests can be disabled when running Networking Acceptance Tests, to support environments that prevent ping requests [story]
  • Increased timeout for fixture apps to start in Networking Acceptance Tests, to better support small-footprint environments [story]
  • Policy Server API now returns the X-XSS-Protection header [story]

Manifest Property Changes

Job                      Property              2.22.0 Default  2.23.0 Default
policy-server            max_idle_connections  200             10
policy-server-internal   max_idle_connections  200             10

silk-release

Release Highlights

  • golang version bumped to 1.12.6 [story]
  • golang version is now discoverable in docs [story]
  • Default values for database connections and silk daemon polling have been updated to reduce pressure on the database [story]
  • Silk now accurately calculates CIDR pools for some CIDR ranges [story]
  • vxlan-policy-agent fails fast when unable to mount the container-metadata dir [story]
  • silk-daemon drain points to the correct pid file [story]
  • Bumped the containernetworking/plugins dependency for the latest fixes [story]

Manifest Property Changes

Job              Property                     2.22.0 Default  2.23.0 Default
silk-controller  max_idle_connections         200             10
silk-daemon      lease_poll_interval_seconds  5               30

Regards,
The Networking Program

Unconference at Den Haag - Talks and Sponsors Please!

Daniel Jones
 

Hi all,

The Unconference will return at CF Summit Europe 2019, the night before the first day of the summit (Tuesday 10th September, 6pm). If you want to attend, please register so we know how much food and drink to procure.

We'll have talks (submit proposals please!), and more importantly open space discussions (suggest topics too, please) where you can chat with peers about the hottest issues in the ecosystem.

By now, y'all will know if your talks were accepted for the summit proper. At the Unconference we want to give talks that weren't accepted a second chance, along with talks that are off-the-wall. The Unconference is also a great friendly crowd for new speakers - in Philadelphia we had some first-timers give a staggeringly good talk.

We're looking for sponsors too. At the last EU Unconference 20% of the entire summit audience attended, so it's a really cost-effective way of reaching the community. Contact Ivana Scott (ivana.scott@...) and Sara Lenz (slenz@...) for more details on sponsor packages.


Regards,
Daniel 'Deejay' Jones - CTO
+44 (0)79 8000 9153
EngineerBetter Ltd - More than cloud platform specialists

FINAL REMINDER: CAB call for June is next week Wednesday 19th @ 8a Pacific

Michael Maximilien
 

FYI...

Change in agenda: Talk by Dr Nic Williams of Stark & Wayne entitled "Distributing sidecars with buildpacks" [2]

Zoom soon. Best,

dr.max
ibm ☁️
silicon valley, ca
maximilien.org

using IBM Verse

[2] https://gist.github.com/drnic/28ba8d0dc8e91bfdc47e53916d6cdb56

---------- Forwarded message ---------
From: Michael Maximilien <maxim@...>
Date: Thu, Jun 13, 2019 at 1:18 PM
Subject: [cf-dev] REMINDER: CAB call for June is next week Wednesday 19th @ 8a Pacific
To: < cf-dev@...>

Hi, all,
 
Reminder that the CAB call for June is next Wednesday 19th @ 8a Pacific.
 
We will have the regular highlights and Q&A, as well as one planned talk, with another TBD:
 
1. External DNS connector for Cloud Foundry  [1] by Sergey Matochkin and Comcast engineering team
2. TBD (contact me if you have a talk to share)
 
All other info in agenda [0]. Zoom soon. Best,

------
dr.max
ibm ☁ 
silicon valley, ca
maximilien.org
 

#java #springboot

rajkinra@...
 

Please advise on the best approach to specify an external directory for a Spring Boot application to use for storing business data (such as images). The following attempt did not work: adding it to the user-provided JAVA_OPTS environment variable (using the syntax -cp /myfoo). This failed because the PCF start command adds its own -cp entry containing $PWD (the present working directory), which appears to override (and mask) the user-provided value of /myfoo.
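For illustration, one workaround is to pass the directory location through a dedicated environment variable rather than JAVA_OPTS, since the classpath is intended for code and read-only resources, not mutable data. A minimal sketch, assuming a hypothetical variable name APP_DATA_DIR (not a PCF convention):

```java
import java.nio.file.*;

public class ExternalDataDir {
    /**
     * Resolve the directory used for mutable business data (images, etc.).
     * The location comes from configuration rather than the classpath,
     * which is why the buildpack's own -cp entry overriding a
     * user-supplied one should not matter for data storage.
     */
    static Path resolveDataDir(String configured, String fallback) {
        boolean hasValue = configured != null && !configured.isEmpty();
        return Paths.get(hasValue ? configured : fallback);
    }

    public static void main(String[] args) throws Exception {
        // APP_DATA_DIR is a hypothetical name; it could be set with
        // `cf set-env my-app APP_DATA_DIR /myfoo` or in the manifest.
        Path dir = resolveDataDir(System.getenv("APP_DATA_DIR"),
                                  System.getProperty("java.io.tmpdir"));
        Files.createDirectories(dir);
        System.out.println(dir.isAbsolute());
    }
}
```

In a Spring Boot app the same value would more idiomatically arrive via a property (e.g. `@Value` or `@ConfigurationProperties`) backed by that environment variable.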

Re: #java #springboot

Warren, Paul <Paul.Warren@...>
 

You should take a look at Cloud Foundry's Volume Services feature, which allows you to bind NFS or SMB volumes into your application containers.
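When a volume service is bound, the container path of the mount is exposed to the application in the VCAP_SERVICES environment variable. As a rough sketch of how an app might locate it (the JSON sample below is illustrative; a real app would use a JSON library such as Jackson, which Spring Boot already ships, rather than a string scan):

```java
public class VcapVolumeMount {
    /**
     * Extract the first "container_dir" value from a VCAP_SERVICES-style
     * JSON string. This naive scan keeps the sketch dependency-free and
     * assumes well-formed input.
     */
    static String containerDir(String vcapServices) {
        String key = "\"container_dir\"";
        int i = vcapServices.indexOf(key);
        if (i < 0) return null;
        int start = vcapServices.indexOf('"', i + key.length()) + 1;
        int end = vcapServices.indexOf('"', start);
        return vcapServices.substring(start, end);
    }

    public static void main(String[] args) {
        // Illustrative shape of a bound NFS volume service entry.
        String sample = "{\"nfs\":[{\"volume_mounts\":"
                + "[{\"container_dir\":\"/var/vcap/data/mount\"}]}]}";
        System.out.println(containerDir(sample));
    }
}
```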

You may also want to take a look at Spring Content, a project that lets you quickly and easily create headless content services (with Spring Boot, if you like) for managing unstructured data such as images, documents, and movies, in any of a number of storage types, including the filesystem (which is what you would use with the Volume Services feature).

You can reach out to the Volume Service team on slack #cf-persistence

HTH
_Paul 

CAPI V3 Deployment states

Scott Sisil
 

Hi All,

The CAPI team has been gathering feedback on how we communicate the state of a rolling deployment through the V3 deployments resource. Based on input from integrating client teams, we have written a proposal that defines a more robust status for deployment state.


We are seeking feedback over the next week from the community to help us finalize this proposal and move forward with implementation in the coming weeks.

Looking forward to your comments!

Thanks,
Scott Sisil

CAPI PM

Removing Consul support from capi-release

Tim Downey
 

Hello everyone!

As you're likely aware, in 2017 Cloud Foundry began moving away from using Consul for distributed locks and service discovery. Additionally, last year's release of cf-deployment 5.0.0 removed it as a default component of CF completely.

We have begun the process of removing Consul support from some of the jobs in capi-release, starting with the cc-uploader. This means that as of the next capi-release (1.83.0), staging will no longer work in deployments that are relying on Consul for discovery. If you're using cf-deployment (BOSH DNS) or some other mechanism for service discovery this should not cause any issues.

Best,
Tim Downey, Connor Braa, and the CAPI Team

Re: Announcement: Planned Deprecation of the v6 cf CLI

Guillaume Berche
 

Hi Abby,

It seems that the notification URL [1] mentioned in your email isn't accessible to the OSS community. Can you please share its content if it's different from the cf-dev@ announcement?

Thanks in advance,

Guillaume.

On Fri, May 31, 2019 at 2:20 AM Abby Chau <achau@...> wrote:
Hello everyone,

Hope this finds you well. 

Following on the CC V2 API Deprecation Plan notification recently, the cf CLI team are writing to also announce the planned deprecation of the v6 cf CLI. Please provide feedback and review the v6 CF CLI Deprecation Plan for details on strategy and timeframes. 

We hope to make the upgrade process as seamless as possible for our users, and that the v7 cf CLI will provide greater opportunities to grow and enhance the product for our users.

As always, we value your feedback and would appreciate comments in the document. We also hang out on Cloud Foundry Slack at #cli. We hope to hear from you. 

Many thanks,

Abby Chau and the CF CLI team


Re: Announcement: Planned Deprecation of the v6 cf CLI

Abby Chau
 

Hi Guillaume,

Thanks for reaching out. Sorry the url is inaccessible - it was meant to point at an email entitled " [cf-dev] Announcement: CC API v2 Deprecation Plan" sent in February 2019 by Greg Cobb and the V3 Acceleration Team. 

Please let me know if you have any additional questions. Thanks.

Best,

Abby


On Fri, Jun 21, 2019 at 12:46 PM Guillaume Berche <bercheg@...> wrote:
Hi Abby,

It seems that the notification URL [1] mentioned in your email isn't accessible to the OSS community. Can you please share its content if it's different from the cf-dev@ announcement?

Thanks in advance,

Guillaume.

On Fri, May 31, 2019 at 2:20 AM Abby Chau <achau@...> wrote:
Hello everyone,

Hope this finds you well. 

Following on the CC V2 API Deprecation Plan notification recently, the cf CLI team are writing to also announce the planned deprecation of the v6 cf CLI. Please provide feedback and review the v6 CF CLI Deprecation Plan for details on strategy and timeframes. 

We hope to make the upgrade process as seamless as possible for our users, and that the v7 cf CLI will provide greater opportunities to grow and enhance the product for our users.

As always, we value your feedback and would appreciate comments in the document. We also hang out on Cloud Foundry Slack at #cli. We hope to hear from you. 

Many thanks,

Abby Chau and the CF CLI team


[Proposal] CAPI V3 Service Offerings

Niki Maslarski
 

Hello everyone,

The SAPI Team has been working on a model for the Cloud Controller V3 API for services.

We are seeking feedback over the next week from the community to help us finalize this proposal and move forward with implementation.

Looking forward to your comments!
You can contact us by replying to this email, via email to cf-services-api@...,
or in our Cloud Foundry Slack channel #sapi.

Best regards
Niki && George
On Behalf of the SAPI Team

REQUEST for REVIEW - Scope for CF-Deployment v10.0

Saikiran Yerram
 

Good day everyone,

I want to share and gather feedback on the proposed scope of cf-deployment v10.0, the next major release.

This Google doc describes the high-level changes.


Anyone with the link above can review and comment. Please take some time to peruse it and comment directly within the doc when you have a moment.

You can reach us at CloudFoundry slack channels #cf-deployment, #release-integration

Regards,

--
Saikiran Yerram

Update from cf-networking

Shannon Coen
 

Thanks to Dan from EngineerBetter for prompting this update. We (the CF-Networking team) haven't reached out in a while.

Last October we shared that, with our integration between CF and Istio Pilot, we were able to offer weighted routing via a new Envoy-based ingress gateway and Cloud Controller APIs or a CLI plugin, enabling developers to have more control in shifting traffic from one version of their application to another. That thread is here: https://lists.cloudfoundry.org/g/cf-dev/message/8328

Since then we've been working on extending our integrations to the application-to-application data plane. Our target milestones are enabling developers to rely on the platform for client-side load balancing, timeouts, retries, and mTLS between applications over the C2C overlay network. These features will remove from developers the toil of implementing these behaviors themselves, increasing productivity, and give platform operators and security teams confidence that intra-application traffic is secured in a consistent way.

We already support routing of traffic from apps to internal routes through the sidecars, and have default policies set for load balancing, timeouts, and retries. But this only works at a relatively low scale: 300 total internal routes. Scaling past this will likely require enhancing our integration with Istio to uniquely configure the sidecars for each application based on C2C security policies.

We've successfully spiked out having the consumer and provider sidecars negotiate mTLS, but we've set that aside while we work on the scaling problem above. On this topic we want some feedback from the community on a rollout strategy; look for a follow-up email coming soon.

All this work has been happening in istio-release (https://github.com/cloudfoundry/istio-release), which you can deploy with BOSH alongside cf-deployment using an ops file. Documentation can be found on the README. Warning: our integration with Istio is not yet ready for production use cases. In addition to scaling concerns, the control plane is not yet HA nor is it sufficiently instrumented for monitoring. 

In parallel with the Networking team's efforts, other CF teams are doing great work toward the same vision:
  • A collaboration between CAPI and CLI teams, responsible for completing the v3 CC API and delivering the v7 CLI, have been working on the APIs and CLI commands to support declarative configuration of routing rules, starting with percentage-based traffic splitting for external routes. 
  • One engineer from the Windows team has been laboring to contribute Windows support to the Envoy Proxy OSS project, which will enable developers of .NET apps to achieve all the same outcomes planned for Linux apps, plus any outcomes delivered with service mesh in the future. Once the sidecars are in place, the operating system is abstracted. 
We'd love help with all of this. If you'd like to contribute please reply here or reach out to us in #networking.

At CF Summit in Philadelphia earlier this year, I gave a presentation at User Day sharing our journey from routing to service mesh in CF, with the ever-present goal of delivering business outcomes for platform operators and the development teams they serve. I've attached the slides. I plan to attend CF Summit EU in The Hague in September, and SpringOne Platform in Austin in October; reach out if you'd like to meet up.

Best,

Shannon Coen
Product Lead, PCF Networking
Pivotal, Inc.

Re: Update from cf-networking

Daniel Jones
 

Awesome, thanks!

Are the scaling issues intrinsic to Istio, or is it CF's use of Istio that's causing the scaling problem?

I'm just curious as to whether we can use this information to infer that no-one is using Istio beyond this scale, or perhaps they are, but they're using it differently.

Regards,
Daniel 'Deejay' Jones - CTO
+44 (0)79 8000 9153
EngineerBetter Ltd - More than cloud platform specialists


On Thu, 27 Jun 2019 at 03:27, Shannon Coen <scoen@...> wrote:
Thanks to Dan from EngineerBetter for prompting this update. We (the CF-Networking team) haven't reached out in a while.

Last October we shared that, with our integration between CF and Istio Pilot, we were able to offer weighted routing via a new Envoy-based ingress gateway and Cloud Controller APIs or a CLI plugin, enabling developers to have more control in shifting traffic from one version of their application to another. That thread is here: https://lists.cloudfoundry.org/g/cf-dev/message/8328

Since then we've been working on extending our integrations to the application-to-application data plane. Our target milestones are enabling developers to rely on the platform for client-side load balancing, timeouts, retries, and mTLS between applications over the C2C overlay network. These features will remove from developers the toil of implementing these behaviors themselves, increasing productivity, and give platform operators and security teams confidence that intra-application traffic is secured in a consistent way.

We already support routing of traffic from apps to internal routes through the sidecars, and have default policies set for load balancing, timeouts, and retries. But this only works at a relatively low scale: 300 total internal routes. Scaling past this will likely require enhancing our integration with Istio to uniquely configure the sidecars for each application based on C2C security policies.

We've successfully spiked out having the consumer and provider sidecars negotiate mTLS, but we've set that aside while we work on the scaling problem above. On this topic we want some feedback from the community on a rollout strategy; look for a follow-up email coming soon.

All this work has been happening in istio-release (https://github.com/cloudfoundry/istio-release), which you can deploy with BOSH alongside cf-deployment using an ops file. Documentation can be found on the README. Warning: our integration with Istio is not yet ready for production use cases. In addition to scaling concerns, the control plane is not yet HA nor is it sufficiently instrumented for monitoring. 

In parallel with the Networking team's efforts, other CF teams are doing great work toward the same vision:
  • A collaboration between CAPI and CLI teams, responsible for completing the v3 CC API and delivering the v7 CLI, have been working on the APIs and CLI commands to support declarative configuration of routing rules, starting with percentage-based traffic splitting for external routes. 
  • One engineer from the Windows team has been laboring to contribute Windows support to the Envoy Proxy OSS project, which will enable developers of .NET apps to achieve all the same outcomes planned for Linux apps, plus any outcomes delivered with service mesh in the future. Once the sidecars are in place, the operating system is abstracted. 
We'd love help with all of this. If you'd like to contribute please reply here or reach out to us in #networking.

At CF Summit in Philadelphia earlier this year, I gave a presentation at User Day sharing our journey from routing to service mesh in CF, with the ever-present goal of delivering business outcomes for platform operators and the development teams they serve. I've attached the slides. I plan to attend CF Summit EU in The Hague in September, and SpringOne Platform in Austin in October; reach out if you'd like to meet up.

Best,

Shannon Coen
Product Lead, PCF Networking
Pivotal, Inc.

Re: Update from cf-networking

Shannon Coen
 

Hi Dan,

The scaling issues are primarily related to CF's use of Istio. 

Currently every Envoy sidecar in the platform receives configuration for all internal routes, regardless of whether there are C2C policies in place that enable apps to connect to one another directly via the overlay network. Each sidecar's memory utilization increases with the configuration it holds, and this memory is constrained by the application container's quota. With the default container memory quota, all sidecars run out of memory and crash at around 300 total internal routes. We can be smarter about the configuration each sidecar receives. We are exploring options, including configuring the sidecars for a given app with routing configuration only for destinations for which a C2C security policy has been created. This would shift the scaling limit to 300 policies per app, which seems more than enough.
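As a back-of-envelope illustration of why per-route sidecar configuration hits a wall, the arithmetic looks something like the sketch below. All the numbers are assumptions chosen for illustration (the quota share, per-route cost, and baseline are hypothetical, not measured Envoy figures):

```java
public class SidecarMemoryEstimate {
    /**
     * Rough estimate of how many internal routes a sidecar can hold
     * before exhausting its share of the container memory quota:
     * (quota - fixed overhead) / per-route config cost.
     */
    static int maxRoutes(double quotaMb, double perRouteKb, double baselineMb) {
        return (int) ((quotaMb - baselineMb) * 1024 / perRouteKb);
    }

    public static void main(String[] args) {
        // E.g. an assumed 32 MB sidecar share, ~100 KB of config and
        // stats per route, and ~2 MB of fixed overhead caps out near
        // the ~300 routes observed in practice.
        System.out.println(maxRoutes(32, 100, 2));
    }
}
```

The point of the sketch is only that memory grows linearly with routes under a fixed quota, so either the per-sidecar config must shrink (the policy-scoped approach above) or the quota must grow.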

Looking further ahead, as we explore networking investments in K8s as the orchestrator for CFAR (Eirini), we could leverage pods to give the sidecar and apps independent resource limits.

Best,
 
Shannon Coen
Product Lead, PCF Networking
Pivotal, Inc.


On Thu, Jun 27, 2019 at 8:50 AM Daniel Jones <daniel.jones@...> wrote:
Awesome, thanks!

Are the scaling issues intrinsic to Istio, or is it CF's use of Istio that's causing the scaling problem?

I'm just curious as to whether we can use this information to infer that no-one is using Istio beyond this scale, or perhaps they are, but they're using it differently.

Regards,
Daniel 'Deejay' Jones - CTO
+44 (0)79 8000 9153
EngineerBetter Ltd - More than cloud platform specialists


On Thu, 27 Jun 2019 at 03:27, Shannon Coen <scoen@...> wrote:
Thanks to Dan from EngineerBetter for prompting this update. We (the CF-Networking team) haven't reached out in a while.

Last October we shared that, with our integration between CF and Istio Pilot, we were able to offer weighted routing via a new Envoy-based ingress gateway and Cloud Controller APIs or a CLI plugin, enabling developers to have more control in shifting traffic from one version of their application to another. That thread is here: https://lists.cloudfoundry.org/g/cf-dev/message/8328

Since then we've been working on extending our integrations to the application-to-application data plane. Our target milestones are enabling developers to rely on the platform for client-side load balancing, timeouts, retries, and mTLS between applications over the C2C overlay network. These features will remove from developers the toil of implementing these behaviors themselves, increasing productivity, and give platform operators and security teams confidence that intra-application traffic is secured in a consistent way.

We already support routing of traffic from apps to internal routes through the sidecars, and have default policies set for load balancing, timeouts, and retries. But this only works at a relatively low scale: 300 total internal routes. Scaling past this will likely require enhancing our integration with Istio to uniquely configure the sidecars for each application based on C2C security policies.

We've successfully spiked out having the consumer and provider sidecars negotiate mTLS, but we've set that aside while we work on the scaling problem above. On this topic we want some feedback from the community on a rollout strategy; look for a follow-up email coming soon.

All this work has been happening in istio-release (https://github.com/cloudfoundry/istio-release), which you can deploy with BOSH alongside cf-deployment using an ops file. Documentation can be found on the README. Warning: our integration with Istio is not yet ready for production use cases. In addition to scaling concerns, the control plane is not yet HA nor is it sufficiently instrumented for monitoring. 

In parallel with the Networking team's efforts, other CF teams are doing great work toward the same vision:
  • A collaboration between CAPI and CLI teams, responsible for completing the v3 CC API and delivering the v7 CLI, have been working on the APIs and CLI commands to support declarative configuration of routing rules, starting with percentage-based traffic splitting for external routes. 
  • One engineer from the Windows team has been laboring to contribute Windows support to the Envoy Proxy OSS project, which will enable developers of .NET apps to achieve all the same outcomes planned for Linux apps, plus any outcomes delivered with service mesh in the future. Once the sidecars are in place, the operating system is abstracted. 
We'd love help with all of this. If you'd like to contribute please reply here or reach out to us in #networking.

At CF Summit in Philadelphia earlier this year, I gave a presentation at User Day sharing our journey from routing to service mesh in CF, with the ever-present goal of delivering business outcomes for platform operators and the development teams they serve. I've attached the slides. I plan to attend CF Summit EU in The Hague in September, and SpringOne Platform in Austin in October; reach out if you'd like to meet up.

Best,

Shannon Coen
Product Lead, PCF Networking
Pivotal, Inc.

Security feed not updating

Lee Porte
 

Hi,

Has anyone else noticed that https://www.cloudfoundry.org/foundryblog/security-advisory/feed/ is not being updated with new security issues? 

Has it moved? I've not spotted anything via the blog site to indicate either way. We use automated monitoring of this feed to alert us of potential CVEs we need to look at specifically on the platform.

Thanks

Lee

--
Lee Porte
Reliability Engineer 
GOV.UK PaaS Team
07785 449292

Re: Security feed not updating

Dr Nic Williams
 

Someone else may have a more comprehensive answer, but I’ve seen the CVEs announced on the #security channel on CF slack.

Nic

 


From: cf-dev@... on behalf of Lee Porte via Lists.Cloudfoundry.Org <lee.porte=digital.cabinet-office.gov.uk@...>
Sent: Monday, July 1, 2019 10:50 pm
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Security feed not updating
 
Hi,

Has anyone else noticed that https://www.cloudfoundry.org/foundryblog/security-advisory/feed/ is not being updated with new security issues? 

Has it moved? I've not spotted anything via the blog site to indicate either way. We use automated monitoring of this feed to alert us of potential CVEs we need to look at specifically on the platform.

Thanks

Lee

--
Lee Porte
Reliability Engineer 
GOV.UK PaaS Team
07785 449292

Re: Security feed not updating

Lee Porte
 

I've seen them on there too, but that channel is more awkward to monitor automatically.


On Tue, 2 Jul 2019 at 07:28, Dr Nic Williams <drnicwilliams@...> wrote:
Someone else may have a more comprehensive answer, but I’ve seen the CVEs announced on the #security channel on CF slack.

Nic

 

From: cf-dev@... on behalf of Lee Porte via Lists.Cloudfoundry.Org <lee.porte=digital.cabinet-office.gov.uk@...>
Sent: Monday, July 1, 2019 10:50 pm
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Security feed not updating
 
Hi,

Has anyone else noticed that https://www.cloudfoundry.org/foundryblog/security-advisory/feed/ is not being updated with new security issues? 

Has it moved? I've not spotted anything via the blog site to indicate either way. We use automated monitoring of this feed to alert us of potential CVEs we need to look at specifically on the platform.

Thanks

Lee

--
Lee Porte
Reliability Engineer 
GOV.UK PaaS Team
07785 449292



--
Lee Porte
Reliability Engineer 
GOV.UK PaaS Team
07785 449292