
Re: How to lock account in UAA using UAA API? #cf

shilpa kulkarni
 

Hi DHR,

Yes, I tried setting locked to true, but I am getting the following error response:
{
  "error_description": "Cannot set user account to locked. User accounts only become locked through exceeding the allowed failed login attempts.",
  "error": "scim",
  "message": "Cannot set user account to locked. User accounts only become locked through exceeding the allowed failed login attempts."
}


Re: How to lock account in UAA using UAA API? #cf

DHR
 

Hi Shilpa,

Did you try sending a PATCH request as per the example you linked, with locked: true?

E.g.:

PATCH /Users/0daf6e54-52c5-4360-bd4a-77f5355950b3/status HTTP/1.1
Content-Type: application/json
Authorization: Bearer fe11ae74b429403796ea13c017bff06c
Accept: application/json
Host: localhost
Content-Length: 22

{
  "locked" : true
}

On 19 Apr 2018, at 08:13, shilpa kulkarni <shilpakulkarni91@...> wrote:

Hello,
I am using the Cloud Foundry UAA APIs and I want to lock a user account, but in the API documentation I can find only an unlock-account API.
Reference link:
http://docs.cloudfoundry.org/api/uaa/version/4.12.0/#unlock-account

Is there any way to lock an account in UAA using the API?

Thanks & Regards
Shilpa Kulkarni


How to lock account in UAA using UAA API? #cf

shilpa kulkarni
 

Hello,
I am using the Cloud Foundry UAA APIs and I want to lock a user account, but in the API documentation I can find only an unlock-account API.
Reference link:
http://docs.cloudfoundry.org/api/uaa/version/4.12.0/#unlock-account

Is there any way to lock an account in UAA using the API?

Thanks & Regards
Shilpa Kulkarni


Re: Announcing the Windows 2016 stack on Cloud Foundry

Dr Nic Williams <drnicwilliams@...>
 

Incredible work everyone!

Dr. Nic


From: cf-dev@... <cf-dev@...> on behalf of A William Martin <amartin@...>
Sent: Wednesday, April 18, 2018 6:19:21 PM
To: Cloud Foundry dev
Subject: [cf-dev] Announcing the Windows 2016 stack on Cloud Foundry
 

Dear cf-dev:


With the recent cf-deployment version 1.22, I’m pleased to announce the general availability of Cloud Foundry’s support for Windows Server 2016 (actually, version 1709) and its native Windows container technology.



TL;DR


Cloud Foundry developers can now use the windows2016 stack to push .NET applications, use more CF features (e.g. cf ssh), and expect a more elastic, scalable, stable experience on Cloud Foundry, due to the advantages afforded by Windows Server Containers. This is an important step for a sustainable runtime and platform experience for .NET.


Getting Started


Operators can upload a Windows Server version 1709 stemcell from bosh.io and bosh deploy with the windows2016-cell.yml ops file. Developers can then cf push with the windows2016 stack and hwc_buildpack.
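
As a rough sketch of those steps (the stemcell URL, deployment name, and app name below are illustrative assumptions; check bosh.io and the cf-deployment repository for the exact artifacts):

# Upload a Windows Server version 1709 stemcell (exact stemcell name/URL assumed)
bosh upload-stemcell https://bosh.io/d/stemcells/bosh-google-kvm-windows2016-go_agent

# Deploy cf-deployment with the Windows cell ops file
bosh -d cf deploy cf-deployment.yml -o operations/windows2016-cell.yml

# Push a .NET app onto the new stack with the HWC buildpack
cf push my-dotnet-app -s windows2016 -b hwc_buildpack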



A Giant Leap for .NET + Cloud Foundry


This effectively brings the .NET and Cloud Foundry technology ecosystems closer together, giving .NET apps a sustainable opportunity to leverage the full benefits of CF’s evolving runtime while consuming the Windows and IIS capabilities they also need. One can even use cf ssh to debug a cf-pushed .NET app remotely from Visual Studio – to be demoed at CF Summit NA this week!


With the previous 2012 R2 stack (powered by IronFrame) and older Diego MSI, the experience for .NET apps and Windows deployment was rather sub-par. To address this, the CFF Windows teams have been working towards a level of “pragmatic parity” between the capabilities available for apps hosted on Linux and those on Windows. We think we’ve finally achieved the right foundation to serve many more CF features in the future in both worlds.


Even as more becomes possible for .NET on CF, the Windows runtime has become simpler. For example, it now uses the same app lifecycle as Linux-hosted apps, which means multiple-buildpack support landed for .NET apps without extra work. This shows an important aspect of sustainable “pragmatic parity” between Linux and Windows moving forward.


What can .NET apps use on Cloud Foundry now?


The latest features are cf ssh, accurate application CPU metrics, a full container file system, context path routing, and multiple buildpacks. However, the benefits afforded by Windows containers go beyond this list of working features; they shine in the actual behavior of the platform while running .NET apps.


For example, on the 2012 R2 stack, greedy apps could easily starve others of CPU and even consume the cell itself. Now, since CPU shares are available, apps on the windows2016 stack don’t suffer from noisy neighbors and are as elastic and scalable as Linux apps (but with a bit more memory overhead on Windows of course).


How was this made possible?


The new Windows has containers! (They’re analogous to those on Linux but naturally not quite like them.) Fortuitously, this means the new Windows stack can now use the same Garden infrastructure as Linux Diego cells, in which it swaps out runc for winc, an OCI-compliant container plugin for Windows, authored by the CF Garden Windows team. The stack also ships with groot-windows, a Garden image plugin. For details, there’s a CF Summit session coming up about it this Friday.


What is Windows Server version 1709?


This is a derivative of Windows Server 2016, delivered as part of Microsoft’s “Semi-Annual Channel” (SAC) releases of Windows Server. You can think of them similarly to the non-LTS versions of Ubuntu (16.10, 17.04, 17.10, etc.). While they may seem like merely another versioning scheme, the advances in containerization characteristics in each SAC release are quite significant.


Why 1709?


Pinning first to “version 1709” aims to deliver to CF the best isolation and networking available from Windows containers. To continue along this track, the CF Garden Windows team has started to develop against “version 1803” with an eye towards the upcoming Windows Server 2019. We hope these versions will have what’s needed to implement container-to-container networking and more.


What about AWS?


If you look on bosh.io, you’ll notice the AWS stemcell (under “Windows 2016”) is conspicuously missing. Unfortunately, AWS doesn’t yet publish a Windows Server v1709 AMI, but Azure and GCP are good to go. (For vSphere, you still need to build your own stemcell, but CF BOSH Windows continues to work on improved ways of doing so.) While we’re lobbying for AWS to ship a 1709 AMI, Amazon still hasn’t given any timeline for its availability, and for that we could use your help.


Thanks


There are so many Cloud Foundry contributors to thank in this year-long effort. To call out a few:


  • Special thanks to the Garden team, pun-master Julian Friedman, Will Martin, Ed King, et al., who have given their expertise, inspiration, and guidance since the inception in February last year to send us along the OCI-compatible route. The next frontier is containerd!

  • Thanks to Diego, especially Eric Malm and John Shahid, for working with us so openly and acceptingly.

  • Thanks to Buildpacks, Stephen Levine and team, for maintaining the HWC Buildpack and guiding the common developer infrastructure for all.

  • And many thanks to CAPI, BOSH, and Release Integration for supporting the new Windows stack efforts and making it real.

  • I'm sure there are others as well, and we look forward to working with even more teams as our efforts grow!


Stay tuned for more info from the CFF Windows teams! There is still lots of work to do, and we thrive on your feedback. Find us on the Cloud Foundry Slack at #garden-windows or #bosh-core-dev.


Cheers,

William

CFF Garden Windows / BOSH Windows project lead




Announcing the Windows 2016 stack on Cloud Foundry

A William Martin
 

Dear cf-dev:


With the recent cf-deployment version 1.22, I’m pleased to announce the general availability of Cloud Foundry’s support for Windows Server 2016 (actually, version 1709) and its native Windows container technology.



TL;DR


Cloud Foundry developers can now use the windows2016 stack to push .NET applications, use more CF features (e.g. cf ssh), and expect a more elastic, scalable, stable experience on Cloud Foundry, due to the advantages afforded by Windows Server Containers. This is an important step for a sustainable runtime and platform experience for .NET.


Getting Started


Operators can upload a Windows Server version 1709 stemcell from bosh.io and bosh deploy with the windows2016-cell.yml ops file. Developers can then cf push with the windows2016 stack and hwc_buildpack.



A Giant Leap for .NET + Cloud Foundry


This effectively brings the .NET and Cloud Foundry technology ecosystems closer together, giving .NET apps a sustainable opportunity to leverage the full benefits of CF’s evolving runtime while consuming the Windows and IIS capabilities they also need. One can even use cf ssh to debug a cf-pushed .NET app remotely from Visual Studio – to be demoed at CF Summit NA this week!


With the previous 2012 R2 stack (powered by IronFrame) and older Diego MSI, the experience for .NET apps and Windows deployment was rather sub-par. To address this, the CFF Windows teams have been working towards a level of “pragmatic parity” between the capabilities available for apps hosted on Linux and those on Windows. We think we’ve finally achieved the right foundation to serve many more CF features in the future in both worlds.


Even as more becomes possible for .NET on CF, the Windows runtime has become simpler. For example, it now uses the same app lifecycle as Linux-hosted apps, which means multiple-buildpack support landed for .NET apps without extra work. This shows an important aspect of sustainable “pragmatic parity” between Linux and Windows moving forward.


What can .NET apps use on Cloud Foundry now?


The latest features are cf ssh, accurate application CPU metrics, a full container file system, context path routing, and multiple buildpacks. However, the benefits afforded by Windows containers go beyond this list of working features; they shine in the actual behavior of the platform while running .NET apps.


For example, on the 2012 R2 stack, greedy apps could easily starve others of CPU and even consume the cell itself. Now, since CPU shares are available, apps on the windows2016 stack don’t suffer from noisy neighbors and are as elastic and scalable as Linux apps (but with a bit more memory overhead on Windows of course).


How was this made possible?


The new Windows has containers! (They’re analogous to those on Linux but naturally not quite like them.) Fortuitously, this means the new Windows stack can now use the same Garden infrastructure as Linux Diego cells, in which it swaps out runc for winc, an OCI-compliant container plugin for Windows, authored by the CF Garden Windows team. The stack also ships with groot-windows, a Garden image plugin. For details, there’s a CF Summit session coming up about it this Friday.


What is Windows Server version 1709?


This is a derivative of Windows Server 2016, delivered as part of Microsoft’s “Semi-Annual Channel” (SAC) releases of Windows Server. You can think of them similarly to the non-LTS versions of Ubuntu (16.10, 17.04, 17.10, etc.). While they may seem like merely another versioning scheme, the advances in containerization characteristics in each SAC release are quite significant.


Why 1709?


Pinning first to “version 1709” aims to deliver to CF the best isolation and networking available from Windows containers. To continue along this track, the CF Garden Windows team has started to develop against “version 1803” with an eye towards the upcoming Windows Server 2019. We hope these versions will have what’s needed to implement container-to-container networking and more.


What about AWS?


If you look on bosh.io, you’ll notice the AWS stemcell (under “Windows 2016”) is conspicuously missing. Unfortunately, AWS doesn’t yet publish a Windows Server v1709 AMI, but Azure and GCP are good to go. (For vSphere, you still need to build your own stemcell, but CF BOSH Windows continues to work on improved ways of doing so.) While we’re lobbying for AWS to ship a 1709 AMI, Amazon still hasn’t given any timeline for its availability, and for that we could use your help.


Thanks


There are so many Cloud Foundry contributors to thank in this year-long effort. To call out a few:


  • Special thanks to the Garden team, pun-master Julian Friedman, Will Martin, Ed King, et al., who have given their expertise, inspiration, and guidance since the inception in February last year to send us along the OCI-compatible route. The next frontier is containerd!

  • Thanks to Diego, especially Eric Malm and John Shahid, for working with us so openly and acceptingly.

  • Thanks to Buildpacks, Stephen Levine and team, for maintaining the HWC Buildpack and guiding the common developer infrastructure for all.

  • And many thanks to CAPI, BOSH, and Release Integration for supporting the new Windows stack efforts and making it real.

  • I'm sure there are others as well, and we look forward to working with even more teams as our efforts grow!


Stay tuned for more info from the CFF Windows teams! There is still lots of work to do, and we thrive on your feedback. Find us on the Cloud Foundry Slack at #garden-windows or #bosh-core-dev.


Cheers,

William

CFF Garden Windows / BOSH Windows project lead




Re: REMINDER: CAB call for April is Wednesday 04/18 @ 8a PST or 11a EST

Michael Maximilien
 

Final reminder: if you are in Boston for Summit, meet in Room 156C; otherwise, join via Zoom.

No specific agenda items; just chatting and Q&A. It will be fun!

Zoom soon.

Best,

dr.max
ibm ☁ 
silicon valley, ca





On Apr 11, 2018, at 4:39 PM, Michael Maximilien <maxim@...> wrote:

FYI...
 
Reminder that the CAB call for April is scheduled for next Wednesday 04/18 @ 8a PST / 11a EST.
 
Since next week is also CF Summit week, in Boston, MA, we plan to do the CAB call live at the conference.
 
So please plan to join us in Room 156C, as we will have live discussions and Q&A with conference attendees and those joining us on Zoom [1].
 
No other agenda items are planned. I will send one more reminder next week on this list. See you all soon.
 
Best,


JBP 4.x: Committed heap shows signs of memory leak

Siva <mailsiva@...>
 

Hello CF community,
I wanted to poll the community to see if anyone has come across the issue described in the GitHub issue below:
We are noticing this in more than one service running JBP 4.x. Any feedback/input would be greatly appreciated.

Thanks


Re: Understanding hard CPU limits

Marco Voelz
 

Thanks for the thorough explanation, Eric!

 

Warm regards

Marco

 

From: <cf-dev@...> on behalf of Eric Malm <emalm@...>
Reply-To: "cf-dev@..." <cf-dev@...>
Date: Wednesday, 11. April 2018 at 11:32
To: cf-dev <cf-dev@...>
Subject: Re: [cf-dev] Understanding hard CPU limits

 

Oh, I omitted one detail from the CC logic: the 1024 CPU-share maximum corresponds to 8192 MB (8 GB) of allocated memory, and this controls the proportionality constant between memory and CPU shares.

 

Thanks,

Eric

 

On Tue, Apr 10, 2018 at 10:42 PM, Eric Malm <emalm@...> wrote:

Hey, Michael and Marco,

 

Sorry for the delay in getting a chance to respond to this thread. In general, CF apps receive an allocation of CPU shares proportional to their allocated memory, but with some high and low cutoffs and some limited granularity:

 

- The minimum number of CPU shares that an app instance (or task) will receive is 10.

- The maximum is 1024.

- The granularity is roughly every 10 shares (10.24, to be precise).

 

 

In my personal experiments when setting the garden.cpu_quota_per_share_in_us property, the number of cores does not factor into the per-instance limit, and the quota is enforced as CPU time across all the cores. To constrain a 64 MB-memory app instance to at most 6.4% CPU usage, I had to set garden.cpu_quota_per_share_in_us to 640. A 1 GB-memory app instance, which has 122 CPU shares, can then use up to 78.1% of a CPU core.
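
To make that arithmetic concrete, here is a small illustrative sketch in Python (not the actual Cloud Controller or Garden code) that reproduces the numbers above from the stated rules: shares proportional to memory with 1024 shares at 8192 MB, a floor of 10 shares, a granularity of 10.24 (rounding down is an assumption), and a hard cap of shares * cpu_quota_per_share_in_us of CPU time per 100 ms period:

# Reconstruction of the CPU-share rules described above (illustrative only).
def cpu_shares(memory_mb):
    # Proportional to memory: 1024 shares at 8192 MB, stepped in units of
    # 10.24 shares (rounded down), with a floor of 10 shares.
    raw = memory_mb * 1024.0 / 8192.0
    stepped = int(raw / 10.24) * 10.24
    return max(10, int(stepped))

def max_cpu_percent(memory_mb, quota_per_share_us):
    # Hard cap: shares * quota_per_share_us of CPU time per 100,000 us period,
    # measured across all cores.
    return cpu_shares(memory_mb) * quota_per_share_us / 100000.0 * 100

# With garden.cpu_quota_per_share_in_us = 640, as in the experiment above:
print(cpu_shares(64), max_cpu_percent(64, 640))      # 10 shares  ->  6.4% CPU
print(cpu_shares(1024), max_cpu_percent(1024, 640))  # 122 shares -> 78.08% CPU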

 

Best,

Eric

 

On Fri, Apr 6, 2018 at 11:02 AM, Dieu Cao <dcao@...> wrote:

I believe Julz is on vacation this week.

Adding Ed King, the anchor on the Garden team.

 

Dieu

 

On Tue, Apr 3, 2018, 3:08 AM Marco Voelz <marco.voelz@...> wrote:

 

/cc Eric and Julz: Could you maybe help us understand this? Thanks!


From: cf-dev@... <cf-dev@...> on behalf of Grifalconi, Michael <michael.grifalconi@...>
Sent: Monday, March 26, 2018 10:54:32 AM
To: cf-dev@...
Subject: [CAUTION] [cf-dev] Understanding hard CPU limits

 

Hi,

We were trying out the hard CPU limit as per docs
https://github.com/cloudfoundry/garden-runc-release/releases?after=v1.9.2





according to the formula for a single-core machine,

APP_MEM_IN_MB * 100us / 1000 = MILLISECONDS_PER_100_MILLISECOND_PERIOD

In our tests, to get 6.4% CPU usage for a 64 MB application and ~100% for a 1 GB application, we had to set 'cpu_quota_per_share_in_us' to 3200. (The cell has 4 cores and 16 GB of RAM, with an overcommit factor of 2.)

That changes the formula to:
APP_MEM_IN_MB * 100us / 32000 = MILLISECONDS_PER_100_MILLISECOND_PERIOD

Can you help us understand where this 'times 32' comes from? Is it the total available RAM of the cell (16 GB * overcommit of 2), and does the number of CPU cores not matter?

Thanks and regards,
Michael



 

 


Re: Proposal for Incubation in the Extensions PMC: CF Dev

Michael Maximilien
 

Hi, all,

Don't forget to provide feedback on CF Dev. The proposing team has asked for a vote, and since we have a CAB call this Wednesday, I want to use that as the deadline.

If there are no pending comments, we will move for a vote after the call.

Thanks for your attention.

Best,

Max


On Fri, Feb 23, 2018 at 3:20 PM Stephen Levine <slevine@...> wrote:

Hi All,


Pivotal is proposing CF Dev for inclusion in the Extensions PMC as a new incubating project.


CF Dev is a deployment of BOSH and cf-deployment that runs locally in Garden containers. It uses native hypervisors to start quickly with zero external dependencies. It also provides a fully functional BOSH director.
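
As a rough idea of the intended workflow (the plugin repository and command names below are assumptions based on the `cf dev [command]` invocation mentioned elsewhere in this thread; see the GitHub repo [1] for the actual instructions):

# Install the CF Dev cf CLI plugin (plugin name and community repo assumed)
cf install-plugin -r CF-Community cfdev

# Start a local BOSH director plus CF deployment on the OS-native hypervisor
cf dev start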


CF Dev is currently available on Github [1].


I would also like to introduce everyone to Scott Sisil, who is joining me as Co-PM on Pivotal's local and community developer initiatives. Please reach out to Scott and/or myself for questions about CF Dev.


Details:


Project Name: CF Dev

Project Proposal: See [2], and attached.

Proposed Project Leads: Stephen Levine, Pivotal & Scott Sisil, Pivotal

Proposed Contributors: Emily Casey, Pivotal; Dave Protasowski, Pivotal; Stephen Hiehn, Pivotal

Proposed Scope: See [2], and attached.

Development Operating Model: Pairing (local + remote)

Technical Approach: CF CLI plugin that starts a VM using the OS native hypervisor.


Please let us know if you have any questions or feedback.


[1] https://github.com/pivotal-cf/cfdev

[2] https://docs.google.com/document/d/1QBTstRXN-1MmINZr1b1G5K6cM6bptQRvsafg_nyP8I8/edit#


--
dr.max Sent from my iPhone http://maximilien.org


Re: Proposal for Incubation in the Extensions PMC: CF Dev

Dr Nic Williams <drnicwilliams@...>
 

Wow, a 3 GB RAM footprint would be an incredible goal.


From: cf-dev@... <cf-dev@...> on behalf of Scott Sisil <ssisil@...>
Sent: Monday, April 16, 2018 8:03:21 AM
To: Guillaume Berche
Cc: cf-dev; Casey, Emily (Pivotal); Chip Childers
Subject: Re: [cf-dev] Proposal for Incubation in the Extensions PMC: CF Dev
 
Hi All,

We wanted to give everyone a quick update on progress we have made with CF Dev:
  •  We fixed the privileged port access bug (Issue #11). Now all users should be able to log in to CF Dev using the CF CLI.
  •  We received a lot of feedback on the memory / disk space footprint. The team is actively working on shrinking both, with the goal of getting the memory footprint below 3 GB.
  •  We have added telemetry to start collecting anonymous usage analytics. The raw analytics data will be available via a public Amazon S3 bucket. We also plan to publish a quarterly analytics report to the project's GitHub page.

Just a quick reminder that Stephen Levine will be presenting a demo of CF Dev at the Thursday keynote @ CF Summit this week.

Finally, we would appreciate any additional feedback to the project - please submit feedback in the proposal itself or respond directly to this email thread.

Thanks
Scott

On Tue, Feb 27, 2018 at 4:32 AM, Guillaume Berche <bercheg@...> wrote:
Thanks Stephen.

Guillaume.

On Tue, Feb 27, 2018 at 12:09 AM, Stephen Levine <slevine@...> wrote:
Hi Guillaume,

Apologies for the google doc permissions issue -- should be fixed now :)

We don't have a public tracker or design docs yet, but we plan to have both soon.

-Stephen

On Mon, Feb 26, 2018 at 5:56 PM, Guillaume Berche <bercheg@...> wrote:
Thanks Stephen for opensourcing CF Dev through this proposal, seems great!

I wonder whether there is a public backlog for the project? I see that some commits, such as [1], apparently refer to Pivotal Tracker story IDs, but the project does not yet seem public [2].
Maybe the tracker refers to some design doc that I could not yet find in the source; if not, this might be useful to the community.

Thanks in advance,

Guillaume.

PS: the proposal Google Doc permissions seem to be set to read-only and do not accept comments or suggestions.


Guillaume.

On Mon, Feb 26, 2018 at 5:28 PM, Stephen Levine <slevine@...> wrote:
PCF Dev is already often (mistakenly) referred to as CF Dev, as `cf dev [command]` is the CLI invocation. We figured that calling this product CF Dev would give both products consistent names that make sense.

We considered the conflict with the list alias, but given that a mailing list alias and a software tool are significantly different things, and that CF Dev is already commonly used to refer to PCF Dev anyways, we decided not to change it.

On Mon, Feb 26, 2018 at 11:04 AM, Michael Maximilien <maxim@...> wrote:
We'll definitely have to find better names, as cf-dev is also this mailing list. Too much polymorphic usage of this one moniker. Let's be creative!

dr.max
ibm ☁ 
silicon valley, ca





On Feb 23, 2018, at 2:40 PM, Hjortshoj, Julian <Julian.Hjortshoj@...> wrote:

Is it too late to change the name?  I suspect that folks might not instantly understand that cf-dev and CF Dev are two entirely different things. 

From: <cf-dev@...> on behalf of Stephen Levine <slevine@...>
Reply-To: "cf-dev@..." <cf-dev@...>
Date: Friday, February 23, 2018 at 12:20 PM
To: "cf-dev@..." <cf-dev@...>
Cc: "Sisil, Scott (Pivotal)" <ssisil@...>, "Casey, Emily (Pivotal)" <ecasey@...>, Chip Childers <cchilders@...>, Michael Maximilien <maxim@...>
Subject: [cf-dev] Proposal for Incubation in the Extensions PMC: CF Dev

Hi All,


Pivotal is proposing CF Dev for inclusion in the Extensions PMC as a new incubating project.


CF Dev is a deployment of BOSH and cf-deployment that runs locally in Garden containers. It uses native hypervisors to start quickly with zero external dependencies. It also provides a fully functional BOSH director.


CF Dev is currently available on Github [1].


I would also like to introduce everyone to Scott Sisil, who is joining me as Co-PM on Pivotal's local and community developer initiatives. Please reach out to Scott and/or myself for questions about CF Dev.


Details:


Project Name: CF Dev

Project Proposal: See [2], and attached.

Proposed Project Leads: Stephen Levine, Pivotal & Scott Sisil, Pivotal

Proposed Contributors: Emily Casey, Pivotal; Dave Protasowski, Pivotal; Stephen Hiehn, Pivotal

Proposed Scope: See [2], and attached.

Development Operating Model: Pairing (local + remote)

Technical Approach: CF CLI plugin that starts a VM using the OS native hypervisor.


Please let us know if you have any questions or feedback.


[1] https://github.com/pivotal-cf/cfdev

[2] https://docs.google.com/document/d/1QBTstRXN-1MmINZr1b1G5K6cM6bptQRvsafg_nyP8I8/edit#









Re: Proposal for Incubation in the Extensions PMC: CF Dev

Scott Sisil <ssisil@...>
 

Hi All,

We wanted to give everyone a quick update on progress we have made with CF Dev:
  •  We fixed the privileged port access bug (Issue #11). Now all users should be able to log in to CF Dev using the CF CLI.
  •  We received a lot of feedback on the memory / disk space footprint. The team is actively working on shrinking both, with the goal of getting the memory footprint below 3 GB.
  •  We have added telemetry to start collecting anonymous usage analytics. The raw analytics data will be available via a public Amazon S3 bucket. We also plan to publish a quarterly analytics report to the project's GitHub page.

Just a quick reminder that Stephen Levine will be presenting a demo of CF Dev at the Thursday keynote @ CF Summit this week.

Finally, we would appreciate any additional feedback to the project - please submit feedback in the proposal itself or respond directly to this email thread.

Thanks
Scott

On Tue, Feb 27, 2018 at 4:32 AM, Guillaume Berche <bercheg@...> wrote:
Thanks Stephen.

Guillaume.

On Tue, Feb 27, 2018 at 12:09 AM, Stephen Levine <slevine@...> wrote:
Hi Guillaume,

Apologies for the google doc permissions issue -- should be fixed now :)

We don't have a public tracker or design docs yet, but we plan to have both soon.

-Stephen

On Mon, Feb 26, 2018 at 5:56 PM, Guillaume Berche <bercheg@...> wrote:
Thanks Stephen for opensourcing CF Dev through this proposal, seems great!

I wonder whether there is a public backlog for the project? I see that some commits, such as [1], apparently refer to Pivotal Tracker story IDs, but the project does not yet seem public [2].
Maybe the tracker refers to some design doc that I could not yet find in the source; if not, this might be useful to the community.

Thanks in advance,

Guillaume.

PS: the proposal Google Doc permissions seem to be set to read-only and do not accept comments or suggestions.


Guillaume.

On Mon, Feb 26, 2018 at 5:28 PM, Stephen Levine <slevine@...> wrote:
PCF Dev is already often (mistakenly) referred to as CF Dev, as `cf dev [command]` is the CLI invocation. We figured that calling this product CF Dev would give both products consistent names that make sense.

We considered the conflict with the list alias, but given that a mailing list alias and a software tool are significantly different things, and that CF Dev is already commonly used to refer to PCF Dev anyways, we decided not to change it.

On Mon, Feb 26, 2018 at 11:04 AM, Michael Maximilien <maxim@...> wrote:
We'll definitely have to find better names, as cf-dev is also this mailing list. Too much polymorphic usage of this one moniker. Let's be creative!

dr.max
ibm ☁ 
silicon valley, ca





On Feb 23, 2018, at 2:40 PM, Hjortshoj, Julian <Julian.Hjortshoj@...> wrote:

Is it too late to change the name?  I suspect that folks might not instantly understand that cf-dev and CF Dev are two entirely different things. 

From: <cf-dev@...> on behalf of Stephen Levine <slevine@...>
Reply-To: "cf-dev@..." <cf-dev@...>
Date: Friday, February 23, 2018 at 12:20 PM
To: "cf-dev@..." <cf-dev@...>
Cc: "Sisil, Scott (Pivotal)" <ssisil@...>, "Casey, Emily (Pivotal)" <ecasey@...>, Chip Childers <cchilders@...>, Michael Maximilien <maxim@...>
Subject: [cf-dev] Proposal for Incubation in the Extensions PMC: CF Dev

Hi All,


Pivotal is proposing CF Dev for inclusion in the Extensions PMC as a new incubating project.


CF Dev is a deployment of BOSH and cf-deployment that runs locally in Garden containers. It uses native hypervisors to start quickly with zero external dependencies. It also provides a fully functional BOSH director.


CF Dev is currently available on Github [1].


I would also like to introduce everyone to Scott Sisil, who is joining me as Co-PM on Pivotal's local and community developer initiatives. Please reach out to Scott and/or myself for questions about CF Dev.


Details:


Project Name: CF Dev

Project Proposal: See [2], and attached.

Proposed Project Leads: Stephen Levine, Pivotal & Scott Sisil, Pivotal

Proposed Contributors: Emily Casey, Pivotal; Dave Protasowski, Pivotal; Stephen Hiehn, Pivotal

Proposed Scope: See [2], and attached.

Development Operating Model: Pairing (local + remote)

Technical Approach: CF CLI plugin that starts a VM using the OS native hypervisor.


Please let us know if you have any questions or feedback.


[1] https://github.com/pivotal-cf/cfdev

[2] https://docs.google.com/document/d/1QBTstRXN-1MmINZr1b1G5K6cM6bptQRvsafg_nyP8I8/edit#









Re: Proposal for incubation in the Extensions PMC: MS-SQL Service Broker

Michael Maximilien
 

Thanks Zach.

All, please do provide any feedback. If none is outstanding, we will move to a vote after the deadline.

Best,

dr.max
ibm ☁ 
silicon valley, ca





On Apr 13, 2018, at 5:48 PM, Zach Brown <zbrown@...> wrote:

Hi All,

We proposed this Service Broker for MS SQL Server at the last CAB meeting: 

There don't appear to be any outstanding comments or questions on the proposal. If you've got a question or concern, please speak up by next Sunday, April 22.

After that date, we'd like to propose a vote. To be clear, we're voting on whether or not to deprecate the existing (abandoned) broker in the incubator and replace it with this one maintained by Microsoft. Further explanation and links to the repos are in the proposal.

Thank you!


On Sun, Mar 4, 2018 at 5:07 PM, Michael Maximilien <mmaximilien@...> wrote:
Thanks, Zach and team, for sharing. Looking forward to the details during the next call, and super happy to consider a broker for a Windows service. I believe this would be a first! Hopefully not the last.

Cheers all,

Mac

On Sun, Mar 4, 2018 at 4:30 PM Zach Brown <zbrown@...> wrote:
Hi All,

There's a SQL Server service broker currently in the incubator, but it
appears to be abandoned. We'd like to propose replacing it with this
newer broker built by Jared Gordon and Mallika Iyer from Pivotal.

At the same time, a team from Microsoft has agreed to take over
ownership and ongoing maintenance of this new broker.

Feel free to ask any questions you may have in this thread. If
possible we'd like to discuss this proposal at the March PMC meeting.

View the complete proposal here:
https://docs.google.com/document/d/1cUjY2fqdHn8GPjqp4jY-wjGlPIhZMYj4Qv7zeGf_6T8/


--
Zach Brown
zbrown@...



--
dr.max Sent from my iPhone http://maximilien.org




--

Zach Brown

650-954-0427 - mobile

zbrown@...




Re: Proposal for incubation in the Extensions PMC: MS-SQL Service Broker

Zach Brown
 

Hi All,

We proposed this Service Broker for MS SQL Server at the last CAB meeting: 

There don't appear to be any outstanding comments or questions on the proposal. If you've got a question or concern, please speak up by next Sunday, April 22.

After that date, we'd like to propose a vote. To be clear, we're voting on whether or not to deprecate the existing (abandoned) broker in the incubator and replace it with this one maintained by Microsoft. Further explanation and links to the repos are in the proposal.

Thank you!


On Sun, Mar 4, 2018 at 5:07 PM, Michael Maximilien <mmaximilien@...> wrote:
Thanks, Zach and team, for sharing. Looking forward to the details during the next call, and super happy to consider a broker for a Windows service. I believe this would be a first! Hopefully not the last.

Cheers all,

Mac

On Sun, Mar 4, 2018 at 4:30 PM Zach Brown <zbrown@...> wrote:
Hi All,

There's a SQL Server service broker currently in the incubator, but it
appears to be abandoned. We'd like to propose replacing it with this
newer broker built by Jared Gordon and Mallika Iyer from Pivotal.

At the same time, a team from Microsoft has agreed to take over
ownership and ongoing maintenance of this new broker.

Feel free to ask any questions you may have in this thread. If
possible we'd like to discuss this proposal at the March PMC meeting.

View the complete proposal here:
https://docs.google.com/document/d/1cUjY2fqdHn8GPjqp4jY-wjGlPIhZMYj4Qv7zeGf_6T8/


--
Zach Brown
zbrown@...



--
dr.max Sent from my iPhone http://maximilien.org




--

Zach Brown

650-954-0427 - mobile

zbrown@...



Re: Cloud Foundry and Kubernetes integration: Service Manager/Service Instance Sharing across CF and K8s

Mike Youngstrom
 

Thanks for posting to the list.  It's great to see this work continuing to move forward and improve.

Mike

On Thu, Apr 12, 2018 at 7:22 AM, Mueller, Florian <florian.mueller02@...> wrote:
Hello all,

In the context of our Cloud Foundry and Kubernetes integration efforts [1], one important topic was the ability to share Open Service Broker-compliant services and service instances between Cloud Foundry and Kubernetes. After creating an initial specification [2], which received quite some attention, we held a face-to-face workshop [3] in mid-February at the SAP headquarters in Germany, with participation from colleagues at Google, IBM, Pivotal, and SUSE, to shape the topic further.

As a result of the joint workshop and the discussions, there is now a first draft specification document [4], which will be reviewed and refined by the involved parties moving forward. Additionally, SAP has started a corresponding reference implementation [5], driven by our implementation team of colleagues from SAP Labs Sofia.

We are looking forward to feedback and continued collaboration from both the Cloud Foundry and Kubernetes communities, hoping that the resulting work will serve as the basis for service instance sharing across both stacks.

If you are interested in contributing, please don't hesitate to join our mailing list [6] and Slack channel [7], or open issues in our GitHub repositories [8][9].


Thanks in advance,

Florian Müller


[1] https://lists.cloudfoundry.org/g/cf-dev/message/7613
[2] https://docs.google.com/document/d/1jmcvqsz8I724Zqp-cm4KJWloHNtwPjgIeMC_PgA08rg/edit
[3] https://docs.google.com/document/d/1fbzLLsI7PkU00NUdX4VNGZ8MHJ3Vqy-oYDO55AbqjWw/edit
[4] https://github.com/Peripli/specification/blob/master/api.md
[5] https://github.com/Peripli
[6] https://groups.google.com/forum/#!forum/service-manager-wg
[7] https://openservicebrokerapi.slack.com/messages/C99PBB6ER
[8] https://github.com/Peripli/service-manager/issues
[9] https://github.com/Peripli/specification/issues







Cloud Foundry and Kubernetes integration: Service Manager/Service Instance Sharing across CF and K8s

Mueller, Florian
 

Hello all,

In the context of our Cloud Foundry and Kubernetes integration efforts [1], one important topic was the ability to share Open Service Broker-compliant services and service instances between Cloud Foundry and Kubernetes. After creating an initial specification [2], which received quite some attention, we held a face-to-face workshop [3] in mid-February at the SAP headquarters in Germany, with participation from colleagues at Google, IBM, Pivotal, and SUSE, to shape the topic further.

As a result of the joint workshop and the discussions, there is now a first draft specification document [4], which will be reviewed and refined by the involved parties moving forward. Additionally, SAP has started a corresponding reference implementation [5], driven by our implementation team of colleagues from SAP Labs Sofia.

We are looking forward to feedback and continued collaboration from both the Cloud Foundry and Kubernetes communities, hoping that the resulting work will serve as the basis for service instance sharing across both stacks.

If you are interested in contributing, please don't hesitate to join our mailing list [6] and Slack channel [7], or open issues in our GitHub repositories [8][9].


Thanks in advance,

Florian Müller


[1] https://lists.cloudfoundry.org/g/cf-dev/message/7613
[2] https://docs.google.com/document/d/1jmcvqsz8I724Zqp-cm4KJWloHNtwPjgIeMC_PgA08rg/edit
[3] https://docs.google.com/document/d/1fbzLLsI7PkU00NUdX4VNGZ8MHJ3Vqy-oYDO55AbqjWw/edit
[4] https://github.com/Peripli/specification/blob/master/api.md
[5] https://github.com/Peripli
[6] https://groups.google.com/forum/#!forum/service-manager-wg
[7] https://openservicebrokerapi.slack.com/messages/C99PBB6ER
[8] https://github.com/Peripli/service-manager/issues
[9] https://github.com/Peripli/specification/issues


Re: Understanding hard CPU limits

Grifalconi, Michael <michael.grifalconi@...>
 

Hello Eric,

 

Many thanks for the detailed explanation!

 

Best regards,

Michael

 

From: <cf-dev@...> on behalf of Eric Malm <emalm@...>
Reply-To: "cf-dev@..." <cf-dev@...>
Date: Wednesday, 11. April 2018 at 17:56
To: cf-dev <cf-dev@...>
Subject: Re: [cf-dev] Understanding hard CPU limits

 

Oh, I omitted one detail from the CC logic: the 1024 CPU-share maximum corresponds to 8192 MB (8 GB) of allocated memory, and this controls the proportionality constant between memory and CPU shares.

 

Thanks,

Eric

 

On Tue, Apr 10, 2018 at 10:42 PM, Eric Malm <emalm@...> wrote:

Hey, Michael and Marco,

 

Sorry for the delay in getting a chance to respond to this thread. In general, CF apps receive an allocation of CPU shares proportional to their allocated memory, but with some high and low cutoffs and some limited granularity:

 

- The minimum number of CPU shares that an app instance (or task) will receive is 10.

- The maximum is 1024.

- The granularity is roughly every 10 shares (10.24, to be precise).

 

 

In my personal experiments when setting the garden.cpu_quota_per_share_in_us property, the number of cores does not factor into the per-instance limit, and the quota is enforced as CPU time across all the cores. To constrain a 64 MB-memory app instance to at most 6.4% CPU usage, I had to set garden.cpu_quota_per_share_in_us to 640. A 1 GB-memory app instance, which has 122 CPU shares, can then use up to 78.1% of a CPU core.

 

Best,

Eric

 

On Fri, Apr 6, 2018 at 11:02 AM, Dieu Cao <dcao@...> wrote:

I believe Julz is on vacation this week.

Adding Ed King, the anchor on the Garden team.

 

Dieu

 

On Tue, Apr 3, 2018, 3:08 AM Marco Voelz <marco.voelz@...> wrote:

 

/cc Eric and Julz: Could you maybe help us understand this? Thanks!


From: cf-dev@... <cf-dev@...> on behalf of Grifalconi, Michael <michael.grifalconi@...>
Sent: Monday, March 26, 2018 10:54:32 AM
To: cf-dev@...
Subject: [CAUTION] [cf-dev] Understanding hard CPU limits

 

Hi,

We were trying out the hard CPU limit as per docs
https://github.com/cloudfoundry/garden-runc-release/releases?after=v1.9.2





according to the formula for a single-core machine,

APP_MEM_IN_MB * 100us / 1000 = MILLISECONDS_PER_100_MILLISECOND_PERIOD

In our tests, to get 6.4% CPU usage for a 64 MB application and ~100% for a 1 GB application, we had to set 'cpu_quota_per_share_in_us' to 3200. (The cell has 4 cores and 16 GB of RAM, with an overcommit factor of 2.)

That changes the formula to:
APP_MEM_IN_MB * 100us / 32000 = MILLISECONDS_PER_100_MILLISECOND_PERIOD

Can you help us understand where this 'times 32' comes from? Is it the total available RAM of the cell (16 GB * overcommit of 2), and does the number of CPU cores not matter?

Thanks and regards,
Michael



 

 


routing-release 0.175.0

Shubha Anjur Tupil
 

Hello all, 

The routing team just cut release 0.175.0 with a few bug fixes and an update of routing-release to Golang 1.10.1. Release highlights:


- Operators can now configure the manifest property `router.sanitize_forwarded_proto: true` to sanitize the X-Forwarded-Proto HTTP header in a request when `router.force_forwarded_proto_https` is set to `false`. We recommend setting the property to `true` if the Gorouter is the first component to terminate TLS, and setting it to `false` when your load balancer is terminating TLS and setting the X-Forwarded-Proto header. The issue was identified by Aaron Huber. Thanks, Aaron! (See the manifest sketch after this list.)
- Gorouter and dependencies have been updated to Golang 1.10.1.
- Fixed an issue where the Gorouter was temporarily (for 30 seconds) removing backends from the pool of available backends when a downstream client closed the connection while the request was still being processed. This could lead to temporary application unavailability.
- Fixed a bug where `request_timeout_in_seconds` was being set per connection and not per request, leading to requests timing out while still being processed. Thanks to Swetha Repakula and Richard Johnson for identifying the issue, submitting a PR, and helping test the fix.
- Fixed a bug where the router was temporarily (for 30 seconds) not removing a backend from the pool of available backends when a backend application instance was misbehaving (e.g. closing the connection or crashing). Operators would see 502 errors in the Gorouter logs.
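
A minimal sketch of the corresponding gorouter job properties (the property names come from the notes above; the surrounding manifest layout is an assumption and depends on your deployment):

# Gorouter job properties fragment (illustrative; adapt to your ops files or manifest)
properties:
  router:
    sanitize_forwarded_proto: true       # recommended when the Gorouter terminates TLS first
    force_forwarded_proto_https: false   # sanitize applies when this is false (per the notes above)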

Regards, 
Shubha & Shannon 
CF Routing, Product Managers


REMINDER: CAB call for April is Wednesday 04/18 @ 8a PST or 11a EST

Michael Maximilien
 

FYI...
 
Reminder that the CAB call for April is scheduled for next Wednesday 04/18 @ 8a PST / 11a EST.
 
Since next week is also CF Summit week, in Boston, MA, we plan to do the CAB call live at the conference.
 
So please plan to join us in Room 156C, as we will have live discussions and Q&A with conference attendees and those joining us on Zoom [1].
 
No other agenda items are planned. I will send one more reminder next week on this list. See you all soon.
 
Best,
 
------
dr.max
ibm ☁ 
silicon valley, ca
maximilien.org
 
[1] https://docs.google.com/document/d/1SCOlAquyUmNM-AQnekCOXiwhLs6gveTxAcduvDcW_xI 


Re: Understanding hard CPU limits

Eric Malm <emalm@...>
 

Oh, I omitted one detail from the CC logic: the 1024 CPU-share maximum corresponds to 8192 MB (8 GB) of allocated memory, and this controls the proportionality constant between memory and CPU shares.

Thanks,
Eric

On Tue, Apr 10, 2018 at 10:42 PM, Eric Malm <emalm@...> wrote:
Hey, Michael and Marco,

Sorry for the delay in getting a chance to respond to this thread. In general, CF apps receive an allocation of CPU shares proportional to their allocated memory, but with some high and low cutoffs and some limited granularity:

- The minimum number of CPU shares that an app instance (or task) will receive is 10.
- The maximum is 1024.
- The granularity is roughly every 10 shares (10.24, to be precise).


In my personal experiments when setting the garden.cpu_quota_per_share_in_us property, the number of cores does not factor into the per-instance limit, and the quota is enforced as CPU time across all the cores. To constrain a 64 MB-memory app instance to at most 6.4% CPU usage, I had to set garden.cpu_quota_per_share_in_us to 640. A 1 GB-memory app instance, which has 122 CPU shares, can then use up to 78.1% of a CPU core.

Best,
Eric

On Fri, Apr 6, 2018 at 11:02 AM, Dieu Cao <dcao@...> wrote:
I believe Julz is on vacation this week.
Adding Ed King, the anchor on the Garden team.

Dieu

On Tue, Apr 3, 2018, 3:08 AM Marco Voelz <marco.voelz@...> wrote:


/cc Eric and Julz: Could you maybe help us understand this? Thanks!


From: cf-dev@... <cf-dev@...> on behalf of Grifalconi, Michael <michael.grifalconi@...>
Sent: Monday, March 26, 2018 10:54:32 AM
To: cf-dev@...
Subject: [CAUTION] [cf-dev] Understanding hard CPU limits
 
Hi,

We were trying out the hard CPU limit as per docs
https://github.com/cloudfoundry/garden-runc-release/releases?after=v1.9.2



according to the formula for a single-core machine,

APP_MEM_IN_MB * 100us / 1000 = MILLISECONDS_PER_100_MILLISECOND_PERIOD

In our tests, to get 6.4% CPU usage for a 64 MB application and ~100% for a 1 GB application, we had to set 'cpu_quota_per_share_in_us' to 3200. (The cell has 4 cores and 16 GB of RAM, with an overcommit factor of 2.)

That changes the formula to:
APP_MEM_IN_MB * 100us / 32000 = MILLISECONDS_PER_100_MILLISECOND_PERIOD

Can you help us understand where this 'times 32' comes from? Is it the total available RAM of the cell (16 GB * overcommit of 2), and does the number of CPU cores not matter?

Thanks and regards,
Michael







Re: Understanding hard CPU limits

Eric Malm <emalm@...>
 

Hey, Michael and Marco,

Sorry for the delay in getting a chance to respond to this thread. In general, CF apps receive an allocation of CPU shares proportional to their allocated memory, but with some high and low cutoffs and some limited granularity:

- The minimum number of CPU shares that an app instance (or task) will receive is 10.
- The maximum is 1024.
- The granularity is roughly every 10 shares (10.24, to be precise).


In my personal experiments when setting the garden.cpu_quota_per_share_in_us property, the number of cores does not factor into the per-instance limit, and the quota is enforced as CPU time across all the cores. To constrain a 64 MB-memory app instance to at most 6.4% CPU usage, I had to set garden.cpu_quota_per_share_in_us to 640. A 1 GB-memory app instance, which has 122 CPU shares, can then use up to 78.1% of a CPU core.

Best,
Eric

On Fri, Apr 6, 2018 at 11:02 AM, Dieu Cao <dcao@...> wrote:
I believe Julz is on vacation this week.
Adding Ed King, the anchor on the Garden team.

Dieu

On Tue, Apr 3, 2018, 3:08 AM Marco Voelz <marco.voelz@...> wrote:


/cc Eric and Julz: Could you maybe help us understand this? Thanks!


From: cf-dev@... <cf-dev@...> on behalf of Grifalconi, Michael <michael.grifalconi@...>
Sent: Monday, March 26, 2018 10:54:32 AM
To: cf-dev@...
Subject: [CAUTION] [cf-dev] Understanding hard CPU limits
 
Hi,

We were trying out the hard CPU limit as per docs
https://github.com/cloudfoundry/garden-runc-release/releases?after=v1.9.2



according to the formula for a single-core machine,

APP_MEM_IN_MB * 100us / 1000 = MILLISECONDS_PER_100_MILLISECOND_PERIOD

In our tests, to get 6.4% CPU usage for a 64 MB application and ~100% for a 1 GB application, we had to set 'cpu_quota_per_share_in_us' to 3200. (The cell has 4 cores and 16 GB of RAM, with an overcommit factor of 2.)

That changes the formula to:
APP_MEM_IN_MB * 100us / 32000 = MILLISECONDS_PER_100_MILLISECOND_PERIOD

Can you help us understand where this 'times 32' comes from? Is it the total available RAM of the cell (16 GB * overcommit of 2), and does the number of CPU cores not matter?

Thanks and regards,
Michael




