Re: How to lock account in UAA using UAA API?
#cf
shilpa kulkarni
Hi DHR,
Yes, I tried setting locked to true as follows:

curl --request PATCH \
--url http://localhost:8081/uaa/Users/0bec1cf1-7ab2-489a-8509-/status \
--header 'Accept: application/json' \
--header 'Authorization: Bearer fd76944bb1534edda25a5a0962431be1' \
--header 'Content-Type: application/json' \
--data '{
"locked" : true
}'
But I am getting the following error response:

{
"error_description": "Cannot set user account to locked. User accounts only become locked through exceeding the allowed failed login attempts.",
"error": "scim",
"message": "Cannot set user account to locked. User accounts only become locked through exceeding the allowed failed login attempts."
}
Re: How to lock account in UAA using UAA API?
#cf
DHR
Hi Shilpa, did you try sending a PATCH request as per the example you linked, with "locked": true? E.g.:
On 19 Apr 2018, at 08:13, shilpa kulkarni <shilpakulkarni91@...> wrote:
How to lock account in UAA using UAA API?
#cf
shilpa kulkarni
Hello,
I am using the Cloud Foundry UAA APIs and want to lock a user account. However, in the API documentation I can find only the unlock-account API. Reference link: http://docs.cloudfoundry.org/api/uaa/version/4.12.0/#unlock-account
Is there any way to lock an account in UAA using the API?
Thanks & Regards,
Shilpa Kulkarni
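For reference, a minimal sketch of the documented unlock-account call from the link above (host, user ID, and token are placeholders; per the error reported elsewhere in this thread, the same endpoint rejects "locked": true, so it can only unlock):

# Unlock a user account via the documented status endpoint (placeholders to fill in).
curl --request PATCH \
  --url 'http://<uaa-host>/uaa/Users/<user-id>/status' \
  --header 'Accept: application/json' \
  --header 'Authorization: Bearer <admin-token>' \
  --header 'Content-Type: application/json' \
  --data '{ "locked": false }'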
Re: Announcing the Windows 2016 stack on Cloud Foundry
Dr Nic Williams <drnicwilliams@...>
Incredible work everyone!
Dr. Nic
From: cf-dev@... <cf-dev@...> on behalf of A William Martin <amartin@...>
Sent: Wednesday, April 18, 2018 6:19:21 PM To: Cloud Foundry dev Subject: [cf-dev] Announcing the Windows 2016 stack on Cloud Foundry
Announcing the Windows 2016 stack on Cloud Foundry
Dear cf-dev:

With the recent cf-deployment version 1.22, I’m pleased to announce the general availability of Cloud Foundry’s support for Windows Server 2016 (actually, version 1709) and its native Windows container technology.

TL;DR
Cloud Foundry developers can now use the windows2016 stack to push .NET applications, use more CF features (e.g. cf ssh), and expect a more elastic, scalable, stable experience on Cloud Foundry, due to the advantages afforded by Windows Server Containers. This is an important step for a sustainable runtime and platform experience for .NET.

Getting Started
Operators can upload a Windows Server version 1709 stemcell from bosh.io and bosh deploy with the windows2016-cell.yml ops file. Developers can then cf push with the windows2016 stack and hwc_buildpack.

A Giant Leap for .NET + Cloud Foundry
This effectively brings the .NET and Cloud Foundry technology ecosystems closer together, giving .NET apps a sustainable opportunity to leverage the full benefits of CF’s evolving runtime while consuming the Windows and IIS capabilities they also need. One can even use cf ssh to debug a cf-pushed .NET app remotely from Visual Studio – to be demoed at CF Summit NA this week!

With the previous 2012 R2 stack (powered by IronFrame) and the older Diego MSI, the experience for .NET apps and Windows deployment was rather sub-par. To address this, the CFF Windows teams have been working towards a level of “pragmatic parity” between the capabilities available for apps hosted on Linux and those on Windows. We think we’ve finally achieved the right foundation to serve many more CF features in the future in both worlds.

As more is now possible for .NET on CF, the Windows runtime has at the same time become simpler. For example, it now uses the same app lifecycle as Linux-hosted apps, which means multiple buildpacks landed for .NET apps without extra work. This shows an important aspect of sustainable “pragmatic parity” between Linux and Windows moving forward.

What can .NET apps use on Cloud Foundry now?
The latest features are cf ssh, accurate application CPU metrics, a full container file system, context path routing, and multiple buildpacks. However, the benefits afforded by Windows containers go beyond the list of working features; they shine in the actual behavior of the platform while running .NET apps. For example, on the 2012 R2 stack, greedy apps could easily starve others of CPU and even consume the cell itself. Now that CPU shares are available, apps on the windows2016 stack don’t suffer from noisy neighbors and are as elastic and scalable as Linux apps (with a bit more memory overhead on Windows, of course).

How was this made possible?
The new Windows has containers! (They’re analogous to those on Linux but naturally not quite like them.) Fortuitously, this means the new Windows stack can now use the same Garden infrastructure as Linux Diego cells, in which it swaps out runc for winc, an OCI-compliant container plugin for Windows authored by the CF Garden Windows team. The stack also ships with groot-windows, a Garden image plugin. For details, there’s a CF Summit session coming up about it this Friday.

What is Windows Server version 1709?
This is a derivative of Windows Server 2016, delivered as part of Microsoft’s “Semi-Annual Channel” (SAC) releases of Windows Server. You can think of them similarly to the non-LTS versions of Ubuntu (16.10, 17.04, 17.10, etc.). While they may seem like merely another version schema, the advances in containerization characteristics in each SAC release are quite significant.

Why 1709?
Pinning first to “version 1709” aims to deliver to CF the best isolation and networking available from Windows containers. To continue along this track, the CF Garden Windows team has started to develop against “version 1803” with an eye towards the upcoming Windows Server 2019. We hope these versions will have what’s needed to implement container-to-container networking and more.

What about AWS?
If you look on bosh.io, you’ll notice the AWS stemcell (under “Windows 2016”) is conspicuously missing. Unfortunately, AWS doesn’t yet publish a Windows Server v1709 AMI, but Azure and GCP are good to go. (For vSphere, you still need to build your own stemcell, but CF BOSH Windows continues to work on improved ways of doing so.) While we’re lobbying for AWS to ship a 1709 AMI, Amazon still hasn’t given any timeline for its availability, and for that we could use your help.

Thanks
There are so many Cloud Foundry contributors to thank in this year-long effort. To call out a few:

Stay tuned for more info from the CFF Windows teams! There is still lots of work to do, and we thrive on your feedback. Find us on the Cloud Foundry Slack at #garden-windows or #bosh-core-dev.

Cheers,
William
CFF Garden Windows / BOSH Windows project lead
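A minimal sketch of the getting-started steps above, assuming a GCP stemcell from bosh.io and the windows2016-cell.yml ops file under cf-deployment's operations directory (the stemcell name, ops-file path, and app/deployment names are assumptions to verify for your environment):

# Operator: upload a Windows Server version 1709 stemcell (GCP shown; exact name is an assumption).
bosh upload-stemcell https://bosh.io/d/stemcells/bosh-google-kvm-windows2016-go_agent

# Operator: deploy cf-deployment with the windows2016 cell ops file (path assumed).
bosh -d cf deploy cf-deployment.yml -o operations/windows2016-cell.yml

# Developer: push a .NET app onto the windows2016 stack with the HWC buildpack.
cf push my-dotnet-app -s windows2016 -b hwc_buildpack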
Re: REMINDER: CAB call for April is Wednesday 04/18 @ 8a PST or 11a EST
Michael Maximilien
Final reminder. If you are in Boston for Summit, meet in Room 156C; otherwise, join via Zoom.
No specific agenda items. Just chatting and Q&A. It will be fun! Zoom soon. Best, dr.max ibm ☁ silicon valley, ca
On Apr 11, 2018, at 4:39 PM, Michael Maximilien <maxim@...> wrote:
JBP 4.x: Committed heap shows signs of memory leak
Siva <mailsiva@...>
Hello CF community, I wanted to poll the community to see if anyone has come across the issue described in the GitHub issue below: We are noticing this in more than one service running JBP 4.x. Any feedback/input would be greatly appreciated. Thanks
Re: Understanding hard CPU limits
Marco Voelz
Thanks for the thorough explanation, Eric!
Warm regards Marco
From: <cf-dev@...> on behalf of Eric Malm <emalm@...>
Oh, I omitted one detail from the CC logic: the 1024 CPU-share maximum corresponds to 8192 MB (8 GB) of allocated memory, and this controls the proportionality constant between memory and CPU shares.
Thanks, Eric
On Tue, Apr 10, 2018 at 10:42 PM, Eric Malm <emalm@...> wrote:
Re: Proposal for Incubation in the Extensions PMC: CF Dev
Michael Maximilien
Hi all, don’t forget to provide feedback on CF Dev. The proposing team has asked for a vote, and since we have a CAB call this Wednesday, I want to use that as the deadline. If there are no pending comments, we will move for a vote after the call. Thanks for your attention. Best, Max
On Fri, Feb 23, 2018 at 3:20 PM Stephen Levine <slevine@...> wrote:
--
dr.max
Sent from my iPhone
http://maximilien.org
Re: Proposal for Incubation in the Extensions PMC: CF Dev
Dr Nic Williams <drnicwilliams@...>
Wow, a 3 GB RAM footprint would be an incredible goal.
From: cf-dev@... <cf-dev@...> on behalf of Scott Sisil <ssisil@...>
Sent: Monday, April 16, 2018 8:03:21 AM To: Guillaume Berche Cc: cf-dev; Casey, Emily (Pivotal); Chip Childers Subject: Re: [cf-dev] Proposal for Incubation in the Extensions PMC: CF Dev Hi All,
We wanted to give everyone a quick update on progress we have made with CF Dev:
Just a quick reminder that Stephen Levine will be presenting a demo of CF Dev at the Thursday keynote @ CF Summit this week.
Finally, we would appreciate any additional feedback to the project - please submit feedback in the proposal itself or respond directly to this email thread. Thanks
Scott
On Tue, Feb 27, 2018 at 4:32 AM, Guillaume Berche <bercheg@...> wrote:
Re: Proposal for Incubation in the Extensions PMC: CF Dev
Scott Sisil <ssisil@...>
Hi All, We wanted to give everyone a quick update on progress we have made with CF Dev:
Just a quick reminder that Stephen Levine will be presenting a demo of CF Dev at the Thursday keynote @ CF Summit this week. Finally, we would appreciate any additional feedback on the project - please submit it in the proposal itself or respond directly to this email thread. Thanks, Scott
On Tue, Feb 27, 2018 at 4:32 AM, Guillaume Berche <bercheg@...> wrote:
Re: Proposal for incubation in the Extensions PMC: MS-SQL Service Broker
Michael Maximilien
Thanks Zach.
All, please do provide any feedback. If none is outstanding, we will move for a vote after the deadline. Best, dr.max ibm ☁ silicon valley, ca
On Apr 13, 2018, at 5:48 PM, Zach Brown <zbrown@...> wrote:
Re: Proposal for incubation in the Extensions PMC: MS-SQL Service Broker
Zach Brown
Hi All, We proposed this Service Broker for MS SQL Server at the last CAB meeting: There don't appear to be any outstanding comments or questions on the proposal. If you've got a question or concern, please speak up by next Sunday, April 22. After that date, we'd like to propose a vote. To be clear, we're voting on whether or not to deprecate the existing (abandoned) broker in the incubator and replace it with this one maintained by Microsoft. Further explanation and links to the repos are in the proposal. Thank you!
On Sun, Mar 4, 2018 at 5:07 PM, Michael Maximilien <mmaximilien@...> wrote:
--
Re: Cloud Foundry and Kubernetes integration: Service Manager/Service Instance Sharing across CF and K8s
Mike Youngstrom
Thanks for posting to the list. It's great to see this work continuing to move forward and improve. Mike
On Thu, Apr 12, 2018 at 7:22 AM, Mueller, Florian <florian.mueller02@...> wrote: Hello all,
Cloud Foundry and Kubernetes integration: Service Manager/Service Instance Sharing across CF and K8s
Mueller, Florian
Hello all,
In the context of our Cloud Foundry and Kubernetes integration efforts [1], one important topic was the ability to share Open Service Broker-compliant services and service instances between Cloud Foundry and Kubernetes. After creating an initial specification [2], which received quite some attention, we held a face-to-face workshop [3] in mid-February at the SAP headquarters in Germany, with participation from colleagues working at Google, IBM, Pivotal and SUSE, to shape the topic further.

As a result of the joint workshop and the discussions, there is now a very first draft specification document [4] which will be reviewed and refined by the involved parties moving forward. Additionally, SAP has started a corresponding reference implementation [5], driven by our implementation team of colleagues from SAP Labs Sofia.

We are looking forward to feedback and continued collaboration from both the Cloud Foundry and Kubernetes communities, hoping that the resulting work will serve as the basis for service instance sharing across both stacks. If you are interested in contributing, please don't hesitate to join our mailing list [6] and Slack channel [7] or open issues in our GitHub repositories [8][9].

Thanks in advance,
Florian Müller

[1] https://lists.cloudfoundry.org/g/cf-dev/message/7613
[2] https://docs.google.com/document/d/1jmcvqsz8I724Zqp-cm4KJWloHNtwPjgIeMC_PgA08rg/edit
[3] https://docs.google.com/document/d/1fbzLLsI7PkU00NUdX4VNGZ8MHJ3Vqy-oYDO55AbqjWw/edit
[4] https://github.com/Peripli/specification/blob/master/api.md
[5] https://github.com/Peripli
[6] https://groups.google.com/forum/#!forum/service-manager-wg
[7] https://openservicebrokerapi.slack.com/messages/C99PBB6ER
[8] https://github.com/Peripli/service-manager/issues
[9] https://github.com/Peripli/specification/issues
Re: Understanding hard CPU limits
Grifalconi, Michael <michael.grifalconi@...>
Hello Eric,
Many thanks for the detailed explanation!
Best regards, Michael
From: <cf-dev@...> on behalf of Eric Malm <emalm@...>
Oh, I omitted one detail from the CC logic: the 1024 CPU-share maximum corresponds to 8192 MB (8 GB) of allocated memory, and this controls the proportionality constant between memory and CPU shares.
Thanks, Eric
On Tue, Apr 10, 2018 at 10:42 PM, Eric Malm <emalm@...> wrote:
routing-release 0.175.0
Shubha Anjur Tupil
Hello all,

The routing team just cut release 0.175.0 with a few bug fixes and an update of routing-release to Golang 1.10.1. Release highlights:

- Operators can now configure the manifest property `router.sanitize_forwarded_proto: true` to sanitize the X-Forwarded-Proto HTTP header in a request when `router.force_forwarded_proto_https` is set to `false`. We recommend setting the property to `true` if the Gorouter is the first component to terminate TLS, and setting it to `false` when your load balancer is terminating TLS and setting the X-Forwarded-Proto header (a sketch of setting this follows below). The issue was identified by Aaron Huber. Thanks, Aaron!
- Gorouter and dependencies have been updated to Golang 1.10.1.
- Fixed an issue where the Gorouter was temporarily (for 30 seconds) removing backends from the pool of available backends when a downstream client closed the connection while the request was still being processed. This could lead to temporary application unavailability.
- Fixed a bug where `request_timeout_in_seconds` was being set per connection and not per request, leading to requests timing out while still being processed. Thanks to Swetha Repakula and Richard Johnson for identifying the issue, submitting a PR, and helping test the fix.
- Fixed a bug where the router was temporarily (for 30 seconds) not removing a backend from the pool of available backends when a backend application instance was misbehaving (e.g. closing the connection or crashing). Operators would see 502 errors in the Gorouter logs.

Regards,
Shubha & Shannon
CF Routing Product Managers
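For operators on cf-deployment, a hedged sketch of enabling the new property via a BOSH ops file; the instance group, job, and property path below are assumptions to verify against your manifest and the routing-release job spec:

# Write an ops file that sets router.sanitize_forwarded_proto on the gorouter job
# (instance-group/job names and path are assumptions; adjust for your manifest).
cat > sanitize-forwarded-proto.yml <<'EOF'
- type: replace
  path: /instance_groups/name=router/jobs/name=gorouter/properties/router/sanitize_forwarded_proto?
  value: true
EOF

# Apply it alongside your existing manifest and ops files.
bosh -d cf deploy cf-deployment.yml -o sanitize-forwarded-proto.yml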
REMINDER: CAB call for April is Wednesday 04/18 @ 8a PST or 11a EST
Michael Maximilien
FYI...
Reminder that the CAB call for April is scheduled for next Wednesday 04/18 @ 8a PST / 11a EST. Since next week is also CF Summit week, in Boston, MA, we plan to do the CAB call live at the conference.
So please plan to join us in Room 156C, as we will have live discussion and Q&A with conference attendees and those joining us on Zoom [1].
No other agenda items are planned. I will send one more reminder next week on this list. See you all soon.
Best, ------ dr.max ibm ☁ silicon valley, ca maximilien.org [1] https://docs.google.com/document/d/1SCOlAquyUmNM-AQnekCOXiwhLs6gveTxAcduvDcW_xI
Re: Understanding hard CPU limits
Eric Malm <emalm@...>
Oh, I omitted one detail from the CC logic: the 1024 CPU-share maximum corresponds to 8192 MB (8 GB) of allocated memory, and this controls the proportionality constant between memory and CPU shares. Thanks, Eric
On Tue, Apr 10, 2018 at 10:42 PM, Eric Malm <emalm@...> wrote:
Re: Understanding hard CPU limits
Eric Malm <emalm@...>
Hey, Michael and Marco,

Sorry for the delay in getting a chance to respond to this thread. In general, CF apps receive an allocation of CPU shares proportional to their allocated memory, but with some high and low cutoffs and some limited granularity:

- The minimum number of CPU shares that an app instance (or task) will receive is 10.
- The maximum is 1024.
- The granularity is roughly every 10 shares (10.24, to be precise).

This scale is a result of the conversions made at https://github.com/ In my personal experiments when setting the garden.cpu_quota_per_share_in_

Best,
Eric
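A rough sketch of the scaling described above, combined with the detail from Eric's follow-up that 1024 shares corresponds to 8192 MB of allocated memory; this only illustrates the described proportionality and clamping, not the actual Cloud Controller or Garden code:

# Approximate CPU shares for an app instance from its allocated memory (MB):
# linear in memory, 1024 shares at 8192 MB, clamped to the [10, 1024] range.
# Rounding/granularity details (the ~10.24 steps) are omitted here.
approx_cpu_shares() {
  local memory_mb=$1
  local shares=$(( memory_mb * 1024 / 8192 ))
  (( shares < 10 )) && shares=10
  (( shares > 1024 )) && shares=1024
  echo "$shares"
}

approx_cpu_shares 256    # -> 32
approx_cpu_shares 8192   # -> 1024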
On Fri, Apr 6, 2018 at 11:02 AM, Dieu Cao <dcao@...> wrote: