Re: Understand Billing Event
On Thu, May 14, 2015 at 8:13 PM, Dieu Cao <dcao(a)pivotal.io> wrote:

Hi Guangcai,
This manifest property and its APIs have been deprecated because they calculated things incorrectly. They have been replaced with App Usage Events and Service Usage Events, for which you can find more info in the apidocs. [1]
-Dieu Cao CF Runtime PM
[1]
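For anyone who wants to consume those events programmatically, here is a minimal sketch of polling the v2 app usage events endpoint; the API host and token are placeholders, error handling is omitted, and service events can presumably be read the same way from /v2/service_usage_events.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class AppUsageEventsPoll {
    public static void main(String[] args) throws Exception {
        String api = "https://api.example.com";          // placeholder Cloud Controller endpoint
        String token = System.getenv("CF_OAUTH_TOKEN");   // e.g. the output of `cf oauth-token`
        URL url = new URL(api + "/v2/app_usage_events");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization", token);   // "bearer eyJ..."
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);                  // raw JSON page of usage events
            }
        }
    }
}
```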
On Thu, May 14, 2015 at 5:01 PM, Guangcai Wang <guangcai.wang(a)gmail.com> wrote:
Hi,
I am trying to understand the property "cc:billing_event_writing_enabled". Could someone share some knowledge on the following questions? If I enable this property, when will the billing events be written? How long will the billing events be kept?
Thanks
visual studio extension - login error
We have been trying to use the Visual Studio extension from https://github.com/cloudfoundry-incubator/cf-vs-extension but get an error when logging in:

Unexpected character encountered while parsing value: <. Path "", line 0, position 0.

The error message in the UAA logs comes from the filter called BackwardsCompatibleTokenEndpointAuthenticationFilter in the UAA code:

[2015-05-14 20:26:41.209] uaa - 14916 [http-bio-8080-exec-9] .... DEBUG --- BackwardsCompatibleTokenEndpointAuthenticationFilter: Authentication request for failed: org.springframework.security.authentication.BadCredentialsException: No client authentication found. Remember to put a filter upstream of the TokenEndpointAuthenticationFilter.
[2015-05-14 20:26:41.211] uaa - 14916 [http-bio-8080-exec-9] .... DEBUG --- DefaultOAuth2ExceptionRenderer: Written [error="unauthorized", error_description="No client authentication found. Remember to put a filter upstream of the TokenEndpointAuthenticationFilter."] as "application/json" using [org.springframework.http.converter.json.MappingJacksonHttpMessageConverter(a)60f269ce]

Has anyone else tried the cf-vs-extension on their private CF environments, and have you seen this problem?

Jon Price
Intel Corp
Re: [vcap-dev] Java OOM debugging
On 15-05-14 10:23 AM, Daniel Jones wrote:
> Thanks again for your input. Have you seen this problem with versions of Tomcat before 8.0.20?

I don't have proper data gathered from older than 8.0.20, so I cannot compare. I was just wondering when 8.0.20 became available in the JBP; I found this date:

HEAD https://download.run.pivotal.io/tomcat/tomcat-8.0.20.tar.gz | grep Last-Modified
Last-Modified: Tue, 03 Mar 2015 11:35:19 GMT

> David and I think we've narrowed down the issue to a change from using Tomcat 8.0.18 to 8.0.21. We're running more tests and collaborating with Pivotal support. We also noticed that non-prod versions of our apps were taking longer to crash, so it would seem to be activity-related at least.
> Do you know how Tomcat's APR/NIO memory gets allocated? Is there a way of telling from pmap whether pages are being used for NIO buffers or by the APR?
I don't think you can get the info from pmap. The malloc_info XML shows better allocation stats, but only stats.

Is Tomcat using the APR library or NIO by default in Cloud Foundry? I'd assume that NIO isn't used by default.

Have you tried the "-Dsun.zip.disableMemoryMapping=true" JVM option to rule out the possibility that zip/jar file access is causing the trouble? There have been some bugs in the past in the JVM in that area: http://javaeesupportpatterns.blogspot.com.es/2011/08/mmap-file-outofmemoryerror-and-pmap.html . That has been fixed ( http://bugs.java.com/bugdatabase/view_bug.do?bug_id=6280693 ), but doing a check with the "-Dsun.zip.disableMemoryMapping=true" JVM option would be interesting.

I'm mainly concerned about this commit: https://github.com/apache/tomcat/commit/6e5420c67fbad81973d888ad3701a392fac4fc71 since most commits weren't very interesting in this diff: https://github.com/apache/tomcat/compare/075bc2d6...c0eb033f?w=1 . It might make a difference to jar file access. I'm not saying that this commit is a problem, it just seemed like a big change.

-Lari
Re: - Is it possible to create custom Roles
From: Kinjal Doshi <kindoshi(a)gmail.com>
Date: Wed, May 13, 2015 at 11:04 PM
Subject: [cf-dev] - Is it possible to create custom Roles
To: cf-dev(a)lists.cloudfoundry.org
Hi,
In Pivotal CF, is it possible to create custom roles?
Thanks, Kinjal
Re: UAA, SAML, and LDAP questions
Hi Aaron,

ECP support is a roadmap item at this time and doesn't have a set timeline. Apart from adding ECP SAML SP support on the UAA side, the SAML IDP needs to implement and support this profile as well.

Thanks,
Sree Tummidi
Sr. Product Manager
Identity - Pivotal Cloud Foundry

On Wed, May 13, 2015 at 5:03 PM, Huber, Aaron M <aaron.m.huber(a)intel.com> wrote:

In our case we use email address as the username via LDAP as well (UPN actually, but same thing), so it would be the same. Is there a timeline for the ECP profile support?
Aaron
From: Filip Hanik [mailto:fhanik(a)pivotal.io]
Sent: Wednesday, May 13, 2015 3:54 PM
To: Mike Youngstrom
Cc: Sree Tummidi; Huber, Aaron M; CF Developers Mailing List
Subject: Re: [cf-dev] UAA, SAML, and LDAP questions
The problem with SAML is that we never see the username. We only receive the username in the form of an email address from the SAML IDP. This would not correspond to the username you would log in to LDAP with.
The use case you describe would indicate that we want two different authentication sources to represent the same authentication source.
I believe the correct solution here is to implement the SAML ECP profile. At that point you'd have an option to go LDAP or SAML rather than trying to mix both.
Filip
On Wed, May 13, 2015 at 3:30 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:
Possibly, though I think regular user authentication would still be a concern for our users since security forces a rather short TTL for our access tokens. I'll have to take a look and try a few things. We may decide to just use LDAP and forget about the SSO integration for now.
Mike
On Wed, May 13, 2015 at 3:03 PM, Sree Tummidi <stummidi(a)pivotal.io> wrote:
Hi Aaron,
You could potentially use the access token (similar to a personal access token used for GitHub API ) to achieve the CLI automation. The access token can either be retrieved via an authentication to the CLI itself or via UAAC.
Regular users would still continue to use the -sso option.
Thanks,
Sree Tummidi
Sr. Product Manager
Identity - Pivotal Cloud Foundry
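For reference, the raw token request behind the flow Sree describes looks roughly like the sketch below; it assumes the default public "cf" client with an empty secret and a placeholder UAA URL, so treat it as illustrative rather than exact. In practice the CLI or UAAC can retrieve the same token without hand-rolling the request.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class UaaPasswordGrant {
    // Usage: java UaaPasswordGrant <username> <password>
    public static void main(String[] args) throws Exception {
        String uaa = "https://uaa.example.com";                    // placeholder UAA URL
        String body = "grant_type=password&username=" + args[0]
                    + "&password=" + args[1] + "&response_type=token";
        HttpURLConnection conn = (HttpURLConnection) new URL(uaa + "/oauth/token").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Client credentials for the public "cf" client with an empty secret (assumption).
        String basic = Base64.getEncoder().encodeToString("cf:".getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + basic);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setRequestProperty("Accept", "application/json");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }
        System.out.println("HTTP " + conn.getResponseCode());      // JSON response carries access_token
    }
}
```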
On Wed, May 13, 2015 at 1:56 PM, Huber, Aaron M <aaron.m.huber(a)intel.com> wrote:
That’s the main concern we have as well – we currently need LDAP for the CLI since SAML doesn’t work in that case, but we’d like SAML for web-based interactions (SSO in a portal, etc.). But at present it seems like that’s not possible without the user having to deal with effectively two separate accounts.
Aaron
From: Mike Youngstrom [mailto:youngm(a)gmail.com]
Sent: Wednesday, May 13, 2015 1:34 PM
To: Filip Hanik
Cc: Huber, Aaron M; CF Developers Mailing List
Subject: Re: [cf-dev] UAA, SAML, and LDAP questions
Well, that's a bummer. Is there any way around that? Our SAML is backed by the same LDAP so they are the same user. We can provide a unique ID to correlate SAML with LDAP users.
Mike
On Wed, May 13, 2015 at 2:28 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:
yes, it would result in two different shadow accounts, differentiated by the value of the user's origin field
On Wed, May 13, 2015 at 2:08 PM, aaron_huber <aaron.m.huber(a)intel.com> wrote:
Would the same user logging in via SAML and LDAP result in two different UAA user objects with different sources, so that the user would have two different sets of orgs/spaces/apps?
Aaron
Re: [vcap-dev] Java OOM debugging
Hi,

We've had a look through this and think it would be useful to give our perspective for now.

To clarify the memory heuristics: they are just weightings and not percentages, and they were never meant to add up to a hundred. The fact that the default settings add up to 105 is purely chance. We will see if the docs can be improved around this.

Because the JRE isn't detecting an OOM error and triggering the killjava.sh script, and because increasing the native memory prolongs the OOM, we believe the leak is occurring outside the JRE (NIO, JNI, lots of thread creation, etc.). There are other things running in the container that could be consuming the memory.

If you suspect Tomcat, then 8.0.22 is out and has been published for use in the Java buildpack; moving up to that might help things. We still haven't seen a rash of memory problems from the wider Tomcat community for either 8.0.20 or 8.0.22, though, so this would be unexpected.

It is actually possible to set MALLOC_ARENA_MAX using an environment variable:

cf set-env <APP> MALLOC_ARENA_MAX 2

This can also be specified in an application's manifest file.

Finally, a change has gone into the master branch of the Java buildpack that moves all of the memory heuristics code to an external program written in Go. This means that scaling a Java application with cf scale no longer requires a separate restage for the new memory settings to be applied, as the Go code will calculate them during every application start. During staging you will also be able to see the memory settings that have been calculated come out on the console. The plan is to release this new feature in version 3.1 of the buildpack in a week or so.

We do still have an issue on our backlog to look at this and it shouldn't be too long before we get to it. https://www.pivotaltracker.com/story/show/94381284

Chris.

On Mon, May 11, 2015 at 13:19 PM, Lari Hotari <Lari(a)hotari.net> wrote:
[Quoted text trimmed; the same messages from Lari Hotari and David Head-Rapson appear in full in the replies below.]

--
Christopher Frost - GoPivotal UK
Re: [vcap-dev] Java OOM debugging
Hi Lari,
Thanks again for your input. Have you seen this problem with versions of Tomcat before 8.0.20?
David and I think we've narrowed down the issue to a change from using Tomcat 8.0.18 to 8.0.21. We're running more tests and collaborating with Pivotal support. We also noticed that non-prod versions of our apps were taking longer to crash, so it would seem to be activity-related at least.
Do you know how Tomcat's APR/NIO memory gets allocated? Is there a way of telling from pmap whether pages are being used for NIO buffers or by the APR?
I wonder if the other folks that have reported CF out of memory errors with later versions of Tomcat are seeing slow creeps in native memory consumption?
On Mon, May 11, 2015 at 2:19 PM, Lari Hotari <Lari(a)hotari.net> wrote: fyi. Tomcat 8.0.20 might be consuming more memory than 8.0.18:
https://github.com/cloudfoundry/java-buildpack/issues/166#issuecomment-94517568
Other things we’ve tried:
- We set verbose garbage collection to verify there was no memory size issues within the JVM. There wasn’t.
- We tried setting minimum memory for native, it had no effect. The container still gets killed
- We tried adjusting the ‘memory heuristics’ so that they added up to 80 rather than 100. This had the effect of causing a delay in the container being killed. However it still was killed.
I think adjusting memory heuristics so that they add up to 80 doesn't make a difference because the values aren't percentages. The values are proportional weighting values used in the memory calculation:
https://github.com/grails-samples/java-buildpack/blob/b4abf89/docs/jre-oracle_jre.md#memory-calculation
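As an illustration of why scaling the weightings down has no effect, here is a small sketch of a purely proportional split; the real buildpack calculation also applies per-region bounds, so this is only indicative, and the weightings used are the default-style ones mentioned later in this thread.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class WeightingSketch {
    public static void main(String[] args) {
        long totalMb = 2048; // container memory limit
        // Default-style weightings discussed in this thread; note they sum to 105, not 100.
        Map<String, Double> weights = new LinkedHashMap<>();
        weights.put("heap", 70.0);
        weights.put("metaspace", 15.0);
        weights.put("stack", 5.0);
        weights.put("normalised stack", 5.0);
        weights.put("native", 10.0);

        double totalWeight = weights.values().stream().mapToDouble(Double::doubleValue).sum();
        // Each region gets totalMb * weight / totalWeight, so multiplying every weight
        // by the same factor (e.g. scaling "100" down to "80") changes nothing.
        weights.forEach((region, w) ->
                System.out.printf("%-18s %6.0f MB%n", region, totalMb * w / totalWeight));
    }
}
```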
I found out that the only way to reserve "unused" memory is to set a high value for the native memory lower bound in the memory_sizes.native setting of config/open_jdk_jre.yml . Example:
https://github.com/grails-samples/java-buildpack/blob/22e0f6a/config/open_jdk_jre.yml#L25
This seems like classic memory leak behaviour to me.
In my case it wasn't a classical Java memory leak, since the Java application wasn't leaking memory. I was able to confirm this by getting some heap dumps with the HeapDumpServlet ( https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/HeapDumpServlet.groovy) and analyzing them.
In my case the JVM's RSS memory size is slowly growing. It probably is some kind of memory leak since one process I've been monitoring now is very close to the memory limit. The uptime is now almost 3 weeks.
Here is the latest diff of the meminfo report.
https://gist.github.com/lhotari/ee77decc2585f56cf3ad#file-meminfo_diff_example2-txt
From a Java perspective this isn't classical. The JVM heap isn't filling up. The problem is that RSS size is slowly growing and will eventually cause the Java process to cross the memory boundary so that the process gets killed by the Linux kernel cgroups OOM killer.
RSS size might be growing because of many reasons. I have been able to slow down the growth by doing the various MALLOC_ and JVM parameter tuning (-XX:MinMetaspaceExpansion=1M -XX:CodeCacheExpansionSize=1M). I'm able to get a longer uptime, but the problem isn't solved.
Lari
On 15-05-11 06:41 AM, Head-Rapson, David wrote:
Thanks for the continued advice.
We’ve hit on a key discovery after yet another soak test this weekend.
- When we deploy using Tomcat 8.0.18 we don’t see the issue
- When we deploy using Tomcat 8.0.20 (same app version, same CF space, same services bound, same JBP code version, same JRE version, running at the same time), we see the crashes occurring after just a couple of hours.
Ideally we’d go ahead with the memory calculations you mentioned however we’re stuck on lucid64 because we’re using Pivotal CF 1.3.x & we’re having upgrade issues to 1.4.x.
So we’re not able to adjust MALLOC_ARENA_MAX, nor are we able to view RSS in pmap as you describe
Other things we’ve tried:
- We set verbose garbage collection to verify there was no memory size issues within the JVM. There wasn’t.
- We tried setting minimum memory for native, it had no effect. The container still gets killed
- We tried adjusting the ‘memory heuristics’ so that they added up to 80 rather than 100. This had the effect of causing a delay in the container being killed. However it still was killed.
This seems like classic memory leak behaviour to me.
From: Lari Hotari [mailto:lari.hotari(a)sagire.fi] On Behalf Of Lari Hotari
Sent: 08 May 2015 16:25
To: Daniel Jones; Head-Rapson, David
Cc: cf-dev(a)lists.cloudfoundry.org
Subject: Re: [Cf-dev] [vcap-dev] Java OOM debugging
For my case, it turned out to be essential to reserve enough memory for "native" in the JBP. For the 2GB total memory, I set the minimum to 330M. With that setting I have been able to get over 2 weeks up time by now.
I mentioned this in my previous email:
The workaround for that in my case was to add a native key under memory_sizes in open_jdk_jre.yml and set the minimum to 330M (that is for a 2GB total memory); see the example at https://github.com/grails-samples/java-buildpack/blob/22e0f6a/config/open_jdk_jre.yml#L25 . That was how I got the app I'm running on CF to stay within the memory bounds. I'm sure there is now also a way to get the keys without forking the buildpack. I could have also adjusted the percentage portions, but I wanted to set a hard minimum for this case.
I've been trying to get some insight by diffing the reports gathered from the meminfo servlet https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/MemoryInfoServlet.groovy
Here is such an example of a diff:
https://gist.github.com/lhotari/ee77decc2585f56cf3ad#file-meminfo_diff_example-txt
meminfo has pmap output included to get the report of the memory map of the process. I have just noticed that most of the memory has already been mmap:ed from the OS and it's just growing in RSS size. For example:

< 00000000a7600000 1471488 1469556 1469556 rw--- [ anon ]
> 00000000a7600000 1471744 1470444 1470444 rw--- [ anon ]

The pmap output from lucid64 didn't include the RSS size, so you have to use cflinuxfs2 for this. It's also better because of other reasons. The glibc in lucid64 is old and has some bugs around MALLOC_ARENA_MAX.
I was manually able to estimate the maximum size of the RSS size of what the Java process will consume by simply picking the large anon-blocks from the pmap report and calculating those blocks by the allocated virtual size (VSS). Based on this calculation, I picked the minimum of 330M for "native" in open_jdk_jre.yml as I mentioned before.
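A rough sketch of that estimation, assuming the output of `pmap -x <pid>` is piped in on stdin; the 100 MB threshold for what counts as a "large" block is an arbitrary choice.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class AnonBlockEstimate {
    public static void main(String[] args) throws Exception {
        long thresholdKb = 100 * 1024; // treat blocks over ~100 MB of virtual size as "large" (arbitrary)
        long totalKb = 0;
        try (BufferedReader in = new BufferedReader(new InputStreamReader(System.in))) {
            String line;
            while ((line = in.readLine()) != null) {
                if (!line.contains("[ anon ]")) continue;       // only anonymous mappings
                String[] f = line.trim().split("\\s+");
                if (f.length < 3) continue;
                long vssKb = Long.parseLong(f[1]);               // second column of `pmap -x` is Kbytes (VSS)
                if (vssKb >= thresholdKb) totalKb += vssKb;
            }
        }
        System.out.printf("Estimated upper bound for large anon blocks: %d MB%n", totalKb / 1024);
    }
}
```

Usage: `pmap -x <pid> | java AnonBlockEstimate`.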
It looks like these rows are for the heap size:

< 00000000a7600000 1471488 1469556 1469556 rw--- [ anon ]
> 00000000a7600000 1471744 1470444 1470444 rw--- [ anon ]

It looks like the JVM doesn't fully allocate that block in RSS initially, and most of the growth of RSS size comes from that in my case. In your case, it might be something different.
I also added a servlet for getting glibc malloc_info statistics in XML format (). I haven't really analysed that information because of time constraints and because I don't have a pressing problem any more. By the way, the malloc_info XML report is missing some key elements that have been added in later glibc versions ( https://github.com/bminor/glibc/commit/4d653a59ffeae0f46f76a40230e2cfa9587b7e7e ).
If killjava.sh never fires and the app crashed with Warden out of memory errors, then I believe it's the kernel's cgroups OOM killer that has killed the container processes. I have found this location where Warden oom notifier gets the OOM notification event:
https://github.com/cloudfoundry/warden/blob/ad18bff/warden/lib/warden/container/features/mem_limit.rb#L70 This is the oom.c source code: https://github.com/cloudfoundry/warden/blob/ad18bff7dc56acbc55ff10bcc6045ebdf0b20c97/warden/src/oom/oom.c . It reads the cgroups control files and receives events from the kernel that way.
I'd suggest that you use pmap for the Java process after it has started and calculate the maximum RSS size by calculating the VSS size of the large anon blocks, instead of the RSS, for the blocks that the Java process has reserved for its different memory areas. You should discard adding VSS for the CompressedClassSpaceSize block. After this calculation, add enough memory to the "native" parameter in the JBP until the RSS size calculated this way stays under the limit. That's the only "method" I have come up with by now.
It might be required to have some RSS space allocated for any zip/jar files read by the Java process. I think that Java uses mmap files for zip file reading by default and that might go on top of all other limits. To test this theory, I'd suggest testing by adding -Dsun.zip.disableMemoryMapping=true system property setting to JAVA_OPTS. That disables the native mmap for zip/jar file reading. I haven't had time to test this assumption.
I guess the only way to understand how Java allocates memory is to look at the source code. From http://openjdk.java.net/projects/jdk8u/ , the instructions to get the source code of JDK 8:

hg clone http://hg.openjdk.java.net/jdk8u/jdk8u; cd jdk8u; sh get_source.sh

This tool is really good for grepping and searching the source code: http://geoff.greer.fm/ag/ . On Ubuntu it's in the silversearcher-ag package ("apt-get install silversearcher-ag") and on MacOSX with brew it's "brew install the_silver_searcher". This alias is pretty useful:

alias codegrep='ag --color --group --pager less -C 5'

Then you just search for the correct location in code by starting with the tokens you know about:

codegrep MaxMetaspaceSize

This gives pretty good starting points for looking at how the JDK allocates memory.
So the JDK source code is only a few commands away.
It would be interesting to hear more about this if someone has the time to dig in to this. This is about how far I got and I hope sharing this information helps someone continue. :)
Lari github/twitter: lhotari
On 15-05-08 10:02 AM, Daniel Jones wrote:
Hi Lari et al,
Thanks for your help Lari.
David and I are pairing on this issue, and we're yet to resolve it. We're in the process of creating a repeatable test case (our most crashy app makes calls to external services that need mocking), but in the meantime, here's what we've seen.
Between Java Buildpack commit e89e546 and 17162df, we see apps crashing with Warden out of memory errors. killjava.sh never fires, and this has led us to believe that the kernel is shooting a cgroup process in the head after the cgroup oversteps its memory limit. We cannot find any evidence of the OOM killer firing in any logs, but we may not be looking in the right place.
The JBP is setting heap to be 70%, metaspace to be 15% (with max set to the same as initial), 5% for "stack", 5% for "normalised stack" and 10% for "native". We do not understand why this adds up to 105%, but haven't looked into the JBP algorithm yet. Any pointers on what "normalised stack" is would be much appreciated, as this doesn't appear in the list of heuristics supplied via app env.
Other team members tried applying the same settings that you suggested - thanks for this. Apps still crash with these settings, albeit less frequently.
After reading the blog you linked to ( http://java.dzone.com/articles/java-8-permgen-metaspace) we wondered whether the increased *reserved* metaspace claimed after metaspace GC might be causing a problem; however we reused the test code to create a metaspace leak in a CF app and saw metaspace GCs occur correctly, and memory usage never grow over MaxMetaspaceSize. This figures, as the committed metaspace is still less than MaxMetaspaceSize, and the reserved appears to be whatever RAM is free across the whole DEA.
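The test code itself isn't shown in the thread; a metaspace-pressure generator along the following lines (a sketch, not the code the team used) reproduces the same behaviour: with the `retained` list it leaks class metadata, and if the references are dropped instead, metaspace GC reclaims the proxy classes and usage stays under MaxMetaspaceSize.

```java
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class MetaspaceLeak {
    public interface Marker {}

    public static void main(String[] args) {
        List<ClassLoader> retained = new ArrayList<>();
        int count = 0;
        while (true) {
            // Each fresh ClassLoader forces Proxy to generate a distinct proxy class in Metaspace.
            ClassLoader loader = new ClassLoader(MetaspaceLeak.class.getClassLoader()) {};
            Proxy.getProxyClass(loader, Marker.class);
            retained.add(loader); // keeping the loader reachable prevents class unloading -> leak
            // Calling retained.clear() periodically instead lets Metaspace GC reclaim the classes.
            if (++count % 10_000 == 0) {
                System.out.println("Generated " + count + " proxy classes");
            }
        }
    }
}
```

Running it with e.g. -XX:MaxMetaspaceSize=64M makes the behaviour visible quickly.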
We noted that an Oracle blog ( https://blogs.oracle.com/poonam/entry/about_g1_garbage_collector_permanent) mentions that the metaspace size parameters are approximate. We're currently wondering if native allocations by Tomcat (APR, NIO) are taking up more container memory, and so when the metaspace fills, it's creeping slightly over the limit and triggering the kernel's OOM killer.
Any suggestions would be much appreciated. We've tried to resist tweaking heuristics blindly, but are running out of options as we're struggling to figure out how the Java process is using *committed* memory. pmap seems to show virtual memory, and so it's hard to see if things like the metaspace or NIO ByteBuffers are nabbing too much and trigger the kernel's OOM killer.
Thanks for all your help,
Daniel Jones & David Head-Rapson
On Wed, Apr 29, 2015 at 8:07 PM, Lari Hotari <Lari(a)hotari.net> wrote:
Hi,
I created a few tools to debug OOM problems since the application I was responsible for running on CF was failing constantly because of OOM problems. The problems I had, turned out not to be actual memory leaks in the Java application.
In the "cf events appname" log I would get entries like this: 2015-xx-xxTxx:xx:xx.00-0400 app.crash appname index: 1, reason: CRASHED, exit_description: out of memory, exit_status: 255
These type of entries are produced when the container goes over it's memory resource limits. It doesn't mean that there is a memory leak in the Java application. The container gets killed by the Linux kernel oom killer ( https://github.com/cloudfoundry/warden/blob/master/warden/README.md#limit-handle-mem-value) based on the resource limits set to the warden container.
The memory limit is specified in number of bytes. It is enforced using the control group associated with the container. When a container exceeds this limit, one or more of its processes will be killed by the kernel. Additionally, the Warden will be notified that an OOM happened and it subsequently tears down the container.
In my case it never got killed by the killjava.sh script that gets called in the java-buildpack when an OOM happens in Java.
This is the tool I built to debug the problems: https://github.com/lhotari/java-buildpack-diagnostics-app I deployed that app as part of the forked buildpack I'm using. Please read the readme about what it's limitations are. It worked for me, but it might not work for you. It's opensource and you can fork it. :)
There is a solution in my toolcase for creating a heapdump and uploading that to S3:
https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/HeapDumpServlet.groovy The readme explains how to setup Amazon S3 keys for this: https://github.com/lhotari/java-buildpack-diagnostics-app#amazon-s3-setup Once you get a dump, you can then analyse the dump in a java profiler tool like YourKit.
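The servlet's actual implementation is in the linked repository; the JVM facility it presumably builds on is the HotSpot diagnostic MXBean, which can also be used standalone to trigger a dump:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        String path = args.length > 0 ? args[0]
                : "/tmp/heapdump-" + System.currentTimeMillis() + ".hprof";
        HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // live = true dumps only reachable objects (forces a full GC first)
        diag.dumpHeap(path, true);
        System.out.println("Heap dump written to " + path);
    }
}
```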
I also have a solution that forks the java-buildpack modifies killjava.sh and adds a script that uploads the heapdump to S3 in the case of OOM:
https://github.com/lhotari/java-buildpack/commit/2d654b80f3bf1a0e0f1bae4f29cb85f56f5f8c46
In java-buildpack-diagnostics-app I have also other tools for getting Linux operation system specific memory information, for example:
https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/MemoryInfoServlet.groovy
https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/MemorySmapServlet.groovy
https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/MallocInfoServlet.groovy
These tools are handy for looking at details of the Java process RSS memory usage growth.
There is also a solution for getting ssh shell access inside your application with tmate.io:
https://github.com/lhotari/java-buildpack-diagnostics-app/blob/master/src/main/groovy/io/github/lhotari/jbpdiagnostics/TmateSshServlet.groovy (this version is only compatible with the new "cflinuxfs2" stack)
It looks like there are serious problems on CloudFoundry with the memory sizing calculation. An application that doesn't have an OOM problem will get killed by the oom killer because the Java process will go over the memory limits. I filed this issue: https://github.com/cloudfoundry/java-buildpack/issues/157 , but that might not cover everything.
The workaround for that in my case was to add a native key under memory_sizes in open_jdk_jre.yml and set the minimum to 330M (that is for a 2GB total memory); see the example at https://github.com/grails-samples/java-buildpack/blob/22e0f6a/config/open_jdk_jre.yml#L25 . That was how I got the app I'm running on CF to stay within the memory bounds. I'm sure there is now also a way to get the keys without forking the buildpack. I could have also adjusted the percentage portions, but I wanted to set a hard minimum for this case.
It was also required to do some other tuning.
I added this to JAVA_OPTS: -XX:CompressedClassSpaceSize=256M -XX:InitialCodeCacheSize=64M -XX:CodeCacheExpansionSize=1M -XX:CodeCacheMinimumFreeSpace=1M -XX:ReservedCodeCacheSize=200M -XX:MinMetaspaceExpansion=1M -XX:MaxMetaspaceExpansion=8M -XX:MaxDirectMemorySize=96M while trying to keep the Java process from growing in RSS memory size.
The memory overhead of a 64 bit Java process on Linux can be reduced by specifying these environment variables:
stack: cflinuxfs2
. . .
env:
  MALLOC_ARENA_MAX: 2
  MALLOC_MMAP_THRESHOLD_: 131072
  MALLOC_TRIM_THRESHOLD_: 131072
  MALLOC_TOP_PAD_: 131072
  MALLOC_MMAP_MAX_: 65536
MALLOC_ARENA_MAX works only on cflinuxfs2 stack (the lucid64 stack has a buggy version of glibc).
explanation about MALLOC_ARENA_MAX from Heroku: https://devcenter.heroku.com/articles/tuning-glibc-memory-behavior some measurement data how it reduces memory consumption: https://devcenter.heroku.com/articles/testing-cedar-14-memory-use
I have created a PR to add this to CF java-buildpack: https://github.com/cloudfoundry/java-buildpack/pull/160
I also created issues https://github.com/cloudfoundry/java-buildpack/issues/163 and https://github.com/cloudfoundry/java-buildpack/pull/159 .
I hope this information helps others struggling with OOM problems in CF. I'm not saying that this is a ready made solution just for you. YMMV. It worked for me.
-Lari
On 15-04-29 10:53 AM, Head-Rapson, David wrote:
Hi,
I’m after some guidance on how to get profile Java apps in CF, in order to get to the bottom of memory issues.
We have an app that’s crashing every few hours with OOM error, most likely it’s a memory leak.
I’d like to profile the JVM and work out what’s eating memory, however tools like yourkit require connectivity INTO the JVM server (i.e. the warden container), either via host / port or via SSH.
Since warden containers cannot be connected to on ports other than for HTTP and cannot be SSHd to, neither of these works for me.
I tried installing a standalone JDK onto the warden container, however as soon as I ran ‘jmap’ to invoke the dump, warden cleaned up the container – most likely for memory over-consumption.
I had previously found a hack in the Weblogic buildpack ( https://github.com/pivotal-cf/weblogic-buildpack/blob/master/docs/container-wls-monitoring.md) for modifying the start script which, when used with –XX:HeapDumpOnOutOfMemoryError, should copy any heapdump files to a file share somewhere. I have my own custom buildpack so I could use something similar.
Has anyone got a better solution than this?
We would love to use newrelic / app dynamics for this however we’re not allowed. And I’m not 100% certain they could help with this either.
Dave
--
Regards,
Daniel Jones
EngineerBetter.com
Re: Is it possible to use git push to deploy applications on CF
I have a story in the CF CLI backlog to look at a git-style push: https://www.pivotaltracker.com/story/show/90658212 . Is there a lot of interest here?

Greg Oehmen
Cloud Foundry Product Manager
415.205.6596

On Wed, May 13, 2015 at 2:18 AM, Alexander Lomov <alexander.lomov(a)altoros.com> wrote:

Hey.
The simplest way to add this behaviour is to add a `cf push` command to the `.git/hooks/pre-push` executable file. You can find the details in the git docs [0].
In this article you can find the possible reasons not to use `cf push` together with `git push` [1]
[0] http://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks [1] http://blog.pivotal.io/pivotal-labs/labs/deploying-jruby-rails-application-cloud-foundry
------------------------
Alex Lomov
Altoros — Cloud Foundry deployment, training and integration
Twitter: @code1n <https://twitter.com/code1n>
GitHub: @allomov <https://gist.github.com/allomov>
On Wed, May 13, 2015 at 10:04 AM, Alan Moran <bonzofenix(a)gmail.com> wrote:
Hi Kinjal,
cf push does not support git input AFAIK, but it would be fairly simple to implement a cf-cli plugin that does that from the client side to offer a Heroku-like experience.
Regards,
— Alan
On May 12, 2015, at 10:44 PM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:
Hi,
I would like to know if it is possible to deploy applications on cloud foundry using git push. Or is it that only CF CLI can be used for pushing applications?
Thanks, Kinjal
Understanding the external network access in Diego
Lev Berman <lev.berman@...>
- Is it possible to create custom Roles
Hi,
In Pivotal CF, is it possible to create custom roles?
Thanks, Kinjal
- About services w.r.t orgs and spaces
Hi,
I wanted to understand whether orgs and spaces in Pivotal CF affect access to services from:

1. Applications deployed in different spaces of the same org, and
2. Applications deployed in different orgs altogether (spaces would be different here by default, right?)
Thanks for your help in advance.
Regards,
Kinjal
Re: Adding multiple users to user/auditor roles of an orgnization
Hi Anil,
There is no API to add multiple users to multiple roles of an organization in a single call.
-Dieu CF Runtime PM
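Until such an API exists, the obvious workaround is to loop over the existing per-user association endpoints (e.g. PUT /v2/organizations/:org_guid/users/:user_guid). A sketch follows; the org GUID, user GUIDs, API host, and token are placeholders.

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class BulkOrgRoles {
    public static void main(String[] args) throws Exception {
        String api = "https://api.example.com";                    // placeholder Cloud Controller endpoint
        String token = System.getenv("CF_OAUTH_TOKEN");             // e.g. the output of `cf oauth-token`
        String orgGuid = "ORG-GUID";                                 // placeholder
        String[] userGuids = {"USER-GUID-1", "USER-GUID-2"};         // placeholders
        String[] roles = {"users", "auditors"};                      // v2 role collections on the org

        for (String user : userGuids) {
            for (String role : roles) {
                URL url = new URL(api + "/v2/organizations/" + orgGuid + "/" + role + "/" + user);
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("PUT");                        // associate the user with the role
                conn.setRequestProperty("Authorization", token);
                System.out.println(role + " <- " + user + ": HTTP " + conn.getResponseCode());
            }
        }
    }
}
```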
On Tue, May 12, 2015 at 7:22 PM, Anil Ambati <aambati(a)hotmail.com> wrote:

Hi, is there a CF API to add multiple users to multiple roles of an organization? I have looked at the CF docs, but did not find any indication that such an API exists.
Thank you.
Regards, Anil
Re: Recipe to install Diego?
Thanks all!
Xianfeng Ye
On Mon, May 11, 2015 at 9:17 PM, OzzOzz <ozzozz(a)gmail.com> wrote: Hi,
I have posted a sample BOSH deployment manifest to a Gist: https://gist.github.com/ozzozz/4c08c37863b703a75afc . I could deploy cf-release v207 and diego-release 0.1099.0 to the AWS Tokyo region with MicroBOSH.

I could also deploy cf-release and diego-release to OpenStack (Juno). The manifests differ only in 'networks', 'cloud_properties' and 'stemcell'.
Regards, Ken
--- <ozzozz(a)gmail.com> Mitaka, Tokyo Japan
On Sat, May 9, 2015 at 8:57 PM, Tom Sherrod <tom.sherrod(a)gmail.com> wrote:
Hi,
Are there any examples or docs on installing Diego with bosh/microbosh? Using the bosh-lite manifests as a template, I'm tripping up on various parts. Is this even a valid direction for installing, on either AWS or OpenStack?
Thanks, Tom
Re: UAA, SAML, and LDAP questions
In our case we use email address as the username via LDAP as well (UPN actually, but same thing), so it would be the same. Is there a timeline for the ECP profile support?

Aaron

From: Filip Hanik [mailto:fhanik(a)pivotal.io]
Sent: Wednesday, May 13, 2015 3:54 PM
To: Mike Youngstrom
Cc: Sree Tummidi; Huber, Aaron M; CF Developers Mailing List
Subject: Re: [cf-dev] UAA, SAML, and LDAP questions

[Quoted thread trimmed; see the other messages in this thread.]
Re: UAA, SAML, and LDAP questions
The problem with SAML is that we never see the username. We only receive the username in the form of an email address from the SAML IDP. This would not correspond to the username you would log in to LDAP with.

The use case you describe would indicate that we want two different authentication sources to represent the same authentication source. I believe the correct solution here is to implement the SAML ECP profile. At that point you'd have an option to go LDAP or SAML rather than trying to mix both.
Filip
On Wed, May 13, 2015 at 3:30 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Possibly, though I think regular user authentication would still be a concern for our users since security forces a rather short TTL for our access tokens. I'll have to take a look and try a few things. We may decide to just use LDAP and forget about the SSO integration for now.
Mike
On Wed, May 13, 2015 at 3:03 PM, Sree Tummidi <stummidi(a)pivotal.io> wrote:
Hi Aaron, You could potentially use the access token (similar to a personal access token used for GitHub API ) to achieve the CLI automation. The access token can either be retrieved via an authentication to the CLI itself or via UAAC. Regular users would still continue to use the -sso option.
Thanks, Sree Tummidi Sr. Product Manager Identity - Pivotal Cloud Foundry
On Wed, May 13, 2015 at 1:56 PM, Huber, Aaron M <aaron.m.huber(a)intel.com> wrote:
That’s the main concern we have as well – we currently need LDAP for the CLI since SAML doesn’t work in that case, but we’d like SAML for web-based interactions (SSO in a portal, etc.). But at present it seems like that’s not possible without the user having to deal with effectively two separate accounts.
Aaron
From: Mike Youngstrom [mailto:youngm(a)gmail.com]
Sent: Wednesday, May 13, 2015 1:34 PM
To: Filip Hanik
Cc: Huber, Aaron M; CF Developers Mailing List
Subject: Re: [cf-dev] UAA, SAML, and LDAP questions
Well, that's a bummer. Is there any way around that? Our SAML is backed by the same LDAP so they are the same user. We can provide a unique ID to correlate SAML with LDAP users.
Mike
On Wed, May 13, 2015 at 2:28 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:
yes, it would result in two different shadow accounts, differentiated by the value of the user's origin field
On Wed, May 13, 2015 at 2:08 PM, aaron_huber <aaron.m.huber(a)intel.com> wrote:
Would the same user logging in via SAML and LDAP result in two different UAA user objects with different sources, so that the user would have two different sets of orgs/spaces/apps?
Aaron
Re: UAA, SAML, and LDAP questions
Mike Youngstrom <youngm@...>
Possibly, though I think regular user authentication would still be a concern for our users since security forces a rather short TTL for our access tokens. I'll have to take a look and try a few things. We may decide to just use LDAP and forget about the SSO integration for now.
Mike
On Wed, May 13, 2015 at 3:03 PM, Sree Tummidi <stummidi(a)pivotal.io> wrote:

Hi Aaron,

You could potentially use the access token (similar to a personal access token used for the GitHub API) to achieve the CLI automation. The access token can either be retrieved via an authentication to the CLI itself or via UAAC. Regular users would still continue to use the -sso option.
Thanks, Sree Tummidi Sr. Product Manager Identity - Pivotal Cloud Foundry
On Wed, May 13, 2015 at 1:56 PM, Huber, Aaron M <aaron.m.huber(a)intel.com> wrote:
That’s the main concern we have as well – we currently need LDAP for the CLI since SAML doesn’t work in that case, but we’d like SAML for web-based interactions (SSO in a portal, etc.). But at present it seems like that’s not possible without the user having to deal with effectively two separate accounts.
Aaron
From: Mike Youngstrom [mailto:youngm(a)gmail.com]
Sent: Wednesday, May 13, 2015 1:34 PM
To: Filip Hanik
Cc: Huber, Aaron M; CF Developers Mailing List
Subject: Re: [cf-dev] UAA, SAML, and LDAP questions
Well, that's a bummer. Is there any way around that? Our SAML is backed by the same LDAP so they are the same user. We can provide a unique ID to correlate SAML with LDAP users.
Mike
On Wed, May 13, 2015 at 2:28 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:
yes, it would result in two different shadow accounts, differentiated by the value of the user's origin field
On Wed, May 13, 2015 at 2:08 PM, aaron_huber <aaron.m.huber(a)intel.com> wrote:
Would the same user logging in via SAML and LDAP result in two different UAA user objects with different sources, so that the user would have two different sets of orgs/spaces/apps?
Aaron
Re: UAA, SAML, and LDAP questions
Hi Aaron,

You could potentially use the access token (similar to a personal access token used for the GitHub API) to achieve the CLI automation. The access token can either be retrieved via an authentication to the CLI itself or via UAAC. Regular users would still continue to use the -sso option.

Thanks,
Sree Tummidi
Sr. Product Manager
Identity - Pivotal Cloud Foundry

On Wed, May 13, 2015 at 1:56 PM, Huber, Aaron M <aaron.m.huber(a)intel.com> wrote:

That’s the main concern we have as well – we currently need LDAP for the CLI since SAML doesn’t work in that case, but we’d like SAML for web-based interactions (SSO in a portal, etc.). But at present it seems like that’s not possible without the user having to deal with effectively two separate accounts.
Aaron
From: Mike Youngstrom [mailto:youngm(a)gmail.com]
Sent: Wednesday, May 13, 2015 1:34 PM
To: Filip Hanik
Cc: Huber, Aaron M; CF Developers Mailing List
Subject: Re: [cf-dev] UAA, SAML, and LDAP questions
Well, that's a bummer. Is there any way around that? Our SAML is backed by the same LDAP so they are the same user. We can provide a unique ID to correlate SAML with LDAP users.
Mike
On Wed, May 13, 2015 at 2:28 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:
yes, it would result in two different shadow accounts, differentiated by the value of the user's origin field
On Wed, May 13, 2015 at 2:08 PM, aaron_huber <aaron.m.huber(a)intel.com> wrote:
Would the same user logging in via SAML and LDAP result in two different UAA user objects with different sources, so that the user would have two different sets of orgs/spaces/apps?
Aaron