Thanks for the information.
I said "pre-allocated" because after I pushed an app with 5G of memory specified, if I go to the Cell VM and check "curl -s http://localhost:1800/state", I notice the available memory is 5G less than the total memory.
I think an overcommit factor is not very suitable in my case, but "resource reclamation and predictive analytics" would be quite helpful; that sounds like a very useful and flexible mechanism.
Do we have any plans for such cool ideas?
No physical memory is actually pre-allocated; the limit is simply a maximum
used to determine whether the container should be killed for exceeding it.
The number you see subtracted from available memory is reserved capacity
for scheduling, not memory in use. However,
since your VM has some fixed amount of physical memory (e.g. 7.5G), the
operator will want to be able to make some guarantees that the VM doesn't
run a bunch of apps that consume the entire physical memory even if the
apps don't individually exceed their maximum memory limit. This is
especially important in a multi-tenant scenario.
One mechanism to deal with this is an "over-commit factor". This is what
Dan Mikusa's link was about in case you didn't read it yet. If you want
absolute guarantees that the VM will only have work scheduled on it such
that applications cannot consume more memory than what's "guaranteed" to
them by whatever their max memory limits are set to, you'll want an
overcommit factor on memory of 1. An overcommit factor of 2 means that on
a 7.5G VM, you could allocate containers whose sum total of their max
memory limits was up to 15G, and you'd be fine as long as you can trust the
containers to not consume, in total, more than 7.5G of real memory.
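To make the arithmetic concrete, here is a minimal sketch of the admission check an overcommit factor implies. This is illustrative only, not actual DEA or Diego code; the function name and parameters are hypothetical.

```python
def can_schedule(vm_physical_gb, overcommit_factor,
                 allocated_limits_gb, new_limit_gb):
    """Return True if a new container's max memory limit fits on the VM.

    vm_physical_gb:      physical memory of the VM (e.g. 7.5)
    overcommit_factor:   1 means no overcommit; 2 doubles schedulable capacity
    allocated_limits_gb: max-memory limits of containers already placed here
    new_limit_gb:        max-memory limit of the container being scheduled
    """
    schedulable_gb = vm_physical_gb * overcommit_factor
    return sum(allocated_limits_gb) + new_limit_gb <= schedulable_gb

# With a factor of 1, a 7.5G VM with a 5G app already placed cannot
# take another 5G app (10G of limits > 7.5G schedulable):
print(can_schedule(7.5, 1, [5.0], 5.0))  # False

# With a factor of 2, the same VM can schedule up to 15G of limits,
# so the second 5G app fits:
print(can_schedule(7.5, 2, [5.0], 5.0))  # True
```

The trade-off is exactly as described above: a factor of 1 guarantees the limits can never outstrip physical memory, while a factor of 2 relies on the containers collectively staying well under their limits.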
The DEA architecture supports setting the overcommit factors, I'm not sure
whether Diego supports this (yet).
The two concepts Deepak brings up, resource reclamation and predictive
analytics, are both pretty cool ideas, but neither is currently
supported in Cloud Foundry.
On Thu, Mar 10, 2016 at 7:54 AM, Stanley Shen <meteorping(a)gmail.com> wrote: