Default cgroup CPU share
I am reading
https://docs.cloudfoundry.org/concepts/architecture/warden.html#cpu and it
says:
If B is idle, A may receive up to all the CPU. Shares per cgroup range
from 2 to 1024, with 1024 the default. Both Diego apps and DEA apps scale
the number of allocated shares linearly with the amount of memory, with an
app instance requesting 8G of memory getting the upper limit of 1024
shares. Diego also guarantees a minimum of 10 shares per app instance.
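The linear scaling described above can be sketched as follows. The exact formula is an assumption inferred from the description (shares grow linearly with requested memory, hitting the 1024 cap at 8G, with a floor of 10), not code from Diego or the DEA:

```python
def cpu_shares(memory_mb):
    """Approximate cpu.shares for a requested memory size in MB.

    Assumed formula: linear in memory, reaching the 1024 cap at
    8G (8192 MB), with a guaranteed minimum of 10 shares.
    """
    raw = memory_mb * 1024 // 8192  # linear scale: 8192 MB -> 1024 shares
    return max(10, min(1024, raw))

# Hypothetical values under this assumed formula:
# 1G (1024 MB) -> 128 shares, 512 MB -> 64 shares, 8G -> 1024 (the cap)
```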
So is 1024 the default share every app gets?
Say I start with an empty DEA.
APP #1: 1G share = 1024?
APP #2 added: 1G share = ? What happens to APP #1?
APP #3 added: 512MB share = ? What happens to APP #1 & APP #2?
APP #4 added: 8GB, now what happens?
I am assuming their usage is all nearly idle. What is the total number of
shares for an N-core DEA? Also, are the shares dynamic? In the meantime I
will try to understand how CPU usage is shared in cgroups from other
sources.
Matthew Sykes <matthew.sykes@...>
The old vcap-dev mailing list had a number of exchanges around this topic
that you might want to look at.
The basic gist is that linux gives processes that are not associated with a
cgroup a cpu share of 1024. That means that the code that runs the DEA and
all of the linux daemons that make things go will get that share.
When applications are placed on a DEA, the containers they run in are
associated with a cpu share that is proportional to the amount of memory
requested. If you request a lot of memory per app instance, you'll have a
high cpu share; if you request a little memory per app instance, you'll
have a low cpu share.
The cpu share values associated with the container cgroups will never be
allowed to exceed 1024 (to prevent applications from adversely impacting
the DEA processes).
These cpu share values really only start to impact things when there's
competition for the cpu. When that happens, processes in a cgroup that is
associated with higher shares will get more cpu than those with lower
shares.
There is no "limit" to the number of shares - they're treated as relative
values when the scheduler needs to make a choice. The goal is that, given
two processes A and B, if process A has a share weight that is twice that
of process B and both processes are cpu bound, process A will get twice as
much cpu time as process B.
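That relative-weight behaviour is just a proportional split, which a small calculation can illustrate (names and values are hypothetical; this applies the proportional rule directly, it is not the actual CFS scheduler):

```python
def cpu_fractions(shares):
    """Given cpu.shares per cgroup, return each group's fraction of the
    cpu when all groups are cpu bound (a simple proportional split)."""
    total = sum(shares.values())
    return {name: s / total for name, s in shares.items()}

# A has twice B's shares, so under contention A gets 2/3 of the cpu:
print(cpu_fractions({"A": 1024, "B": 512}))
```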
For a more complete understanding, you should read the documentation in the
linux tree for the scheduler.
On Tue, Jul 28, 2015 at 8:11 PM, John Wong <gokoproject(a)gmail.com> wrote:
Will Pragnell <wpragnell@...>
In case it's not clear, shares are not dynamically reallocated to apps when
new apps are deployed. So in the example from the original email, if app #1
initially has N shares, it will still have N shares after app #2 is
deployed (and app #2 will also have N shares, given it has the same amount
of memory). This ties in with Matthew's point that there's no overall limit
to the number of shares.
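To make this concrete, here is a small sketch of the original scenario. The share values use the assumed linear-with-memory rule discussed above, and the host's own processes are ignored for simplicity: each app keeps its absolute share count when new apps arrive; only its fraction of a contended cpu changes.

```python
def fraction(app_shares, all_shares):
    """Fraction of a contended cpu an app gets: its shares over the total."""
    return app_shares / sum(all_shares)

# App #1 (1G -> 128 shares, assumed) alone, then app #2 (1G) is added.
# App #1 still holds 128 shares; only its fraction of a busy cpu halves.
assert fraction(128, [128]) == 1.0
assert fraction(128, [128, 128]) == 0.5
```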
I'm afraid I'm not quite sure what the absolute share values are and how
they're calculated relative to the memory amount.
On 30 July 2015 at 18:10, Matthew Sykes <matthew.sykes(a)gmail.com> wrote: