Re: Understanding hard CPU limits


Eric Malm <emalm@...>
 

Hey, Michael and Marco,

Sorry for the delay in getting a chance to respond to this thread. In general, CF apps receive an allocation of CPU shares proportional to their allocated memory, but with high and low cutoffs and limited granularity (a short sketch after the list below illustrates the mapping):

- The minimum number of CPU shares that an app instance (or task) will receive is 10.
- The maximum is 1024.
- The granularity is roughly every 10 shares (10.24, to be precise).
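To make that mapping concrete, here's a minimal sketch in Python. The 10/1024 clamp and the 10.24-share step come straight from the list above; the 8192 MB reference point is my own inference from the 1 GB example below, so treat it as illustrative rather than a transcript of Garden's actual code:

# Sketch of the share assignment described above. The memory-to-weight
# step (a percentage of an assumed 8192 MB reference) is inferred, not
# confirmed; the clamp and the 10.24 granularity are as listed above.
def cpu_shares(memory_mb):
    weight = max(1, min(100, memory_mb * 100 // 8192))  # hypothetical weight step
    return int(weight * 10.24)

print(cpu_shares(64))    # 10   -> the minimum
print(cpu_shares(1024))  # 122  -> matches the 1 GB example below
print(cpu_shares(8192))  # 1024 -> the maximum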


In my personal experiments with the garden.cpu_quota_per_share_in_us property, the number of cores did not factor into the per-instance limit; the quota is enforced as total CPU time across all cores. To constrain a 64 MB-memory app instance to at most 6.4% CPU usage, I had to set garden.cpu_quota_per_share_in_us to 640. A 1 GB-memory app instance, which has 122 CPU shares, can then use up to 78.1% of a CPU core.
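Spelling that arithmetic out (a worked example of the numbers above, assuming the standard 100 ms CFS period):

# Per-instance quota = shares * cpu_quota_per_share_in_us, enforced per
# 100,000 us (100 ms) CFS period.
PERIOD_US = 100_000
QUOTA_PER_SHARE_US = 640  # the value from my experiments above

for shares in (10, 122):  # 64 MB app -> 10 shares; 1 GB app -> 122 shares
    quota_us = shares * QUOTA_PER_SHARE_US
    print(f"{shares} shares: {quota_us} us/period = "
          f"{100 * quota_us / PERIOD_US:.1f}% of one core")
# 10 shares:  6400 us/period  = 6.4% of one core
# 122 shares: 78080 us/period = 78.1% of one core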

Best,
Eric

On Fri, Apr 6, 2018 at 11:02 AM, Dieu Cao <dcao@...> wrote:
I believe Julz is on vacation this week.
Adding Ed King, the anchor on the Garden team.

Dieu

On Tue, Apr 3, 2018, 3:08 AM Marco Voelz <marco.voelz@...> wrote:


/cc Eric and Julz: Could you maybe help us understand this? Thanks!


From: cf-dev@... <cf-dev@...> on behalf of Grifalconi, Michael <michael.grifalconi@...>
Sent: Monday, March 26, 2018 10:54:32 AM
To: cf-dev@...
Subject: [CAUTION] [cf-dev] Understanding hard CPU limits
 
Hi,

We were trying out the hard CPU limit as described in the docs:
https://github.com/cloudfoundry/garden-runc-release/releases?after=v1.9.2

According to the formula for a single-core machine:

APP_MEM_IN_MB * 100us / 1000 = MILLISECONDS_PER_100_MILLISECOND_PERIOD

In our tests, to get 6.4% CPU usage for a 64 MB application and ~100% for a 1 GB application, we had to set 'cpu_quota_per_share_in_us' to 3200. (The cell has 4 cores and 16 GB of RAM, with an overcommit factor of 2.)

That changes the formula to:
APP_MEM_IN_MB * 100us / 32000 = MILLISECONDS_PER_100_MILLISECOND_PERIOD
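Spelled out numerically (our reading, treating the '100us' term as the configured cpu_quota_per_share_in_us, which may be a misreading on our part):

# Doc formula vs. what we observed; both give ms of CPU per 100 ms period.
def doc_formula_ms(mem_mb, quota_per_share_us):
    return mem_mb * quota_per_share_us / 1000

def observed_ms(mem_mb, quota_per_share_us):
    return mem_mb * quota_per_share_us / 32000  # extra factor of 32

print(doc_formula_ms(64, 100))  # 6.4 -> what the docs predict at 100us/share
print(observed_ms(64, 3200))    # 6.4 -> what we actually had to configure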

Can you help us understand where this 'times 32' comes from? Is it the total available RAM of the cell (16 GB * overcommit factor of 2)? And does the number of CPU cores not matter?

Thanks and regards,
Michael




