Given a DEA with 15GB of physical RAM and an overcommit factor of 2, the total "memory" advertised is 30GB. Ideally we can push up to 30 app instances onto that host, assuming each app instance requires a 1GB memory allocation.
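Just to make sure I have the capacity math right, here is a quick sketch (the variable names are mine, not anything from the CF codebase):

```python
# Hypothetical capacity math for a single DEA.
PHYSICAL_RAM_GB = 15      # physical RAM on the box
OVERCOMMIT_FACTOR = 2     # assumed overcommit setting
APP_INSTANCE_GB = 1       # memory allocation per app instance

# Advertised "memory" = physical RAM scaled by the overcommit factor.
advertised_gb = PHYSICAL_RAM_GB * OVERCOMMIT_FACTOR   # 30 GB

# Upper bound on instances this host could accept, ignoring other limits.
max_instances = advertised_gb // APP_INSTANCE_GB      # 30 instances
```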
Suppose the environment has 3 DEAs (90GB max with overcommit) and we need to place a total of 40GB of app instances:
1. should I kill the 3rd DEA, given the remaining two would still have "20GB" of capacity left, and provision a 3rd one again when I am about to run low?
2. do you consider the overcommit factor in your chargeback? i.e., even though each DEA can take up to 30GB of instances, you charge the customer based on physical RAM (15GB). In that case, would you still charge the customer
   n * box_price * (memory consumed / total physical memory) = 3 * box_price * (40/45) ?
3. would I actually see the "unavailable stager" error even with overcommit, for a 40/90 deployment?
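For question 2, this is the arithmetic I have in mind, as a sketch (box_price is an arbitrary placeholder, not a real price):

```python
# Hypothetical chargeback calculation: bill on physical RAM, not
# overcommitted capacity. All names and the price are illustrative.
num_deas = 3
box_price = 100.0            # assumed price per DEA box (placeholder)
physical_gb_per_dea = 15     # physical RAM per DEA
placed_gb = 40               # total app-instance memory placed

total_physical_gb = num_deas * physical_gb_per_dea    # 45 GB

# charge = n * box_price * (consumption / total physical memory)
charge = num_deas * box_price * (placed_gb / total_physical_gb)
# i.e. 3 * 100 * (40/45)
```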
Thanks... I hope these questions make sense.