Node.js Apps with Small Memory Limits; Inaccurate Memory Availability in Containers


Sai Vennam <svennam92@...>
 

Hey All,

I've recently started investigating a memory issue with Node.js apps
running in Cloud Foundry environments. FYI, I'm using CF v210. As an
example, if I push a Node.js app with a memory leak and a 512MB memory
limit, the V8 engine keeps allocating more and more memory until it
exceeds that limit and the application crashes. The behavior I expect to
see is that it stops trying to allocate more memory when it reaches the
limit and instead GCs more aggressively (and then crashes at a later
time).

By default on 64-bit machines, the Node.js V8 engine has a 1GB heap limit,
so I can see why the engine tries to allocate more than is really
available. There should be some way to prevent V8 from trying to allocate
more than the container actually has. In Java you can use JVM options to
set heap limits; is there something similar here?

I did find one thing that might help: --max-old-space-size. But has anyone
done any investigation into how that value should be chosen?
--max-old-space-size only accounts for V8's heap, not Buffers or other
native allocations. For example, should that limit be set to 50% of the
memory_limit? 75%? Maybe that's something the Node.js buildpack should set
as a reasonable default?
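
To make the question concrete, here's a rough sketch of the kind of
launcher I had in mind: read the container memory limit and start the app
with a proportional --max-old-space-size. Reading the limit in MB from
VCAP_APPLICATION.limits.mem and using a 75% fraction are just assumptions
on my part, and server.js is a placeholder for the real entry point:

    // Sketch of a launcher: derive a V8 old-space cap from the container
    // memory limit and start the real app with --max-old-space-size set.
    var spawn = require('child_process').spawn;

    var limitMb = 512; // fallback if VCAP_APPLICATION is missing
    try {
      var vcap = JSON.parse(process.env.VCAP_APPLICATION || '{}');
      if (vcap.limits && vcap.limits.mem) {
        limitMb = vcap.limits.mem; // assumed to be the limit in MB
      }
    } catch (e) {
      // keep the fallback value
    }

    // Leave headroom for Buffers, stacks and other native allocations.
    var oldSpaceMb = Math.floor(limitMb * 0.75);

    var child = spawn(
      process.execPath,
      ['--max-old-space-size=' + oldSpaceMb, 'server.js'],
      { stdio: 'inherit' }
    );
    child.on('exit', function (code) { process.exit(code || 0); });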

There is a separate issue that might be related to this. When I run
'free' or 'top' from a shell inside the container spun up for my
application, I see "32gb" total. That can't be right... I specified 512MB
when creating my application! When I call os.totalmem() from within
Node.js, I also see 32gb.
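
For what it's worth, this is all it takes to see the mismatch (the 32GB
figure is just the example value from above, not a measurement):

    var os = require('os');
    // Inside a 512MB container this still reports the host/VM total,
    // e.g. ~32768 MB rather than 512.
    console.log(Math.round(os.totalmem() / (1024 * 1024)) + ' MB');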

There may be a better solution that doesn't involve setting any parameters
at all, but instead makes those commands report accurate numbers from
inside the container.

Thanks,
Sai


Mike Dalessio
 

Hi Sai,

Thanks for asking these questions. The buildpacks team, which currently
maintains the nodejs-buildpack, is totally open to improving the Node.js
developer experience.

I'd love to hear about anyone's experience managing the total heap size
within the Node.js interpreter. If you have played with this, let us know;
we'd be happy to work with you on how it might fit in with container
memory limits.

Cheers,
-mike



Christopher Piraino <cpiraino@...>
 

Sai,

Running free/top from inside the container reports the memory/CPU
statistics of the VM in which the container is running. The correct stats
are located in the appropriate cgroup filesystem, which is where Cloud
Foundry pulls its stats from when you use the CLI.

From inside the container it is actually very hard to figure out how much
memory the cgroup is using (see this article for more information:
http://fabiokung.com/2014/03/13/memory-inside-linux-containers/).
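
That said, one common workaround is to read the limit straight from the
cgroup filesystem rather than os.totalmem(). Something along these lines;
the cgroup v1 path is an assumption about how the container is set up, so
treat it as a sketch:

    var fs = require('fs');
    var os = require('os');

    // Try the cgroup v1 memory limit; fall back to os.totalmem(), which
    // is the host/VM-wide figure.
    function containerMemoryLimitBytes() {
      try {
        var raw = fs.readFileSync(
          '/sys/fs/cgroup/memory/memory.limit_in_bytes', 'utf8');
        var limit = parseInt(raw.trim(), 10);
        // A value close to 2^63 usually means "no limit set".
        if (limit > 0 && limit < os.totalmem()) {
          return limit;
        }
      } catch (e) {
        // file missing or unreadable; fall through to the fallback
      }
      return os.totalmem();
    }

    console.log('memory limit: ' +
      Math.round(containerMemoryLimitBytes() / (1024 * 1024)) + ' MB');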

Best,
Chris
