BOSH Director Performance on Jammy Stemcells FAQ

What’s the problem?

Ruby programs running under a Ruby interpreter compiled with GCC (GNU Compiler Collection) on Jammy stemcells have a much larger RSS (Resident Set Size) memory footprint, which can cause memory pressure. This affects Ruby-based programs such as the BOSH Director and the BOSH Azure, AWS, and vSphere CPIs, and can cause BOSH operations such as “bosh deploy” to take much longer or even time out.

What’s the fix?

We plan to compile the Ruby interpreter on Jammy with Clang (a GCC-compatible compiler from the LLVM project). Ruby interpreters compiled with Clang don’t appear to exhibit the same memory bloat when running the BOSH Director or the Ruby-based CPIs.

How will we accomplish that?

We plan to include the Clang compiler on the Jammy stemcells. We also plan to modify the Ruby BOSH package to use Clang if it’s available and to fall back to GCC otherwise.

Doesn’t the Clang compiler take up a lot of disk space?

Yes, the Clang compiler takes up 700-800 MB of disk space; however, we plan to instruct the BOSH agent to remove the Clang compiler on boot unless the VM is a compilation VM. In other words, the Clang compiler won’t take up precious space on the root disk for the typically deployed VM.
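A boot-time cleanup along these lines could implement that (purely illustrative; the detection mechanism and package/path names are assumptions, not the agent’s actual logic):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Placeholder: however the BOSH agent determines that this VM is a
# compilation VM (the real mechanism is internal to the agent).
is_compilation_vm() {
  [ -f /var/vcap/bosh/etc/compilation_vm ]
}

# On ordinary deployed VMs, reclaim the ~700-800 MB the toolchain occupies.
if ! is_compilation_vm; then
  apt-get remove -y --purge clang 2>/dev/null || true
fi
```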

What about the Xenial, Bionic, and CentOS stemcells?

We don’t plan to install Clang on Xenial and Bionic stemcells; they don’t exhibit the performance problem, so Clang has little to offer.

We’re not sure whether the CentOS stemcell is affected.

Other than memory, is there a performance impact of using Clang-based Ruby?

In our testing, a Jammy-based BOSH Director running Clang-compiled Ruby, paired with a Clang-compiled vSphere CPI, offers a 5-25% performance boost over a Xenial-based Director running GCC-compiled Ruby.

What is the root cause of the problem?

We’re not sure; it appears that the problem is related to the version of GCC used to compile Ruby. We noticed that the memory footprint of each Ruby thread grew from Ubuntu Disco to Ubuntu Eoan: 204 kiB → 10400 kiB.

Brian Cunnie, 650.968.6262