Re: Increasing warden yml network and user pool size



I believe that those limits would only need to be increased if you have
more than 256 warden containers on a single server. Is that the case?
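
A rough sizing check (this assumes warden's Linux backend carves a /30,
i.e. four addresses, out of the network pool per container and takes one
UID per container; that allocation scheme is our reading of warden, not
something stated in this thread):

python3 -c "import ipaddress as i; s = i.ip_address('10.254.0.0'); print(s, '-', s + 256 * 4 - 1)"
# -> 10.254.0.0 - 10.254.3.255: 1024 addresses, enough for 256 containers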

Joseph & Zak
CF Runtime Team

On Wed, Jul 1, 2015 at 4:29 PM, Animesh Singh <animation2007@gmail.com> wrote:

We are seeing some performance bottlenecks at warden, and at times warden
drops all connections under increasing load. We think increasing the
network and user pool_size might help. We have tried effecting those
changes through the CF YML, but they aren't getting set.

Any clues on how we can get these changes to take effect?
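
(One way to confirm what the deployment actually delivered, sketched with
the BOSH v1 CLI that was current at the time; the deployment name "cf" is
a placeholder:

bosh download manifest cf /tmp/cf.yml
grep -n "pool_size" /tmp/cf.yml

If pool_size never shows up in the downloaded manifest, the knob most
likely is not templated in the dea_next job at all, in which case no
manifest override will reach the rendered warden.yml below.)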

sudo more ./var/vcap/data/jobs/dea_next/a25eb00c949666d87c19508cc917f1601a5c5ba8-1360a7f1564ff515d5948677293e3aa209712f4f/config/warden.yml

---
server:
  unix_domain_permissions: 0777
  unix_domain_path: /var/vcap/data/warden/warden.sock
  container_klass: Warden::Container::Linux
  container_rootfs_path: /var/vcap/packages/rootfs_lucid64
  container_depot_path: /var/vcap/data/warden/depot
  container_rlimits:
    core: 0
  pidfile: /var/vcap/sys/run/warden/warden.pid
  quota:
    disk_quota_enabled: true

logging:
  file: /var/vcap/sys/log/warden/warden.log
  level: info
  syslog: vcap.warden

health_check_server:
  port: 2345

network:
  pool_start_address: 10.254.0.0
  pool_size: 256

  # Interface MTU size
  # (for OpenStack use 1454 to avoid problems with rubygems with GRE tunneling)
  mtu: 1400

user:
  pool_start_uid: 20000
  pool_size: 256


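(As a quick throwaway test, this rendered file can also be edited in place
and warden restarted under monit; the change is lost on the next deploy,
and the monit process name here is an assumption on our part:

sudo monit restart warden)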


Thanks,

Animesh


