Throttling App Logging


Daniel Jones
 

Is it possible with the current logging infrastructure in CF to limit the
logging throughput of particular apps or spaces?

Current client is running CF multi-tenant, and has some particularly noisy
customers. It'd be nice to be able to put a hard limit on how much they can
pass through to downstream commercial log indexers.

Any suggestions most gratefully received!

Regards,

Daniel Jones
EngineerBetter.com


Aleksey Zalesov
 

Hi!

Today a CF quota can be set on three things:

1. Memory
2. Number of services
3. Number of routes

You can't limit the number of logging messages.

But I think it's a good idea for a feature request! Excessive debug logging can overwhelm a log management system.

Aleksey Zalesov | CloudFoundry Engineer | Altoros
Tel: (617) 841-2121 ext. 5707 | Toll free: 855-ALTOROS
Fax: (866) 201-3646 | Skype: aleksey_zalesov
www.altoros.com | blog.altoros.com | twitter.com/altoros



Rohit Kumar
 

It isn't possible to throttle logging output on a per-application basis. It
is possible to configure message_drain_buffer_size [1] to be lower than the
default value of 100, which will reduce the number of log messages that
loggregator buffers. If a producer fills the buffer too quickly,
loggregator will drop the messages already in it. Note that this
configuration affects ALL the applications running in your Cloud Foundry
environment. You could experiment with that property and see if it helps.

Rohit

[1]:
https://github.com/cloudfoundry/loggregator/blob/develop/bosh/jobs/doppler/spec#L60-L62
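In a BOSH deployment manifest that would be set as a property on the doppler job. A minimal sketch (the property name is taken from the doppler job spec linked above; verify it against the loggregator release version you deploy, and the value here is just an example):

```yaml
# Fragment of a BOSH deployment manifest (hypothetical value)
properties:
  doppler:
    message_drain_buffer_size: 50   # default is 100; a smaller buffer sheds backed-up logs sooner
```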



Daniel Jones
 

Thanks all.

Sadly the client has a regulatory requirement for *some* apps that all logs
must be persisted for a number of years, so we can't drop messages
indiscriminately using the loggregator buffer. They're a PCF customer, so
I'll raise a feature request through the support process.

Cheers!
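For what it's worth, the per-app limit being requested is essentially a token bucket in front of each app's log stream. A hypothetical Python sketch of what such a throttle might do; loggregator has no such feature today, and all names here are made up for illustration:

```python
import time

class LogThrottle:
    """Token-bucket limit on an app's log throughput: at most `rate`
    lines per second on average, with bursts of up to `burst` lines.
    Hypothetical sketch of the requested feature, not anything CF ships."""

    def __init__(self, rate, burst):
        self.rate = float(rate)     # tokens added per second
        self.burst = float(burst)   # bucket capacity
        self.tokens = self.burst
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False    # over quota: drop (or buffer) this log line

# A burst of 50 lines arriving at once: the first 5 pass, the excess is shed
throttle = LogThrottle(rate=10, burst=5)
decisions = [throttle.allow() for _ in range(50)]
```

A real implementation would sit in the log pipeline keyed by app GUID, with quota values coming from the org or space settings.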

--
Regards,

Daniel Jones
EngineerBetter.com


Aleksey Zalesov
 

By the way, how do you comply with the requirement to persist all the logs? The CF logging system is lossy by nature, so it can drop messages.

We have a similar requirement: *some* log messages must be reliably delivered and stored. After some consideration, we decided to use a RabbitMQ service to deliver these kinds of messages from apps to log storage. All other logs go through the Metron-Doppler chain as usual.
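App-side, one pattern for this is a logging handler that publishes only the records flagged as must-persist to the durable queue, while everything else keeps going to stdout for Metron/Doppler. A minimal Python sketch; the `publish` callable and the `persist` flag are assumptions standing in for a real broker call (e.g. pika's `basic_publish` to a durable RabbitMQ queue):

```python
import logging

class DurableQueueHandler(logging.Handler):
    """Send only records flagged persist=True to a durable queue.

    `publish` is a hypothetical stand-in for a real broker call,
    e.g. pika's channel.basic_publish to a durable RabbitMQ queue.
    """

    def __init__(self, publish):
        super().__init__()
        self.publish = publish

    def filter(self, record):
        # Only handle records explicitly marked for persistence
        return getattr(record, "persist", False)

    def emit(self, record):
        self.publish(self.format(record))

# Usage: normal lines go to stdout (picked up by Metron/Doppler);
# flagged lines additionally go to the durable queue.
queued = []
log = logging.getLogger("audit")
log.setLevel(logging.INFO)
log.addHandler(logging.StreamHandler())             # lossy loggregator path
log.addHandler(DurableQueueHandler(queued.append))  # reliable path
log.info("routine request handled")                     # stdout only
log.info("payment accepted", extra={"persist": True})   # stdout + queue
```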




Daniel Jones
 

It's a similar situation, in that app teams are being re-educated to store
important data in persistent data stores rather than in logs.

AFAIK there is an element of best effort with regard to the logging
requirements, in that reasonable efforts to persist the logs must be
demonstrable. So whilst it's acknowledged that logs aren't guaranteed, the
PaaS team has done all it can to give the logs as high a chance as possible
of reaching their destination. Just as backing logs up to tape doesn't
guarantee their invincibility (the tapes might get destroyed), reasonable
efforts have still been made to keep them.



--
Regards,

Daniel Jones
EngineerBetter.com