Re: How random is Metron's Doppler selection?


Mike Youngstrom
 

Regarding your comment, John, about having our applications send their
syslogs to a remote syslog server: though that would certainly provide a
way to get better logs into Splunk, it would eliminate all the value we
get from Loggregator.

* cf logs won't work (unless we fork our logs)
* We won't get the redundancy and reliability of logging locally (the
same reason Metron exists as an agent)
* Complex customer configuration for a solution that should, for the
most part, "just work"
* etc.

There are all kinds of hacks we can use to improve our multi-line logging.
But, they are all hacks that diminish the customer experience.

I understand that nobody here can "speculate as to the future of CF and
whether or not a particular feature will someday be included". All I'm
asking for is an acknowledgement from the LAMB team that draining
multi-line log messages is a pain point for users and that the team would
consider investing some future time in a solution (any solution) to this
issue.

If the logging team really believes that the way multi-line log events
are currently handled isn't a problem, then let's discuss that. As a
user, I believe this is a problem that ought to be looked at at some
point in the future.

Mike

On Mon, Jun 15, 2015 at 3:48 PM, Mike Heath <elcapo(a)gmail.com> wrote:

I think our situation is a little bit different, since we have a custom
syslog server that sends logs directly to our Splunk indexers rather than
going through a Splunk forwarder that can aggregate multiple syslog
streams into a single event. This is part of our Splunk magic that allows
our users to do Splunk searches based on their Cloud Foundry app name,
space, org, etc., rather than GUIDs.

Regardless, we can fix this by having our developers format their stack
traces differently.

Thanks Stuart.

-Mike

On Sat, Jun 13, 2015 at 1:32 PM, Stuart Charlton <scharlton(a)pivotal.io>
wrote:

Mike,


Actually, this might explain why some of our customers are so frustrated
trying to read their stack traces in Splunk. :\

So each line of a stack trace could go to a different Doppler. That means
each line of the stack trace goes out to a different syslog drain, making
it impossible to consolidate that stack trace into a single logging event
when passed off to a third-party logging system like Splunk. This sucks.
To be fair, Splunk has never been any good at dealing with stack traces.

I'm not sure this is a specific issue with Doppler. I've dealt with
aggregated syslog servers and Splunk in the past, and I've generally been
able to merge stack traces (with some false merges in corner cases) using
some props.conf voodoo that sets up custom line-breaker clauses for Java
stack traces.
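
For the curious, a sketch of the kind of props.conf stanza I mean (the
sourcetype name here is hypothetical; you'd use whatever your drain's
events are tagged with):

```ini
# Hypothetical sourcetype for CF app logs arriving over syslog.
[cf:app-logs]
# Disable Splunk's default multi-line merging; let LINE_BREAKER decide.
SHOULD_LINEMERGE = false
# Only start a new event where the next line begins with a syslog
# priority and an RFC 5424-style timestamp. Continuation lines like
# "\tat com.example.Foo(Foo.java:42)" lack that prefix, so they stay
# glued to the preceding event.
LINE_BREAKER = ([\r\n]+)(?=<\d+>\d\s\d{4}-\d{2}-\d{2}T)
```

The exact regex depends on what your drain actually emits, so treat it
as a starting point, not a drop-in config.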

Usually Log4J or whatnot can be configured to emit a predictable field,
like an extra timestamp, ahead of any app log messages, so I can
differentiate a multi-line event from a single-line one.
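
As an example (a Log4J 1.x properties fragment; adjust appender names to
taste), something like this puts an ISO-8601 timestamp at the start of
every log statement, so any line without one can be treated as a
continuation of the previous event:

```ini
# Hypothetical log4j.properties fragment: prefix every log statement
# with an ISO-8601 timestamp for downstream line-breaking.
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c - %m%n
```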

Multiple syslog drains shouldn't be a problem because Splunk will merge
events based on the date you tell it to merge on.


--

Stuart Charlton

Pivotal Software | Field Engineering

Mobile: 403-671-9778 | Email: scharlton(a)pivotal.io



_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

