I think our situation is a little different since we have a custom
syslog server that sends logs directly to our Splunk indexers rather than
going through a Splunk forwarder that can aggregate multiple syslog streams
into a single event. This is part of our Splunk magic that allows our users
to do Splunk searches based on their Cloud Foundry app name, space, org,
etc., rather than GUIDs.
Regardless, we can fix this by having our developers format their stack
traces differently.
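For example (just a sketch, and only if the apps happen to use Log4j 2;
the pattern below is a placeholder rather than our actual config), a
PatternLayout that folds the exception onto the same line as the message
keeps each stack trace in a single log line, so nothing needs to be merged
downstream:

    <PatternLayout alwaysWriteExceptions="false"
        pattern="%d{ISO8601} %-5p %c{1} - %replace{%m%xEx}{[\r\n]+}{ | }%n"/>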
Thanks Stuart.
-Mike
On Sat, Jun 13, 2015 at 1:32 PM, Stuart Charlton <scharlton(a)pivotal.io>
wrote:
Mike,
Actually, this might explain why some of our customers are so frustrated
trying to read their stack traces in Splunk. :\
So each line of a stack trace could go to a different Doppler. That means
each line of the stack trace goes out to a different syslog drain, making
it impossible to consolidate that stack trace into a single logging event
when passed off to a third-party logging system like Splunk. This sucks.
To be fair, Splunk has never been any good at dealing with stack traces.
I'm not sure this is a specific issue with Doppler. I've dealt with
aggregating syslog servers and Splunk in the past, and I've generally been
able to merge stack traces (with some false merges in corner cases) with
some props.conf voodoo: setting up custom line-breaker clauses for Java
stack traces.
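As a rough sketch (the sourcetype name and the timestamp regex are only
placeholders and would need to match whatever predictable prefix the app
lines actually carry, such as the extra timestamp mentioned below), a
props.conf stanza that only starts a new event at a line containing that
prefix will keep stack trace continuation lines attached to the line that
precedes them:

    # placeholder sourcetype name
    [cf_app_logs]
    # merge lines into the previous event unless a new timestamp appears
    SHOULD_LINEMERGE = true
    BREAK_ONLY_BEFORE = \d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}
    # cap how many lines can be merged into one event
    MAX_EVENTS = 256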
Usually Log4j or whatnot can be configured to emit a predictable field,
like an extra timestamp, ahead of any app log message so I can
differentiate a multi-line event from a single-line one.
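For instance (a sketch in Log4j 1.x properties syntax; the appender name
is a placeholder and the rest of the appender definition is omitted), a
PatternLayout that starts every message with an ISO8601 timestamp gives the
props.conf breaker above something reliable to key off:

    log4j.appender.app.layout=org.apache.log4j.PatternLayout
    log4j.appender.app.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n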
Multiple syslog drains shouldn't be a problem, because Splunk will merge
events based on the timestamp you tell it to merge on.
--
Stuart Charlton
Pivotal Software | Field Engineering
Mobile: 403-671-9778 | Email: scharlton(a)pivotal.io