I'm not saying that we have a good solution to multi-line log messages.
It's definitely a challenge today.
It's my understanding that the reasons for providing the stdout/stderr logging interface are:
- adherence to the Twelve-Factor App <http://12factor.net/logs>
- zero-configuration, "just works" support for the broadest set of use cases
- compatibility with other PaaS offerings (e.g. Heroku).
None of that is meant to disregard your use case. I completely agree that
it's difficult-to-impossible for Loggregator to play nice with multi-line
logs, and that to bypass it would eliminate the value that the system
provides. I also agree that, while line-by-line processing of the console
works fine for a human watching the logs in real-time, it makes storing and
processing messages more difficult.
– John Tuley
On Mon, Jun 15, 2015 at 4:27 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:
Regarding your comment, John, about having our applications send their
syslogs to a remote syslog server: though that would certainly provide a
way to get better logs into Splunk, it would eliminate all the value we get
from the platform's logging today:
* cf logs won't work (unless we fork our logs)
* we won't get the redundancy and reliability of logging locally (same
reason why metron exists as an agent)
* Complex customer configuration for a solution that should, for the most part, just work
There are all kinds of hacks we can use to improve our multi-line
logging, but they are all hacks that diminish the customer experience.
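One such hack, sketched below in Python purely as an illustration (nothing here is platform-provided, and the function name is invented), is to escape the real newlines inside an event before it hits stdout, so a whole stack trace travels as one Loggregator line:

```python
def collapse(event: str) -> str:
    """Escape real newlines so a multi-line event (e.g. a stack trace)
    survives line-by-line log transport as a single line."""
    return event.replace("\n", "\\n")

# A two-line Java-style stack trace becomes one physical line on stdout.
stack = ('Exception in thread "main" java.lang.RuntimeException\n'
         '\tat com.example.App.run(App.java:42)')
print(collapse(stack))
```

Of course, a drain or indexer then has to un-escape the event to make it readable again, which is exactly the kind of diminished experience described above.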
I understand that nobody here can "speculate as to the future of CF and
whether or not a particular feature will someday be included". All I'm
asking for is an acknowledgement from the LAMB team that draining
multi-line log messages is a pain point for users and that the team would
consider investing some future time in a solution (any solution) for this problem.
If the logging team really believes that the way multi-line log events are
currently handled isn't a problem, then let's discuss that. I, as a user,
believe this is a problem that ought to be addressed at some point in the future.
On Mon, Jun 15, 2015 at 3:48 PM, Mike Heath <elcapo(a)gmail.com> wrote:
I think our situation is a little bit different since we have a custom
syslog server that sends logs directly to our Splunk indexers rather than
going through a Splunk forwarder that can aggregate multiple syslog streams
into a single event. This is part of our Splunk magic that allows our users
to do Splunk searches based on their Cloud Foundry app name, space, org,
etc rather than GUIDs.
Regardless, we can fix this by having our developers format their stack
traces as single-line log messages.
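As a hedged illustration of that workaround, here is a minimal Python logging formatter that folds a traceback onto one line (apps on the JVM would do the equivalent in a Log4j/Logback layout; the class name is invented for this sketch):

```python
import logging
import sys

class OneLineExceptionFormatter(logging.Formatter):
    """Render the exception traceback on the same line as the message,
    so each log event is exactly one line on stdout/stderr."""

    def formatException(self, exc_info):
        # Collapse the multi-line traceback into " | "-separated segments.
        text = super().formatException(exc_info)
        return " | ".join(line.strip() for line in text.splitlines())

    def format(self, record):
        result = super().format(record)
        # logging joins message and traceback with newlines; fold those in too.
        return result.replace("\n", " | ")

# Usage sketch: emit a single-line event for a caught exception.
handler = logging.StreamHandler()
handler.setFormatter(OneLineExceptionFormatter("%(levelname)s %(message)s"))
log = logging.getLogger("demo")
log.addHandler(handler)
try:
    1 / 0
except ZeroDivisionError:
    log.exception("division failed")  # emitted as one physical line
```

The trade-off is readability: a human tailing the logs sees one very long line instead of a familiar stack trace.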
On Sat, Jun 13, 2015 at 1:32 PM, Stuart Charlton <scharlton(a)pivotal.io> wrote:
Actually, this might explain why some of our customers are so frustrated
trying to read their stack traces in Splunk. :\

So each line of a stack trace could go to a different Doppler. That means
each line of the stack trace goes out to a different syslog drain, making it
impossible to consolidate that stack trace into a single logging event once
passed off to a third-party logging system like Splunk. This sucks. To be
fair, Splunk has never been any good at dealing with stack traces.

I'm not sure this is a specific issue with Doppler, as I've dealt with
syslog aggregation servers in the past with Splunk, and generally I've been
able to merge stack traces (with some false merges in corner cases) using
some props.conf voodoo: custom line-breaker clauses for Java stack traces.
Usually Log4j or whatnot can be configured to emit a predictable field,
like an extra timestamp, ahead of any app log messages so I can
differentiate a multi-line event from a single-line one.
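For reference, a sketch of that kind of props.conf stanza; the sourcetype name and timestamp pattern here are assumptions for illustration, not taken from this thread:

```ini
# props.conf -- merge multi-line Java stack traces into one Splunk event.
# [cf_app_logs] is a hypothetical sourcetype name.
[cf_app_logs]
SHOULD_LINEMERGE = true
# Start a new event only at a line beginning with an ISO-8601 timestamp;
# continuation lines ("at ...", "Caused by: ...") merge into the prior event.
BREAK_ONLY_BEFORE = ^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}
MAX_EVENTS = 256
```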
Multiple syslog drains shouldn't be a problem, because Splunk will merge
events based on the timestamp field you tell it to merge on.
Pivotal Software | Field Engineering
Mobile: 403-671-9778 | Email: scharlton(a)pivotal.io
cf-dev mailing list