
Re: CF Auto-scaling with an external application

Daniel Mikusa
 

On Fri, Apr 8, 2016 at 2:12 PM, Giovanni Napoli <gio.napoli2(a)gmail.com>
wrote:

@Daniel Mikusa

Thank you for your support. I have a few more questions and hope that you
can help me.
If I use the library you linked, I'd like some suggestions
about the task I have to solve:

- Is there a way to collect the data I need in a struct that I
could use to send "the scale command"? I mean, it would be great to have a
struct in which I could have fields like "AppName", "CpuUse", "MemoryUse",
etc., so I could just check the App.CpuUse field, for instance, and send a
"cf scale" command to CF to solve resource problems.

- I found this client library for CF; do you think it could help my work?
https://github.com/cloudfoundry/cf-java-client/ I'm asking because I'd
prefer to use Java. Also, is there a way to get a ".jar"
of this library so I can use it as simply as possible? I also found this
http://www.ibm.com/developerworks/cloud/library/cl-bluemix-cloudappswithjava/index.html
and a "cloudfoundry-client-lib.jar" online, but I don't know if it would be
good for my problem.

There are two parts to this problem:

1.) You need to get metrics from the platform. I don't think (although
this could have changed, since it's being rewritten at the moment) that
cf-java-client supports this. I know it will allow you to stream logs, but
I'm not sure if you can get metrics as well. If Ben Hale sees this,
perhaps he can comment and confirm.

2.) Once you get metrics, you'd have to manage them and when appropriate
initiate a request to scale the app. You can definitely use the
cf-java-client for this. You can also use it to query information if for
example you have an app guid and need the app name.
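
For illustration, here is a rough sketch of what the second part could look
like with the cf-java-client v2 (reactive) API. Treat it as a sketch only:
the client is being rewritten at the moment, so class and method names may
differ in the version you end up using, and the API host, org/space/app
names, and the AppMetrics holder below are placeholders you would fill from
your own metric collection.

import org.cloudfoundry.operations.DefaultCloudFoundryOperations;
import org.cloudfoundry.operations.applications.ScaleApplicationRequest;
import org.cloudfoundry.reactor.DefaultConnectionContext;
import org.cloudfoundry.reactor.client.ReactorCloudFoundryClient;
import org.cloudfoundry.reactor.tokenprovider.PasswordGrantTokenProvider;

public class SimpleAutoScaler {

    // Hypothetical holder for the metrics you collect yourself (part 1).
    static class AppMetrics {
        String appName;
        double cpuUse;    // e.g. fraction of one CPU
        long memoryUseMb;
    }

    public static void main(String[] args) {
        DefaultConnectionContext connectionContext = DefaultConnectionContext.builder()
                .apiHost("api.example.com")            // placeholder API endpoint
                .build();
        PasswordGrantTokenProvider tokenProvider = PasswordGrantTokenProvider.builder()
                .username("user")
                .password("password")
                .build();
        DefaultCloudFoundryOperations operations = DefaultCloudFoundryOperations.builder()
                .cloudFoundryClient(ReactorCloudFoundryClient.builder()
                        .connectionContext(connectionContext)
                        .tokenProvider(tokenProvider)
                        .build())
                .organization("my-org")
                .space("my-space")
                .build();

        // Pretend these values came from your metric source (part 1).
        AppMetrics metrics = new AppMetrics();
        metrics.appName = "my-app";
        metrics.cpuUse = 0.9;

        // Part 2: when a threshold is crossed, issue the equivalent of
        // "cf scale my-app -i 4".
        if (metrics.cpuUse > 0.8) {
            operations.applications()
                    .scale(ScaleApplicationRequest.builder()
                            .name(metrics.appName)
                            .instances(4)
                            .build())
                    .block();
        }
    }
}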

Hope that helps!

Thanks,

Dan


Re: Static IP setup for routers on AWS

Daniel Mikusa
 

On Fri, Apr 8, 2016 at 7:04 AM, Engelke, Johannes <info(a)johannes-engelke.de>
wrote:

Hi Amit,
thanks for your answer. I deployed Cloud Foundry without using static
IPs. It is working well.

As far as I understood the UAA config, the entire 10.x.x.x network is
allowed to access the UAA servers anyway, so there is no reason to place
the dedicated static IPs of the routers into the config.
Are you referring to the RemoteIpValve that is configured for UAA?

https://github.com/cloudfoundry/uaa-release/blob/develop/jobs/uaa/templates/tomcat.server.xml.erb#L70-L73

Because the RemoteIpValve doesn't restrict access to Tomcat / UAA. It
controls how (and if) Tomcat handles the x-forwarded-* headers. In short,
it will only process those headers if it "trusts" them (by trust, it really
means if the regex matches).

My understanding is that the UAA job will take the gorouter IPs and
prepend them to the front of this regex so that it will always match at
least the IPs for the gorouter. If you're using private IPs, it's not
really necessary, as the default regex used by Tomcat will match all private
IPs.

If you're using public IPs for some reason, you'd need to configure this,
or UAA might not detect the incoming connections as HTTPS and it would very
likely detect the wrong remote IP address (necessary for audit records in
the logs).


Do you see any security improvements, if only routers are allowed to
access the UAA?
As long as we're talking about the RemoteIpValve (sorry if I'm not following
the conversation completely, I jumped in a little late) and you're using
private IP addresses for your VMs, then I don't see any difference in
behavior.

If you have public IPs assigned to your gorouter VMs, then you may see some
issues with how the x-forwarded-for and x-forwarded-proto headers are
processed, which in turn could affect the accuracy of the audit messages in
the logs.

Hope that helps!

Dan



On 08 Apr 2016, at 02:19, Amit Gupta <agupta(a)pivotal.io> wrote:

The UAA needs to know the router IPs to know which IPs to accept inbound
requests from. If you don't care about this, you can try configuring UAA
to allow requests from many IPs, and remove the static IPs from gorouter.
I would be interested to find out the result of this experiment should you
try it out.

Best,
Amit

On Thu, Apr 7, 2016 at 6:28 AM, Engelke, Johannes <
info(a)johannes-engelke.de> wrote:

Hi,
does anybody know why the routers get static IPs in the
cf-infrastructure-aws.yml file?
https://github.com/cloudfoundry/cf-release/blob/master/templates/cf-infrastructure-aws.yml#L173

BOSH assigns the instances to the ELBs at deploy time, so there
should be no need for static addresses here.

If nobody knows a good reason, should we remove them? ;-)

Cheers
Johannes


Re: cf push working

Bharath
 

https://docs.pivotal.io/pivotalcf/concepts/how-applications-are-staged.html

You can check the curl messages by exporting the environment variable
CF_TRACE=true (for example, run CF_TRACE=true cf push).

bharath

On Wed, Apr 13, 2016 at 12:21 PM, Ankur Srivastava <ankursri1(a)gmail.com>
wrote:

Hi,
What does cf push do in the background? What steps are performed by cf
push?

Regards,
Ankur


Removing 1.4 support from the Go Buildpack

Danny Rosen
 

Hello!

As the [official Go release policy](
https://golang.org/doc/devel/release.html) has deprecated support for Go
1.4, the Buildpacks team would like to propose the removal of Go 1.4
support from the Go buildpack as well.

If you have any questions or concerns regarding this change, we would like
to hear from you! Please respond to this thread or this issue [1
<https://github.com/cloudfoundry/go-buildpack/issues/36>] no later than
4/25, when we plan to schedule work to make the change.

[1] - https://github.com/cloudfoundry/go-buildpack/issues/36


Re: How can we customized "404 Not Found"

James Leavers
 

The end of this presentation [1] from the CF Summit has an example of creating a wildcard route to route 404s to an app, using cf map-route.

[1] http://berlin2015.cfsummit.com/sites/berlin2015.cfsummit.com/files//pages/files/summit-berlin.pdf

On 14 April 2016 at 11:20:17, Stanley Shen (meteorping(a)gmail.com) wrote:

Thanks Mike.
Could you show me how I can do things like that, or point me to some documentation about it?

Or are there other ways to do that?

Thanks,
Stanley


Re: AWS / CF v233 deployment

Sylvain Gibier
 

Hi,

I ended up regenerating the whole stack using v234 instead, and it worked -
using the same consul certificates.

Sylvain

On Wed, Apr 13, 2016 at 7:54 PM, Christian Ang <cang(a)pivotal.io> wrote:

Hi Sylvain,

It looks like your problem might be that one or more of the consul
certificates in your cf manifest is not a valid PEM encoded certificate, or
the certificates are missing entirely. Do the consul properties in your cf
manifest look approximately like this (with your own certificates and keys):


https://github.com/cloudfoundry-incubator/consul-release/blob/master/manifests/aws/multi-az-ssl.yml#L122-L261

Also, if you decode your certificates by running `openssl x509 -in
server-ca.crt -text -noout`, do they appear to be valid?

If they are invalid, you can try regenerating them using
`scripts/generate-consul-certs` and copying each file's contents into the
appropriate place in your cf manifest's consul properties.

Thanks,
Christian and George


Re: How can we customized "404 Not Found"

Stanley Shen <meteorping@...>
 

Thanks Mike.
Could you show me how I can do things like that, or point me to some documentation about it?

Or are there other ways to do that?

Thanks,
Stanley


Re: Doppler/Firehose - Multiline Log Entry

Mike Youngstrom <youngm@...>
 

I'm an idiot. I see what you and Eric are saying now. Put the code in
Dropsonde then let the Executor simply initialize Dropsonde that way.
Works for me.

Thanks,
Mike

On Wed, Apr 13, 2016 at 5:26 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

My last 2 cents: it'll be configurable, so it will only be active for users of
dropsonde that want the functionality, such as the Executor.

On Wed, Apr 13, 2016 at 5:21 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

You may want to reference the issue I created on executor.

In that issue I note that I don't think dropsonde is the right place to
do this token replacement because dropsonde doesn't know that the event
originally came through the limited stdout/stderr interface that needs this
functionality. However, executor does. If I'm using the dropsonde API
directly, where I can safely put newline characters, I don't want dropsonde
looking to replace a character I don't want replaced, especially since that
character replacement isn't even needed when using a richer interface
like dropsonde directly.

That's my 2 cents.

Thanks,
Mike

On Wed, Apr 13, 2016 at 4:34 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

We're going to look into it
<https://www.pivotaltracker.com/story/show/117583365>.

On Wed, Apr 13, 2016 at 12:33 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Thanks, Mike. If source-side processing is the right place to do
that \u2028-to-newline substitution, I think that there could also be a
config option on the dropsonde library to have its LogSender perform that
within each message before forwarding it on. The local metron-agent could
also do that processing. I think it's appropriate to push as much of that
log processing as possible to the Loggregator components and libraries:
it's already a bit much that the executor knows anything at all about the
content of the byte-streams that it receives from the stdout and stderr of
a process in the container, so that it can break those streams into the
log-lines that the dropsonde library expects.

Best,
Eric

On Wed, Apr 13, 2016 at 11:00 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Thanks for the insight, Jim. I still think that the Executor is the
place to fix this, since multi-line logging isn't a Loggregator limitation;
it is a log-injection limitation, which is owned by the Executor. I'll open an
issue with Diego and see how it goes.

Thanks,
Mike

On Tue, Apr 12, 2016 at 2:51 PM, Jim CF Campbell <jcampbell(a)pivotal.io
wrote:
That strategy is going to be hard to sell. Diego's Executor takes the
log lines out of Garden and drops them into dropsonde messages. I doubt
they'll think it's a good idea to implement substitution in that
processing. You can certainly ask Eric - he's very aware of the underlying
problem.

After that point, the Loggregator system does its best to touch
messages as little as possible, and to improve performance and reliability
we have ideas for the future that will lower the amount of touching even
further. The next place that log message processing can be done is
either in a nozzle, or in the ingester of a log aggregator.

I'd vote for those downstream places - a single configuration and
algorithm instead of one distributed across runner VMs.

On Tue, Apr 12, 2016 at 2:15 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

I was thinking whoever demarcates and submits the original event to
loggregator: dea_logging_agent and the equivalent in Diego. Doing it at
that point could provide a bit more flexibility. I know this isn't
necessarily the loggregator team's code but I think loggregator team buy
off would be important for those projects to accept such a PR.

Unless you can think of a better place to make that transformation
within the loggregator processing chain?

Mike

On Tue, Apr 12, 2016 at 2:02 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

what exactly do you mean by "event creation time"?

On Tue, Apr 12, 2016 at 1:57 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Before I submit the CLI issue let me ask one more question.

Would it be better to replace the newline token with \n at event
creation time instead of asking the CLI, Splunk, anyone listening on the
firehose, etc. to do so?

The obvious downside is this would probably need to be a global
configuration. However, I know my organization wouldn't have a problem
swapping \u2028 with \n for a deployment. The feature would obviously be
off by default.

Thoughts?

Mike

On Tue, Apr 12, 2016 at 11:24 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Sounds good. I'll submit an issue to start the discussion. I
imagine the first question Dies will ask though is if you would support
something like that. :)

Mike

On Tue, Apr 12, 2016 at 11:12 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

cf logs
<https://github.com/cloudfoundry/cli/blob/40eb5be48eaac697c3700d5f1e6f654bae471cec/cf/commands/application/logs.go>
is actually maintained by the CLI team under Dies
<https://www.pivotaltracker.com/n/projects/892938>. You can
talk to them. I'll certainly support you by helping explain the need. I'd
think we want a general solution (token in ENV for instance).



On Tue, Apr 12, 2016 at 11:02 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Jim,

If I submitted a CLI PR to change the cf logs command to
substitute \u2028 with \n, could the loggregator team get behind that?

Mike

On Tue, Apr 12, 2016 at 10:20 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Mike,

When you get a bit more desperate ;-) here is a nozzle plug in
<https://github.com/jtuchscherer/nozzle-plugin> for the CLI.
It attaches to the firehose to display everything, but it would be easy to
modify to just look at a single app, and sub out the magic token for
newlines.

Jim

On Tue, Apr 12, 2016 at 9:56 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi David,

The problem for me is that I'm searching for a solution that
works for development (though less of a priority because you can switch
config between dev and cf) and for viewing logs via "cf logs" in addition
to a log aggregator. I had hoped that \u2028 would work for viewing logs
via "cf logs" but it doesn't in bash. I'd need to write a plugin or
something for cf logs and train all my users to use it. Certainly possible,
but I'm not that desperate yet. :)

Mike

On Tue, Apr 12, 2016 at 5:58 AM, David Laing <
david(a)davidlaing.com> wrote:

FWIW, the technique is to have your logging solution (eg,
logback, log4j) log a token (eg, \u2028) other than \n to
denote line breaks in your stack traces; and then have your log aggregation
software replace that token with a \n again when processing the log
messages.

If \u2028 doesn't work in your environment; use something
else; eg NEWLINE

On Mon, 11 Apr 2016 at 21:12 Mike Youngstrom <
youngm(a)gmail.com> wrote:

Finally got around to testing this. Preliminary testing
shows that "\u2028" doesn't function as a newline
character in bash and causes the eclipse console to wig out. I don't think
"\u2028" is a viable long-term solution. Hope you make
progress on a metric format available to an app in a container. I too
would like a tracker link to such a feature if there is one.

Thanks,
Mike

On Mon, Mar 14, 2016 at 2:28 PM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi Jim,

So, to be clear what we're basically doing is using
unicode newline character to fool loggregator (which is looking for \n)
into thinking that it isn't a new log event right? Does \u2028 work as a
new line character when tailing logs in the CLI? Anyone tried this unicode
new line character in various consoles? IDE, xterm, etc? I'm wondering if
developers will need to have different config for development.

Mike

On Mon, Mar 14, 2016 at 12:17 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Hi Mike and Alex,

Two things - for Java, we are working toward defining an
enhanced metric format that will support transport of Multi Lines.

The second is this workaround that David Laing suggested
for Logstash. Think you could use it for Splunk?

With the Java Logback library you can do this by adding
"%replace(%xException){'\n','\u2028'}%nopex" to your logging config[1] ,
and then use the following logstash conf.[2]
Replace the unicode newline character \u2028 with \n,
which Kibana will display as a new line.

mutate {

gsub => [ "[@message]", '\u2028', "

"]
^^^ Seems that passing a string with an actual newline in
it is the only way to make gsub work

}

to replace the token with a regular newline again so it
displays "properly" in Kibana.

[1] github.com/dpin...ication.yml#L12
<https://github.com/dpinto-pivotal/cf-SpringBootTrader-config/blob/master/application.yml#L12>

[2] github.com/logs...se.conf#L60-L64
<https://github.com/logsearch/logsearch-for-cloudfoundry/blob/master/src/logsearch-config/src/logstash-filters/snippets/firehose.conf#L60-L64>
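
For a Java-based consumer of the firehose (rather than Logstash), the
equivalent of the gsub above is just a one-line substitution. A minimal,
purely illustrative sketch (the class and method names are made up):

// Restore real newlines in a log message that used the Unicode LINE
// SEPARATOR (U+2028) as a stand-in token while it passed through Loggregator.
public final class MultilineLogs {

    private MultilineLogs() {
    }

    // Equivalent of the Logstash gsub above, for a Java-based consumer.
    public static String restoreNewlines(String message) {
        return message.replace('\u2028', '\n');
    }
}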


On Mon, Mar 14, 2016 at 11:11 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

I'll let the Loggregator team respond formally. But, in
my conversations with the Loggregator team I think we're basically stuck
not sure what the right thing to do is on the client side. How does the
client trigger in loggregator that this is a multi line log message or what
is the right way for loggregator to detect that the client is trying to
send a multi line log message? Any ideas?

Mike

On Mon, Mar 14, 2016 at 10:25 AM, Aliaksandr Prysmakou <
prysmakou(a)gmail.com> wrote:

Hi guys,
Are there any updates on the "Multiline Log Entry"
issue? How do we correctly deal with stacktraces?
Any links to the tracker to read?
----
Alex Prysmakou / Altoros
Tel: (617) 841-2121 ext. 5161 | Toll free: 855-ALTOROS
Skype: aliaksandr.prysmakou
www.altoros.com | blog.altoros.com |
twitter.com/altoros


--
Jim Campbell | Product Manager | Cloud Foundry |
Pivotal.io | 303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963


Re: Doppler/Firehose - Multiline Log Entry

Jim CF Campbell
 

My last 2 cents: it'll be configurable, so it will only be active for users of
dropsonde that want the functionality, such as the Executor.

On Wed, Apr 13, 2016 at 5:21 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

You may want to reference the issue I created on executor.

In that issue I note that I don't think dropsonde is the right place to do
this token replacement because dropsonde doesn't know that the event
originally came through the limited stdout/stderr interface that needs this
functionality. However, executor does. If I'm using the dropsonde API
directly, where I can safely put newline characters, I don't want dropsonde
looking to replace a character I don't want replaced, especially since that
character replacement isn't even needed when using a richer interface
like dropsonde directly.

That's my 2 cents.

Thanks,
Mike

On Wed, Apr 13, 2016 at 4:34 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

We're going to look into it
<https://www.pivotaltracker.com/story/show/117583365>.

On Wed, Apr 13, 2016 at 12:33 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Thanks, Mike. If source-side processing is the right place to do
that \u2028-to-newline substitution, I think that there could also be a
config option on the dropsonde library to have its LogSender perform that
within each message before forwarding it on. The local metron-agent could
also do that processing. I think it's appropriate to push as much of that
log processing as possible to the Loggregator components and libraries:
it's already a bit much that the executor knows anything at all about the
content of the byte-streams that it receives from the stdout and stderr of
a process in the container, so that it can break those streams into the
log-lines that the dropsonde library expects.

Best,
Eric

On Wed, Apr 13, 2016 at 11:00 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Thanks for the insight, Jim. I still think that the Executor is the
place to fix this, since multi-line logging isn't a Loggregator limitation;
it is a log-injection limitation, which is owned by the Executor. I'll open an
issue with Diego and see how it goes.

Thanks,
Mike

On Tue, Apr 12, 2016 at 2:51 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

That strategy is going to be hard to sell. Diego's Executor takes the
log lines out of Garden and drops them into dropsonde messages. I doubt
they'll think it's a good idea to implement substitution in that
processing. You can certainly ask Eric - he's very aware of the underlying
problem.

After that point, the Loggregator system does its best to touch
messages as little as possible, and to improve performance and reliability
we have ideas for the future that will lower the amount of touching even
further. The next place that log message processing can be done is
either in a nozzle, or in the ingester of a log aggregator.

I'd vote for those downstream places - a single configuration and
algorithm instead of one distributed across runner VMs.

On Tue, Apr 12, 2016 at 2:15 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

I was thinking whoever demarcates and submits the original event to
loggregator: dea_logging_agent and the equivalent in Diego. Doing it at
that point could provide a bit more flexibility. I know this isn't
necessarily the loggregator team's code but I think loggregator team buy
off would be important for those projects to accept such a PR.

Unless you can think of a better place to make that transformation
within the loggregator processing chain?

Mike

On Tue, Apr 12, 2016 at 2:02 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

what exactly do you mean by "event creation time"?

On Tue, Apr 12, 2016 at 1:57 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Before I submit the CLI issue let me ask one more question.

Would it be better to replace the newline token with \n at event
creation time instead of asking the CLI, Splunk, anyone listening on the
firehose, etc. to do so?

The obvious downside is this would probably need to be a global
configuration. However, I know my organization wouldn't have a problem
swapping \u2028 with \n for a deployment. The feature would obviously be
off by default.

Thoughts?

Mike

On Tue, Apr 12, 2016 at 11:24 AM, Mike Youngstrom <youngm(a)gmail.com
wrote:
Sounds good. I'll submit an issue to start the discussion. I
imagine the first question Dies will ask though is if you would support
something like that. :)

Mike

On Tue, Apr 12, 2016 at 11:12 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

cf logs
<https://github.com/cloudfoundry/cli/blob/40eb5be48eaac697c3700d5f1e6f654bae471cec/cf/commands/application/logs.go>
is actually maintained by the CLI team under Dies
<https://www.pivotaltracker.com/n/projects/892938>. You can talk
to them. I'll certainly support you by helping explain the need. I'd think
we want a general solution (token in ENV for instance).



On Tue, Apr 12, 2016 at 11:02 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Jim,

If I submitted a CLI PR to change the cf logs command to
substitute \u2028 with \n, could the loggregator team get behind that?

Mike

On Tue, Apr 12, 2016 at 10:20 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Mike,

When you get a bit more desperate ;-) here is a nozzle plug in
<https://github.com/jtuchscherer/nozzle-plugin> for the CLI.
It attaches to the firehose to display everything, but it would be easy to
modify to just look at a single app, and sub out the magic token for
newlines.

Jim

On Tue, Apr 12, 2016 at 9:56 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi David,

The problem for me is that I'm searching for a solution that
works for development (though less of a priority because you can switch
config between dev and cf) and for viewing logs via "cf logs" in addition
to a log aggregator. I had hoped that \u2028 would work for viewing logs
via "cf logs" but it doesn't in bash. I'd need to write a plugin or
something for cf logs and train all my users to use it. Certainly possible,
but I'm not that desperate yet. :)

Mike

On Tue, Apr 12, 2016 at 5:58 AM, David Laing <
david(a)davidlaing.com> wrote:

FWIW, the technique is to have your logging solution (eg,
logback, log4j) log a token (eg, \u2028) other than \n to
denote line breaks in your stack traces; and then have your log aggregation
software replace that token with a \n again when processing the log
messages.

If \u2028 doesn't work in your environment; use something
else; eg NEWLINE

On Mon, 11 Apr 2016 at 21:12 Mike Youngstrom <
youngm(a)gmail.com> wrote:

Finally got around to testing this. Preliminary testing
shows that "\u2028" doesn't function as a newline
character in bash and causes the eclipse console to wig out. I don't think
"\u2028" is a viable long-term solution. Hope you make
progress on a metric format available to an app in a container. I too
would like a tracker link to such a feature if there is one.

Thanks,
Mike

On Mon, Mar 14, 2016 at 2:28 PM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi Jim,

So, to be clear what we're basically doing is using unicode
newline character to fool loggregator (which is looking for \n) into
thinking that it isn't a new log event right? Does \u2028 work as a new
line character when tailing logs in the CLI? Anyone tried this unicode new
line character in various consoles? IDE, xterm, etc? I'm wondering if
developers will need to have different config for development.

Mike

On Mon, Mar 14, 2016 at 12:17 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Hi Mike and Alex,

Two things - for Java, we are working toward defining an
enhanced metric format that will support transport of Multi Lines.

The second is this workaround that David Laing suggested
for Logstash. Think you could use it for Splunk?

With the Java Logback library you can do this by adding
"%replace(%xException){'\n','\u2028'}%nopex" to your logging config[1] ,
and then use the following logstash conf.[2]
Replace the unicode newline character \u2028 with \n,
which Kibana will display as a new line.

mutate {

gsub => [ "[@message]", '\u2028', "

"]
^^^ Seems that passing a string with an actual newline in
it is the only way to make gsub work

}

to replace the token with a regular newline again so it
displays "properly" in Kibana.

[1] github.com/dpin...ication.yml#L12
<https://github.com/dpinto-pivotal/cf-SpringBootTrader-config/blob/master/application.yml#L12>

[2] github.com/logs...se.conf#L60-L64
<https://github.com/logsearch/logsearch-for-cloudfoundry/blob/master/src/logsearch-config/src/logstash-filters/snippets/firehose.conf#L60-L64>


On Mon, Mar 14, 2016 at 11:11 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

I'll let the Loggregator team respond formally. But, in
my conversations with the Loggregator team I think we're basically stuck
not sure what the right thing to do is on the client side. How does the
client trigger in loggregator that this is a multi line log message or what
is the right way for loggregator to detect that the client is trying to
send a multi line log message? Any ideas?

Mike

On Mon, Mar 14, 2016 at 10:25 AM, Aliaksandr Prysmakou <
prysmakou(a)gmail.com> wrote:

Hi guys,
Are there any updates on the "Multiline Log Entry" issue?
How do we correctly deal with stacktraces?
Any links to the tracker to read?
----
Alex Prysmakou / Altoros
Tel: (617) 841-2121 ext. 5161 | Toll free: 855-ALTOROS
Skype: aliaksandr.prysmakou
www.altoros.com | blog.altoros.com | twitter.com/altoros


--
Jim Campbell | Product Manager | Cloud Foundry |
Pivotal.io | 303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963
--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963


Re: Doppler/Firehose - Multiline Log Entry

Mike Youngstrom <youngm@...>
 

You may want to reference the issue I created on executor.

In that issue I note that I don't think dropsonde is the right place to do
this token replacement because dropsonde doesn't know that the event
originally came through the limited stdout/stderr interface that needs this
functionality. However, executor does. If I'm using the dropsonde API
directly, where I can safely put newline characters, I don't want dropsonde
looking to replace a character I don't want replaced, especially since that
character replacement isn't even needed when using a richer interface
like dropsonde directly.

That's my 2 cents.

Thanks,
Mike

On Wed, Apr 13, 2016 at 4:34 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

We're going to look into it
<https://www.pivotaltracker.com/story/show/117583365>.

On Wed, Apr 13, 2016 at 12:33 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Thanks, Mike. If source-side processing is the right place to do
that \u2028-to-newline substitution, I think that there could also be a
config option on the dropsonde library to have its LogSender perform that
within each message before forwarding it on. The local metron-agent could
also do that processing. I think it's appropriate to push as much of that
log processing as possible to the Loggregator components and libraries:
it's already a bit much that the executor knows anything at all about the
content of the byte-streams that it receives from the stdout and stderr of
a process in the container, so that it can break those streams into the
log-lines that the dropsonde library expects.

Best,
Eric

On Wed, Apr 13, 2016 at 11:00 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Thanks for the insight, Jim. I still think that the Executor is the
place to fix this, since multi-line logging isn't a Loggregator limitation;
it is a log-injection limitation, which is owned by the Executor. I'll open an
issue with Diego and see how it goes.

Thanks,
Mike

On Tue, Apr 12, 2016 at 2:51 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

That strategy is going to be hard to sell. Diego's Executor takes the
log lines out of Garden and drops them into dropsonde messages. I doubt
they'll think it's a good idea to implement substitution in that
processing. You can certainly ask Eric - he's very aware of the underlying
problem.

After that point, the Loggregator system does its best to touch
messages as little as possible, and to improve performance and reliability
we have ideas for the future that will lower the amount of touching even
further. The next place that log message processing can be done is
either in a nozzle, or in the ingester of a log aggregator.

I'd vote for those downstream places - a single configuration and
algorithm instead of one distributed across runner VMs.

On Tue, Apr 12, 2016 at 2:15 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

I was thinking whoever demarcates and submits the original event to
loggregator: dea_logging_agent and the equivalent in Diego. Doing it at
that point could provide a bit more flexibility. I know this isn't
necessarily the loggregator team's code but I think loggregator team buy
off would be important for those projects to accept such a PR.

Unless you can think of a better place to make that transformation
within the loggregator processing chain?

Mike

On Tue, Apr 12, 2016 at 2:02 PM, Jim CF Campbell <jcampbell(a)pivotal.io
wrote:
what exactly do you mean by "event creation time"?

On Tue, Apr 12, 2016 at 1:57 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Before I submit the CLI issue let me ask one more question.

Would it be better to replace the newline token with \n at event
creation time instead of asking the CLI, Splunk, anyone listening on the
firehose, etc. to do so?

The obvious downside is this would probably need to be a global
configuration. However, I know my organization wouldn't have a problem
swapping \u2028 with \n for a deployment. The feature would obviously be
off by default.

Thoughts?

Mike

On Tue, Apr 12, 2016 at 11:24 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Sounds good. I'll submit an issue to start the discussion. I
imagine the first question Dies will ask though is if you would support
something like that. :)

Mike

On Tue, Apr 12, 2016 at 11:12 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

cf logs
<https://github.com/cloudfoundry/cli/blob/40eb5be48eaac697c3700d5f1e6f654bae471cec/cf/commands/application/logs.go>
is actually maintained by the CLI team under Dies
<https://www.pivotaltracker.com/n/projects/892938>. You can talk
to them. I'll certainly support you by helping explain the need. I'd think
we want a general solution (token in ENV for instance).



On Tue, Apr 12, 2016 at 11:02 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Jim,

If I submitted a CLI PR to change the cf logs command to
substitute \u2028 with \n, could the loggregator team get behind that?

Mike

On Tue, Apr 12, 2016 at 10:20 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Mike,

When you get a bit more desperate ;-) here is a nozzle plug in
<https://github.com/jtuchscherer/nozzle-plugin> for the CLI.
It attaches to the firehose to display everything, but it would be easy to
modify to just look at a single app, and sub out the magic token for
newlines.

Jim

On Tue, Apr 12, 2016 at 9:56 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi David,

The problem for me is that I'm searching for a solution that
works for development (though less of a priority because you can switch
config between dev and cf) and for viewing logs via "cf logs" in addition
to a log aggregator. I had hoped that \u2028 would work for viewing logs
via "cf logs" but it doesn't in bash. I'd need to write a plugin or
something for cf logs and train all my users to use it. Certainly possible,
but I'm not that desperate yet. :)

Mike

On Tue, Apr 12, 2016 at 5:58 AM, David Laing <
david(a)davidlaing.com> wrote:

FWIW, the technique is to have your logging solution (eg,
logback, log4j) log a token (eg, \u2028) other than \n to
denote line breaks in your stack traces; and then have your log aggregation
software replace that token with a \n again when processing the log
messages.

If \u2028 doesn't work in your environment; use something
else; eg NEWLINE

On Mon, 11 Apr 2016 at 21:12 Mike Youngstrom <youngm(a)gmail.com>
wrote:

Finally got around to testing this. Preliminary testing
shows that "\u2028" doesn't function as a newline
character in bash and causes the eclipse console to wig out. I don't think
"\u2028" is a viable long-term solution. Hope you make
progress on a metric format available to an app in a container. I too
would like a tracker link to such a feature if there is one.

Thanks,
Mike

On Mon, Mar 14, 2016 at 2:28 PM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi Jim,

So, to be clear what we're basically doing is using unicode
newline character to fool loggregator (which is looking for \n) into
thinking that it isn't a new log event right? Does \u2028 work as a new
line character when tailing logs in the CLI? Anyone tried this unicode new
line character in various consoles? IDE, xterm, etc? I'm wondering if
developers will need to have different config for development.

Mike

On Mon, Mar 14, 2016 at 12:17 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Hi Mike and Alex,

Two things - for Java, we are working toward defining an
enhanced metric format that will support transport of Multi Lines.

The second is this workaround that David Laing suggested
for Logstash. Think you could use it for Splunk?

With the Java Logback library you can do this by adding
"%replace(%xException){'\n','\u2028'}%nopex" to your logging config[1] ,
and then use the following logstash conf.[2]
Replace the unicode newline character \u2028 with \n, which
Kibana will display as a new line.

mutate {

gsub => [ "[@message]", '\u2028', "

"]
^^^ Seems that passing a string with an actual newline in
it is the only way to make gsub work

}

to replace the token with a regular newline again so it
displays "properly" in Kibana.

[1] github.com/dpin...ication.yml#L12
<https://github.com/dpinto-pivotal/cf-SpringBootTrader-config/blob/master/application.yml#L12>

[2] github.com/logs...se.conf#L60-L64
<https://github.com/logsearch/logsearch-for-cloudfoundry/blob/master/src/logsearch-config/src/logstash-filters/snippets/firehose.conf#L60-L64>


On Mon, Mar 14, 2016 at 11:11 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

I'll let the Loggregator team respond formally. But, in
my conversations with the Loggregator team I think we're basically stuck
not sure what the right thing to do is on the client side. How does the
client trigger in loggregator that this is a multi line log message or what
is the right way for loggregator to detect that the client is trying to
send a multi line log message? Any ideas?

Mike

On Mon, Mar 14, 2016 at 10:25 AM, Aliaksandr Prysmakou <
prysmakou(a)gmail.com> wrote:

Hi guys,
Are there any updates on the "Multiline Log Entry" issue?
How do we correctly deal with stacktraces?
Any links to the tracker to read?
----
Alex Prysmakou / Altoros
Tel: (617) 841-2121 ext. 5161 | Toll free: 855-ALTOROS
Skype: aliaksandr.prysmakou
www.altoros.com | blog.altoros.com | twitter.com/altoros


--
Jim Campbell | Product Manager | Cloud Foundry |
Pivotal.io | 303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963


Re: Doppler/Firehose - Multiline Log Entry

Jim CF Campbell
 

On Wed, Apr 13, 2016 at 12:33 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Thanks, Mike. If source-side processing is the right place to do
that \u2028-to-newline substitution, I think that there could also be a
config option on the dropsonde library to have its LogSender perform that
within each message before forwarding it on. The local metron-agent could
also do that processing. I think it's appropriate to push as much of that
log processing as possible to the Loggregator components and libraries:
it's already a bit much that the executor knows anything at all about the
content of the byte-streams that it receives from the stdout and stderr of
a process in the container, so that it can break those streams into the
log-lines that the dropsonde library expects.

Best,
Eric

On Wed, Apr 13, 2016 at 11:00 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Thanks for the insight, Jim. I still think that the Executor is the place
to fix this, since multi-line logging isn't a Loggregator limitation; it is a
log-injection limitation, which is owned by the Executor. I'll open an issue
with Diego and see how it goes.

Thanks,
Mike

On Tue, Apr 12, 2016 at 2:51 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

That strategy is going to be hard to sell. Diego's Executor takes the
log lines out of Garden and drops them into dropsonde messages. I doubt
they'll think it's a good idea to implement substitution in that
processing. You can certainly ask Eric - he's very aware of the underlying
problem.

After that point, the Loggregator system does its best to touch
messages as little as possible, and to improve performance and reliability
we have ideas for the future that will lower the amount of touching even
further. The next place that log message processing can be done is
either in a nozzle, or in the ingester of a log aggregator.

I'd vote for those downstream places - a single configuration and
algorithm instead of one distributed across runner VMs.

On Tue, Apr 12, 2016 at 2:15 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

I was thinking whoever demarcates and submits the original event to
loggregator: dea_logging_agent and the equivalent in Diego. Doing it at
that point could provide a bit more flexibility. I know this isn't
necessarily the loggregator team's code but I think loggregator team buy
off would be important for those projects to accept such a PR.

Unless you can think of a better place to make that transformation
within the loggregator processing chain?

Mike

On Tue, Apr 12, 2016 at 2:02 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

what exactly do you mean by "event creation time"?

On Tue, Apr 12, 2016 at 1:57 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Before I submit the CLI issue let me ask one more question.

Would it be better to replace the newline token with \n at event
creation time instead of asking the CLI, Splunk, anyone listening on the
firehose, etc. to do so?

The obvious downside is this would probably need to be a global
configuration. However, I know my organization wouldn't have a problem
swapping \u2028 with \n for a deployment. The feature would obviously be
off by default.

Thoughts?

Mike

On Tue, Apr 12, 2016 at 11:24 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Sounds good. I'll submit an issue to start the discussion. I
imagine the first question Dies will ask though is if you would support
something like that. :)

Mike

On Tue, Apr 12, 2016 at 11:12 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

cf logs
<https://github.com/cloudfoundry/cli/blob/40eb5be48eaac697c3700d5f1e6f654bae471cec/cf/commands/application/logs.go>
is actually maintained by the CLI team under Dies
<https://www.pivotaltracker.com/n/projects/892938>. You can talk
to them. I'll certainly support you by helping explain the need. I'd think
we want a general solution (token in ENV for instance).



On Tue, Apr 12, 2016 at 11:02 AM, Mike Youngstrom <youngm(a)gmail.com
wrote:
Jim,

If I submitted a CLI PR to change the cf logs command to
substitute \u2028 with \n, could the loggregator team get behind that?

Mike

On Tue, Apr 12, 2016 at 10:20 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Mike,

When you get a bit more desperate ;-) here is a nozzle plug in
<https://github.com/jtuchscherer/nozzle-plugin> for the CLI.
It attaches to the firehose to display everything, but it would be easy to
modify to just look at a single app, and sub out the magic token for
newlines.

Jim

On Tue, Apr 12, 2016 at 9:56 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi David,

The problem for me is that I'm searching for a solution that
works for development (though less of a priority because you can switch
config between dev and cf) and for viewing logs via "cf logs" in addition
to a log aggregator. I had hoped that \u2028 would work for viewing logs
via "cf logs" but it doesn't in bash. I'd need to write a plugin or
something for cf logs and train all my users to use it. Certainly possible,
but I'm not that desperate yet. :)

Mike

On Tue, Apr 12, 2016 at 5:58 AM, David Laing <
david(a)davidlaing.com> wrote:

FWIW, the technique is to have your logging solution (eg,
logback, log4j) log a token (eg, \u2028) other than \n to
denote line breaks in your stack traces; and then have your log aggregation
software replace that token with a \n again when processing the log
messages.

If \u2028 doesn't work in your environment; use something
else; eg NEWLINE

On Mon, 11 Apr 2016 at 21:12 Mike Youngstrom <youngm(a)gmail.com>
wrote:

Finally got around to testing this. Preliminary testing
shows that "\u2028" doesn't function as a newline
character in bash and causes the eclipse console to wig out. I don't think
"\u2028" is a viable long-term solution. Hope you make
progress on a metric format available to an app in a container. I too
would like a tracker link to such a feature if there is one.

Thanks,
Mike

On Mon, Mar 14, 2016 at 2:28 PM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi Jim,

So, to be clear what we're basically doing is using unicode
newline character to fool loggregator (which is looking for \n) into
thinking that it isn't a new log event right? Does \u2028 work as a new
line character when tailing logs in the CLI? Anyone tried this unicode new
line character in various consoles? IDE, xterm, etc? I'm wondering if
developers will need to have different config for development.

Mike

On Mon, Mar 14, 2016 at 12:17 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Hi Mike and Alex,

Two things - for Java, we are working toward defining an
enhanced metric format that will support transport of Multi Lines.

The second is this workaround that David Laing suggested for
Logstash. Think you could use it for Splunk?

With the Java Logback library you can do this by adding
"%replace(%xException){'\n','\u2028'}%nopex" to your logging config[1] ,
and then use the following logstash conf.[2]
Replace the unicode newline character \u2028 with \n, which
Kibana will display as a new line.

mutate {

gsub => [ "[@message]", '\u2028', "

"]
^^^ Seems that passing a string with an actual newline in it
is the only way to make gsub work

}

to replace the token with a regular newline again so it
displays "properly" in Kibana.

[1] github.com/dpin...ication.yml#L12
<https://github.com/dpinto-pivotal/cf-SpringBootTrader-config/blob/master/application.yml#L12>

[2] github.com/logs...se.conf#L60-L64
<https://github.com/logsearch/logsearch-for-cloudfoundry/blob/master/src/logsearch-config/src/logstash-filters/snippets/firehose.conf#L60-L64>


On Mon, Mar 14, 2016 at 11:11 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

I'll let the Loggregator team respond formally. But, in my
conversations with the Loggregator team I think we're basically stuck not
sure what the right thing to do is on the client side. How does the client
trigger in loggregator that this is a multi line log message or what is the
right way for loggregator to detect that the client is trying to send a
multi line log message? Any ideas?

Mike

On Mon, Mar 14, 2016 at 10:25 AM, Aliaksandr Prysmakou <
prysmakou(a)gmail.com> wrote:

Hi guys,
Are there any updates on the "Multiline Log Entry" issue?
How do we correctly deal with stacktraces?
Any links to the tracker to read?
----
Alex Prysmakou / Altoros
Tel: (617) 841-2121 ext. 5161 | Toll free: 855-ALTOROS
Skype: aliaksandr.prysmakou
www.altoros.com | blog.altoros.com | twitter.com/altoros


--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io
| 303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963

--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io |
303.618.0963
--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963


Container termination timeout, warden aligning

ndemeshchenko <neveragny@...>
 

Hi,
I have a pull request to the warden repo to expose the container termination
timeout in the yml config.
But I've been told that it needs to be discussed with the Diego team, since it
can affect Diego and needs to be aligned with it.

Diego team, can someone please give me a hand with it?

warden pull request: https://github.com/cloudfoundry/warden/pull/108



--
View this message in context: http://cf-dev.70369.x6.nabble.com/Container-termination-timeout-warden-aligning-tp4609.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: [cf-bosh] Re: Re: [Metron Agent] failed to generate job templates with metron agent on top of OpenStack Dynamic network

Amit Kumar Gupta
 

Hi Yitao,

I would recommend either using manual networks, or following the GitHub
issue Ben R created above to see if the BOSH team can figure out the root
cause of this issue with dynamic networks.

Best,
Amit
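
For reference, here is a minimal sketch of what the manual-network
alternative mentioned above could look like in the deployment manifest. This
is illustrative only: the network name, CIDR, gateway, DNS servers, and
reserved range below are placeholders for your environment (the net_id and
security group are copied from the dynamic-network example later in this
thread), and BOSH hands out the unreserved addresses itself.

networks:
- name: manual-net
  type: manual
  subnets:
  - range: 10.0.0.0/24
    gateway: 10.0.0.1
    dns:
    - 114.114.114.114
    - 8.8.8.8
    reserved:
    - 10.0.0.2 - 10.0.0.100   # addresses BOSH must not use
    cloud_properties:
      net_id: 0700ae03-4b38-464e-b40d-0a9c8dd18ff0
      security_groups:
      - Test OS SG_20160128T070152Z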

On Tue, Apr 12, 2016 at 6:46 PM, Yitao Jiang <jiangyt.cn(a)gmail.com> wrote:

Thanks Amit.

If I use a dynamic network, OpenStack can allocate the IP address; if I then
use that IP address to configure metron_agent and launch the VM, would that
possibly solve the issue?

On Tue, Apr 12, 2016 at 1:24 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

This will not work with dynamic networks. Many jobs in cf-release rely
on data from BOSH to determine their IP so that configuration files can be
rendered up-front by the director rather than at runtime, requiring system
calls to determine IP. metron_agent is one such job, and it tends to be
colocated with each other job (it is what allows all system component logs
to be aggregated through the loggregator system), so this would require all
Cloud Foundry VMs to be on a manual network. You don't need to manually
pick the IPs, you just need to tell BOSH which IPs in the network not to
use and specify these in the "reserved" range.

Since so many different components depend on being able to determine
their IP via BOSH data, there's no quick workaround if you want to stick to
using dynamic networks, but we're aware of this current limitation.

Best,
Amit

On Mon, Apr 11, 2016 at 7:23 PM, Yitao Jiang <jiangyt.cn(a)gmail.com>
wrote:

Is it a bug in CF or BOSH?

On Fri, Apr 8, 2016 at 12:08 PM, Ben R <vagcom.ben(a)gmail.com> wrote:

I have the same issue. It has to do with every release since bosh 248.
However, dynamic networks with older bosh releases + cf-231/cf-231 work.

This must be a bug.

Ben R


On Thu, Apr 7, 2016 at 8:55 PM, Yitao Jiang <jiangyt.cn(a)gmail.com>
wrote:

Hi,guys

When deploying CF on top of OpenStack with a dynamic network, the jobs
fail for metron_agent with:
Error filling in template 'syslog_forwarder.conf.erb' (line 44:
undefined method `strip' for nil:NilClass)

Here are the related logs:

Detecting deployment changes
----------------------------
Releases
cf
version type changed: String -> Fixnum
- 233
+ 233

Compilation
No changes

Update
± canaries:
- 1
+ 0

Resource pools
No changes

Disk pools
No changes

Networks
dynamic-net
+ name: dynamic-net
subnets
10.0.0.0/24
cloud_properties
+ net_id: 0700ae03-4b38-464e-b40d-0a9c8dd18ff0
+ security_groups: ["Test OS SG_20160128T070152Z"]
+ dns: ["114.114.114.114", "8.8.8.8"]

+ range: 10.0.0.0/24
+ name: Test OS Sub Internal Network_20160128T070152Z

+ type: dynamic


Jobs
stats_z1

± networks:

- {"name"=>"cf1"}

+ {"name"=>"dynamic-net"}


Properties
No changes


Meta
No changes


Please review all changes carefully

Deploying
---------

Are you sure you want to deploy? (type 'yes' to continue): yes


Director task 57
Started preparing deployment > Preparing deployment. Done (00:00:03)


Error 100: Unable to render instance groups for deployment. Errors
are:
- Unable to render jobs for instance group 'stats_z1'. Errors are:

- Unable to render templates for job 'metron_agent'. Errors are:

- Error filling in template 'syslog_forwarder.conf.erb' (line
44: undefined method `strip' for nil:NilClass)

Task 57 error

For a more detailed error report, run: bosh task 57 --debug

As the IPs are managed by OpenStack, BOSH cannot get the actual IP address
of each VM until the VM is alive, which means the generated job spec doesn't
contain the IP address info.
So, must I configure the network type as manual?

Snippet of the deployment yml:

- name: dynamic-net
  subnets:
  - cloud_properties:
      net_id: 0700ae03-4b38-464e-b40d-0a9c8dd18ff0
      security_groups:
      - Test OS SG_20160128T070152Z
    dns:
    - 114.114.114.114
    - 8.8.8.8
    range: 10.0.0.0/24
    name: Test OS Sub Internal Network_20160128T070152Z
  type: dynamic

Rendered job spec:

{"deployment"=>"staging-01", "job"=

{"name"=>"stats_z1", "templates"=>[{"name"=>"collector",
"version"=>"6c210292f18d129e9a037fe7053836db2d494344",
"sha1"=>"38927f47b15c2daf6c8a2e7c760e73e5ff90
dfd4", "blobstore_id"=>"23531029-0ee1-4267-8863-b5f931afaecb"},
{"name"=>"metron_agent",
"version"=>"2b80a211127fc642fc8bb0d14d7eb30c37730db3", "sha1"=>"150f2
7445c2ef960951c1f26606525d41ec629b2",
"blobstore_id"=>"e87174dc-f3f7-4768-94cd-74f299813528"}],
"template"=>"collector", "version"=>"6c210292f18d129e9a037fe70
53836db2d494344", "sha1"=>"38927f47b15c2daf6c8a2e7c760e73e5ff90dfd4",
"blobstore_id"=>"23531029-0ee1-4267-8863-b5f931afaecb"}, "index"=>0,
"bootstrap"=>true,
"name"=>"stats_z1", "id"=>"99f349d0-fb5d-4de7-9912-3de5559d2f19",
"az"=>nil,

*"networks"=>{"dynamic-net"=>{"type"=>"dynamic",
"cloud_properties"=>{"net_id"=>"0700ae03-4b38-464e-b40d-0a9c8dd18ff0",
"security_groups"=>["Test OS SG_20160128T070152Z"]},
"dns"=>["114.114.114.114", "8.8.8.8", "10.0.0.13"], "default"=>["dns",
"gateway"],
"dns_record_name"=>"0.stats-z1.dynamic-net.staging-01.microbosh"}}*,
"properties"=>{"collector"=>{"aws"=>{
"access_key_id"=>nil, "secret_access_key"=>nil},
"datadog"=>{"api_key"=>nil, "application_key"=>nil},
"deployment_name"=>nil, "logging_level"=>"info", "interv
als"=>{"discover"=>60, "healthz"=>30, "local_metrics"=>30,
"nats_ping"=>30, "prune"=>300, "varz"=>30}, "use_aws_cloudwatch"=>false,
"use_datadog"=>false, "use
_tsdb"=>false, "opentsdb"=>{"address"=>nil, "port"=>nil},
"use_graphite"=>false, "graphite"=>{"address"=>nil, "port"=>nil},
"memory_threshold"=>800}, "nats"=>
{"machines"=>["10.0.0.127"], "password"=>"NATS_PASSWORD",
"port"=>4222, "user"=>"NATS_USER"},
"syslog_daemon_config"=>{"address"=>nil, "port"=>nil, "transport
"=>"tcp", "fallback_addresses"=>[], "custom_rule"=>"",
"max_message_size"=>"4k"},
"metron_agent"=>{"dropsonde_incoming_port"=>3457, "preferred_protocol"=>"udp
", "tls"=>{"client_cert"=>"", "client_key"=>""}, "debug"=>false,
"zone"=>"z1", "deployment"=>"ya-staging-01",
"tcp"=>{"batching_buffer_bytes"=>10240, "batchin
g_buffer_flush_interval_milliseconds"=>100},
"logrotate"=>{"freq_min"=>5, "rotate"=>7, "size"=>"50M"},
"buffer_size"=>10000, "enable_buffer"=>false}, "metron_
endpoint"=>{"shared_secret"=>"LOGGREGATOR_ENDPOINT_SHARED_SECRET"},
"loggregator"=>{"tls"=>{"ca_cert"=>""}, "dropsonde_incoming_port"=>3457,
"etcd"=>{"machine
s"=>["10.0.0.133"], "maxconcurrentrequests"=>10}}},
"dns_domain_name"=>"microbosh", "links"=>{},
"address"=>"99f349d0-fb5d-4de7-9912-3de5559d2f19.stats-z1.dyn
amic-net.ya-staging-01.microbosh", "persistent_disk"=>0,
"resource_pool"=>"small_z1"}​

--

Regards,

Yitao

--

Regards,

Yitao

--

Regards,

Yitao


Re: Doppler/Firehose - Multiline Log Entry

Mike Youngstrom <youngm@...>
 

Rather than continue this discussion here I've created an issue stating my
case here: https://github.com/cloudfoundry-incubator/executor/issues/17

Mike

On Wed, Apr 13, 2016 at 12:33 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Thanks, Mike. If source-side processing is the right place to do
that \u2028-to-newline substitution, I think that there could also be a
config option on the dropsonde library to have its LogSender perform that
within each message before forwarding it on. The local metron-agent could
also do that processing. I think it's appropriate to push as much of that
log processing as possible to the Loggregator components and libraries:
it's already a bit much that the executor knows anything at all about the
content of the byte-streams that it receives from the stdout and stderr of
a process in the container, so that it can break those streams into the
log-lines that the dropsonde library expects.

Best,
Eric

On Wed, Apr 13, 2016 at 11:00 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Thanks for the insight, Jim. I still think that the Executor is the place
to fix this, since multi-line logging isn't a Loggregator limitation; it is a
log-injection limitation, which is owned by the Executor. I'll open an issue
with Diego and see how it goes.

Thanks,
Mike

On Tue, Apr 12, 2016 at 2:51 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

That strategy is going to be hard to sell. Diego's Executor takes the
log lines out of Garden and drops them into dropsonde messages. I doubt
they'll think it's a good idea to implement substitution in that
processing. You can certainly ask Eric - he's very aware of the underlying
problem.

After that point, the Loggregator system does its best to touch
messages as little as possible, and to improve performance and reliability
we have ideas for the future that will lower the amount of touching even
further. The next place that log message processing can be done is
either in a nozzle, or in the ingester of a log aggregator.

I'd vote for those downstream places - a single configuration and
algorithm instead of one distributed across runner VMs.

On Tue, Apr 12, 2016 at 2:15 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

I was thinking whoever demarcates and submits the original event to
loggregator: dea_logging_agent and the equivalent in Diego. Doing it at
that point could provide a bit more flexibility. I know this isn't
necessarily the loggregator team's code but I think loggregator team buy
off would be important for those projects to accept such a PR.

Unless you can think of a better place to make that transformation
within the loggregator processing chain?

Mike

On Tue, Apr 12, 2016 at 2:02 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

what exactly do you mean by "event creation time"?

On Tue, Apr 12, 2016 at 1:57 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Before I submit the CLI issue let me ask one more question.

Would it be better to replace the newline token with \n at event
creation time, instead of asking the CLI, Splunk, anyone listening on the
firehose, etc. to do so?

The obvious downside is that this would probably need to be a global
configuration. However, I know my organization wouldn't have a problem
swapping \u2028 with \n for a deployment. The feature would obviously be
off by default.

Thoughts?

Mike

On Tue, Apr 12, 2016 at 11:24 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Sounds good. I'll submit an issue to start the discussion. I
imagine the first question Dies will ask though is if you would support
something like that. :)

Mike

On Tue, Apr 12, 2016 at 11:12 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

cf logs
<https://github.com/cloudfoundry/cli/blob/40eb5be48eaac697c3700d5f1e6f654bae471cec/cf/commands/application/logs.go>
is actually maintained by the CLI team under Dies
<https://www.pivotaltracker.com/n/projects/892938>. You can talk
to them. I'll certainly support you by helping explain the need. I'd think
we want a general solution (token in ENV for instance).



On Tue, Apr 12, 2016 at 11:02 AM, Mike Youngstrom <youngm(a)gmail.com
wrote:
Jim,

If I submitted a CLI PR to change the cf logs command to
substitute \u2028 with \n, could the loggregator team get behind that?

Mike

On Tue, Apr 12, 2016 at 10:20 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Mike,

When you get a bit more desperate ;-) here is a nozzle plugin
<https://github.com/jtuchscherer/nozzle-plugin> for the CLI.
It attaches to the firehose to display everything, but would be easy to
modify to just look at a single app and sub out the magic token for
newlines.

Jim

On Tue, Apr 12, 2016 at 9:56 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi David,

The problem for me is that I'm searching for a solution that
works for development (though less of a priority because you can switch
config between dev and CF) and for viewing logs via "cf logs" in addition
to a log aggregator. I had hoped that \u2028 would work for viewing logs
via "cf logs", but it doesn't in bash. I'd need to write a plugin or
something for cf logs and train all my users to use it. Certainly possible,
but I'm not that desperate yet. :)

Mike

On Tue, Apr 12, 2016 at 5:58 AM, David Laing <
david(a)davidlaing.com> wrote:

FWIW, the technique is to have your logging solution (e.g.,
logback, log4j) log a token (e.g., \u2028) other than \n to
denote line breaks in your stack traces, and then have your log aggregation
software replace that token with a \n again when processing the log
messages.

If \u2028 doesn't work in your environment, use something
else, e.g. NEWLINE.
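
For a Go app writing its own logs to stdout, the same technique looks roughly like this (a sketch, not tied to any particular logging library):

package main

import (
        "fmt"
        "runtime/debug"
        "strings"
)

// logMultiline writes a multi-line message as a single line, using the
// \u2028 line separator in place of real newlines so the platform treats
// the whole thing as one log event.
func logMultiline(msg string) {
        fmt.Println(strings.Replace(msg, "\n", "\u2028", -1))
}

func main() {
        defer func() {
                if r := recover(); r != nil {
                        logMultiline(fmt.Sprintf("panic: %v\n%s", r, debug.Stack()))
                }
        }()
        panic("something broke")
}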

On Mon, 11 Apr 2016 at 21:12 Mike Youngstrom <youngm(a)gmail.com>
wrote:

Finally got around to testing this. Preliminary testing shows
that "\u2028" doesn't function as a newline character in
bash and causes the Eclipse console to wig out. I don't think "\u2028"
is a viable long-term solution. Hope you make progress on a metric format
available to an app in a container. I too would like a tracker link to
such a feature if there is one.

Thanks,
Mike

On Mon, Mar 14, 2016 at 2:28 PM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi Jim,

So, to be clear, what we're basically doing is using a Unicode
newline character to fool loggregator (which is looking for \n) into
thinking that it isn't a new log event, right? Does \u2028 work as a newline
character when tailing logs in the CLI? Has anyone tried this Unicode
newline character in various consoles (IDE, xterm, etc.)? I'm wondering if
developers will need to have different config for development.

Mike

On Mon, Mar 14, 2016 at 12:17 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Hi Mike and Alex,

Two things - for Java, we are working toward defining an
enhanced metric format that will support transport of Multi Lines.

The second is this workaround that David Laing suggested for
Logstash. Think you could use it for Splunk?

With the Java Logback library you can do this by adding
"%replace(%xException){'\n','\u2028'}%nopex" to your logging config [1],
and then use the following Logstash conf [2] to replace the Unicode
newline character \u2028 with \n, which Kibana will display as a new line:

mutate {

gsub => [ "[@message]", '\u2028', "

"]
^^^ Seems that passing a string with an actual newline in it
is the only way to make gsub work

}

This replaces the token with a regular newline again so it
displays "properly" in Kibana.

[1] github.com/dpin...ication.yml#L12
<https://github.com/dpinto-pivotal/cf-SpringBootTrader-config/blob/master/application.yml#L12>

[2] github.com/logs...se.conf#L60-L64
<https://github.com/logsearch/logsearch-for-cloudfoundry/blob/master/src/logsearch-config/src/logstash-filters/snippets/firehose.conf#L60-L64>


On Mon, Mar 14, 2016 at 11:11 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

I'll let the Loggregator team respond formally. But, in my
conversations with the Loggregator team, I think we're basically stuck, not
sure what the right thing to do is on the client side. How does the client
signal to loggregator that this is a multi-line log message, or what is the
right way for loggregator to detect that the client is trying to send a
multi-line log message? Any ideas?

Mike

On Mon, Mar 14, 2016 at 10:25 AM, Aliaksandr Prysmakou <
prysmakou(a)gmail.com> wrote:

Hi guys,
Are there any updates on the "Multiline Log Entry" issue?
What is the correct way to deal with stack traces?
Any links to the tracker to read?
----
Alex Prysmakou / Altoros
Tel: (617) 841-2121 ext. 5161 | Toll free: 855-ALTOROS
Skype: aliaksandr.prysmakou
www.altoros.com | blog.altoros.com | twitter.com/altoros


--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963


Re: Staging and Runtime Hooks Feature Narrative

Ted Young
 

Hi Troy,

I agree that we should bring our buildpack implementation up to date with
Heroku's by adding the .profile runtime hook. Also, I believe Eric Malm had
some thoughts on this hook, and on how we could use something like it to make
application start aware of when an app is in "pre-flight" and delay
starting the health monitor.

However, I am concerned about adding the staging hooks. In particular, what
is being proposed is that we fork the buildpack workflow from Heroku. That
sounds like a pretty big public maneuver, as it could make our buildpacks
no longer compatible with Heroku's. I would suggest we not fork the
buildpack model, and instead consult with Heroku about improving it.

The primary reason given for staging hooks is not that this feature is
necessary for Cloud Foundry users, but that it will help with demos. In
general, I do not think we should make large changes just to help with
demos; there must be broader utility that is commensurate with the size of
the change. I understand you have found utility in these hooks as a
useful shim, and users would like to shim things as well. However, some of
the use cases mentioned for staging hooks look like issues that are not
specific to staging (database migrations, etc.) but are instead "camping" on
staging as a convenience. Buildpack staging is very specific to a single
operating system (Linux) and a single deployment style (Docker apps cannot
use these features). For issues that are not specific to buildpacks, I
would prefer we deliver higher-level, cross-platform solutions.

My suggestion is to unpack the staging issues you raise, and separate out
the issues that are specific to buildpacks. For those issues, we open a
discussion with Heroku. For more general issues, we look for cf-wide
solutions.

Thanks,
Ted

On Tue, Apr 12, 2016 at 8:14 PM, John Shahid <jshahid(a)pivotal.io> wrote:

Hi Troy,

Thanks for putting together this proposal. I added some comments/questions
to the document. I would love your feedback/response on those. Most of the
comments are concerned with the lack of concrete use cases. I think adding
a few examples to each use case will clarify the value added by the hooks.

Cheers,

JS


On Mon, Apr 11, 2016 at 1:04 PM Mike Youngstrom <youngm(a)gmail.com> wrote:

An interesting proposal. Any thoughts about this proposal in relation to
multi-buildpacks [0]? How many of the use cases for this feature go away
with multi-buildpack support in place? I think it would be interesting to be
able to apply hooks without checking scripts into the application (like
multi-buildpack).

This feature also appears to be somewhat related to [1]. I hope that
someone is overseeing all these newly proposed buildpack features to help
ensure they are coherent.

Mike


[0]
https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/message/H64GGU6Z75CZDXNWC7CKUX64JNPARU6Y/
[1]
https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/thread/GRKFQ2UOQL7APRN6OTGET5HTOJZ7DHRQ/#SEA2RWDCAURSVPIMBXXJMWN7JYFQICL3

On Fri, Apr 8, 2016 at 4:16 PM, Troy Topnik <troy.topnik(a)hpe.com> wrote:

This feature gives developers more control over the staging and
deployment of their application code, without them having to fork existing
buildpacks or create their own.


https://docs.google.com/document/d/1PnTtTLwXOTG7f70ilWGlbTbi1LAXZu9zYnrUVvjr31I/edit

Hooks give developers the ability to optionally:
* run scripts in the staging container before and/or after the
bin/compile scripts executed by the buildpack, and
* run scripts in each app container before the app starts (via .profile
as per the Heroku buildpack API)

A similar feature has been available and used extensively in Stackato
for a few years, and we'd like to contribute this functionality back to
Cloud Foundry.

A proof-of-concept of this feature has already been submitted as a pull
request, and the Feature Narrative addresses many of the questions raised
in the PR discussion:


https://github.com/cloudfoundry-incubator/buildpack_app_lifecycle/pull/13

Please weigh in with comments in the document itself or in this thread.

Thanks,

TT


Re: Staging and Runtime Hooks Feature Narrative

Troy Topnik
 

I think many of the use cases *could* be dealt with by multi-buildpack support or in a forked buildpack which executes the desired commands. However, the point of the feature is to allow the addition of these commands to the staging process without having to create an additional buildpack or make modifications to an existing one.

In my experience, users who know the application framework they work with may not be that familiar with the buildpacks that deploy them (e.g. Java developers trying to figure out the Ruby code in the Java buildpack). This allows them to make small "one-off" modifications for a particular application deployment.

TT


Re: Doppler/Firehose - Multiline Log Entry

Eric Malm <emalm@...>
 

Thanks, Mike. If source-side processing is the right place to do
that \u2028-to-newline substitution, I think that there could also be a
config option on the dropsonde library to have its LogSender perform that
within each message before forwarding it on. The local metron-agent could
also do that processing. I think it's appropriate to push as much of that
log processing as possible to the Loggregator components and libraries:
it's already a bit much that the executor knows anything at all about the
content of the byte-streams that it receives from the stdout and stderr of
a process in the container, so that it can break those streams into the
log-lines that the dropsonde library expects.

Best,
Eric

On Wed, Apr 13, 2016 at 11:00 AM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Thanks for the insight, Jim. I still think that the Executor is the place
to fix this, since multi-line logging isn't a Loggregator limitation; it is a
log injection limitation, which is owned by the Executor. I'll open an issue
with Diego and see how it goes.

Thanks,
Mike

On Tue, Apr 12, 2016 at 2:51 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

That strategy is going to be hard to sell. Diego's Executor takes the log
lines out of Garden and drops them into dropsonde messages. I doubt they'll
think it's a good idea to implement substitution in that processing. You
can certainly ask Eric - he's very aware of the underlying problem.

After that point, the Loggregator system does its best to touch messages
as little as possible, and to improve performance and reliability we are
thinking about future changes that will lower the amount of touching even
further. The next place that log message processing can be done is either
in a nozzle or in the ingester of a log aggregator.

I'd vote for those downstream places - a single configuration and
algorithm instead of distributed across runner VMs.

On Tue, Apr 12, 2016 at 2:15 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

I was thinking of whoever demarcates and submits the original event to
loggregator: dea_logging_agent and the equivalent in Diego. Doing it at
that point could provide a bit more flexibility. I know this isn't
necessarily the loggregator team's code, but I think loggregator team buy-in
would be important for those projects to accept such a PR.

Unless you can think of a better place to make that transformation
within the loggregator processing chain?

Mike

On Tue, Apr 12, 2016 at 2:02 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

what exactly do you mean by "event creation time"?

On Tue, Apr 12, 2016 at 1:57 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Before I submit the CLI issue let me ask one more question.

Would it be better to replace the newline token with \n at event
creation time, instead of asking the CLI, Splunk, anyone listening on the
firehose, etc. to do so?

The obvious downside is that this would probably need to be a global
configuration. However, I know my organization wouldn't have a problem
swapping \u2028 with \n for a deployment. The feature would obviously be
off by default.

Thoughts?

Mike

On Tue, Apr 12, 2016 at 11:24 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Sounds good. I'll submit an issue to start the discussion. I
imagine the first question Dies will ask though is if you would support
something like that. :)

Mike

On Tue, Apr 12, 2016 at 11:12 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

cf logs
<https://github.com/cloudfoundry/cli/blob/40eb5be48eaac697c3700d5f1e6f654bae471cec/cf/commands/application/logs.go>
is actually maintained by the CLI team under Dies
<https://www.pivotaltracker.com/n/projects/892938>. You can talk to
them. I'll certainly support you by helping explain the need. I'd think we
want a general solution (token in ENV for instance).



On Tue, Apr 12, 2016 at 11:02 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Jim,

If I submitted a CLI PR to change the cf logs command to substitute
\u2028 with \n, could the loggregator team get behind that?

Mike

On Tue, Apr 12, 2016 at 10:20 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Mike,

When you get a bit more desperate ;-) here is a nozzle plugin
<https://github.com/jtuchscherer/nozzle-plugin> for the CLI. It
attaches to the firehose to display everything, but would be easy to modify
to just look at a single app and sub out the magic token for newlines.

Jim

On Tue, Apr 12, 2016 at 9:56 AM, Mike Youngstrom <youngm(a)gmail.com
wrote:
Hi David,

The problem for me is that I'm searching for a solution that
works for development (though less of a priority because you can switch
config between dev and CF) and for viewing logs via "cf logs" in addition
to a log aggregator. I had hoped that \u2028 would work for viewing logs
via "cf logs", but it doesn't in bash. I'd need to write a plugin or
something for cf logs and train all my users to use it. Certainly possible,
but I'm not that desperate yet. :)

Mike

On Tue, Apr 12, 2016 at 5:58 AM, David Laing <
david(a)davidlaing.com> wrote:

FWIW, the technique is to have your logging solution (e.g.,
logback, log4j) log a token (e.g., \u2028) other than \n to
denote line breaks in your stack traces, and then have your log aggregation
software replace that token with a \n again when processing the log
messages.

If \u2028 doesn't work in your environment, use something else,
e.g. NEWLINE.

On Mon, 11 Apr 2016 at 21:12 Mike Youngstrom <youngm(a)gmail.com>
wrote:

Finally got around to testing this. Preliminary testing shows
that "\u2028" doesn't function as a newline character in bash
and causes the Eclipse console to wig out. I don't think "\u2028"
is a viable long-term solution. Hope you make progress on a metric format
available to an app in a container. I too would like a tracker link to
such a feature if there is one.

Thanks,
Mike

On Mon, Mar 14, 2016 at 2:28 PM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi Jim,

So, to be clear, what we're basically doing is using a Unicode
newline character to fool loggregator (which is looking for \n) into
thinking that it isn't a new log event, right? Does \u2028 work as a newline
character when tailing logs in the CLI? Has anyone tried this Unicode
newline character in various consoles (IDE, xterm, etc.)? I'm wondering if
developers will need to have different config for development.

Mike

On Mon, Mar 14, 2016 at 12:17 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Hi Mike and Alex,

Two things - for Java, we are working toward defining an
enhanced metric format that will support transport of Multi Lines.

The second is this workaround that David Laing suggested for
Logstash. Think you could use it for Splunk?

With the Java Logback library you can do this by adding
"%replace(%xException){'\n','\u2028'}%nopex" to your logging config [1],
and then use the following Logstash conf [2] to replace the Unicode
newline character \u2028 with \n, which Kibana will display as a new line:

mutate {

gsub => [ "[@message]", '\u2028', "

"]
^^^ Seems that passing a string with an actual newline in it
is the only way to make gsub work

}

This replaces the token with a regular newline again so it
displays "properly" in Kibana.

[1] github.com/dpin...ication.yml#L12
<https://github.com/dpinto-pivotal/cf-SpringBootTrader-config/blob/master/application.yml#L12>

[2] github.com/logs...se.conf#L60-L64
<https://github.com/logsearch/logsearch-for-cloudfoundry/blob/master/src/logsearch-config/src/logstash-filters/snippets/firehose.conf#L60-L64>


On Mon, Mar 14, 2016 at 11:11 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

I'll let the Loggregator team respond formally. But, in my
conversations with the Loggregator team, I think we're basically stuck, not
sure what the right thing to do is on the client side. How does the client
signal to loggregator that this is a multi-line log message, or what is the
right way for loggregator to detect that the client is trying to send a
multi-line log message? Any ideas?

Mike

On Mon, Mar 14, 2016 at 10:25 AM, Aliaksandr Prysmakou <
prysmakou(a)gmail.com> wrote:

Hi guys,
Are there any updates on the "Multiline Log Entry" issue?
What is the correct way to deal with stack traces?
Any links to the tracker to read?
----
Alex Prysmakou / Altoros
Tel: (617) 841-2121 ext. 5161 | Toll free: 855-ALTOROS
Skype: aliaksandr.prysmakou
www.altoros.com | blog.altoros.com | twitter.com/altoros


--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963


Re: Staging and Runtime Hooks Feature Narrative

Troy Topnik
 

Credit to Jan Dubois, who actually drafted the document, and who will be helping me respond to your comments in the document.

I'll provide concrete use cases where possible. The ones I have immediately to hand are specific to Stackato (which implements this feature a little differently) and in some cases rely on features not available in CF (e.g. OS package installs). I'll try to generalize the examples, or at least explain how they would be relevant to core CF.

TT


Re: Doppler/Firehose - Multiline Log Entry

Mike Youngstrom <youngm@...>
 

Thanks for the insight, Jim. I still think that the Executor is the place
to fix this, since multi-line logging isn't a Loggregator limitation; it is a
log injection limitation, which is owned by the Executor. I'll open an issue
with Diego and see how it goes.

Thanks,
Mike

On Tue, Apr 12, 2016 at 2:51 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

That strategy is going to be hard to sell. Diego's Executor takes the log
lines out of Garden and drops them into dropsonde messages. I doubt they'll
think it's a good idea to implement substitution in that processing. You
can certainly ask Eric - he's very aware of the underlying problem.

After that point, the Loggregator system does its best to touch messages
as little as possible, and to improve performance and reliability we are
thinking about future changes that will lower the amount of touching even
further. The next place that log message processing can be done is either
in a nozzle or in the ingester of a log aggregator.

I'd vote for those downstream places - a single configuration and
algorithm instead of distributed across runner VMs.

On Tue, Apr 12, 2016 at 2:15 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

I was thinking of whoever demarcates and submits the original event to
loggregator: dea_logging_agent and the equivalent in Diego. Doing it at
that point could provide a bit more flexibility. I know this isn't
necessarily the loggregator team's code, but I think loggregator team buy-in
would be important for those projects to accept such a PR.

Unless you can think of a better place to make that transformation within
the loggregator processing chain?

Mike

On Tue, Apr 12, 2016 at 2:02 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

what exactly do you mean by "event creation time"?

On Tue, Apr 12, 2016 at 1:57 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Before I submit the CLI issue let me ask one more question.

Would it be better to replace the newline token with \n at event
creation time, instead of asking the CLI, Splunk, anyone listening on the
firehose, etc. to do so?

The obvious downside is that this would probably need to be a global
configuration. However, I know my organization wouldn't have a problem
swapping \u2028 with \n for a deployment. The feature would obviously be
off by default.

Thoughts?

Mike

On Tue, Apr 12, 2016 at 11:24 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Sounds good. I'll submit an issue to start the discussion. I imagine
the first question Dies will ask though is if you would support something
like that. :)

Mike

On Tue, Apr 12, 2016 at 11:12 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

cf logs
<https://github.com/cloudfoundry/cli/blob/40eb5be48eaac697c3700d5f1e6f654bae471cec/cf/commands/application/logs.go>
is actually maintained by the CLI team under Dies
<https://www.pivotaltracker.com/n/projects/892938>. You can talk to
them. I'll certainly support you by helping explain the need. I'd think we
want a general solution (token in ENV for instance).



On Tue, Apr 12, 2016 at 11:02 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Jim,

If I submitted a CLI PR to change the cf logs command to substitute
\u2028 with \n, could the loggregator team get behind that?

Mike

On Tue, Apr 12, 2016 at 10:20 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Mike,

When you get a bit more desperate ;-) here is a nozzle plugin
<https://github.com/jtuchscherer/nozzle-plugin> for the CLI. It
attaches to the firehose to display everything, but would be easy to modify
to just look at a single app and sub out the magic token for newlines.

Jim

On Tue, Apr 12, 2016 at 9:56 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Hi David,

The problem for me is that I'm searching for a solution that
works for development (though less of a priority because you can switch
config between dev and CF) and for viewing logs via "cf logs" in addition
to a log aggregator. I had hoped that \u2028 would work for viewing logs
via "cf logs", but it doesn't in bash. I'd need to write a plugin or
something for cf logs and train all my users to use it. Certainly possible,
but I'm not that desperate yet. :)

Mike

On Tue, Apr 12, 2016 at 5:58 AM, David Laing <david(a)davidlaing.com
wrote:
FWIW, the technique is to have your logging solution (e.g.,
logback, log4j) log a token (e.g., \u2028) other than \n to denote
line breaks in your stack traces, and then have your log aggregation
software replace that token with a \n again when processing the log
messages.

If \u2028 doesn't work in your environment, use something else,
e.g. NEWLINE.

On Mon, 11 Apr 2016 at 21:12 Mike Youngstrom <youngm(a)gmail.com>
wrote:

Finally got around to testing this. Preliminary testing shows
that "\u2028" doesn't function as a newline character in bash
and causes the Eclipse console to wig out. I don't think "\u2028"
is a viable long-term solution. Hope you make progress on a metric format
available to an app in a container. I too would like a tracker link to
such a feature if there is one.

Thanks,
Mike

On Mon, Mar 14, 2016 at 2:28 PM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi Jim,

So, to be clear, what we're basically doing is using a Unicode
newline character to fool loggregator (which is looking for \n) into
thinking that it isn't a new log event, right? Does \u2028 work as a newline
character when tailing logs in the CLI? Has anyone tried this Unicode
newline character in various consoles (IDE, xterm, etc.)? I'm wondering if
developers will need to have different config for development.

Mike

On Mon, Mar 14, 2016 at 12:17 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Hi Mike and Alex,

Two things - for Java, we are working toward defining an
enhanced metric format that will support transport of Multi Lines.

The second is this workaround that David Laing suggested for
Logstash. Think you could use it for Splunk?

With the Java Logback library you can do this by adding
"%replace(%xException){'\n','\u2028'}%nopex" to your logging config [1],
and then use the following Logstash conf [2] to replace the Unicode
newline character \u2028 with \n, which Kibana will display as a new line:

mutate {

gsub => [ "[@message]", '\u2028', "

"]
^^^ Seems that passing a string with an actual newline in it
is the only way to make gsub work

}

This replaces the token with a regular newline again so it
displays "properly" in Kibana.

[1] github.com/dpin...ication.yml#L12
<https://github.com/dpinto-pivotal/cf-SpringBootTrader-config/blob/master/application.yml#L12>

[2] github.com/logs...se.conf#L60-L64
<https://github.com/logsearch/logsearch-for-cloudfoundry/blob/master/src/logsearch-config/src/logstash-filters/snippets/firehose.conf#L60-L64>


On Mon, Mar 14, 2016 at 11:11 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

I'll let the Loggregator team respond formally. But, in my
conversations with the Loggregator team, I think we're basically stuck, not
sure what the right thing to do is on the client side. How does the client
signal to loggregator that this is a multi-line log message, or what is the
right way for loggregator to detect that the client is trying to send a
multi-line log message? Any ideas?

Mike

On Mon, Mar 14, 2016 at 10:25 AM, Aliaksandr Prysmakou <
prysmakou(a)gmail.com> wrote:

Hi guys,
Are there any updates on the "Multiline Log Entry" issue? What is
the correct way to deal with stack traces?
Any links to the tracker to read?
----
Alex Prysmakou / Altoros
Tel: (617) 841-2121 ext. 5161 | Toll free: 855-ALTOROS
Skype: aliaksandr.prysmakou
www.altoros.com | blog.altoros.com | twitter.com/altoros


--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963


Re: AWS / CF v233 deployment

Christian Ang
 

Hi Sylvain,

It looks like your problem might be that one or more of the consul certificates in your cf manifest is not a valid PEM-encoded certificate, or the certificates are missing entirely. Do the consul properties in your cf manifest look approximately like this (with your own certificates and keys):

https://github.com/cloudfoundry-incubator/consul-release/blob/master/manifests/aws/multi-az-ssl.yml#L122-L261

Also, if you decode your certificates by running `openssl x509 -in server-ca.crt -text -noout`, do they appear to be valid?
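
For anyone who prefers a programmatic check, here is a quick Go sketch that verifies a file contains a parseable PEM-encoded certificate (the file name is just an example, matching the openssl command above):

package main

import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "io/ioutil"
        "log"
)

func main() {
        // server-ca.crt is just an example path; point this at any of the
        // consul certificates pulled out of the cf manifest.
        data, err := ioutil.ReadFile("server-ca.crt")
        if err != nil {
                log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil || block.Type != "CERTIFICATE" {
                log.Fatal("file does not contain a PEM-encoded certificate")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
                log.Fatalf("certificate does not parse: %v", err)
        }
        fmt.Printf("subject=%s notAfter=%s\n", cert.Subject.CommonName, cert.NotAfter)
}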

If they are invalid, you can try regenerating them using `scripts/generate-consul-certs` and copying each file's contents into the appropriate place in your cf manifest's consul properties.

Thanks,
Christian and George