Re: Question regarding simple application statistics

Don Nelson
 

Thanks, Dan!


Re: Question regarding simple application statistics

Dan Wendorf
 

Hi Don,

The stats endpoint
<http://apidocs.cloudfoundry.org/234/apps/get_detailed_stats_for_a_started_app.html>
gives
statistics about each instance individually. In the returned JSON object,
each instance's stats are nested under its instance index (e.g.
`response["2"]`). To get total app usage, you would need to sum over all
indices.

If you wanted to look at quota information, you could also look at the
/v2/apps/:app_guid
<http://apidocs.cloudfoundry.org/234/apps/retrieve_a_particular_app.html>
endpoint, which will report the memory and disk that will be allocated to
each instance, as well as the number of instances. Multiplying the quota
numbers by the number of instances will give you the amount of resources
currently counting against your quota.
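
For example, here is a minimal sketch in Go of both calculations, assuming
the v2 response shapes described above (a stats object keyed by instance
index, with "mem_quota" and "usage" nested under "stats"); the field names
are assumptions to double-check against the API docs linked above:

package main

import (
    "encoding/json"
    "fmt"
)

// One per-instance entry in the /v2/apps/:guid/stats response
// (field names assumed from the v2 API docs).
type instanceStats struct {
    Stats struct {
        MemQuota int64 `json:"mem_quota"`
        Usage    struct {
            Mem int64 `json:"mem"`
        } `json:"usage"`
    } `json:"stats"`
}

func main() {
    // Example response body with two instances.
    body := []byte(`{
      "0": {"stats": {"mem_quota": 536870912, "usage": {"mem": 1048576}}},
      "1": {"stats": {"mem_quota": 536870912, "usage": {"mem": 2097152}}}}`)

    // The object is keyed by instance index ("0", "1", ...), so totals
    // are sums over all indices.
    var instances map[string]instanceStats
    if err := json.Unmarshal(body, &instances); err != nil {
        panic(err)
    }
    var usedMem, reservedMem int64
    for _, inst := range instances {
        usedMem += inst.Stats.Usage.Mem
        reservedMem += inst.Stats.MemQuota
    }
    fmt.Printf("memory: %d bytes used, %d bytes reserved\n", usedMem, reservedMem)
}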


Cheers,
Dan

On Wed, Apr 20, 2016 at 10:39 AM, Don Nelson <dieseldonx(a)gmail.com> wrote:

Someone recently asked me about the data returned from
/v2/apps/<APPLICATION_GUID>/stats, regarding whether the stats returned
were a total across all instances of the application. I assume that they
are, and that to compare usage to quota one would have to divide the usage
by number of instances. Can anyone confirm this for me?

Thanks in advance.


Question regarding simple application statistics

Don Nelson
 

Someone recently asked me about the data returned from /v2/apps/<APPLICATION_GUID>/stats, regarding whether the stats returned were a total across all instances of the application. I assume that they are, and that to compare usage to quota one would have to divide the usage by number of instances. Can anyone confirm this for me?

Thanks in advance.


Re: Postgresql data is not migrated after CF is upgraded from v197 to v230

Amit Kumar Gupta
 

Auto-migration does happen, but at some point we end support for certain
versions. Upgrading to 211 will automigrate your Postgres to 9.4.2.
Upgrading to 230 will automigrate that to 9.4.6.

On Tue, Apr 19, 2016 at 8:14 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com> wrote:

Hi, Amit



Thank you for your explanation. I will try this process out. However, I
would suggest taking auto-migration into consideration the next time any
CF component is upgraded.



Regards,

Maggie



*From:* Amit Gupta [mailto:agupta(a)pivotal.io]
*Sent:* April 19, 2016 15:20
*To:* Discussions about Cloud Foundry projects and the system overall.
*Subject:* [cf-dev] Re: Postgresql data is not migrated after CF is
upgraded from v197 to v230



Hi Maggie,



First off, please be sure to backup your data in case anything goes wrong
during an upgrade.



Also, do note that while private vendors of Cloud Foundry may support
upgrades between several different versions of Cloud Foundry, the open
source project currently only tests and guarantees upgrades between
consecutive final releases.



You're correct, at minimum, you will first need to upgrade from v197 to
some version between v211 (the first version which introduces the upgrade
from postgres 9.0 to 9.4.2) and v225 (the last version including such an
upgrade), and then upgrade to v230. I would review the release notes for
releases that you're skipping to see if there are any other important
releases that are actually not safe to skip.



I've updated the v226 release notes [1
<https://github.com/cloudfoundry/cf-release/releases/tag/v226>] to call
attention to this fact, thank you for highlighting this issue!



Please let me know if you have further questions about the upgrade process.



[1] https://github.com/cloudfoundry/cf-release/releases/tag/v226



Cheers,

Amit



On Mon, Apr 18, 2016 at 11:22 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com>
wrote:

Hi,



I found all org, space, and application information was lost after I
upgraded my CF env from v197 to v230. After checking the postgres node, I
found there were two directories under /var/vcap/store: one is “postgres” and
the other is “postgres-9.4.5”. I guess the reason all information got
lost is that data from the previous postgresql database (9.0.3) was not
migrated to the new postgresql database (9.4.5). I found two related releases.



V211



Postgres Job Upgrade

The Postgres Job will upgrade the postgres database to version 9.4.2.
Postgres will be unavailable during this upgrade.



V229



In support of work in progress to enable developers to specify application
ports when mapping routes, cf-release v229 introduces a database migration
for CCDB. For deployments that use a PostgreSQL database for CCDB that is
NOT the PostgreSQL job that comes with cf-release, v229 introduces the
following requirements. These requirements are applicable for subsequent
releases. If you are using the PostgreSQL job that comes with cf-release,
or if you are using MySQL as the backing db for CC, no action is necessary.

- PostgreSQL 9.1 is required at a minimum

- For versions 9.1-9.3, operators must first install the extension
uuid-ossp

- For versions 9.4 and newer, operators must first install the extension
pgcrypto (see the sketch below)
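
As a sketch of what installing the extension looks like (the connection
string is a placeholder; run it as a database superuser against the CCDB
before upgrading):

package main

import (
    "database/sql"
    "log"

    _ "github.com/lib/pq" // Postgres driver
)

func main() {
    // Placeholder DSN; point this at your CCDB as a superuser.
    db, err := sql.Open("postgres",
        "postgres://admin:secret@ccdb-host:5432/ccdb?sslmode=disable")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // pgcrypto for PostgreSQL 9.4 and newer; use uuid-ossp for 9.1-9.3.
    if _, err := db.Exec("CREATE EXTENSION IF NOT EXISTS pgcrypto"); err != nil {
        log.Fatal(err)
    }
}

Equivalently, from psql: CREATE EXTENSION IF NOT EXISTS pgcrypto;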



So does it mean I have to upgrade v197 to v211 and then upgrade to v230
after installing pgcrypto? Any help would be appreciated.



Thanks,

Maggie



Re: Postgresql data is not migrated after CF is upgraded from v197 to v230

MaggieMeng
 

Hi, Amit

Thank you for your explanation. I will try this process out. However, I would suggest taking auto-migration into consideration the next time any CF component is upgraded.

Regards,
Maggie



Re: Doppler/Firehose - Multiline Log Entry

Mike Jacobi
 

Another possible solution is to enhance platform logging by adding a unique event id, such as a UUID, to each message. This would likely help auditing efforts by making it possible to reference an exact message, and could give other systems a way to identify and aggregate the display of a single message that spans multiple lines or a longer span of time (out-of-order message receipt).
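
As a toy sketch of the idea (using the github.com/google/uuid package as an
assumed dependency), the emitting side would just stamp every message:

package main

import (
    "fmt"

    "github.com/google/uuid"
)

// Stamp each log message with a unique event id so downstream systems
// can correlate the pieces of a message that spans multiple lines or
// arrives out of order.
func emit(msg string) {
    fmt.Printf("event_id=%s msg=%q\n", uuid.NewString(), msg)
}

func main() {
    emit("java.lang.RuntimeException: boom")
    emit("    at com.example.Handler.handle(Handler.java:42)")
}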

Mike Jacobi / Altoros


Re: Request for Multibuildpack Use Cases

Troy Topnik
 

We used heroku-buildpack-multi to deploy OpenProject (NodeJS and Ruby).

https://github.com/Stackato-Apps/openproject

The manifest.yml in that repo has some Stackato-isms, but you get the idea.

Deploying this kind of Ruby+Node app (which I'm starting to see more of) without multibuildpacks would otherwise require a customized buildpack:

https://github.com/qnyp/heroku-buildpack-ruby-bower/tree/run-bower

It's easier for a developer to combine existing buildpacks than fork one to add functionality.
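
For reference, heroku-buildpack-multi is driven by a .buildpacks file at the
app root listing one buildpack URL per line; a minimal sketch for a
Node+Ruby app would be:

https://github.com/heroku/heroku-buildpack-nodejs
https://github.com/heroku/heroku-buildpack-ruby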


Re: Maven: Resolve Dependencies on Platform?

Matthew Tyson
 

Great thought, Jesse. It was a nested /node_modules directory that was the bulk of the size. I added that to .cfignore. Thanks.


Re: Doppler/Firehose - Multiline Log Entry

Eric Malm <emalm@...>
 

Thanks for pointing that out, Mike! From the executor commit history, it seems
to have been the path of least resistance when Diego switched to the newer
dropsonde API circa Dec 2014. It also looks to me like ScanLogStream might
need at least one goroutine per log-stream, whereas the current log
destination pushes data straight from the garden Process to the
dropsonde/logs functions. Sounds like a good thing to reconsider if we
consolidate those logging responsibilities after the Loggregator spike.

Thanks again,
Eric

On Tue, Apr 19, 2016 at 9:18 AM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

I added Eric's comments to the spike
<https://www.pivotaltracker.com/story/show/117583365>.

On Tue, Apr 19, 2016 at 10:10 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Yup, makes sense. I'm sure there is some valid reason for Diego not to
use log_sender.ScanLogStream today. When that gets sorted out, then the
event demarcation and this replacement will all be in the correct place.

Thanks,
Mike

On Tue, Apr 19, 2016 at 9:29 AM, Eric Malm <emalm(a)pivotal.io> wrote:

Thanks, Mike, don't be too hard on yourself. :) I think that's a valid
point that if there is processing to translate a byte-stream from process
stdout/stderr to loggregator log events, it would be beneficial to make
that as efficient as possible. Right now, the executor's stream-destination
type has all of that logic: it breaks the byte-stream into messages on
newlines and carriage returns, it breaks too-long messages on UTF-8
boundaries, and then it emits those messages to dropsonde with the
app-specific tagging. So from the standpoint of efficiency and coherence,
it probably would make sense to do this proposed \u2028-to-\n substitution
in the same single-pass as the other processing.

That said, the Diego team has gotten some of that processing wrong in
the executor: we've been breaking log messages at 4KiB when loggregator can
handle messages close to 64KiB (whatever ends up being the limit imposed by
the size of a UDP datagram and the additional fields in the event
envelope), and we just fixed a bug where we could break 3- and 4-byte UTF-8
sequences incorrectly at those message-length boundaries. So I also think
it could make sense for the dropsonde library to provide a type that does
the app-byte-stream-to-log-message processing that the executor's
stream-destination currently does. The executor could then instantiate a
pair of those and hook them up as io.Writer interfaces to the stdout and
stderr of the Garden process it invokes, as it does with its own
stream-destinations today. The stream-destinations could then also be
individually configured to do the substitution.

In any case, I think it makes sense to proceed with the Loggregator
spike, and then if we want to implement the solution for real the Diego and
Loggregator teams can figure out the best way to make it efficient and
maintainable.

Thanks,
Eric

On Wed, Apr 13, 2016 at 5:53 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

I'm an idiot. I see what you and Eric are saying now. Put the code in
Dropsonde then let the Executor simply initialize Dropsonde that way.
Works for me.

Thanks,
Mike

On Wed, Apr 13, 2016 at 5:26 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

My last 2 cents: it'll be configurable, so it will only be active for users
of dropsonde that want the functionality, such as the Executor.

On Wed, Apr 13, 2016 at 5:21 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

You may want to reference the issue I created on executor.

In that issue I note that I don't think dropsonde is the right place
to do this token replacement, because dropsonde doesn't know that the event
originally came through the limited stdout/stderr interface that needs this
functionality. However, executor does. If I'm using the dropsonde API
directly, where I can safely put newline characters, I don't want dropsonde
looking to replace a character I don't want replaced, especially since that
character replacement isn't even needed when using a richer interface
like dropsonde directly.

That's my 2 cents.

Thanks,
Mike

On Wed, Apr 13, 2016 at 4:34 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

We're going to look into it
<https://www.pivotaltracker.com/story/show/117583365>.

On Wed, Apr 13, 2016 at 12:33 PM, Eric Malm <emalm(a)pivotal.io>
wrote:

Thanks, Mike. If source-side processing is the right place to do
that \u2028-to-newline substitution, I think that there could also be a
config option on the dropsonde library to have its LogSender perform that
within each message before forwarding it on. The local metron-agent could
also do that processing. I think it's appropriate to push as much of that
log processing as possible to the Loggregator components and libraries:
it's already a bit much that the executor knows anything at all about the
content of the byte-streams that it receives from the stdout and stderr of
a process in the container, so that it can break those streams into the
log-lines that the dropsonde library expects.

Best,
Eric

On Wed, Apr 13, 2016 at 11:00 AM, Mike Youngstrom <youngm(a)gmail.com
wrote:
Thanks for the insight, Jim. I still think that the Executor is
the place to fix this, since multi-line logging isn't a Loggregator
limitation, it is a log-injection limitation, which is owned by the Executor.
I'll open an issue with Diego and see how it goes.

Thanks,
Mike

On Tue, Apr 12, 2016 at 2:51 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

That strategy is going to be hard to sell. Diego's Executor takes
the log lines out of Garden and drops them into dropsonde messages. I doubt
they'll think it's a good idea to implement substitution in that
processing. You can certainly ask Eric - he's very aware of the underlying
problem.

After that point, the Loggregator system does its best to touch
messages as little as possible, and to improve performance and reliability,
we have plans for the future that will lower the amount of touching
even further. The next place that log message processing can be done is
either in a nozzle, or the ingester of a log aggregator.

I'd vote for those downstream places - a single configuration
and algorithm instead of one distributed across runner VMs.

On Tue, Apr 12, 2016 at 2:15 PM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

I was thinking whoever demarcates and submits the original event
to loggregator: dea_logging_agent and the equivalent in Diego. Doing it
at that point could provide a bit more flexibility. I know this isn't
necessarily the loggregator team's code, but I think loggregator team buy-in
would be important for those projects to accept such a PR.

Unless you can think of a better place to make that
transformation within the loggregator processing chain?

Mike

On Tue, Apr 12, 2016 at 2:02 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

what exactly do you mean by "event creation time"?

On Tue, Apr 12, 2016 at 1:57 PM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Before I submit the CLI issue let me ask one more question.

Would it be better to replace the newline token with \n at
event creation time, instead of asking the cli, splunk, anyone listening on
the firehose, etc. to do so?

The obvious downside is this would probably need to be a
global configuration. However, I know my organization wouldn't have a
problem swapping \u2028 with \n for a deployment. The feature would
obviously be off by default.

Thoughts?

Mike

On Tue, Apr 12, 2016 at 11:24 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Sounds good. I'll submit an issue to start the discussion.
I imagine the first question Dies will ask though is if you would support
something like that. :)

Mike

On Tue, Apr 12, 2016 at 11:12 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

cf logs
<https://github.com/cloudfoundry/cli/blob/40eb5be48eaac697c3700d5f1e6f654bae471cec/cf/commands/application/logs.go>
is actually maintained by the CLI team under Dies
<https://www.pivotaltracker.com/n/projects/892938>. You can
talk to them. I'll certainly support you by helping explain the need. I'd
think we want a general solution (token in ENV for instance).



On Tue, Apr 12, 2016 at 11:02 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Jim,

If I submitted a CLI PR to change the cf logs command to
substitute \u2028 with \n, could the loggregator team get behind that?

Mike

On Tue, Apr 12, 2016 at 10:20 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Mike,

When you get a bit more desperate ;-) here is a nozzle
plug-in <https://github.com/jtuchscherer/nozzle-plugin>
for the CLI. It attaches to the firehose to display everything, but would
be easy to modify to just look at a single app, and sub out the magic token
for newlines.

Jim

On Tue, Apr 12, 2016 at 9:56 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi David,

The problem for me is that I'm searching for a solution
that works for development (though less of a priority, because you can
switch config between dev and cf) and for viewing logs via "cf logs" in
addition to a log aggregator. I had hoped that \u2028 would work for
viewing logs via "cf logs" but it doesn't in bash. I'd need to write a
plugin or something for cf logs and train all my users to use it.
Certainly possible, but I'm not that desperate yet. :)

Mike

On Tue, Apr 12, 2016 at 5:58 AM, David Laing <
david(a)davidlaing.com> wrote:

FWIW, the technique is to have your logging solution
(e.g., logback, log4j) log a token (e.g., \u2028) other
than \n to denote line breaks in your stack traces, and then have your log
aggregation software replace that token with a \n again when processing the
log messages.

If \u2028 doesn't work in your environment, use
something else, e.g. NEWLINE.

On Mon, 11 Apr 2016 at 21:12 Mike Youngstrom <
youngm(a)gmail.com> wrote:

Finally got around to testing this. Preliminary
testing shows that "\u2028" doesn't function as a newline
character in bash and causes the eclipse console to wig out. I don't
think "\u2028" is a viable long-term solution. Hope
you make progress on a metric format available to an app in a container. I
too would like a tracker link to such a feature if there is one.

Thanks,
Mike

On Mon, Mar 14, 2016 at 2:28 PM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi Jim,

So, to be clear, what we're basically doing is using a
unicode newline character to fool loggregator (which is looking for \n)
into thinking that it isn't a new log event, right? Does \u2028 work as a
newline character when tailing logs in the CLI? Has anyone tried this unicode
newline character in various consoles? IDE, xterm, etc.? I'm wondering if
developers will need to have different config for development.
Mike

On Mon, Mar 14, 2016 at 12:17 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Hi Mike and Alex,

Two things - for Java, we are working toward defining
an enhanced metric format that will support transport of Multi Lines.

The second is this workaround that David Laing
suggested for Logstash. Think you could use it for Splunk?

With the Java Logback library you can do this by
adding "%replace(%xException){'\n','\u2028'}%nopex" to your logging
config [1], and then using the following logstash conf [2] to replace the
unicode newline character \u2028 with a regular newline again, so that it
displays properly in Kibana:

mutate {
  # Passing a string with an actual newline in it seems to be the only
  # way to make gsub emit one.
  gsub => [ "[@message]", '\u2028', "
" ]
}

[1] github.com/dpin...ication.yml#L12
<https://github.com/dpinto-pivotal/cf-SpringBootTrader-config/blob/master/application.yml#L12>

[2] github.com/logs...se.conf#L60-L64
<https://github.com/logsearch/logsearch-for-cloudfoundry/blob/master/src/logsearch-config/src/logstash-filters/snippets/firehose.conf#L60-L64>


On Mon, Mar 14, 2016 at 11:11 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

I'll let the Loggregator team respond formally.
But, in my conversations with the Loggregator team, I think we're basically
stuck, not sure what the right thing to do is on the client side. How does
the client signal to loggregator that this is a multi-line log message, or
what is the right way for loggregator to detect that the client is trying
to send a multi-line log message? Any ideas?

Mike

On Mon, Mar 14, 2016 at 10:25 AM, Aliaksandr
Prysmakou <prysmakou(a)gmail.com> wrote:

Hi guys,
Are there any updates on the "Multiline Log Entry"
issue? What is the correct way to deal with stack traces?
Any links to the tracker to read?
----
Alex Prysmakou / Altoros
Tel: (617) 841-2121 ext. 5161 | Toll free:
855-ALTOROS
Skype: aliaksandr.prysmakou
www.altoros.com | blog.altoros.com |
twitter.com/altoros




Re: Doppler/Firehose - Multiline Log Entry

Jim CF Campbell
 

I added Eric's comments to the spike
<https://www.pivotaltracker.com/story/show/117583365>.



Re: Doppler/Firehose - Multiline Log Entry

Mike Youngstrom <youngm@...>
 

Yup, makes sense. I'm sure there is some valid reason for Diego not to use
log_sender.ScanLogStream today. When that gets sorted out, then the event
demarcation and this replacement will all be in the correct place.

Thanks,
Mike



Re: Maven: Resolve Dependencies on Platform?

Jesse T. Alford
 

The cf CLI also respects a .cfignore file in the top level of your repo, if
there's stuff you'd like to explicitly exclude from push.
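
.cfignore uses gitignore-style syntax; a minimal sketch that keeps local
build artifacts out of the upload:

node_modules/
tmp/
*.log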



Re: Maven: Resolve Dependencies on Platform?

Daniel Mikusa
 

It's the cf cli that does this. It looks through your JAR / WAR or push
path for files and then uses the resource matching endpoint to ask the
server what it needs to upload. Then it uploads just those files. I
believe the algorithm works based on file hashes; someone else might be
able to add more detail if you need it.
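
A rough sketch of the fingerprinting half in Go, assuming the matching is
keyed on SHA1 plus file size (the payload shape the v2 resource_match
endpoint suggests; verify against your CC version):

package main

import (
    "crypto/sha1"
    "encoding/json"
    "fmt"
    "io"
    "os"
    "path/filepath"
)

// fingerprint is one entry in the list a client could send to the
// resource-matching endpoint: the file's SHA1 and size in bytes.
type fingerprint struct {
    SHA1 string `json:"sha1"`
    Size int64  `json:"size"`
}

func fingerprints(root string) ([]fingerprint, error) {
    var fps []fingerprint
    err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
        if err != nil || info.IsDir() {
            return err
        }
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := sha1.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        fps = append(fps, fingerprint{fmt.Sprintf("%x", h.Sum(nil)), info.Size()})
        return nil
    })
    return fps, err
}

func main() {
    fps, err := fingerprints(".")
    if err != nil {
        panic(err)
    }
    out, _ := json.Marshal(fps)
    fmt.Println(string(out))
}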

Dan

On Tue, Apr 19, 2016 at 9:44 AM, Matthew Tyson <matthewcarltyson(a)gmail.com>
wrote:

It does help -- thanks guys. It looks like Heroku has a buildpack that
does this also (https://github.com/heroku/heroku-buildpack-java).

The question I am coming up with is this: how does Cloud Foundry identify
what files to upload? I know the detect script determines what buildpack
to use, but what determines what gets uploaded?


Re: Doppler/Firehose - Multiline Log Entry

Eric Malm <emalm@...>
 

Thanks, Mike, don't be too hard on yourself. :) I think that's a valid
point that if there is processing to translate a byte-stream from process
stdout/stderr to loggregator log events, it would be beneficial to make
that as efficient as possible. Right now, the executor's stream-destination
type has all of that logic: it breaks the byte-stream into messages on
newlines and carriage returns, it breaks too-long messages on UTF-8
boundaries, and then it emits those messages to dropsonde with the
app-specific tagging. So from the standpoint of efficiency and coherence,
it probably would make sense to do this proposed \u2028-to-\n substitution
in the same single-pass as the other processing.

That said, the Diego team has gotten some of that processing wrong in the
executor: we've been breaking log messages at 4KiB when loggregator can
handle messages close to 64KiB (whatever ends up being the limit imposed by
the size of a UDP datagram and the additional fields in the event
envelope), and we just fixed a bug where we could break 3- and 4-byte UTF-8
sequences incorrectly at those message-length boundaries. So I also think
it could make sense for the dropsonde library to provide a type that does
the app-byte-stream-to-log-message processing that the executor's
stream-destination currently does. The executor could then instantiate a
pair of those and hook them up as io.Writer interfaces to the stdout and
stderr of the Garden process it invokes, as it does with its own
stream-destinations today. The stream-destinations could then also be
individually configured to do the substitution.
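
As a hypothetical sketch of that shape (not the executor's actual code): a
writer that implements io.Writer, demarcates messages on newlines, and
applies the \u2028-to-\n substitution in the same pass:

package main

import (
    "bytes"
    "fmt"
    "strings"
)

// lineWriter is a hypothetical stream destination: it buffers the byte
// stream, splits it into messages on newlines, and (when enabled)
// substitutes \u2028 back to \n before emitting each message.
type lineWriter struct {
    buf        bytes.Buffer
    substitute bool
    emit       func(string) // e.g. a dropsonde log sender
}

func (w *lineWriter) Write(p []byte) (int, error) {
    w.buf.Write(p)
    for {
        line, err := w.buf.ReadString('\n')
        if err != nil {
            // No complete message yet; keep the remainder buffered.
            w.buf.WriteString(line)
            break
        }
        msg := strings.TrimSuffix(line, "\n")
        if w.substitute {
            msg = strings.ReplaceAll(msg, "\u2028", "\n")
        }
        w.emit(msg)
    }
    return len(p), nil
}

func main() {
    w := &lineWriter{substitute: true,
        emit: func(m string) { fmt.Printf("event: %q\n", m) }}
    fmt.Fprint(w, "panic: boom\u2028  at main.go:10\nnext event\n")
}

Hooking a pair of these up to a process's stdout and stderr would keep the
demarcation and substitution logic in one place.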

In any case, I think it makes sense to proceed with the Loggregator spike,
and then if we want to implement the solution for real the Diego and
Loggregator teams can figure out the best way to make it efficient and
maintainable.

Thanks,
Eric

On Wed, Apr 13, 2016 at 5:53 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

I'm an idiot. I see what you and Eric are saying now. Put the code in
Dropsonde then let the Executor simply initialize Dropsonde that way.
Works for me.

Thanks,
Mike

On Wed, Apr 13, 2016 at 5:26 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

My last 2 cents. It'll be configurable so will only be active in users of
dropsonde that want the functionality such as the Executor.

On Wed, Apr 13, 2016 at 5:21 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

You may want to reference the issue I created on executor.

In that issue I note that I don't think dropsonde is the right place to
do this token replacement because dropsonde doesn't know that the event
originally came through the limited stdout/stderr interface that needs this
functionality. However, executor does. If I'm using the dropsonde API
directly where I can safely put new line characters I don't want dropsonde
looking to replace a character I don't want replaced especially since that
character replacement isn't even needed when using a more rich interface
like dropsonde directly.

That's my 2 cents.

Thanks,
Mike

On Wed, Apr 13, 2016 at 4:34 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:

We're going to look into it
<https://www.pivotaltracker.com/story/show/117583365>.

On Wed, Apr 13, 2016 at 12:33 PM, Eric Malm <emalm(a)pivotal.io> wrote:

Thanks, Mike. If source-side processing is the right place to do
that \u2028-to-newline substitution, I think that there could also be a
config option on the dropsonde library to have its LogSender perform that
within each message before forwarding it on. The local metron-agent could
also do that processing. I think it's appropriate to push as much of that
log processing as possible to the Loggregator components and libraries:
it's already a bit much that the executor knows anything at all about the
content of the byte-streams that it receives from the stdout and stderr of
a process in the container, so that it can break those streams into the
log-lines that the dropsonde library expects.

Best,
Eric

On Wed, Apr 13, 2016 at 11:00 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Thanks for the insight Jim. I still think that the Executor is the
place to fix this since multi line logging isn't a Loggregator limitation
it is a log inject limitation which is owned by the Executor. I'll open an
issue with Diego and see how it goes.

Thanks,
Mike

On Tue, Apr 12, 2016 at 2:51 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

That strategy is going to be hard to sell. Diego's Executor takes
the log lines out of Garden and drops them into dropsonde messages. I doubt
they'll think it's a good idea to implement substitution in that
processing. You can certainly ask Eric - he's very aware of the underlying
problem.

After that point, the Loggregator system does it's best to touch
messages as little as possible, and to improve performance and reliability,
we have thinking about the future that will lower the amount of touching
ever further. The next place that log message processing can be done is
either in a nozzle, or the injester of a log aggregator.

I'd vote for those downstream places - a single configuration and
algorithm instead of distributed across runner VMs.

On Tue, Apr 12, 2016 at 2:15 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

I was thinking whoever demarcates and submits the original event to
loggregator. dea_logging_agent and the equivalent in Deigo. Doing it at
that point could provide a bit more flexibility. I know this isn't
necessarily the loggregator team's code but I think loggregator team buy
off would be important for those projects to accept such a PR.

Unless you can think of a better place to make that transformation
within the loggregator processing chain?

Mike

On Tue, Apr 12, 2016 at 2:02 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

what exactly do you mean by "event creation time"?

On Tue, Apr 12, 2016 at 1:57 PM, Mike Youngstrom <youngm(a)gmail.com
wrote:
Before I submit the CLI issue let me ask one more question.

Would it be better to replace the newline token with /n at event
creation time instead of asking the cli, splunk, anyone listening on the
firehose, etc. to do so?

The obvious downside is this would probably need to be a global
configuration. However, I know my organization wouldn't have a problem
swapping /u2028 with /n for a deployment. The feature would obviously be
off by default.

Thoughs?

Mike

On Tue, Apr 12, 2016 at 11:24 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Sounds good. I'll submit an issue to start the discussion. I
imagine the first question Dies will ask, though, is whether you would
support something like that. :)

Mike

On Tue, Apr 12, 2016 at 11:12 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

cf logs
<https://github.com/cloudfoundry/cli/blob/40eb5be48eaac697c3700d5f1e6f654bae471cec/cf/commands/application/logs.go>
is actually maintained by the CLI team under Dies
<https://www.pivotaltracker.com/n/projects/892938>. You can
talk to them. I'll certainly support you by helping explain the need. I'd
think we want a general solution (token in ENV for instance).



On Tue, Apr 12, 2016 at 11:02 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Jim,

If I submitted a CLI PR to change the cf logs command to
substitute \u2028 with \n, could the Loggregator team get behind that?

Mike

On Tue, Apr 12, 2016 at 10:20 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Mike,

When you get a bit more desperate ;-) here is a nozzle plugin
<https://github.com/jtuchscherer/nozzle-plugin> for the
CLI. It attaches to the firehose to display everything, but it would be
easy to modify it to look at just a single app and sub out the magic token
for newlines.

Jim

On Tue, Apr 12, 2016 at 9:56 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi David,

The problem for me is that I'm searching for a solution that
works for development (though that's less of a priority, because you can
switch config between dev and CF) and for viewing logs via "cf logs", in
addition to a log aggregator. I had hoped that \u2028 would work for
viewing logs via "cf logs", but it doesn't in bash. I'd need to write a
plugin or something for cf logs and train all my users to use it. Certainly
possible, but I'm not that desperate yet. :)

Mike

On Tue, Apr 12, 2016 at 5:58 AM, David Laing <
david(a)davidlaing.com> wrote:

FWIW, the technique is to have your logging solution (e.g.
Logback, Log4j) log a token (e.g. \u2028) other than \n to
denote line breaks in your stack traces, and then have your log aggregation
software replace that token with a \n again when processing the log
messages.

If \u2028 doesn't work in your environment, use something
else, e.g. NEWLINE.
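
A sketch of what that looks like in a logback.xml encoder (the %replace
expression is the one quoted further down this thread; the rest of the
pattern is illustrative):

<!-- emit stack traces with \u2028 in place of \n -->
<encoder>
  <pattern>%-5level %logger{36} - %msg %replace(%xException){'\n','\u2028'}%nopex%n</pattern>
</encoder>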

On Mon, 11 Apr 2016 at 21:12 Mike Youngstrom <
youngm(a)gmail.com> wrote:

Finally got around to testing this. Preliminary testing
shows that "\u2028" doesn't function as a newline
character in bash and causes the Eclipse console to wig out. I don't think
"\u2028" is a viable long-term solution. I hope you make
progress on a metric format available to an app in a container. I too
would like a tracker link to such a feature, if there is one.

Thanks,
Mike

On Mon, Mar 14, 2016 at 2:28 PM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

Hi Jim,

So, to be clear: what we're basically doing is using a
Unicode line-separator character to fool Loggregator (which is looking for
\n) into thinking a line break isn't a new log event, right? Does \u2028
work as a newline character when tailing logs in the CLI? Has anyone tried
this Unicode character in various consoles (IDE, xterm, etc.)? I'm
wondering whether developers will need different config for development.

Mike

On Mon, Mar 14, 2016 at 12:17 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:

Hi Mike and Alex,

Two things. First, for Java, we are working toward defining an
enhanced metric format that will support transport of multi-line messages.

The second is this workaround that David Laing suggested
for Logstash. Do you think you could use it for Splunk?

With the Java Logback library you can do this by adding
"%replace(%xException){'\n','\u2028'}%nopex" to your logging config [1],
and then using the following Logstash conf [2]:

mutate {
  gsub => [ "[@message]", '\u2028', "
"]
  ^^^ Passing a string with an actual newline in it seems to be the only
  way to make gsub produce one.
}

This replaces the token with a regular newline again so it displays
"properly" in Kibana.

[1] github.com/dpin...ication.yml#L12
<https://github.com/dpinto-pivotal/cf-SpringBootTrader-config/blob/master/application.yml#L12>

[2] github.com/logs...se.conf#L60-L64
<https://github.com/logsearch/logsearch-for-cloudfoundry/blob/master/src/logsearch-config/src/logstash-filters/snippets/firehose.conf#L60-L64>


On Mon, Mar 14, 2016 at 11:11 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:

I'll let the Loggregator team respond formally. But in
my conversations with the Loggregator team, I think we're basically stuck,
not sure what the right thing to do is on the client side. How does the
client signal to Loggregator that a message is a multi-line log message,
and what is the right way for Loggregator to detect that the client is
trying to send one? Any ideas?

Mike

On Mon, Mar 14, 2016 at 10:25 AM, Aliaksandr Prysmakou
<prysmakou(a)gmail.com> wrote:

Hi guys,
Are there any updates on the "Multiline Log Entry"
issue? What is the correct way to deal with stack traces?
Any links to the tracker to read?
----
Alex Prysmakou / Altoros
Tel: (617) 841-2121 ext. 5161 | Toll free: 855-ALTOROS
Skype: aliaksandr.prysmakou
www.altoros.com | blog.altoros.com |
twitter.com/altoros


--
Jim Campbell | Product Manager | Cloud Foundry | Pivotal.io | 303.618.0963


Re: Intended UAA-specific user identity fields in JWT access token?

Filip Hanik
 

Guillaume,

We just realized that you can remove any PII (personally identifiable
information) from tokens without us having to add new features.

You just configure:

jwt:
  token:
    claims:
      exclude:
        - authorities
        - email
        - user_name

in your uaa.yml file. Similar config exists for cf-release.
We're closing the story as "no change needed".
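
One quick way to verify the exclusion took effect is to decode the
(unverified) payload segment of a token and list its claims. A minimal Go
sketch, assuming the raw JWT is passed as the first argument:

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	parts := strings.Split(os.Args[1], ".")
	if len(parts) != 3 {
		log.Fatal("not a JWT: expected header.payload.signature")
	}
	// JWT segments are base64url-encoded without padding.
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		log.Fatal(err)
	}
	var claims map[string]interface{}
	if err := json.Unmarshal(payload, &claims); err != nil {
		log.Fatal(err)
	}
	for name := range claims {
		fmt.Println(name) // with the exclude config above, no email or user_name
	}
}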

On Fri, Apr 1, 2016 at 1:12 AM, Guillaume Berche <bercheg(a)gmail.com> wrote:

Great, thanks Filip!


Guillaume.

On Thu, Mar 31, 2016 at 9:50 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:

Yes, they are always returned.

Introducing an option sounds like a good idea for systems that wish
to turn it off; thanks for the idea.

https://www.pivotaltracker.com/story/show/116726159

Filip


On Thu, Mar 31, 2016 at 1:39 PM, Guillaume Berche <bercheg(a)gmail.com>
wrote:

Thanks, Filip, for your answer. Wouldn't it make sense to progressively
change this behavior, possibly controlled by a configuration option, to
give clients time to handle this incompatible change?

Scanning quickly through the code, I suspect the username and email
fields are always returned in the access token, regardless of the
presence of the openid scope (I still have to double-check by actually
testing it), thereby disclosing some of the user's identity without
his/her consent.

Guillaume.

On Thu, Mar 31, 2016 at 3:03 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:

The access token used to double as an identity token before OpenID
Connect was standardized. Now that we have implemented id_token, we don't
really need it, but removing it would cause a backwards-incompatible
change.


Filip


On Thu, Mar 31, 2016 at 6:50 AM, Guillaume Berche <bercheg(a)gmail.com>
wrote:

Hi,

I wonder about the rationale for the apparently UAA-specific [0]
user-related fields in the access token (username, email [1]), given that
they are now returned in a standard manner in the OpenID Connect id token.

Is this something that would change in the future ([2] seemed to be a
similar decoupling)? Or is it standard practice, saving clients from
having to request the id_token to get access to basic user identity?

Thanks in advance,

Guillaume.

ps: please let me know if such questions are better suited for a GH
issue on the UAA repo.

[0]
https://github.com/cloudfoundry/uaa/blob/master/docs/UAA-Tokens.md#getting-started

Some of these fields are described in the JSON web tokens
specification. However, the vendor may add additional fields, or
attributes, to the token itself.

[1]
https://github.com/cloudfoundry/uaa/blob/9b5c13d793ebfe358e26559cedc6b528a557b43f/server/src/main/java/org/cloudfoundry/identity/uaa/oauth/UaaTokenServices.java#L493-L497
[2] https://www.pivotaltracker.com/story/show/102090344

Guillaume.


Re: Maven: Resolve Dependencies on Platform?

Matthew Tyson
 

It does help -- thanks, guys. It looks like Heroku has a buildpack that does this also (https://github.com/heroku/heroku-buildpack-java).

The question I am coming up with is this: how does Cloud Foundry identify which files to upload? I know the detect script determines which buildpack to use, but what determines everything that gets uploaded?


Re: Postgresql data is not migrated after CF is upgraded from v197 to v230

Amit Kumar Gupta
 

Hi Maggie,

First off, please be sure to backup your data in case anything goes wrong
during an upgrade.

Also, do note that while private vendors of Cloud Foundry may support
upgrades between several different versions of Cloud Foundry, the open
source project currently only tests and guarantees upgrades between
consecutive final releases.

You're correct, at minimum, you will first need to upgrade from v197 to
some version between v211 (the first version which introduces the upgrade
from postgres 9.0 to 9.4.2) and v225 (the last version including such an
upgrade), and then upgrade to v230. I would review the release notes for
releases that you're skipping to see if there are any other important
releases that are actually not safe to skip.

I've updated the v226 release notes [1
<https://github.com/cloudfoundry/cf-release/releases/tag/v226>] to call
attention to this fact, thank you for highlighting this issue!

Please let me know if you have further questions about the upgrade process.

[1] https://github.com/cloudfoundry/cf-release/releases/tag/v226

Cheers,
Amit
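
For reference, the two-hop upgrade looks roughly like this with the BOSH
CLI (a sketch; manifest names are illustrative, and v211 stands in for any
version in the v211-v225 window):

bosh upload release "https://bosh.io/d/github.com/cloudfoundry/cf-release?v=211"
bosh deployment cf-v211.yml
bosh deploy   # migrates the Postgres data from 9.0 to 9.4.2
bosh upload release "https://bosh.io/d/github.com/cloudfoundry/cf-release?v=230"
bosh deployment cf-v230.yml
bosh deploy   # migrates again to the 9.4.x shipped with v230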

On Mon, Apr 18, 2016 at 11:22 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com>
wrote:

Hi,



I found that all org, space, and application information was lost after I
upgraded my CF env from v197 to v230. After checking the postgres node, I
found there were two directories under /var/vcap/store: one is “postgres”
and the other is “postgres-9.4.5”. I guess the reason all the information
got lost is that data from the previous PostgreSQL database (9.0.3) was
not migrated to the new PostgreSQL database (9.4.5). I found two related
releases.



V211



Postgres Job Upgrade

The Postgres Job will upgrade the postgres database to version 9.4.2.
Postgres will be unavailable during this upgrade.



V229



In support of work in progress to enable developers to specify application
ports when mapping routes, cf-release v229 introduces a database migration
for CCDB. For deployments that use a PostgreSQL database for CCDB that is
NOT the PostgreSQL job that comes with cf-release, v229 introduces the
following requirements. These requirements are applicable for subsequent
releases. If you are using the PostgreSQL job that comes with cf-release,
or if you are using MySQL as the backing db for CC, no action is necessary.

◦PostgreSQL 9.1 is required at a minimum

◦For versions 9.1-9.3, operators must first install the extension
uuid-ossp

◦For versions 9.4 and newer, operators must first install the extension
pgcrypto



So does that mean I have to upgrade v197 to v211 and then upgrade to v230
after installing pgcrypto? Any help would be appreciated.



Thanks,

Maggie


Postgresql data is not migrated after CF is upgraded from v197 to v230

MaggieMeng
 

Hi,

I found that all org, space, and application information was lost after I upgraded my CF env from v197 to v230. After checking the postgres node, I found there were two directories under /var/vcap/store: one is “postgres” and the other is “postgres-9.4.5”. I guess the reason all the information got lost is that data from the previous PostgreSQL database (9.0.3) was not migrated to the new PostgreSQL database (9.4.5). I found two related releases.

V211

Postgres Job Upgrade
The Postgres Job will upgrade the postgres database to version 9.4.2. Postgres will be unavailable during this upgrade.

V229

In support of work in progress to enable developers to specify application ports when mapping routes, cf-release v229 introduces a database migration for CCDB. For deployments that use a PostgreSQL database for CCDB that is NOT the PostgreSQL job that comes with cf-release, v229 introduces the following requirements. These requirements are applicable for subsequent releases. If you are using the PostgreSQL job that comes with cf-release, or if you are using MySQL as the backing db for CC, no action is necessary.
◦PostgreSQL 9.1 is required at a minimum
◦For versions 9.1-9.3, operators must first install the extension uuid-ossp
◦For versions 9.4 and newer, operators must first install the extension pgcrypto
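
(For reference, installing either extension is a single statement run
against the CCDB by a superuser; a sketch, assuming psql access:

CREATE EXTENSION IF NOT EXISTS "uuid-ossp";  -- PostgreSQL 9.1-9.3
CREATE EXTENSION IF NOT EXISTS pgcrypto;     -- PostgreSQL 9.4 and newer
)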

So does that mean I have to upgrade v197 to v211 and then upgrade to v230 after installing pgcrypto? Any help would be appreciated.

Thanks,
Maggie


Re: app guid uniqueness

John Wong
 

Awesome. Thank you, Nicholas.

John

On Mon, Apr 18, 2016 at 1:51 PM, Nicholas Calugar <ncalugar(a)pivotal.io>
wrote:

Hi John,

An application's guid will never change. If you delete an app and push
the code again, you are creating a new app with another guid.


Thanks,

Nick
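
A quick way to observe this from the CLI (the guid values here are
illustrative):

$ cf app my-app --guid
57f3bd3a-...        # unchanged by cf restart / cf restage
$ cf delete my-app -f && cf push my-app
$ cf app my-app --guid
9d0c41aa-...        # a brand-new app, hence a new guid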

On Mon, Apr 18, 2016 at 9:53 AM, John Wong <gokoproject(a)gmail.com> wrote:

Based on my brief testing and observation, the guid of an app sticks
around for as long as the app remains running (whether we restart or
restage). But if we remove the app and then cf push again, a new guid is
generated.

Is this a true statement?

Thanks.

John


Re: Staging and Runtime Hooks Feature Narrative

Troy Topnik
 

I think if we can get some consensus on the .profile script support (AKA Runtime Hooks), we should move forward with that. Jan has already separated that work into a separate PR, so it could be merged independently.

https://github.com/cloudfoundry-incubator/buildpack_app_lifecycle/pull/14
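
For context, a .profile script is just a shell fragment shipped at the
root of the app that the lifecycle sources in the container before the
start command runs; a minimal sketch (contents illustrative):

# .profile, at the root of the application
# Sourced in the app container at startup, before the start command.
export TZ=UTC
echo "app starting at $(date)"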

For the staging hooks, we can potentially implement the proposed functionality in pre-staging and post-staging *buildpacks* in conjunction with the multi-buildpacks support Mike mentions above. This is a little more work for the user, but avoids the need to expand the Heroku buildpack contract. I'm not totally convinced that the original proposal actually breaks buildpack compatibility, but moving staging hooks into their own auxiliary buildpacks should remove any remaining points of contention and would not require any merges into buildpack_app_lifecycle.

I think you've convinced me (separate discussion) that things like db initialization are best done in Tasks.

TT
