Re: Doppler/Firehose - Multiline Log Entry
Mike Youngstrom <youngm@...>
Yup, makes sense. I'm sure there is some valid reason for Diego not to use
log_sender.ScanLogStream today. When that gets sorted out, the event
demarcation and this replacement will all be in the correct place.
Thanks,
Mike
On Tue, Apr 19, 2016 at 9:29 AM, Eric Malm <emalm(a)pivotal.io> wrote:
Thanks, Mike, don't be too hard on yourself. :) I think that's a valid
point that if there is processing to translate a byte-stream from process
stdout/stderr to loggregator log events, it would be beneficial to make
that as efficient as possible. Right now, the executor's stream-destination
type has all of that logic: it breaks the byte-stream into messages on
newlines and carriage returns, it breaks too-long messages on UTF-8
boundaries, and then it emits those messages to dropsonde with the
app-specific tagging. So from the standpoint of efficiency and coherence,
it probably would make sense to do this proposed \u2028-to-\n substitution
in the same single pass as the other processing.
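A minimal sketch of that single pass (in Python for brevity; the real stream-destination is Go, and the names here are made up for illustration, not the executor's actual code): split the byte-stream into messages on newlines and carriage returns, and do the \u2028-to-\n substitution while the line is already in hand.

```python
# Illustrative only: split a stdout chunk into log messages on \n and \r,
# turning any embedded U+2028 token back into a real newline in the same pass.
LINE_SEP = "\u2028"

def stream_to_messages(chunk):
    # Normalize \r\n and bare \r to \n, then split into messages.
    for line in chunk.replace("\r\n", "\n").replace("\r", "\n").split("\n"):
        if line:
            # The proposed substitution, done in the same pass.
            yield line.replace(LINE_SEP, "\n")

msgs = list(stream_to_messages("line1\nException\u2028  at Foo.bar()\n"))
```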
That said, the Diego team has gotten some of that processing wrong in the
executor: we've been breaking log messages at 4KiB when loggregator can
handle messages close to 64KiB (whatever ends up being the limit imposed by
the size of a UDP datagram and the additional fields in the event
envelope), and we just fixed a bug where we could break 3- and 4-byte UTF-8
sequences incorrectly at those message-length boundaries. So I also think
it could make sense for the dropsonde library to provide a type that does
the app-byte-stream-to-log-message processing that the executor's
stream-destination currently does. The executor could then instantiate a
pair of those and hook them up as io.Writer interfaces to the stdout and
stderr of the Garden process it invokes, as it does with its own
stream-destinations today. The stream-destinations could then also be
individually configured to do the substitution.
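The UTF-8-boundary bug Eric mentions comes down to this (a hypothetical helper in Python, not Diego's actual fix): when truncating a message at a byte limit, back up past UTF-8 continuation bytes so a 3- or 4-byte sequence is never split.

```python
def truncate_utf8(data, limit):
    # Continuation bytes in UTF-8 look like 0b10xxxxxx; backing up past
    # them ensures the cut lands on a character boundary.
    if len(data) <= limit:
        return data
    cut = limit
    while cut > 0 and (data[cut] & 0xC0) == 0x80:
        cut -= 1
    return data[:cut]

# '\u20ac' (the euro sign) encodes to 3 bytes: e2 82 ac
msg = "ab\u20ac".encode("utf-8")
safe = truncate_utf8(msg, 4)  # naive slicing at 4 would tear the sequence
```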
In any case, I think it makes sense to proceed with the Loggregator spike,
and then if we want to implement the solution for real the Diego and
Loggregator teams can figure out the best way to make it efficient and
maintainable.
Thanks,
Eric
On Wed, Apr 13, 2016 at 5:53 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:
I'm an idiot. I see what you and Eric are saying now. Put the code in
Dropsonde then let the Executor simply initialize Dropsonde that way.
Works for me.
Thanks,
Mike
On Wed, Apr 13, 2016 at 5:26 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:
My last 2 cents. It'll be configurable, so it will only be active for users
of dropsonde that want the functionality, such as the Executor.
On Wed, Apr 13, 2016 at 5:21 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:
You may want to reference the issue I created on executor.
In that issue I note that I don't think dropsonde is the right place to
do this token replacement because dropsonde doesn't know that the event
originally came through the limited stdout/stderr interface that needs this
functionality. However, the executor does. If I'm using the dropsonde API
directly, where I can safely emit newline characters, I don't want dropsonde
replacing a character I didn't intend to have replaced, especially since that
replacement isn't even needed when using a richer interface
like dropsonde directly.
That's my 2 cents.
Thanks,
Mike
On Wed, Apr 13, 2016 at 4:34 PM, Jim CF Campbell <jcampbell(a)pivotal.io>
wrote:
We're going to look into it
<https://www.pivotaltracker.com/story/show/117583365>.
On Wed, Apr 13, 2016 at 12:33 PM, Eric Malm <emalm(a)pivotal.io> wrote:
Thanks, Mike. If source-side processing is the right place to do
that \u2028-to-newline substitution, I think that there could also be a
config option on the dropsonde library to have its LogSender perform that
within each message before forwarding it on. The local metron-agent could
also do that processing. I think it's appropriate to push as much of that
log processing as possible to the Loggregator components and libraries:
it's already a bit much that the executor knows anything at all about the
content of the byte-streams that it receives from the stdout and stderr of
a process in the container, so that it can break those streams into the
log-lines that the dropsonde library expects.
Best,
Eric
On Wed, Apr 13, 2016 at 11:00 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:
Thanks for the insight, Jim. I still think that the Executor is the
place to fix this, since multi-line logging isn't a Loggregator limitation;
it's a log-injection limitation, which is owned by the Executor. I'll open an
issue with Diego and see how it goes.
Thanks,
Mike
On Tue, Apr 12, 2016 at 2:51 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:
That strategy is going to be hard to sell. Diego's Executor takes
the log lines out of Garden and drops them into dropsonde messages. I doubt
they'll think it's a good idea to implement substitution in that
processing. You can certainly ask Eric - he's very aware of the underlying
problem.
After that point, the Loggregator system does its best to touch
messages as little as possible, and to improve performance and reliability,
our thinking about the future will lower the amount of touching
even further. The next place that log-message processing can be done is
either in a nozzle or in the ingester of a log aggregator.
I'd vote for those downstream places - a single configuration and
algorithm instead of distributed across runner VMs.
On Tue, Apr 12, 2016 at 2:15 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:
I was thinking whoever demarcates and submits the original event
to loggregator: dea_logging_agent and the equivalent in Diego. Doing it
at that point could provide a bit more flexibility. I know this isn't
necessarily the loggregator team's code, but I think loggregator-team buy-off
would be important for those projects to accept such a PR.
Unless you can think of a better place to make that transformation
within the loggregator processing chain?
Mike
On Tue, Apr 12, 2016 at 2:02 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:
What exactly do you mean by "event creation time"?
On Tue, Apr 12, 2016 at 1:57 PM, Mike Youngstrom <
youngm(a)gmail.com> wrote:
Before I submit the CLI issue, let me ask one more question.
Would it be better to replace the newline token with \n at event
creation time, instead of asking the cli, splunk, anyone listening on the
firehose, etc. to do so?
The obvious downside is this would probably need to be a global
configuration. However, I know my organization wouldn't have a problem
swapping \u2028 with \n for a deployment. The feature would obviously be
off by default.
Thoughts?
Mike
On Tue, Apr 12, 2016 at 11:24 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:
Sounds good. I'll submit an issue to start the discussion. I
imagine the first question Dies will ask though is if you would support
something like that. :)
Mike
On Tue, Apr 12, 2016 at 11:12 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:
cf logs
<https://github.com/cloudfoundry/cli/blob/40eb5be48eaac697c3700d5f1e6f654bae471cec/cf/commands/application/logs.go>
is actually maintained by the CLI team under Dies
<https://www.pivotaltracker.com/n/projects/892938>. You can
talk to them. I'll certainly support you by helping explain the need. I'd
think we want a general solution (token in ENV for instance).
On Tue, Apr 12, 2016 at 11:02 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:
Jim,
If I submitted a CLI PR to change the cf logs command to
substitute \u2028 with \n, could the loggregator team get behind that?
Mike
On Tue, Apr 12, 2016 at 10:20 AM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:
Mike,
When you get a bit more desperate ;-) here is a nozzle plugin
<https://github.com/jtuchscherer/nozzle-plugin> for the
CLI. It attaches to the firehose to display everything, but it would be easy
to modify it to just look at a single app and sub out the magic token for
newlines.
Jim
On Tue, Apr 12, 2016 at 9:56 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:
Hi David,
The problem for me is that I'm searching for a solution
that can work for development (though less of a priority, because you can
switch config between dev and cf) and for viewing logs via "cf logs", in
addition to a log aggregator. I had hoped that \u2028 would work for
viewing logs via "cf logs", but it doesn't in bash. I'd need to write a
plugin or something for cf logs and train all my users to use it.
Certainly possible, but I'm not that desperate yet. :)
Mike
On Tue, Apr 12, 2016 at 5:58 AM, David Laing <
david(a)davidlaing.com> wrote:
FWIW, the technique is to have your logging solution (e.g.
logback, log4j) log a token (e.g. \u2028) other than \n to
denote line breaks in your stack traces, and then have your log aggregation
software replace that token with a \n again when processing the log
messages.
If \u2028 doesn't work in your environment, use something
else, e.g. NEWLINE.
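The round trip David describes can be sketched as follows (Python, with hypothetical function names; in practice the app side would be a Logback/log4j pattern and the aggregator side a logstash filter):

```python
TOKEN = "\u2028"  # or any agreed token, e.g. "NEWLINE", if \u2028 misbehaves

def encode_for_shipping(stack_trace):
    # App side: keep the whole trace as one log event by hiding newlines.
    return stack_trace.replace("\n", TOKEN)

def decode_for_display(message):
    # Aggregator side: restore real newlines before display.
    return message.replace(TOKEN, "\n")

trace = "java.lang.RuntimeException\n  at Foo.bar(Foo.java:12)"
shipped = encode_for_shipping(trace)   # contains no \n, so it stays one event
restored = decode_for_display(shipped)
```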
On Mon, 11 Apr 2016 at 21:12 Mike Youngstrom <
youngm(a)gmail.com> wrote:
Finally got around to testing this. Preliminary testing
shows that "\u2028" doesn't function as a newline
character in bash and causes the eclipse console to wig out. I don't think
"\u2028" is a viable long-term solution. Hope you make
progress on a metric format available to an app in a container. I too
would like a tracker link to such a feature if there is one.
Thanks,
Mike
On Mon, Mar 14, 2016 at 2:28 PM, Mike Youngstrom <
youngm(a)gmail.com> wrote:
Hi Jim,
So, to be clear, what we're basically doing is using a
Unicode newline character to fool loggregator (which is looking for \n)
into thinking that it isn't a new log event, right? Does \u2028 work as a
newline character when tailing logs in the CLI? Has anyone tried this Unicode
newline character in various consoles? IDE, xterm, etc.? I'm wondering if
developers will need to have different config for development.
Mike
On Mon, Mar 14, 2016 at 12:17 PM, Jim CF Campbell <
jcampbell(a)pivotal.io> wrote:
Hi Mike and Alex,
Two things - for Java, we are working toward defining
an enhanced metric format that will support transport of Multi Lines.
The second is this workaround that David Laing
suggested for Logstash. Think you could use it for Splunk?
With the Java Logback library you can do this by adding
"%replace(%xException){'\n','\u2028'}%nopex" to your logging config [1],
and then using the following logstash conf [2].
Replace the unicode newline character \u2028 with \n,
which Kibana will display as a new line.
mutate {
  gsub => [ "[@message]", '\u2028', "
" ]
}
^^^ Seems that passing a string with an actual newline in it is the only
way to make gsub work; this replaces the token with a regular newline
again so it displays "properly" in Kibana.
[1] github.com/dpin...ication.yml#L12
<https://github.com/dpinto-pivotal/cf-SpringBootTrader-config/blob/master/application.yml#L12>
[2] github.com/logs...se.conf#L60-L64
<https://github.com/logsearch/logsearch-for-cloudfoundry/blob/master/src/logsearch-config/src/logstash-filters/snippets/firehose.conf#L60-L64>
On Mon, Mar 14, 2016 at 11:11 AM, Mike Youngstrom <
youngm(a)gmail.com> wrote:
I'll let the Loggregator team respond formally. But,
in my conversations with the Loggregator team I think we're basically stuck
not sure what the right thing to do is on the client side. How does the
client trigger in loggregator that this is a multi line log message or what
is the right way for loggregator to detect that the client is trying to
send a multi line log message? Any ideas?
Mike
On Mon, Mar 14, 2016 at 10:25 AM, Aliaksandr Prysmakou
<prysmakou(a)gmail.com> wrote:
Hi guys,
Are there any updates on the "Multiline Log Entry"
issue? What is the correct way to deal with stack traces?
Any links to the tracker to read?
----
Alex Prysmakou / Altoros
Tel: (617) 841-2121 ext. 5161 | Toll free:
855-ALTOROS
Skype: aliaksandr.prysmakou
www.altoros.com | blog.altoros.com |
twitter.com/altoros
--
Jim Campbell | Product Manager | Cloud Foundry |
Pivotal.io | 303.618.0963
Re: Maven: Resolve Dependencies on Platform?
Jesse T. Alford
The cf CLI also respects a .cfignore file in the top level of your repo, if
there's stuff you'd like to explicitly exclude from push.
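A toy illustration of the .cfignore idea (Python; real .cfignore matching follows gitignore-like rules, and this sketch only handles simple glob patterns):

```python
import fnmatch

def filter_ignored(paths, ignore_patterns):
    # Keep only paths that match none of the .cfignore-style globs.
    return [p for p in paths
            if not any(fnmatch.fnmatch(p, pat) for pat in ignore_patterns)]

paths = ["src/app.java", "target/app.jar", "notes.md"]
kept = filter_ignored(paths, ["target/*", "*.md"])  # excluded from push
```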
On Tue, Apr 19, 2016, 8:30 AM Daniel Mikusa <dmikusa(a)pivotal.io> wrote:
It's the cf cli that does this. It looks through your JAR / WAR or push
path for files and then uses the resource matching endpoint to ask the
server what it needs to upload. Then it uploads just those files. I
believe the algorithm works based on file hashes, someone else might be
able to add more detail if you need it.
Dan
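A rough sketch of the resource-matching flow Dan describes (Python; the payload shape and function names are guesses for illustration, not the actual Cloud Controller API): the CLI fingerprints each file by hash, the server answers with the hashes it already has, and only the rest get uploaded.

```python
import hashlib

def describe(files):
    # files: {name: bytes}. The CLI sends fingerprints like these to the
    # resource-matching endpoint (shape simplified for the sketch).
    return {name: hashlib.sha1(data).hexdigest()
            for name, data in files.items()}

def to_upload(files, hashes_on_server):
    # Upload only files whose hash the server doesn't already know.
    return sorted(name for name, sha in describe(files).items()
                  if sha not in hashes_on_server)

files = {"app.jar": b"app-bytes", "lib/util.jar": b"util-bytes"}
server_has = {hashlib.sha1(b"util-bytes").hexdigest()}  # from a prior push
upload = to_upload(files, server_has)
```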
On Tue, Apr 19, 2016 at 9:44 AM, Matthew Tyson <matthewcarltyson(a)gmail.com> wrote:
It does help -- thanks, guys. It looks like Heroku has a buildpack that
does this also (https://github.com/heroku/heroku-buildpack-java).
The question I am coming up with is this: How does cloud foundry identify
what files to upload? I know the detect script determines what buildpack
to use, but what determines what all to upload?
Re: Maven: Resolve Dependencies on Platform?
Daniel Mikusa
It's the cf cli that does this. It looks through your JAR / WAR or push
path for files and then uses the resource matching endpoint to ask the
server what it needs to upload. Then it uploads just those files. I
believe the algorithm works based on file hashes, someone else might be
able to add more detail if you need it.
Dan
On Tue, Apr 19, 2016 at 9:44 AM, Matthew Tyson <matthewcarltyson(a)gmail.com>
wrote:
It does help -- thanks, guys. It looks like Heroku has a buildpack that
does this also (https://github.com/heroku/heroku-buildpack-java).
The question I am coming up with is this: How does cloud foundry identify
what files to upload? I know the detect script determines what buildpack
to use, but what determines what all to upload?
Re: Doppler/Firehose - Multiline Log Entry
Eric Malm <emalm@...>
Thanks, Mike, don't be too hard on yourself. :) I think that's a valid
point that if there is processing to translate a byte-stream from process
stdout/stderr to loggregator log events, it would be beneficial to make
that as efficient as possible. Right now, the executor's stream-destination
type has all of that logic: it breaks the byte-stream into messages on
newlines and carriage returns, it breaks too-long messages on UTF-8
boundaries, and then it emits those messages to dropsonde with the
app-specific tagging. So from the standpoint of efficiency and coherence,
it probably would make sense to do this proposed \u2028-to-\n substitution
in the same single pass as the other processing.
That said, the Diego team has gotten some of that processing wrong in the
executor: we've been breaking log messages at 4KiB when loggregator can
handle messages close to 64KiB (whatever ends up being the limit imposed by
the size of a UDP datagram and the additional fields in the event
envelope), and we just fixed a bug where we could break 3- and 4-byte UTF-8
sequences incorrectly at those message-length boundaries. So I also think
it could make sense for the dropsonde library to provide a type that does
the app-byte-stream-to-log-message processing that the executor's
stream-destination currently does. The executor could then instantiate a
pair of those and hook them up as io.Writer interfaces to the stdout and
stderr of the Garden process it invokes, as it does with its own
stream-destinations today. The stream-destinations could then also be
individually configured to do the substitution.
In any case, I think it makes sense to proceed with the Loggregator spike,
and then if we want to implement the solution for real the Diego and
Loggregator teams can figure out the best way to make it efficient and
maintainable.
Thanks,
Eric
Re: Intended UAA-specific user identity fields in JWT access token ?
Filip Hanik
Guillaume,
We just realized that you can remove any PII (personally identifiable
information) from tokens without us having to add new features.
You just configure:
jwt:
  token:
    claims:
      exclude:
        - authorities
        - email
        - user_name
in your uaa.yml file. Similar config exists for cf-release.
We're closing the story as a "no change needed".
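The effect of that exclude list on a decoded token payload, sketched in Python (claim names taken from the thread; this mimics, rather than reproduces, UAA's behavior):

```python
def exclude_claims(payload, excluded):
    # Drop the configured claims before the token is issued.
    return {k: v for k, v in payload.items() if k not in excluded}

payload = {"sub": "1234", "user_name": "guillaume",
           "email": "g(a)example.com", "authorities": ["uaa.user"]}
redacted = exclude_claims(payload, ["authorities", "email", "user_name"])
```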
On Fri, Apr 1, 2016 at 1:12 AM, Guillaume Berche <bercheg(a)gmail.com> wrote:
Great, thanks Filip!
Guillaume.
On Thu, Mar 31, 2016 at 9:50 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:
Yes, they are always returned.
Introducing an option sounds like a good idea for the systems that wish
to turn it off; thanks for the idea.
https://www.pivotaltracker.com/story/show/116726159
Filip
On Thu, Mar 31, 2016 at 1:39 PM, Guillaume Berche <bercheg(a)gmail.com>
wrote:
Thanks, Filip, for your answer. Wouldn't it make sense to progressively
change this behavior, possibly controlled by a configuration option to give
clients time to handle this incompatible change?
Scanning quickly through the code I suspect the username and email
fields are systematically returned in the access token, regardless of the
presence of the openid scope (I still have to double check by actually
testing it), therefore disclosing some user identity without his/her
consent.
Guillaume.
On Thu, Mar 31, 2016 at 3:03 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:
The access token used to double as an identity token before OpenID
Connect was standardized; now that we have implemented id_token, we don't
really need it. But removing it would cause a backwards-incompatible
change.
Filip
On Thu, Mar 31, 2016 at 6:50 AM, Guillaume Berche <bercheg(a)gmail.com>
wrote:
Hi,
I wonder about the rationale for the apparently UAA-specific [0] user-related
fields in the access token (username, email [1]), while they are now
returned in a standard manner in the OpenID Connect id_token.
Is this something that would change in the future ([2] seemed a similar
decoupling)? Or is it a standard practice that saves clients from requesting
the id_token to get access to basic user identity?
Thanks in advance,
Guillaume.
ps: please let me know if such questions are better suited for a GH
issue on the UAA repo.
[0]
https://github.com/cloudfoundry/uaa/blob/master/docs/UAA-Tokens.md#getting-started
"Some of these fields are described in the JSON web tokens specification.
However, the vendor may add additional fields, or attributes, to the token
itself."
[1]
https://github.com/cloudfoundry/uaa/blob/9b5c13d793ebfe358e26559cedc6b528a557b43f/server/src/main/java/org/cloudfoundry/identity/uaa/oauth/UaaTokenServices.java#L493-L497
[2] https://www.pivotaltracker.com/story/show/102090344
Re: Maven: Resolve Dependencies on Platform?
Matthew Tyson
It does help -- thanks, guys. It looks like Heroku has a buildpack that does this also (https://github.com/heroku/heroku-buildpack-java).
The question I am coming up with is this: How does cloud foundry identify what files to upload? I know the detect script determines what buildpack to use, but what determines what all to upload?
Re: Postgresql data is not migrated after CF is upgraded from v197 to v230
Amit Kumar Gupta
Hi Maggie,
First off, please be sure to backup your data in case anything goes wrong
during an upgrade.
Also, do note that while private vendors of Cloud Foundry may support
upgrades between several different versions of Cloud Foundry, the open
source project currently only tests and guarantees upgrades between
consecutive final releases.
You're correct, at minimum, you will first need to upgrade from v197 to
some version between v211 (the first version which introduces the upgrade
from postgres 9.0 to 9.4.2) and v225 (the last version including such an
upgrade), and then upgrade to v230. I would review the release notes for
releases that you're skipping to see if there are any other important
releases that are actually not safe to skip.
I've updated the v226 release notes [1
<https://github.com/cloudfoundry/cf-release/releases/tag/v226>] to call
attention to this fact, thank you for highlighting this issue!
Please let me know if you have further questions about the upgrade process.
[1] https://github.com/cloudfoundry/cf-release/releases/tag/v226
Cheers,
Amit
On Mon, Apr 18, 2016 at 11:22 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com>
wrote:
Hi,
I found all org, space, and application information was lost after I
upgraded my CF env from v197 to v230. After checking the postgres node, I
found there were two directories under /var/vcap/store. One is “postgres” and
the other is “postgres-9.4.5”. I guess the reason all information got
lost is that data from the previous postgresql database (9.0.3) was not
migrated to the new postgresql database (9.4.5). I found two related releases.
V211
Postgres Job Upgrade
The Postgres Job will upgrade the postgres database to version 9.4.2.
Postgres will be unavailable during this upgrade.
V229
In support of work in progress to enable developers to specify application
ports when mapping routes, cf-release v229 introduces a database migration
for CCDB. For deployments that use a PostgreSQL database for CCDB that is
NOT the PostgreSQL job that comes with cf-release, v229 introduces the
following requirements. These requirements are applicable for subsequent
releases. If you are using the PostgreSQL job that comes with cf-release,
or if you are using MySQL as the backing db for CC, no action is necessary.
◦PostgreSQL 9.1 is required at a minimum
◦For versions 9.1-9.3, operators must first install the extension
uuid-ossp
◦For versions 9.4 and newer, operators must first install the extension
pgcrypto
So does it mean I have to upgrade v197 to v211 and then upgrade to v230
after installing pgcrypto? Any help would be appreciated.
Thanks,
Maggie
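The constraint Amit describes above - a v197 deployment must pass through a release between v211 and v225 before reaching v230 - can be sketched as a small helper. This is illustrative only; the function and the version window come from this thread, not from any official tooling:

```python
def safe_upgrade_path(current, target, bridge_min=211, bridge_max=225):
    """Return a minimal cf-release upgrade path that passes through a
    version containing the postgres 9.0 -> 9.4 migration (v211-v225)."""
    if current >= bridge_min:
        return [current, target]
    # Jumping straight past the bridge window skips the data migration.
    return [current, bridge_min, target]
```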
Postgresql data is not migrated after CF is upgraded from v197 to v230
MaggieMeng
Hi,
I found all org, space, and application information was lost after I upgraded my CF env from v197 to v230. After checking the postgres node, I found there were two directories under /var/vcap/store. One is “postgres” and the other is “postgres-9.4.5”. I guess the reason all information got lost is that data from the previous postgresql database (9.0.3) was not migrated to the new postgresql database (9.4.5). I found two related releases.
V211
Postgres Job Upgrade
The Postgres Job will upgrade the postgres database to version 9.4.2. Postgres will be unavailable during this upgrade.
V229
In support of work in progress to enable developers to specify application ports when mapping routes, cf-release v229 introduces a database migration for CCDB. For deployments that use a PostgreSQL database for CCDB that is NOT the PostgreSQL job that comes with cf-release, v229 introduces the following requirements. These requirements are applicable for subsequent releases. If you are using the PostgreSQL job that comes with cf-release, or if you are using MySQL as the backing db for CC, no action is necessary.
◦PostgreSQL 9.1 is required at a minimum
◦For versions 9.1-9.3, operators must first install the extension uuid-ossp
◦For versions 9.4 and newer, operators must first install the extension pgcrypto
So does it mean I have to upgrade v197 to v211 and then upgrade to v230 after installing pgcrypto? Any help would be appreciated.
Thanks,
Maggie
Re: app guid uniqueness
John Wong
Awesome. Thank you, Nicholas.
John
On Mon, Apr 18, 2016 at 1:51 PM, Nicholas Calugar <ncalugar(a)pivotal.io>
wrote:
Hi John,
An application's guid will never change. If you delete an app, and push
the code again, you are creating a new app with another guid.
Thanks,
Nick
On Mon, Apr 18, 2016 at 9:53 AM, John Wong <gokoproject(a)gmail.com> wrote:
Based on my brief testing and observation, the guid of an app sticks
around for as long as the app remains running (whether we restart or
restage). But after removing the app and doing cf push again, a new guid
is generated.
Is this a true statement?
Thanks.
John
Re: Staging and Runtime Hooks Feature Narrative
Troy Topnik
I think if we can get some consensus on the .profile script support (AKA Runtime Hooks), we should move forward with that. Jan has already separated that work into a separate PR, so it could be merged independently.
https://github.com/cloudfoundry-incubator/buildpack_app_lifecycle/pull/14
For the staging hooks, we can potentially implement the proposed functionality in pre-staging and post-staging *buildpacks* in conjunction with the multi-buildpacks support Mike mentions above. This is a little more work for the user, but avoids the need to expand the Heroku buildpack contract. I'm not totally convinced that the original proposal actually breaks buildpack compatibility, but moving staging hooks into their own auxiliary buildpacks should remove any remaining points of contention and would not require any merges into buildpack_app_lifecycle.
I think you've convinced me (separate discussion) that things like db initialization are best done in Tasks.
TT
Re: CF Job Failure
Gupta, Abhik
Hi Nick, Hi Daniel,
The app push using the CLI works. I haven’t had the chance to check the cloud controller logs yet—that’s the next thing on my plate.
Best Regards,
Abhik
From: Nicholas Calugar [mailto:ncalugar(a)pivotal.io]
Sent: Tuesday, April 19, 2016 2:31 AM
To: Discussions about Cloud Foundry projects and the system overall. <cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Re: Re: CF Job Failure
Hi Abhik,
Another bit of information that would be interesting is whether or not you can push your app using the CLI.
Thanks,
Nick
On Mon, Apr 18, 2016 at 6:13 AM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:
On Mon, Apr 18, 2016 at 7:10 AM, Gupta, Abhik <abhik.gupta(a)sap.com> wrote:
Hi,
We are trying to push a node.js application using the Cloud Controller REST APIs. The flow that we follow is similar to the flow followed by CF CLI:
Create Application Metadata > Create Route Metadata > Associate Route with Application > Get cached resources from Cloud Foundry using the Resource Match API > Upload the bits (sending the fingerprints + application.zip) asynchronously > Poll for the Job Status
This flow works perfectly fine till the last step but the polling for the job status gives back an error response like:
{
"metadata": {
"guid": "cd5bf18d-249b-4f00-9ee9-6328081d3d77",
"created_at": "2016-04-18T10:55:29Z",
"url": "/v2/jobs/cd5bf18d-249b-4f00-9ee9-6328081d3d77"
},
"entity": {
"guid": "cd5bf18d-249b-4f00-9ee9-6328081d3d77",
"status": "failed",
"error": "Use of entity>error is deprecated in favor of entity>error_details.",
"error_details": {
"error_code": "UnknownError",
"description": "An unknown error occurred.",
"code": 10001
}
}
}
Does the app actually push and get started? i.e. if you run `cf apps` after you get this message is the app up and running? Also, do you see similar issues when you push with `cf push`?
Apparently, this error is also pretty well-known because it’s documented in the API documentation as well here: http://apidocs.cloudfoundry.org/228/jobs/retrieve_job_with_unknown_failure.html
What could be the reason for this error from the Controller?
Take a look at the cloud controller logs, `/var/vcap/sys/log/cloud_controller_ng`. There should be more information about the problem there.
Dan
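For the polling step, the failure shape is visible in the sample response above. A minimal sketch of pulling the error details out of a /v2/jobs/:guid response, preferring the non-deprecated entity > error_details field (field names taken from the JSON in this thread):

```python
def job_error(job_json):
    """Return (error_code, code, description) from a failed /v2/jobs/:guid
    response, or None if the job has not failed."""
    entity = job_json["entity"]
    if entity.get("status") != "failed":
        return None
    # entity > error is deprecated; entity > error_details is preferred.
    details = entity.get("error_details") or {}
    return (details.get("error_code", "UnknownError"),
            details.get("code"),
            details.get("description", ""))
```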
Re: [PROPOSAL]: Removing ability to specify npm version
Shawn Nielsen
We have had a few use cases where users have requested to stay on nodejs
version 4 (LTS), while taking advantage of npm 3's new flat dependency
tree. Node 4 by default bundles with npm 2. Node 5 defaults to npm 3.
In all of these use cases we are using online buildpacks. I would say this
use case isn't as common, but we definitely get requests for it.
On Mon, Apr 11, 2016 at 1:10 PM, John Shahid <jshahid(a)pivotal.io> wrote:
Hi all,
The buildpacks team would like to propose a change to the nodejs
buildpack. It was recently brought to our attention in this issue
<https://github.com/cloudfoundry/nodejs-buildpack/issues/54>, that the
nodejs buildpack will try to download npm if the version specified in
package.json doesn't match the version shipped with nodejs. According to
heroku
<https://devcenter.heroku.com/articles/nodejs-support#specifying-an-npm-version>
this is a feature that exists for historical reasons that do not apply
anymore.
We would like to ask if anyone relies on this feature or has an objection
to this change.
Thanks,
The Buildpacks Team
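For anyone relying on the behavior Shawn describes (node 4 with npm 3), the pinning happens in package.json's engines block, per the Heroku documentation linked above. The version numbers below are illustrative only:

```json
{
  "engines": {
    "node": "4.4.x",
    "npm": "3.8.x"
  }
}
```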
Re: CF Job Failure
Nicholas Calugar
Hi Abhik,
Another bit of information that would be interesting is whether or not you
can push your app using the CLI.
Thanks,
Nick
pushing docker image to CF.
sangeeus <sangeetha081315@...>
Hi, I am pushing the jboss/drools-workbench-showcase Docker image to Cloud Foundry
from my CLI.
This is the command:
cf push workbench -o jboss/drools-workbench-showcase:6.2.0.Final
But I get an error:
Failed to talk to docker registry: Get http://registry-1.docker.io/v2/:
net/http: request canceled while waiting for connection.
How do I add an insecure Docker registry to Cloud Foundry?
This image works fine locally.
--
View this message in context: http://cf-dev.70369.x6.nabble.com/pushing-docker-image-to-CF-tp4649.html
Sent from the CF Dev mailing list archive at Nabble.com.
Re: app guid uniqueness
Nicholas Calugar
Hi John,
An application's guid will never change. If you delete an app, and push the
code again, you are creating a new app with another guid.
Thanks,
Nick
On Mon, Apr 18, 2016 at 9:53 AM, John Wong <gokoproject(a)gmail.com> wrote:
Based on my brief testing and observation, the guid of an app sticks
around for as long as the app remains running (whether we restart or
restage). But after removing the app and doing cf push again, a new guid
is generated.
Is this a true statement?
Thanks.
John
Re: CF Job Failure
Daniel Mikusa
On Mon, Apr 18, 2016 at 7:10 AM, Gupta, Abhik <abhik.gupta(a)sap.com> wrote:
Hi,
We are trying to push a node.js application using the Cloud Controller
REST APIs. The flow that we follow is similar to the flow followed by the
CF CLI:
Create Application Metadata > Create Route Metadata > Associate Route with
Application > Get cached resources from Cloud Foundry using the Resource
Match API > Upload the bits (sending the fingerprints + application.zip)
asynchronously > Poll for the Job Status
This flow works perfectly fine till the last step, but polling for the
job status gives back an error response like:
{
"metadata": {
"guid": "cd5bf18d-249b-4f00-9ee9-6328081d3d77",
"created_at": "2016-04-18T10:55:29Z",
"url": "/v2/jobs/cd5bf18d-249b-4f00-9ee9-6328081d3d77"
},
"entity": {
"guid": "cd5bf18d-249b-4f00-9ee9-6328081d3d77",
"status": "failed",
"error": "Use of entity>error is deprecated in favor of
entity>error_details.",
"error_details": {
"error_code": "UnknownError",
"description": "An unknown error occurred.",
"code": 10001
}
}
}
Does the app actually push and get started? I.e., if you run `cf apps`
after you get this message, is the app up and running? Also, do you see
similar issues when you push with `cf push`?
Apparently, this error is also pretty well-known because it's documented
in the API documentation as well here:
http://apidocs.cloudfoundry.org/228/jobs/retrieve_job_with_unknown_failure.html
What could be the reason for this error from the Controller?
Take a look at the cloud controller logs,
`/var/vcap/sys/log/cloud_controller_ng`. There should be more information
about the problem there.
Dan
Re: Maven: Resolve Dependencies on Platform?
Daniel Mikusa
On Sat, Apr 16, 2016 at 11:30 PM, Josh Long <starbuxman(a)gmail.com> wrote:
I'm not sure if this is the right forum. I doubt it.
* you could achieve what you want by forking the buildpack used. If you're
using the Java buildpack then it's
https://github.com/cloudfoundry/java-buildpack. the `cf push` command
supports providing an override URL for the buildpack.
As an experiment, I created a buildpack that would do this. It hasn't
been updated in a while, I don't plan to update it, and it was never very
solid to begin with. It was more to just see if I could make it work. It
did, but the benefit was very small. I wouldn't recommend using it, but
it's there if you want to look at it.
https://github.com/dmikusa-pivotal/cf-maven-buildpack
* that said, this is a TERRIBLE idea. Instead, prefer that one build be
promoted from development to staging, QA, and production. Ideally, that
promotion should be automatic, the result of a continuous delivery pipeline
that sees code committed to version control, then run through continuous
integration, then pushed to a testing environment where it's certified and
smoke-tested, validated by QA, and ultimately promoted to production. You
can support this process with continuous integration tools like Jenkins,
Travis, Spinnaker, or Concourse.CI, which will monitor version control and
can be scripted to package and cf push code.
+1 - It's also worth mentioning that when you `cf push` something, your
platform will cache any resources that are larger than 65k (default
threshold, your platform's actual value may differ). The cache is global,
so it's not just per app or per user. Once any user pushes a file, it will
be cached. This helps a ton with Java apps since JAR files are generally
over the threshold and the same JAR files are used across many users and
apps. Long story short, when you go to push your app you likely won't need
to upload as much data as you think.
Hope that helps!
Dan
On Sat, Apr 16, 2016 at 7:15 PM Matthew Tyson <matthewcarltyson(a)gmail.com>
wrote:
Please let me know if there is a more appropriate forum for this type of
question.
How can I configure HA Doppler in cf.yml?
inho cho
I read "Overview of the Loggregator System " - https://docs.cloudfoundry.org/loggregator/architecture.html
In that document, metron_agent can forward metrics or logs to N dopplers,
but I don't know how to do it.
Would you let me know how to configure this in cf.yml?
Thanks & Regards
CF Job Failure
Gupta, Abhik
Hi,
We are trying to push a node.js application using the Cloud Controller REST APIs. The flow that we follow is similar to the flow followed by CF CLI:
Create Application Metadata > Create Route Metadata > Associate Route with Application > Get cached resources from Cloud Foundry using the Resource Match API > Upload the bits (sending the fingerprints + application.zip) asynchronously > Poll for the Job Status
This flow works perfectly fine till the last step but the polling for the job status gives back an error response like:
{
"metadata": {
"guid": "cd5bf18d-249b-4f00-9ee9-6328081d3d77",
"created_at": "2016-04-18T10:55:29Z",
"url": "/v2/jobs/cd5bf18d-249b-4f00-9ee9-6328081d3d77"
},
"entity": {
"guid": "cd5bf18d-249b-4f00-9ee9-6328081d3d77",
"status": "failed",
"error": "Use of entity>error is deprecated in favor of entity>error_details.",
"error_details": {
"error_code": "UnknownError",
"description": "An unknown error occurred.",
"code": 10001
}
}
}
Apparently, this error is also pretty well-known because it's documented in the API documentation as well here: http://apidocs.cloudfoundry.org/228/jobs/retrieve_job_with_unknown_failure.html
What could be the reason for this error from the Controller?
Thanks & Regards
Abhik