
Re: [abacus] Accepting delayed usage within a slack window

Jean-Sebastien Delfino
 

Hi Ben,

I'm adding a 'processed' time field to the usage docs, hoping that'll help
maintain the history of usage within the slack window we've been discussing
here (more precisely, it will help you know how much of that history can
roll out of the configured slack window when you process a new usage doc).

That field will also allow us to more clearly distinguish between the usage
event 'start', 'stop' and 'processed' times.

HTH

- Jean-Sebastien

On Mon, Oct 12, 2015 at 9:04 AM, Jean-Sebastien Delfino <jsdelfino(a)gmail.com>
wrote:
Hi Ben,

That makes sense to me. What you've described will enable refinements of
accumulated usage for a month as we continue to receive delayed usage
during the first few days of the next month.

To illustrate this with an example: with a 48h slack window, on Sept 30 you
can retrieve the Sept 30 usage doc and find 'provisional' usage for Sept in
the 'month time window', not including usage that has not yet been submitted
to Abacus. Later, on Oct 2nd, you can retrieve the Oct 2nd usage doc and find
the 'final' usage for Sept in the 'month - 1 time window'. I think this is
better than waiting for Oct 2nd to 'close the Sept window', as our users
typically want to see both their *real time* usage for Sept before Oct 2nd
and their final usage later once it has settled for sure.
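A toy illustration of those two reads, with hypothetical doc shapes and
numbers (not the actual Abacus report format):

```python
# Hypothetical usage docs: the 'M' array holds accumulated quantities for
# [current month, month - 1]. All values are made up for illustration.
doc_sept30 = {'processed': '2015-09-30', 'M': [100, 85]}  # [Sept so far, final Aug]
doc_oct2 = {'processed': '2015-10-02', 'M': [7, 120]}     # [Oct so far, final Sept]

provisional_sept = doc_sept30['M'][0]  # real-time view available on Sept 30
final_sept = doc_oct2['M'][1]          # settled view in the 'month - 1' window
```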

I also like that with that approach you don't need to go back to your Sept
30 usage doc to patch it up with delayed usage, as that way you're also
keeping a record of the Sept usage that was really known to us on Sept 30.

Another interesting aspect of this is that the history you're going to
maintain will allow us to write 'marker' usage docs when we transition from
one time window to another. Since a usage doc contains both the usage for
the day and the previous day, you can write the first document you process
each day, as a marker, in a reporting db and that'll give you an easy and
efficient way to retrieve the accumulated usage for the previous day. For
example, to retrieve the usage accumulated at the end of Oct 11, just
retrieve the 'marker' usage doc for Oct 12 and get the usage in its 'day -
1 time window'. That could help us implement the kind of query that Georgi
mentioned on the chat last week when he was looking for an efficient way to
retrieve daily usage for all the days of the month.
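A sketch of that marker-doc lookup (hypothetical doc shapes and values; the
real reporting db layout may differ):

```python
from datetime import date, timedelta

# The first usage doc processed each day, stored as a 'marker' keyed by day.
# Its 'D' array is [day, day - 1]: by the time the Oct 12 marker is written,
# its 'day - 1' entry holds the full accumulated usage for Oct 11.
markers = {
    '2015-10-12': {'D': [0, 42]},   # [Oct 12 so far, all of Oct 11]
    '2015-10-13': {'D': [3, 57]},   # [Oct 13 so far, all of Oct 12]
}

def accumulated_usage(day):
    """Usage accumulated by the end of `day`, read from the next day's marker."""
    next_day = (date.fromisoformat(day) + timedelta(days=1)).isoformat()
    return markers[next_day]['D'][1]
```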

Finally, looking at the array of numbers/objects currently used to
maintain our time windows, I'm wondering if keeping the 'yearly' and
'forever' usage time windows is not a bit overkill (and could actually
become a problem).

That data is going to be duplicated in all individual usage docs for
little value IMO as the yearly usage at least is easy to reconstruct at
reporting time with a query over 12 monthly usage docs. Also, maintaining
that 'forever' usage will require us to keep usage docs around for resource
instances that may have been deleted a long time ago, and will complicate our
database partitioning scheme as these old resource instances will cause the
databases to grow forever. So, I'd prefer to let old usage data sit in old
monthly database partitions instead of having to carry that old data over
each month forever just to maintain these 'forever' time windows.

In other words, I'm suggesting to change our current array of 7 time
windows [Forever, Y, M, D, h, m, s] to 5 windows [M, D, h, m, s]. Combined
with your slack window proposal and a 2D slack time, we'll be looking at an
array like the following: [[M, M-1], [D, D-1, D-2], [h], [m], [s]]. With a 48h
slack time the array will have 49 hourly entries [h, h-1, h-2, h-3, etc]
instead of one.
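For a 2D slack, the proposed 5-window structure would be shaped like this
(values are made up for illustration):

```python
# Proposed quantity layout: [[M, M-1], [D, D-1, D-2], [h], [m], [s]],
# i.e. five time windows, each an array running from the current window
# backwards through the slack history.
quantity = [
    [130, 4000],     # month: current and previous
    [10, 120, 0],    # day: current plus two days of slack
    [10],            # hour: no history below the 'D' slack precision
    [0],             # minute
    [0],             # second
]
```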

Thoughts?


- Jean-Sebastien

On Sun, Oct 11, 2015 at 6:04 AM, Benjamin Cheng <bscheng(a)us.ibm.com>
wrote:

One of the things that needs to be supported in abacus is the handling of
delayed usage submissions within a particular slack window after the usage
has happened. For example, given a slack window of 48 hours, a service
provider will be able to submit usage back to September 30th on October 2nd.

An idea that we were discussing for this was augmenting the quantity from
an array of numbers/objects to an array of arrays of numbers/objects, and
using an environment variable, currently named SLACK, to hold the
configuration of the slack window. SLACK would follow the format
[0-9]+[YMDhms], giving the width of the slack window and the precision to
which the slack history should be maintained. 2D and 48h both represent the
same amount of time, but 48h will keep track of the history at the hour
level while 2D will only keep it at the day level. If this environment
variable isn't configured, the current idea is to have no slack window as
the default.

The general formula for the length of each array in a time window would be
as follows: 1 (for usage covered in the current window) + (the number of
windows needed to cover the configured slack window for that particular
time window).
E.g.: given a slack of 48h, the year time window would be 1 + 1, month
would be 1 + 1, day would be 1 + 2, and hours would be 1 + 48.
Minutes/seconds would stay at 1.
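That sizing rule can be sketched in Python (an illustrative helper, not the
actual Abacus code; months and years are approximated with fixed 30/365-day
lengths, whereas a calendar-aware implementation might differ):

```python
import math
import re

# Approximate seconds per unit, for illustration only.
UNIT_SECONDS = {'Y': 31536000, 'M': 2592000, 'D': 86400, 'h': 3600, 'm': 60, 's': 1}
ORDER = ['Y', 'M', 'D', 'h', 'm', 's']

def window_lengths(slack):
    """Return {window: array length} for a SLACK string like '48h' or '2D'.

    Each length is 1 (the current window) plus however many previous
    windows are needed to cover the slack, down to the slack's precision
    unit; finer-grained windows keep a single entry.
    """
    match = re.fullmatch(r'([0-9]+)([YMDhms])', slack) if slack else None
    if not match:
        # SLACK unset or malformed: no slack window, a single entry each.
        return {w: 1 for w in ORDER}
    width, unit = int(match.group(1)), match.group(2)
    slack_seconds = width * UNIT_SECONDS[unit]
    return {w: 1 if ORDER.index(w) > ORDER.index(unit)
            else 1 + math.ceil(slack_seconds / UNIT_SECONDS[w])
            for w in ORDER}
```

With SLACK=48h this reproduces the example above: year 1 + 1, month 1 + 1,
day 1 + 2, hours 1 + 48, minutes/seconds 1.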

Thoughts on this idea?


Re: Multi-Line Loggregator events and the new Splunk "HTTP Event Collector" API

Mike Youngstrom <youngm@...>
 

Great! Thanks Jim. Sounds completely reasonable. Hopefully we can keep
this thread moving and help derive some future designs out of it. Would
you prefer to keep this discussion on the mailing list or a github issue?

Mike

On Thu, Oct 15, 2015 at 12:26 PM, Jim Campbell <jcampbell(a)pivotal.io> wrote:

New Loggregator PM chiming in.

This is definitely on the Loggregator roadmap. Only issues are selecting a
design and finalizing (as much as is ever possible) where it lives in the
priority order. We would certainly consider a pull request if it met a
consensus architecture model ala Rohit's concerns.



On Thu, Oct 15, 2015 at 11:19 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

We have thrown around one approach which solves the problem but would
require changes in the runtime. That solution would expose a socket to the
container where the application could emit logs. The application would now
have control over what delimits a message.
Any thoughts on what protocol this socket would listen to? Raw
Dropsonde? Syslog?

I've been thinking about your questions regarding the '\' approach:

As Erik mentioned in the last thread, multi-line logging is something
which the loggregator team would like to solve. But there are a few
questions to answer before we can come up with a clean solution. We want a
design which solves the problem while not breaking existing apps which do
not require this functionality. Before implementing a solution we would
also want to answer if we want to do it for both runtimes or just Diego,
since the way log lines are sent to Metron differs based on the runtime.
I'd be perfectly happy if the solution was only for Diego. We are
surviving today but I think the feasibility of our current solution is
running out.


I guess the expectation is that loggregator would internally remove the
escape character.
I think this would be optimal.


This has performance implications because now some part of loggregator
will need to inspect the log message and coalesce the message with the
succeeding ones. We will need to do this in a way which respects
multi-tenancy. That means now we are storing additional state related to
log lines per app. We will also need to decide how long loggregator needs
to wait for the next lines in a multi-line log, before deciding to send the
line which it received. To me that's not a simple change.
Can you help me understand what you are referring to with "coalescing
messages" and "storing additional state related to log lines per app"? The
way I see it, the current agents buffer until they see a '\n', then they
create an event. With an escaped newline '\\n', the logic would be very
much the same as it is today: buffer until you find an unescaped newline,
then unescape the remaining newlines.
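The buffering described here can be sketched as follows (an illustrative
Python model of the framing, not the Metron agent code):

```python
def split_events(stream):
    """Split a raw log stream into events on unescaped newlines.

    A backslash escapes the next character, so '\\<newline>' stays inside
    the event as a plain newline; a bare '\n' terminates the event.
    """
    events, buf, escaped = [], [], False
    for ch in stream:
        if escaped:
            buf.append(ch)   # keep the escaped character literally
            escaped = False
        elif ch == '\\':
            escaped = True
        elif ch == '\n':
            events.append(''.join(buf))
            buf = []
        else:
            buf.append(ch)
    return events
```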

Seems somewhat straightforward to me. Thoughts on considering a pull
request that does something like this?

Mike


--
Jim Campbell
Product Manager
Pivotal Labs


Re: Loggregator Community Survey #1 - still use the old metron 51160 endpoint?

Mike Youngstrom <youngm@...>
 

We are using it. Though we plan to fix this. If we could get a general
timeline for what release you plan to remove it in then we can plan to
remove it from our code by then.

Mike

On Fri, Sep 18, 2015 at 6:19 PM, Rohit Kumar <rokumar(a)pivotal.io> wrote:

Quick correction: the default legacy metron port is 3456 [1]. The legacy
forwarder only supports log messages.


[1] https://github.com/cloudfoundry/loggregator/blob/develop/bosh/jobs/metron_agent/spec#L28-L30

On Fri, Sep 18, 2015 at 3:20 PM, Erik Jasiak <mjasiak(a)pivotal.io> wrote:

Greetings CF community!

The loggregator team is actively trying to clean up its footprint. To
that - does anyone still use the "old" metron endpoint, on port 51160 by
default? [1]

We are considering turning the old endpoint into an injector[2] to help
shrink the footprint and overhead of core metron, but only if the old
endpoint is still in active usage - so please let us know.

Thanks
Erik
PM - Loggregator


[1]
https://github.com/cloudfoundry/loggregator/blob/develop/src/metron/config/metron.json#L12-L13
[2] https://github.com/cloudfoundry/statsd-injector


Cloud Foundry Java Client V2

Ben Hale <bhale@...>
 

As many of you are aware, the Cloud Foundry Java Client has been a bit neglected lately. There are various reasons for this, but today I’m pleased to announce that we’ve begun a V2 effort and that progress is swift.

We on the Cloud Foundry Java Experience team have been aware for some time that the current implementation of the Java Client is less than ideal. Among the most common complaints were the lack of separation between interface and implementation, the subpar network performance, and the requirement that users understand how to orchestrate common concepts like `push` on their own. (For a more in-depth treatment of issues we identified, please see the stellar work done by Scott Fredrick[1].) V2 aims to address all of these issues with a ground-up redesign of the client.

To address the issue of a lack of separation between interface and implementation, we’ve broken out the API into a project containing no implementation. This project (`cloudfoundry-client`) consists of a collection of interfaces and immutable datatypes. There is only a single dependency, and it isn’t Spring! The intent here was to create an API that could be implemented with multiple strategies, but requiring the minimal amount of code for each of those implementations. The API itself is now reactive (the single dependency is on Reactive Streams, the precursor to reactive support in Java 9) which we believe will more closely align with the trends towards non-blocking network communication. We will be providing a single implementation of this API, based on Spring (`cloudfoundry-client-spring`) but welcome additional implementations. We believe we’ve created a good environment for alternatives and would be happy to hear suggestions on how to improve if that turns out not to be the case.

In V1, the coverage of the APIs[2] was incomplete (about half, if I had to guess). Our commitment is to deliver a complete API and implementation in V2, including all 300+ documented APIs. We’ve observed that this API might not actually be the right level of abstraction for many users though. Knowing that you need to create an application, create a package, stage a droplet, create and start a process, etc. for `push` is quite a burden on many users of the project. So, we’re also providing a `cloudfoundry-operations` project that builds on the `cloudfoundry-client` API but instead of mapping onto the low-level REST API, we’re going to map roughly onto the `cf` CLI API. We suspect that nearly all users will want to `cloudFoundryOperations.push()` instead of the low-level alternative, so both choices are useful. This API and implementation will only depend on `cloudfoundry-client` allowing any implementation of that API to be used. Finally, we’ll be bringing the build-system plugins up to date with the systems that they are built for and ensuring that they cover a breadth of useful functionality at build time.

This leaves the question about what will happen to V1. We have a commitment to fixing up the bugs that have been identified in the code-base, but we’re not going to be doing any work that involves adding APIs. We feel that users who need those APIs are better served moving to V2. I’ll be feeding open issues from the backlog into the V2 stream of work to ensure that we aren’t seeing any resource starvation and you can expect future releases out of the `1.x` branch.

I hope that this comes as welcome news to the community and look forward to the feedback. I highly encourage users to keep an eye on Pivotal Tracker[3] to see our progress and submit requests through GitHub[4].


-Ben Hale
Cloud Foundry Java Experience

[1]: https://docs.google.com/document/d/1Ui-67dBPYoADltErL80xXYEr_INPqdNJG9Va4gPBM-I/edit?usp=sharing
[2]: http://apidocs.cloudfoundry.org/221/
[3]: https://www.pivotaltracker.com/projects/816799
[4]: https://github.com/cloudfoundry/cf-java-client


Re: Multi-Line Loggregator events and the new Splunk "HTTP Event Collector" API

Jim CF Campbell
 

New Loggregator PM chiming in.

This is definitely on the Loggregator roadmap. Only issues are selecting a
design and finalizing (as much as is ever possible) where it lives in the
priority order. We would certainly consider a pull request if it met a
consensus architecture model ala Rohit's concerns.

On Thu, Oct 15, 2015 at 11:19 AM, Mike Youngstrom <youngm(a)gmail.com> wrote:

We have thrown around one approach which solves the problem but would
require changes in the runtime. That solution would expose a socket to the
container where the application could emit logs. The application would now
have control over what delimits a message.
Any thoughts on what protocol this socket would listen to? Raw
Dropsonde? Syslog?

I've been thinking about your questions regarding the '\' approach:

As Erik mentioned in the last thread, multi-line logging is something
which the loggregator team would like to solve. But there are a few
questions to answer before we can come up with a clean solution. We want a
design which solves the problem while not breaking existing apps which do
not require this functionality. Before implementing a solution we would
also want to answer if we want to do it for both runtimes or just Diego,
since the way log lines are sent to Metron differs based on the runtime.
I'd be perfectly happy if the solution was only for Diego. We are
surviving today but I think the feasibility of our current solution is
running out.


I guess the expectation is that loggregator would internally remove the
escape character.
I think this would be optimal.


This has performance implications because now some part of loggregator
will need to inspect the log message and coalesce the message with the
succeeding ones. We will need to do this in a way which respects
multi-tenancy. That means now we are storing additional state related to
log lines per app. We will also need to decide how long loggregator needs
to wait for the next lines in a multi-line log, before deciding to send the
line which it received. To me that's not a simple change.
Can you help me understand what you are referring to with "coalescing
messages" and "storing additional state related to log lines per app"? The
way I see it, the current agents buffer until they see a '\n', then they
create an event. With an escaped newline '\\n', the logic would be very
much the same as it is today: buffer until you find an unescaped newline,
then unescape the remaining newlines.

Seems somewhat straightforward to me. Thoughts on considering a pull
request that does something like this?

Mike

--
Jim Campbell
Product Manager
Pivotal Labs


Re: Recording of this morning's CF CAB call - 14th Oct 2015

Mike Youngstrom <youngm@...>
 

Thanks Phil! I couldn't make the call so this saved my bacon.

Mike

On Thu, Oct 15, 2015 at 5:08 AM, Lomov Alexander <
alexander.lomov(a)altoros.com> wrote:

That’s so handy. Thank you very much, Phil.

On Oct 14, 2015, at 8:14 PM, Whelan, Phil <phillip.whelan(a)hpe.com>
wrote:

Hi,

In case you missed it...

https://www.dropbox.com/s/4ueni9fb7w5tvgm/cab_14th_oct_2015.mp3?dl=0

Agenda / notes….

https://docs.google.com/document/d/1SCOlAquyUmNM-AQnekCOXiwhLs6gveTxAcduvDcW_xI/edit?pli=1#

Chat room...


drnic @ starkandwayne1: Hey hey all

drnic @ starkandwayne1: How do I get a calendar feed that has correct
dial in details?

anonymous1 morphed into goehmen

drnic @ starkandwayne1: Got dial in from agenda

Chris(a)IBM: where is all of this?

Marco N. @ Pivotal: @drnic, do you use google calendar? This should(?)
work?
https://www.google.com/calendar/embed?src=cloudfoundry.org_8ms13q67p9jjeeilng6dosnu50%40group.calendar.google.com&ctz=America/Los_Angeles

anonymous1 morphed into Steven Benario

drnic @ starkandwayne1: marco, thx will try

anonymous1 morphed into Cornelia

Simon Moser (co-chair) morphed into Simon Moser

Simon Moser morphed into Simon Moser @ IBM

anonymous1 morphed into lexsys @ altoros

drnic @ starkandwayne1: marco - that is calendar of PMC meetings

drnic @ starkandwayne1: Can someone please add the .ical feed for
meetings into the top of the agenda doc?

Marco N. @ Pivotal: @drnic, sorry I goofed. Try this:
https://www.google.com/calendar/embed?src=cloudfoundry.org_oedb0ilotg5udspdlv32a5vc78%40group.calendar.google.com&ctz=America/Los_Angeles

anonymous1 morphed into Steve Winkler @ GE

Marco N. @ Pivotal: If you accept, I'll put it in the agenda doc as well

drnic @ starkandwayne1:
https://github.com/cloudfoundry-incubator/diego-ssh

drnic @ starkandwayne1: marco, looks good

Marco N. @ Pivotal: cool

drnic @ starkandwayne1: added

pivotal room: Servcie Core update from Marco is now

drnic @ starkandwayne1: marco - your manifest fixes for registrar errand
will go into upstream broker-registrar repo? being able to select which
plans are public by default would be good

drnic @ starkandwayne1: shannon - cool re router access for brokers

drnic @ starkandwayne1: shannon super cool on multi ports

anonymous1 morphed into Sergey

drnic @ starkandwayne1: What is the URL to CAB call minutes/blog from
September? Which blog is it going to?

pivotal room: @Phil can you update Dr.Nic on last month's blog

anonymous2 morphed into Marco(a)Swisscom

drnic @ starkandwayne1: To get GH issues for conversations, I think ppl
need to "Watch" each repo https://github.com/cloudfoundry/go-buildpack

drnic @ starkandwayne1:
https://github.com/cloudfoundry/go-buildpack/issues/22

drnic @ starkandwayne1:
https://github.com/cloudfoundry/nodejs-buildpack/issues/32

drnic @ starkandwayne1: Are we going to continue using both Consul and
ETCD?

drnic @ starkandwayne1: why?

drnic @ starkandwayne1: remote urls for releases is lovely

drnic @ starkandwayne1: to see added

Marco N. @ Pivotal: Nobody's talking about the elephant on the call

julz: there's an elephant on the call? [
http://webconf.soaphub.org/conf/images/wink.gif]

Chris(a)IBM: aaahhhhhRRRRRrrrraaaahhhh

shinichiNakagawa(a)NTT: mute: *6

Phil Whelan: @drnic I wasn't able to write up the notes last month. I
sent a recording of the call to the cf-dev@ mailing list though

drnic @ starkandwayne1: amit, there is an existing bosh release for
route-registrar
https://github.com/cloudfoundry-community/route-registrar-boshrelease;
perhaps promote it if you want it; else we can drop it

Chris(a)IBM: @phil, we'll let it slip just this once

Chris(a)IBM: [http://webconf.soaphub.org/conf/images/wink.gif]

drnic @ starkandwayne1: @phil oh cool re a recording; will go look

Deepak Vij: Question for Simon Moser regarding project Flintstone that
talks about port to Ruby 2.2.3 & JRuby. It came to our attention that
current CC component does not take advantage of multi-CPU/Cores
environment. This may be due to the fact that Ruby MRI GIL provides
thread-safety guarantees though at the cost of overall performance
degradation. Is this the reason current ports to other Ruby versions are
being looked at? Deepak Vij Huawei

Simon Moser @ IBM: @Deepak Vij: yes, thats one reason, although ruby
2.2.3 and jRuby are two separate issues. 2.2.3 is just "up to date
maintenance", whereas jRuby really is aimed at improving performance

Simon Moser @ IBM: concurrency performance

Simon Moser @ IBM: I'll talk about the results and challenges so far
when I get called out in the call

Deepak Vij: Thanks simon

Phil Whelan: @drnic to save you searching [
http://webconf.soaphub.org/conf/images/wink.gif]
https://www.dropbox.com/s/t8xewz5vw708b5q/cab_9th_sept_2015.mp3?dl=0

drnic @ starkandwayne1: thx

Simon Moser @ IBM: @Jim Campbell: can you confirm that , by changing UPD
to TCP in the backend - that change won't affect any existing doppler
firehose clients - correct ?

drnic @ starkandwayne1: perhaps a shared bosh-lite isn't best
performance testing env

drnic @ starkandwayne1: jruby/jvm will want lots of ram to be happy?

drnic @ starkandwayne1: what performance improvements did upgrading to
mri 2.2.3 give?

Simon Moser @ IBM: we didn't measure around 2.2.3

drnic @ starkandwayne1: drmax - There is an Others section in agenda

Simon Moser @ IBM: and the bosh lite isn't shared, its dedicated for
this effort. We have two equal VMs, oneMRI and one jRuby to do the
comparison, but this I agree this isn't an ideal environment (it was
intended to give us a ballpark figure to make the decision, but we expected
a bigger difference than 20%)

drnic @ starkandwayne1: Simon - I mean the host vm is shared amongst all
warden containers

drnic @ starkandwayne1: garden

drnic @ starkandwayne1: and local networking, which you mentioned

drnic @ starkandwayne1: pretty quick to spin up dedicated CF - perhaps
try terraform-aws-cf-install repo

Simon Moser @ IBM: got it - yes, we are aware of that. We thought it
might be good enough for the ballpark, but maybe that was wrong

drnic @ starkandwayne1: MRI is constantly improving its performance

Simon Moser @ IBM: we should try that terraform thing

drnic @ starkandwayne1: And this comes from the person who promoted
JRuby to the world during my time at Engine Yard

Simon Moser @ IBM: lol

Simon Moser @ IBM: and yes, we're aware of the MRI improvements - like I
said, we expected bigger differences

drnic @ starkandwayne1: perhaps ping Jruby team - they might suggest
some tuning

Simon Moser @ IBM: ok

drnic @ starkandwayne1:
https://blog.starkandwayne.com/2015/10/08/introducing-spruce-a-more-intuitive-spiff/

drnic @ starkandwayne1:
https://blog.starkandwayne.com/2015/10/12/try-out-postgresql-9-5beta-on-cloud-foundry/

drnic @ starkandwayne1:
https://blog.starkandwayne.com/2015/09/29/deploying-subway-broker-with-bosh/

drnic @ starkandwayne1: https://github.com/maximilien/cf-swagger

drnic @ starkandwayne1: http://apidocs.cloudfoundry.org/

drnic @ starkandwayne1: ?

anonymous1 morphed into Jan Dubois

drnic @ starkandwayne1: xoxo all y'all

pivotal room: bye


Re: Multi-Line Loggregator events and the new Splunk "HTTP Event Collector" API

Mike Youngstrom <youngm@...>
 


We have thrown around one approach which solves the problem but would
require changes in the runtime. That solution would expose a socket to the
container where the application could emit logs. The application would now
have control over what delimits a message.
Any thoughts on what protocol this socket would listen to? Raw
Dropsonde? Syslog?

I've been thinking about your questions regarding the '\' approach:

As Erik mentioned in the last thread, multi-line logging is something which
the loggregator team would like to solve. But there are a few questions to
answer before we can come up with a clean solution. We want a design which
solves the problem while not breaking existing apps which do not require
this functionality. Before implementing a solution we would also want to
answer if we want to do it for both runtimes or just Diego, since the way
log lines are sent to Metron differs based on the runtime.
I'd be perfectly happy if the solution was only for Diego. We are
surviving today but I think the feasibility of our current solution is
running out.


I guess the expectation is that loggregator would internally remove the
escape character.
I think this would be optimal.


This has performance implications because now some part of loggregator
will need to inspect the log message and coalesce the message with the
succeeding ones. We will need to do this in a way which respects
multi-tenancy. That means now we are storing additional state related to
log lines per app. We will also need to decide how long loggregator needs
to wait for the next lines in a multi-line log, before deciding to send the
line which it received. To me that's not a simple change.
Can you help me understand what you are referring to with "coalescing
messages" and "storing additional state related to log lines per app"? The
way I see it, the current agents buffer until they see a '\n', then they
create an event. With an escaped newline '\\n', the logic would be very
much the same as it is today: buffer until you find an unescaped newline,
then unescape the remaining newlines.

Seems somewhat straightforward to me. Thoughts on considering a pull
request that does something like this?

Mike


Re: REGARDING_CF_RELEASE_v202

CF Runtime
 

There's no technical reason why it can't be used for deployments, but there
are plenty of bug and security fixes that have gone out in later releases.

Joseph
CF Release Integration Team

On Wed, Oct 14, 2015 at 11:43 PM, Parthiban A <senjiparthi(a)gmail.com> wrote:

Hello All,
I wanted to ask about the stability of Cloud Foundry release v202. Is CF
v202 still valid, and can we use it for deployments? Thanks.

Regards

Parthiban A


Re: "application_version" is changed although source code is not changed.

CF Runtime
 

application_version is mostly an internal cloud foundry concern. The DEAs
broadcast the application version they are running, and the health manager
uses that, together with the version it expects to be running, to converge
the system on the desired state.

Restarting the app is supposed to terminate the old instances and start new
ones; the new instances get a new application version so they are kept
separate from the old ones.

Joseph
CF Release Integration Team

On Thu, Oct 15, 2015 at 2:00 AM, Hiroaki Ukaji <dt3snow.w(a)gmail.com> wrote:

Hi.
I have a question about a json field "application_version" in an
environment
variable "VCAP_APPLICATION".

Intuitively, "application_version" should only change when there is some
update to the application.
Indeed, when an application is terminated for some reason and restarted
automatically, "application_version" remains unchanged.

As far as I can see, however, "application_version" changes when I "cf
restart" my application, even though CCNG should use the same droplet, so
there are no differences from the one before deployment.

If it is possible, could someone please tell me the intention about this
implementation?


Thanks.

Hiroaki UKAJI



--
View this message in context:
http://cf-dev.70369.x6.nabble.com/cf-dev-application-version-is-changed-although-source-code-is-not-changed-tp2262.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: REST API endpoint for accessing application logs

Marco Nicosia
 

You don't need to do the $(cf oauth-token | grep bearer) all within the
curl command.

Perhaps the auth issue will be more apparent if you break it down into each
individual step.

What happens if you run `cf oauth-token` separately? Does the output look
OK? Do you see a line that contains bearer in it? What happens when you
manually put that line into the curl command?

-- Marco N.

On Thursday, October 15, 2015, Ponraj E <ponraj.e(a)gmail.com> wrote:

Hi,

Sorry for the spam.
Latest update: I am able to execute the command. But I don't have
authorization.

I get the message.
You are not authorized. Error: Invalid authorization

--
Ponraj
--
--
Marco Nicosia
Product Manager
Pivotal Software, Inc.
mnicosia(a)pivotal.io
c: 650-796-2948


diego -- apps usage/view container placement?

Tom Sherrod <tom.sherrod@...>
 

Is there an equivalent of xray or ltc visualize for a diego-enabled CF environment?

Thanks,
Tom



Re: REST API endpoint for accessing application logs

Ponraj E
 

Hi,

Sorry for the spam.
Latest update: I am able to execute the command. But I don't have authorization.

I get the message.
You are not authorized. Error: Invalid authorization

--
Ponraj


Re: REST API endpoint for accessing application logs

Ponraj E
 

Hi Rohit,

Now I am able to resolve the host, but the command doesn't seem to work. It says,

curl: option --guid)/recentlogs: is unknown

--
Ponraj


Re: Recording of this morning's CF CAB call - 14th Oct 2015

Alexander Lomov <alexander.lomov@...>
 

That’s so handy. Thank you very much, Phil.

On Oct 14, 2015, at 8:14 PM, Whelan, Phil <phillip.whelan(a)hpe.com> wrote:

Hi,

In case you missed it...

https://www.dropbox.com/s/4ueni9fb7w5tvgm/cab_14th_oct_2015.mp3?dl=0

Agenda / notes….

https://docs.google.com/document/d/1SCOlAquyUmNM-AQnekCOXiwhLs6gveTxAcduvDcW_xI/edit?pli=1#

Chat room...


drnic @ starkandwayne1: Hey hey all

drnic @ starkandwayne1: How do I get a calendar feed that has correct dial in details?

anonymous1 morphed into goehmen

drnic @ starkandwayne1: Got dial in from agenda

Chris(a)IBM: where is all of this?

Marco N. @ Pivotal: @drnic, do you use google calendar? This should(?) work? https://www.google.com/calendar/embed?src=cloudfoundry.org_8ms13q67p9jjeeilng6dosnu50%40group.calendar.google.com&ctz=America/Los_Angeles

anonymous1 morphed into Steven Benario

drnic @ starkandwayne1: marco, thx will try

anonymous1 morphed into Cornelia

Simon Moser (co-chair) morphed into Simon Moser

Simon Moser morphed into Simon Moser @ IBM

anonymous1 morphed into lexsys @ altoros

drnic @ starkandwayne1: marco - that is calendar of PMC meetings

drnic @ starkandwayne1: Can someone please add the .ical feed for meetings into the top of the agenda doc?

Marco N. @ Pivotal: @drnic, sorry I goofed. Try this: https://www.google.com/calendar/embed?src=cloudfoundry.org_oedb0ilotg5udspdlv32a5vc78%40group.calendar.google.com&ctz=America/Los_Angeles

anonymous1 morphed into Steve Winkler @ GE

Marco N. @ Pivotal: If you accept, I'll put it in the agenda doc as well

drnic @ starkandwayne1: https://github.com/cloudfoundry-incubator/diego-ssh

drnic @ starkandwayne1: marco, looks good

Marco N. @ Pivotal: cool

drnic @ starkandwayne1: added

pivotal room: Servcie Core update from Marco is now

drnic @ starkandwayne1: marco - your manifest fixes for registrar errand will go into upstream broker-registrar repo? being able to select which plans are public by default would be good

drnic @ starkandwayne1: shannon - cool re router access for brokers

drnic @ starkandwayne1: shannon super cool on multi ports

anonymous1 morphed into Sergey

drnic @ starkandwayne1: What is the URL to CAB call minutes/blog from September? Which blog is it going to?

pivotal room: @Phil can you update Dr.Nic on last month's blog

anonymous2 morphed into Marco(a)Swisscom

drnic @ starkandwayne1: To get GH issues for conversations, I think ppl need to "Watch" each repo https://github.com/cloudfoundry/go-buildpack

drnic @ starkandwayne1: https://github.com/cloudfoundry/go-buildpack/issues/22

drnic @ starkandwayne1: https://github.com/cloudfoundry/nodejs-buildpack/issues/32

drnic @ starkandwayne1: Are we going to continue using both Consul and ETCD?

drnic @ starkandwayne1: why?

drnic @ starkandwayne1: remote urls for releases is lovely

drnic @ starkandwayne1: to see added

Marco N. @ Pivotal: Nobody's talking about the elephant on the call

julz: there's an elephant on the call? ;)

Chris(a)IBM: aaahhhhhRRRRRrrrraaaahhhh

shinichiNakagawa(a)NTT: mute: *6

Phil Whelan: @drnic I wasn't able to write up the notes last month. I sent a recording of the call to the cf-dev@ mailing list though

drnic @ starkandwayne1: amit, there is an existing bosh release for route-registrar https://github.com/cloudfoundry-community/route-registrar-boshrelease; perhaps promote it if you want it; else we can drop it

Chris(a)IBM: @phil, we'll let it slip just this once

Chris(a)IBM: ;)

drnic @ starkandwayne1: @phil oh cool re a recording; will go look

Deepak Vij: Question for Simon Moser regarding project Flintstone that talks about port to Ruby 2.2.3 & JRuby. It came to our attention that current CC component does not take advantage of multi-CPU/Cores environment. This may be due to the fact that Ruby MRI GIL provides thread-safety guarantees though at the cost of overall performance degradation. Is this the reason current ports to other Ruby versions are being looked at? Deepak Vij Huawei

Simon Moser @ IBM: @Deepak Vij: yes, that's one reason, although Ruby 2.2.3 and JRuby are two separate issues. 2.2.3 is just "up to date maintenance", whereas JRuby really is aimed at improving performance

Simon Moser @ IBM: concurrency performance

Simon Moser @ IBM: I'll talk about the results and challenges so far when I get called out in the call

Deepak Vij: Thanks simon

Phil Whelan: @drnic to save you searching ;) https://www.dropbox.com/s/t8xewz5vw708b5q/cab_9th_sept_2015.mp3?dl=0

drnic @ starkandwayne1: thx

Simon Moser @ IBM: @Jim Campbell: can you confirm that, by changing UDP to TCP in the backend, that change won't affect any existing doppler firehose clients - correct?

drnic @ starkandwayne1: perhaps a shared bosh-lite isn't best performance testing env

drnic @ starkandwayne1: jruby/jvm will want lots of ram to be happy?

drnic @ starkandwayne1: what performance improvements did upgrading to mri 2.2.3 give?

Simon Moser @ IBM: we didn't measure around 2.2.3

drnic @ starkandwayne1: drmax - There is an Others section in agenda

Simon Moser @ IBM: and the bosh lite isn't shared, it's dedicated for this effort. We have two equal VMs, one MRI and one JRuby, to do the comparison, but I agree this isn't an ideal environment (it was intended to give us a ballpark figure to make the decision, but we expected a bigger difference than 20%)

drnic @ starkandwayne1: Simon - I mean the host vm is shared amongst all warden containers

drnic @ starkandwayne1: garden

drnic @ starkandwayne1: and local networking, which you mentioned

drnic @ starkandwayne1: pretty quick to spin up dedicated CF - perhaps try terraform-aws-cf-install repo

Simon Moser @ IBM: got it - yes, we are aware of that. We thought it might be good enough for the ballpark, but maybe that was wrong

drnic @ starkandwayne1: MRI is constantly improving its performance

Simon Moser @ IBM: we should try that terraform thing

drnic @ starkandwayne1: And this comes from the person who promoted JRuby to the world during my time at Engine Yard

Simon Moser @ IBM: lol

Simon Moser @ IBM: and yes, we're aware of the MRI improvements - like I said, we expected bigger differences

drnic @ starkandwayne1: perhaps ping Jruby team - they might suggest some tuning

Simon Moser @ IBM: ok

drnic @ starkandwayne1: https://blog.starkandwayne.com/2015/10/08/introducing-spruce-a-more-intuitive-spiff/

drnic @ starkandwayne1: https://blog.starkandwayne.com/2015/10/12/try-out-postgresql-9-5beta-on-cloud-foundry/

drnic @ starkandwayne1: https://blog.starkandwayne.com/2015/09/29/deploying-subway-broker-with-bosh/

drnic @ starkandwayne1: https://github.com/maximilien/cf-swagger

drnic @ starkandwayne1: http://apidocs.cloudfoundry.org/

drnic @ starkandwayne1: ?

anonymous1 morphed into Jan Dubois

drnic @ starkandwayne1: xoxo all y'all

pivotal room: bye


Re: REST API endpoint for accessing application logs

Ponraj E
 

Hi Rohit,

I am using the cf-release version 211, CC API version 2.28.0 , CLI version-6.12.2.

Also, I replaced "loggregator" with "doppler", but it's still not able to resolve the host. For instance, my host is: https://doppler.xx.xxx.xxxxxxxx.xxx .

Could it be a proxy issue? Is there any other way to reach the doppler endpoint?
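One quick way to tell a DNS failure apart from a proxy problem is to check whether the doppler hostname resolves at all. A minimal sketch (assuming Python is available on the client; the hostname below is a placeholder, not a real endpoint):

```python
import socket

def can_resolve(host):
    """Return True if the hostname resolves via DNS.

    If this returns False, the doppler hostname itself is not
    resolvable and no proxy setting will help; if it returns True,
    the problem is more likely the proxy or TLS configuration."""
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        return False

# Example with a hypothetical doppler host:
# can_resolve("doppler.example.com")
```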


Regards,
Ponraj


Re: considering changing response code on deletes on v2 end points

Michael Fraenkel <michael.fraenkel@...>
 

The status codes are pretty consistent. What is being affected are the
related, aka nested, routes, which always returned a 201 on DELETE. Any
delete on a resource returns a 204 if done immediately, or a 202 if it's
queued. In some cases you have the option of specifying whether the
delete should be queued via the async query parameter.
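As a sketch of the convention described above (not actual Cloud Controller code), a client might map v2 DELETE response codes to outcomes like this:

```python
def interpret_delete(status_code):
    """Map a v2 DELETE response code to an outcome, per the
    convention above. Illustrative only."""
    if status_code == 204:
        return "deleted"            # completed immediately, no content
    if status_code == 202:
        return "queued"             # accepted; deletion runs asynchronously
    if status_code == 201:
        return "deleted (legacy)"   # nested-route deletes historically returned 201
    raise ValueError(f"unexpected DELETE status: {status_code}")

# Where supported, the async query parameter requests the queued
# behavior, e.g. DELETE /v2/service_instances/:guid?async=true
```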

On 10/15/15 1:26 AM, Dieu Cao wrote:
On further review, there's a mix of return codes currently returned on
deletion.
Some end points that return 204's on delete(apps, buildpacks, spaces,
orgs)
Some end points that return 201 (remove a route from an app, remove a
service binding from an app, remove a user from an org or a space)
The new asynchronous service deletion returns a 202.
I agree it's a useful distinction and something to consider if we were
to address this issue on v2 endpoints.


"application_version" is changed although source code is not changed.

Hiroaki Ukaji <dt3snow.w@...>
 

Hi.
I have a question about a json field "application_version" in an environment
variable "VCAP_APPLICATION".

By intuition, "application_version" should only change when there is
some update to an application.
Indeed, when an application is terminated for some reason and restarted
automatically, "application_version" remains unchanged.

As far as I can see, however, "application_version" changes when I "cf
restart" my application, even though CCNG should use the same droplet, so
there are no differences from the one before deployment.

If possible, could someone please tell me the intention behind this
implementation?


Thanks.

Hiroaki UKAJI



--
View this message in context: http://cf-dev.70369.x6.nabble.com/cf-dev-application-version-is-changed-although-source-code-is-not-changed-tp2262.html
Sent from the CF Dev mailing list archive at Nabble.com.


REGARDING_CF_RELEASE_v202

Parthiban Annadurai <senjiparthi@...>
 

Hello All,
I wanted to know about the stability of the Cloud Foundry v202 release. Is CF v202 still valid, and can we still use it for deployments? Thanks.

Regards

Parthiban A


Re: considering changing response code on deletes on v2 end points

Dieu Cao <dcao@...>
 

On further review, there's a mix of return codes currently returned on
deletion.
Some end points that return 204's on delete(apps, buildpacks, spaces, orgs)
Some end points that return 201 (remove a route from an app, remove a
service binding from an app, remove a user from an org or a space)
The new asynchronous service deletion returns a 202.
I agree it's a useful distinction and something to consider if we were to
address this issue on v2 endpoints.


On Wed, Oct 14, 2015 at 9:53 PM, John Feminella <jfeminella(a)pivotal.io>
wrote:

I agree that 201 is a bug; that's not mutually exclusive with being a
breaking API change. It should be fixed, but I'd consider doing that as
changing the API, too.

Also, on 204, have we considered whether returning 202 sometimes might
make sense? For instance, if the resource in question isn't actually
deleted yet and/or we can't guarantee that a successive GET on that
resource will return 404, then IMO we should return HTTP 202 instead. In
that case, we are merely accepting the request to delete, but we can't
guarantee deletion until some future point.

I think this is a useful distinction to make in a distributed system
because it tells other clients whether a successive GET on the same
resource could possibly work. But this also adds complexity that might not
be useful.
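The distinction matters to client code. An illustrative sketch (a hypothetical helper, not part of any CF client library; `session` is any object with requests-style delete()/get() methods):

```python
import time

def delete_resource(session, url, timeout=30.0, interval=1.0):
    """Handle the 202-vs-204 distinction discussed above: on 204
    the resource is gone now; on 202 the delete was merely accepted,
    so poll until a GET on the same resource returns 404."""
    resp = session.delete(url)
    if resp.status_code == 204:
        return True                  # deleted immediately
    if resp.status_code == 202:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if session.get(url).status_code == 404:
                return True          # deletion has settled
            time.sleep(interval)
        return False                 # still not gone within the timeout
    raise RuntimeError(f"unexpected DELETE status: {resp.status_code}")
```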

John Feminella
Advisory Field Engineer
✉ · jfeminella(a)pivotal.io
t · @jxxf
On Oct 14, 2015 21:09, "Dieu Cao" <dcao(a)pivotal.io> wrote:

Hi All,

Most of the Cloud Controller API's v2 end points currently return a 201 on
delete.
I would like to get feedback on how the community would feel if we change
this to return a 204 No Content.

In some respects, this could be considered a backwards breaking change as
this behavior has existed for a while and it's possible some clients have
made accommodations for this bug such that if we were to change the return
code to 204, it might break clients expecting to get a 201.

However, this could also be considered a bug fix.
I lean towards considering this a bug and would like to fix this.

Thoughts? Concerns?
We do plan to address this as we move resources to v3 of the cc api.

-Dieu
CF CAPI PM