
Re: REST API endpoint for accessing application logs

Rohit Kumar
 

You should use the value coming from "doppler_logging_endpoint", not
"logging_endpoint". What version of cf-release are you using? Alternatively,
if you don't have a "doppler_logging_endpoint" in the response from
/v2/info, then use the URL from "logging_endpoint" but replace
"loggregator" with "doppler".

Rohit

On Tue, Oct 13, 2015 at 7:47 AM, Ponraj E <ponraj.e(a)gmail.com> wrote:

Hi Rohit,

Thanks for the reply. I tried this:

curl -k -H "Authorization: $(cf oauth-token | grep bearer)"
https://doppler.bosh-lite.com:443/apps/$(cf app appName --guid)/recentlogs

with my logging_endpoint that I got from cf curl /v2/info (for example:
"https://xxxx:443"), but it says the host could not be resolved.

Ponraj


Re: Initialization script for SSHFS

Daniel Mikusa
 

Awesome! Any chance you could share the final product? Sounds like it
could be useful to others.

Dan

On Tue, Oct 13, 2015 at 3:52 PM, Cory Jett <cory.jett(a)gmail.com> wrote:

Perfect, thanks! I had to do a little hacking to make it work right, since
it is using the SSHFS service (which we aren't using) and it is set up to
use credentials (and not keys), but otherwise it worked great.


Re: [abacus] End-to-end test coverage for secured Abacus

Michael Maximilien
 

+1 as well.

Please make sure to publish the results of the perf tests (over time), and
let's discuss adding stories that anyone who cares can follow.

Various teams in CF have also been looking to add performance goals and
tests as part of their pipelines, e.g., Diego and MEGA, so it might be good
to chat with them if that makes sense.

Best,

max

On Mon, Oct 12, 2015 at 8:08 AM, Jean-Sebastien Delfino
<jsdelfino(a)gmail.com> wrote:
+1 to your proposal. That'll also help us test the performance impacts of
enabling that security.

- Jean-Sebastien

On Sat, Oct 10, 2015 at 8:38 PM, Saravanakumar A Srinivasan <
sasrin(a)us.ibm.com> wrote:


Now that we have secured Abacus using OAuth bearer access token based
authentication (see [1] for more details), we need end-to-end test
coverage that exercises Abacus with security enabled at Travis.

I am looking at adding this test as a variation of one of our existing
end-to-end tests and/or of all of our integration tests, by simply adding
the security environment variables before starting Abacus and the tests.

Since our integration tests start part of the Abacus processing steps
inside the test, it would be simpler to add this coverage at the
integration-test level; however, integration tests do not flow to the next
processing step in the pipeline, so we would not be able to test the flow
from one processing step to another.

The Performance Test and Demo Client, on the other hand, expect a running
Abacus, so any variation requires all of the processes to be started with
the same set of environment variables to form a meaningful test. One
option could be an npm script/command that runs this variation as one step
and is integrated with our continuous integration build at Travis.

Adding a security variation to the Demo Client would raise the entry bar
for anybody trying to get familiar with Abacus. So I am moving towards
having an npm script to: 1. set the security environment variables, 2.
start Abacus, 3. run the Performance Test with an OAuth bearer token, and
4. stop Abacus.
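
A rough sketch of what such a script might look like (the SECURED, JWTKEY
and JWTALGO variable names and the "perf" npm target are assumptions based
on the current cf-abacus setup, not a final design):

# Hypothetical wrapper: run the Performance Test against a secured Abacus.
export SECURED=true           # enable OAuth token validation (assumed flag)
export JWTKEY=encryption-key  # key used to verify bearer tokens (assumed)
export JWTALGO=HS256          # token signing algorithm (assumed)
npm start                     # start the Abacus apps
npm run perf                  # run the Performance Test with a bearer token (assumed target)
npm stop                      # stop the Abacus apps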

That would leave the Demo Client and all integration tests running only
against an unsecured Abacus environment, while the Performance Test could
be run against either a secured or an unsecured Abacus.

Any opinions/comments/thoughts?

[1] https://github.com/cloudfoundry-incubator/cf-abacus/issues/35


Thanks,
Saravanakumar Srinivasan (Assk)


Re: [abacus] Accepting delayed usage within a slack window

Michael Maximilien
 

+1

So long as this does not prevent the DBs from being sharded, even if the
penalty for queries of the distant past is higher (e.g., slower).

And, as we discussed last Friday, this slack value can be fixed for now
and made configurable later in future stories.

I am hoping others who are interested in this feature chime in here as well.

Best,

max

On Mon, Oct 12, 2015 at 7:53 PM, Jean-Sebastien Delfino
<jsdelfino(a)gmail.com> wrote:
The benefit of having the year window is only having to go to a single
database as opposed to potentially 12 databases with month windows

Correct, if your resource instance has incurred usage in the last month;
but if no usage has been submitted for a resource instance since Jan, for
example, then we still need to run a descending query back to Jan, giving
us a max of 12 database partitions to scan for old/inactive resource
instances when we do that in Dec (which is typically when people start to
get more interested in their yearly usage).

but I think that probably doesn't outweigh having to duplicate the
yearly data on every document.

+1, that's what I was thinking.

- Jean-Sebastien

On Mon, Oct 12, 2015 at 5:57 PM, Benjamin Cheng <bscheng(a)us.ibm.com>
wrote:

I'm leaning towards agreeing with you in terms of reducing the number of
windows. I agree with what you've said on forever. The only case I can
point out is years. The benefit of having the year window is only having
to go to a single database as opposed to potentially 12 databases with
month windows, but I think that probably doesn't outweigh having to
duplicate the yearly data on every document.


Re: Initialization script for SSHFS

Cory Jett
 

Perfect, thanks! I had to do a little hacking to make it work right, since it is using the SSHFS service (which we aren't using) and it is set up to use credentials (and not keys), but otherwise it worked great.


CF CAB call for October is Wednesday Oct. 14th, 2015 - final reminder

Michael Maximilien
 

fyi...
 
Final reminder. Please join us tomorrow at 8AM PDT. Call info [1]. If you are at Pivotal, we have Scorpius reserved on the 4th floor.
 
Product managers, please update the agenda [1] with highlights from your team since the last CAB call.
 
All the best,
 
 
Chip, James, and Max
 

----- Original message -----
From: Michael Maximilien/Almaden/IBM
To: cf-dev@...
Cc:
Subject: CF CAB call for October is Wednesday Oct. 14th, 2015 - one week reminder
Date: Tue, Oct 6, 2015 5:22 PM
 
Hi, all,

Quick reminder that the CAB call for October is next week, Wednesday October 14th @ 8a PDT.

Please add any project updates to Agenda here: https://docs.google.com/document/d/1SCOlAquyUmNM-AQnekCOXiwhLs6gveTxAcduvDcW_xI/edit#heading=h.o44xhgvum2we

If you have something else to share, please also add an entry at the end.

Best,

Chip, James, and Max


Re: CF v205 / Pushing an app

Dieu Cao <dcao@...>
 

I'll look into getting that page updated.

One known issue is that the CC doesn't have a concept of a default shared
domain when pushing apps.
The CLI assumes it's the first shared domain that comes back in the list.
Depending on how you're splitting your domain, you may need to create a new
shared domain, delete any other shared domains until your desired shared
domain for apps is first in the `cf domains` list, and create/map routes
with your new shared app domain to existing apps.
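
A hedged sketch of that reordering (the domain names are placeholders;
double-check the `cf domains` output before deleting anything):

cf create-shared-domain apps.example.com        # new shared domain for apps
cf delete-shared-domain other.example.com       # repeat until the desired domain is listed first
cf domains                                      # verify the ordering
cf map-route my-app apps.example.com -n my-app  # re-map an existing app to the new domain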

-Dieu

On Tue, Oct 13, 2015 at 8:03 AM, Sylvain Gibier
<sylvain(a)munichconsulting.de> wrote:
Hi,

Thanks for the clarification - it would be worth updating the
documentation (
http://docs.cloudfoundry.org/deploying/ec2/bootstrap-aws-vpc.html), as I
followed this one and this setting is not mentioned anywhere.

Are there any known issues I should be aware of before trying to split an
existing CF deployment? Does it mean that both the application and system
domains go through the same ELB (cfrouter)? What about existing deployed
applications?

Sylvain


Re: CF v205 / Pushing an app

Sylvain Gibier
 

Hi,

Thanks for the clarification - it would be worth updating the documentation (http://docs.cloudfoundry.org/deploying/ec2/bootstrap-aws-vpc.html), as I followed this one and this setting is not mentioned anywhere.

Are there any known issues I should be aware of before trying to split an existing CF deployment? Does it mean that both the application and system domains go through the same ELB (cfrouter)? What about existing deployed applications?

Sylvain


Re: [abacus] authorization needed to call APIs defined in account management stub

Jean-Sebastien Delfino
 

Hi Bharath,

You decide the scopes yourself as an implementor of that account API, and
as a server for the account and org info resources it returns.

We've been having a related discussion about scopes with Piotr [1], where
he'd like the client to decide the scopes and I'm saying that the resource
owner and server should decide them instead. Well, here you're on the
resource server side, so you get to decide :)

Quoting the OAuth spec for a bit more background [2]:
---
Tokens represent specific scopes and durations of access, granted by the
resource owner, and enforced by the resource server and authorization server
---

In terms of end-to-end flow, your account service is called by the Abacus
reporting service to retrieve the account and org info needed to generate
usage reports, and it is passed the same token that the client requesting
a report passed to the reporting service. So you need that client to pass
a token with an identity and scopes that you can check in your account
service to protect the account and org info that you'll serve.

You can decide how you want to implement this, but if the client presents
a user token, for example, you could check for some scopes in that token;
you may also want to check the roles assigned to that user in the
requested org to control whether or not she's allowed to access the org
info.
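
For instance, a hedged sketch of that role check against the Cloud
Controller v2 API (the endpoint name is to the best of my knowledge, and
the guid is a placeholder; adapt to however your account service talks to
the CC):

# List the roles of all users in the org; before serving the org info,
# check that the caller appears in the list with a suitable role.
cf curl /v2/organizations/<org-guid>/user_roles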

HTH

[1]
http://cf-dev.70369.x6.nabble.com/cf-dev-Re-abacus-Usage-submission-authorization-tt2115.html#none
[2] https://tools.ietf.org/html/rfc6749#section-1.4

- Jean-Sebastien

On Mon, Oct 12, 2015 at 8:32 PM, Bharath Sekar <bsekar14(a)gmail.com> wrote:

Sebastien, the account management stubs define APIs that retrieve the
list of orgs for a given account, or take an org and return the account it
belongs to. The APIs implemented by an account management service will be
authorized by a bearer token. What scopes are required in the token to use
these APIs?


Re: REST API endpoint for accessing application logs

Ponraj E
 

Hi Rohit,

Thanks for the reply. I tried this:

curl -k -H "Authorization: $(cf oauth-token | grep bearer)" https://doppler.bosh-lite.com:443/apps/$(cf app appName --guid)/recentlogs

with my logging_endpoint that I got from cf curl /v2/info (for example: "https://xxxx:443"), but it says the host could not be resolved.

Ponraj


Re: REST API endpoint for accessing application logs

Rohit Kumar
 

The API endpoint to get recent logs is present on the loggregator
traffic controller. You can get the URL for your traffic controller by
running:

cf curl /v2/info | jq .doppler_logging_endpoint

Note that the URL you get back will have a "wss" scheme, but you will
need to use "https" when you issue a recentlogs request.

To get the recent logs for your application, issue a GET request to
https://<traffic controller URL>/apps/<appid>/recentlogs . You will also
need to provide your CF OAuth token in the "Authorization" header of this
request. For example:

curl -k -H "Authorization: $(cf oauth-token | grep bearer)"
https://doppler.bosh-lite.com:443/apps/$(cf app appName --guid)/recentlogs

The response body will contain the log messages in the dropsonde-protocol
<https://github.com/cloudfoundry/dropsonde-protocol> format, so you will
need to parse them. If you are using Go to do this, an easier way would be
to use the NOAA library to get recentlogs
<https://github.com/cloudfoundry/noaa/blob/master/sample/main.go#L17-L29>.

Rohit

On Tue, Oct 13, 2015 at 6:11 AM, Ponraj E <ponraj.e(a)gmail.com> wrote:

Hi,

I want to get the application's dumped (recent) logs from the loggregator,
not the tailing logs.
The CLI provides a command to do it: cf logs APP_NAME --recent displays
all the lines in the Loggregator buffer.

But how do I do it via a REST API endpoint? I had set CF_TRACE=true to
see the REST calls that are fired to get the application logs, but I only
see the GET call that fetches the application details; after that it just
dumps the log.

Thanks for the help.

Regards,
Ponraj


REST API endpoint for accessing application logs

Ponraj E
 

Hi,

I want to get the application's dumped (recent) logs from the loggregator, not the tailing logs.
The CLI provides a command to do it: cf logs APP_NAME --recent displays all the lines in the Loggregator buffer.

But how do I do it via a REST API endpoint? I had set CF_TRACE=true to see the REST calls that are fired to get the application logs, but I only see the GET call that fetches the application details; after that it just dumps the log.

Thanks for the help.

Regards,
Ponraj


Re: Unable to set CF api endpoint

CF Runtime
 

Did you run the "bin/add-route" script from the bosh-lite repo? By default
there is no route to that subnet.
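
For reference, a sketch of what that script does (10.244.0.0/19 and the
192.168.50.4 gateway are the bosh-lite defaults; adjust if you changed
them):

sudo route add -net 10.244.0.0/19 gw 192.168.50.4   # Linux
sudo route add -net 10.244.0.0/19 192.168.50.4      # OS X variant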

Joseph
CF Release Integration Team

On Mon, Oct 12, 2015 at 9:23 AM, Deepak Arn <arn.deepak1(a)gmail.com> wrote:

Hi,

I was able to set up a local cf instance (bosh-lite) on Ubuntu, but during
deployment, when I run the command "cf api --skip-ssl-validation
https://api.10.244.0.34.xip.io", it shows the following error message
every time:
"Error performing request: timeout error, FAILED"
I also tried with "api.bosh-lite.com", which is more reliable than
"xip.io":
cf api --skip-ssl-validation https://api.bosh-lite.com
The error is still the same.

Thanks,


[abacus] authorization needed to call APIs defined in account management stub

Bharath Sekar
 

Sebastien, the account management stubs define APIs that retrieve the list of orgs for a given account, or take an org and return the account it belongs to. The APIs implemented by an account management service will be authorized by a bearer token. What scopes are required in the token to use these APIs?


Re: [abacus] Accepting delayed usage within a slack window

Jean-Sebastien Delfino
 

The benefit of having the year window is only having to go to a single
database as opposed to potentially 12 databases with month windows

Correct, if your resource instance has incurred usage in the last month;
but if no usage has been submitted for a resource instance since Jan, for
example, then we still need to run a descending query back to Jan, giving
us a max of 12 database partitions to scan for old/inactive resource
instances when we do that in Dec (which is typically when people start to
get more interested in their yearly usage).

but I think that probably doesn't outweigh having to duplicate the yearly
data on every document.

+1, that's what I was thinking.

- Jean-Sebastien

On Mon, Oct 12, 2015 at 5:57 PM, Benjamin Cheng <bscheng(a)us.ibm.com> wrote:

I'm leaning towards agreeing with you in terms of reducing the number of
windows. I agree with what you've said on forever. The only case I can
point out is years. The benefit of having the year window is only having
to go to a single database as opposed to potentially 12 databases with
month windows, but I think that probably doesn't outweigh having to
duplicate the yearly data on every document.


Re: [abacus] Usage submission authorization

Jean-Sebastien Delfino
 

Also, resource id is an arbitrary identifier; making it part of the scope
may create quite complex names, e.g.
'abacus.runtimes/node/v12-07.revision-2-buildpack-guid-a3d7ff4d-3cb1-4cc3-a855-fae98e20cf57.write'.

Do you have a specific issue in mind with putting the resource uuid in the
scope name? We have uuids all over the place in CF, in most of the APIs,
the usage docs, etc., so I'm not sure why it'd be a problem to have one
here.

Any naming convention may not be generic enough; for example, my UAA
instance requires the scope names to start with the component using them,
followed by a proper name - 'bss.runtimes.abacus.<resource id>.write'.

Like I said before, if you can't or don't want to use a specific scope per
resource, then you can use abacus.usage.write (with the same
disclaimers/warnings I gave in my previous post.)

I must be missing something though :) ... aren't you happily using
cloud_controller.write, for example (or other similar CF scopes), without
renaming it to <your client component>.cloud_controller.write? Why would
you treat abacus.usage.write differently?

Also, I must admit I find it a bit surprising to have a naming convention
that ties the scope name to the client that presents it. Isn't the scope
typically defined by the owner of the resource it protects rather than by
the client? In that case the owner of the resource is not the client
component... it is the CF Abacus project, hence <abacus>.usage.write.
Wouldn't that make more sense?

Finally, I'm also not quite sure how this would work at all if, for
example, Abacus needs to authorize resource access from multiple clients.
That would have to be really dynamic then, as each new client would
require Abacus to know about a new client-specific naming convention (or
client component name prefix, in the example you gave...)

Now, all that being said, it looks like I'm not really following how
you're envisioning this to work, so do you think you could maybe submit a
pull request showing how you concretely propose to make that dynamic scope
naming work when it includes client component names or follows
client-specific naming conventions?

Thanks!

- Jean-Sebastien

On Mon, Oct 12, 2015 at 5:22 PM, Piotr Przybylski <piotrp(a)us.ibm.com> wrote:

Hi Sebastien,
I am not sure why allowing a resource provider to explicitly specify the
scope with which a particular resource's usage will be submitted is a
problem. Just allowing it to pick a name would not compromise submission
security in any way. It could be done, for example, by adding a scope name
to the resource definition.

Any naming convention may not be generic enough; for example, my UAA
instance requires the scope names to start with the component using them,
followed by a proper name - 'bss.runtimes.abacus.<resource id>.write'.
Also, resource id is an arbitrary identifier; making it part of the scope
may create quite complex names, e.g.
'abacus.runtimes/node/v12-07.revision-2-buildpack-guid-a3d7ff4d-3cb1-4cc3-a855-fae98e20cf57.write'.


Piotr




From: Jean-Sebastien Delfino <jsdelfino(a)gmail.com>
To: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>
Date: 10/09/2015 09:38 PM
Subject: [cf-dev] Re: Re: Re: Re: Re: [abacus] Usage submission
authorization




Hey Piotr,

In some cases it may not be possible or viable to create a new scope for
each resource id, e.g. short-lived resources.

Why wouldn't that be possible? What type of short-lived resources did
you have in mind?

For example, an experimental service version (beta) replaced by a release
version, whose usage may be reported and metered but not necessarily
billed.

OK, that use case makes sense to me. So, your resource is going to be
available for a few hours or days. I'm assuming that to get it on board CF
and meter it with Abacus you're going to run a cf create-service-broker
or cf update-service-broker command, define the resource config specifying
how to meter it, and store that config where your Abacus provisioning
endpoint implementation can retrieve it.

To secure the submission of usage for it, if I understand correctly how
UAA works, you'll then need to do this:
uaac client update <your service provider's client id> --authorities "...
existing permissions... abacus.<your resource id>.write"

That's all...

If that's really too much of a burden (really?) compared to the other
steps, you're basically looking to do *nothing* to secure that resource.
You could just submit usage with the abacus.usage.write scope, but that's
the equivalent of the CF cloud_controller.write scope for Abacus, close to
all powers... I'd probably advise against it, as that's a serious risk,
but that may be what you're looking for.

The scope names may need to follow adopter-specific conventions, so
creating a scope with the predefined name 'abacus.usage....' may not fit
that scheme. Abacus should offer the ability to adjust the scope names,
otherwise submission may not be possible.

These are simple names that we expect in the token used to submit usage.
They're just constants, like the names of our APIs, parameters, options,
and fields in our JSON schemas... basically the contract/interface between
the Abacus user and its implementation. I'm not sure if there's a specific
issue with that Abacus naming convention or if it's just a theoretical
question, but I'll be happy to discuss alternate naming conventions:

Do you have another naming convention in mind that you'd like to use?

Is there a specific issue with abacus.usage.write? Is the 'abacus' part in
the name a problem?

Would you prefer to submit usage with an existing CF scope like
cloud_controller.write or another of these high-power scopes?
(again, I'd advise against it though...)

- Jean-Sebastien

On Thu, Oct 8, 2015 at 5:24 PM, Piotr Przybylski <*piotrp(a)us.ibm.com*
<piotrp(a)us.ibm.com>> wrote:

Hi Sebastien,

>> In some cases it may not be possible or viable to create a new scope
for each resource id, e.g. short-lived resources.

>Why wouldn't that be possible? What type of short-lived resources did
you have in mind?

For example, an experimental service version (beta) replaced by a release
version, whose usage may be reported and metered but not necessarily
billed.
The scope names may need to follow adopter-specific conventions, so
creating a scope with the predefined name 'abacus.usage....' may not fit
that scheme. Abacus should offer the ability to adjust the scope names,
otherwise submission may not be possible.


> Another reason why I'm not sure about short-lived resources is that
although you may decide to stop offering a type of resource at some point,
once you've metered it, and sent a bill for it to a customer, I don't
think you can really 'forget' about its existence anymore... So in that
sense I'm not sure how it can be 'short lived'.
The short-lived resource is only for submission; once it is no longer
offered, its specific scope is not needed. That does not mean erasing the
history of usage.


Piotr





From: Jean-Sebastien Delfino <*jsdelfino(a)gmail.com*
<jsdelfino(a)gmail.com>>
To: "Discussions about Cloud Foundry projects and the system overall."
<*cf-dev(a)lists.cloudfoundry.org* <cf-dev(a)lists.cloudfoundry.org>>
Date: 10/08/2015 11:10 AM
Subject: [cf-dev] Re: Re: Re: [abacus] Usage submission authorization





Hi Piotr,

> In some cases it may not be possible or viable to create a new scope
for each resource id, e.g. short-lived resources.

Why wouldn't that be possible? What type of short-lived resources did
you have in mind?

The typical use case I've seen is for a Cloud platform to decide to
offer a new type of database, analytics, or messaging service, or a new
type of runtime, for example. Before that new resource is offered on the
platform, its resource provider needs to get on board: get a user id, get
auth credentials defined in UAA, etc. You probably also need to define how
you're going to meter that new resource and the pricing for it.

Couldn't a scope be created in UAA at that time, along with all these
other on-boarding steps?

Another reason why I'm not sure about short-lived resources is that
although you may decide to stop offering a type of resource at some point,
once you've metered it, and sent a bill for it to a customer, I don't think
you can really 'forget' about its existence anymore... So in that sense I'm
not sure how it can be 'short lived'.

> Some flexibility would also help to accommodate changes related to
grouping resources by type as discussed in [1].

We discussed two options in [1]:
a) support a resource_type in addition to resource_id, for grouping
many resource_ids under a single type
b) a common resource_id for several resources (something like 'node'
for all your versions of the Node.js buildpack, for example)

Since option (a) is not implemented at this point and issue #38 is
actually assigned to a 'future' milestone, AIUI resource providers need to
use option (b) with a common resource_id for multiple resources. Is
creating a scope for that common id still too much of a burden then?

[1] - *https://github.com/cloudfoundry-incubator/cf-abacus/issues/38*
<https://github.com/cloudfoundry-incubator/cf-abacus/issues/38>

Thoughts?

- Jean-Sebastien

On Wed, Oct 7, 2015 at 5:51 PM, Piotr Przybylski <*piotrp(a)us.ibm.com*
<piotrp(a)us.ibm.com>> wrote:
Hi Sebastien,

> That OAuth token should include:
> - a user id uniquely identifying that resource provider;
> - an OAuth scope named like abacus.usage.<resource_id>.write

What kind of customization of the above do you plan to expose?
In some cases it may not be possible or viable to create a new scope for
each resource id, e.g. short-lived resources. The ability to either
configure the scope to use for validation or to provide a scope 'mapping'
would help adapt it to existing deployments. Some flexibility would also
help to accommodate changes related to grouping resources by type, as
discussed in [1].

[1] -
*https://github.com/cloudfoundry-incubator/cf-abacus/issues/38*
<https://github.com/cloudfoundry-incubator/cf-abacus/issues/38>


Piotr




From: Jean-Sebastien Delfino <*jsdelfino(a)gmail.com*
<jsdelfino(a)gmail.com>>
To: "Discussions about Cloud Foundry projects and the system
overall." <*cf-dev(a)lists.cloudfoundry.org*
<cf-dev(a)lists.cloudfoundry.org>>
Date: 10/07/2015 12:30 AM
Subject: [cf-dev] Re: [abacus] Usage submission authorization




Hi Piotr,

> what kind of authorization is required to submit usage to Abacus?
> Is the oauth token used for submission [1] required to have a
particular scope, specific to a resource or resource provider?

A resource provider is expected to present an OAuth token with
the usage it submits for a (service or runtime) resource.

That OAuth token should include:
- a user id uniquely identifying that resource provider;
- an OAuth scope named like abacus.usage.<resource_id>.write.

The precise naming syntax for that scope may still evolve in the
next few days as we progress with the implementation of user story
101703426 [1].

> Is there a different scope required to submit runtimes usage (like the
cf bridge) versus other services, or is it possible to use a single scope
for all the submissions?

I'd like to handle runtimes and services consistently, as they're
basically just different types of resources, i.e. one scope per 'service'
resource and one scope per 'runtime' resource.

We're still working on the detailed design and implementation,
but I'm not sure we'd want to share scopes across (service and runtime)
resource providers, as that would allow a resource provider to submit
usage for resources owned by another...

@assk / @sasrin, anything I missed? Thoughts?

-- Jean-Sebastien


On Tue, Oct 6, 2015 at 6:29 PM, Piotr Przybylski <
*piotrp(a)us.ibm.com* <piotrp(a)us.ibm.com>> wrote:
Hi,
what kind of authorization is required to submit usage to Abacus?
Is the oauth token used for submission [1] required to have a particular
scope, specific to a resource or resource provider? Is there a different
scope required to submit runtimes usage (like the cf bridge) versus other
services, or is it possible to use a single scope for all the submissions?


[1] -
*https://www.pivotaltracker.com/story/show/101703426*
<https://www.pivotaltracker.com/story/show/101703426>

Piotr





Re: Multi-Line Loggregator events and the new Splunk "HTTP Event Collector" API

Rohit Kumar
 

We have thrown around one approach which solves the problem but would
require changes in the runtime. That solution would expose a socket to the
container where the application could emit logs. The application would
then have control over what delimits a message.

Implementing this solution, though, would need coordination with the
runtime, as the socket would need to be plumbed from the container all the
way to Metron. The messages would also need to be associated with the
application ID when they reach Metron.
Rohit

On Fri, Oct 9, 2015 at 1:53 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Another possible idea: allow an application to send a single log line
with the newline characters escaped, e.g. "Some Log Line1\\nSome Log
Line2". Then Loggregator could either remove the escape on the logging
agent or, if that is too processor-expensive, make it a standard
responsibility of clients to unescape these lines.

I can get fairly far myself with this approach by simply unescaping in our
Splunk processor. The problem is that other aspects of CF don't expect
this, so cf logs doesn't work correctly, for example.
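
A minimal sketch of that client-side unescaping (assuming GNU sed, and
assuming literal "\n" sequences mark the folded line breaks as proposed
above):

# Unfold a log line whose embedded newlines were escaped as the two
# characters backslash-n, restoring the original multi-line message.
echo 'Some Log Line1\nSome Log Line2' | sed 's/\\n/\n/g'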

Mike

On Thu, Oct 8, 2015 at 11:31 AM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Thanks for the response, Rohit. I hope this is the beginning of a good
long discussion on the topic. :)

Before going too deep with the '\' proposal, are you aware whether the
loggregator team considered any other possible ways an application could
hint to the agent that a line should wait for future lines before being
sent as an event? I'm not necessarily in love with the '\' approach, just
throwing an idea out to start a discussion.

Mike

On Wed, Oct 7, 2015 at 7:58 PM, Rohit Kumar <rokumar(a)pivotal.io> wrote:

Hi Mike,

As Erik mentioned in the last thread, multi-line logging is something
the loggregator team would like to solve. But there are a few
questions to answer before we can come up with a clean solution. We want a
design which solves the problem while not breaking existing apps which do
not require this functionality. Before implementing a solution we would
also want to decide whether to do it for both runtimes or just Diego,
since the way log lines are sent to Metron differs based on the runtime.

If we were to implement the solution which you described, where newlines
are escaped with a '\', I guess the expectation is that loggregator would
internally remove the escape character. This has performance implications,
because some part of loggregator would now need to inspect each log
message and coalesce it with the succeeding ones. We would need to do this
in a way which respects multi-tenancy, which means storing additional
state about log lines per app. We would also need to decide how long
loggregator should wait for the next lines of a multi-line log before
sending the line it has already received. To me that's not a simple
change.

I am happy to continue this discussion and hear your thoughts on the
existing proposal or any other design alternatives.

Thanks,
Rohit

On Wed, Oct 7, 2015 at 10:45 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Splunk recently released its new "HTTP Event Collector", which greatly
simplifies how data can be streamed directly into Splunk without going
through an intermediate log file. It would be great to use this to
efficiently stream Loggregator information into Splunk.

For the most part loggregator appears to be very compatible with this
API, with the exception of multi-line log messages.

The problem is that, using this API, Splunk takes every request as an
independent Splunk event. This completely eliminates anything Splunk did
in the past to attempt to detect multi-line log messages.

Wouldn't it be great if a single loggregator event could contain
multiple log lines? Then these events could be easily streamed directly
into Splunk using this new API, multiple lines preserved and all.

The previous attempt to bring up this topic fizzled [0]. With a new
LAMB PM coming, I thought I'd ask my previous questions again.

In the previous thread [0] Erik mentioned a lot of work that he thought
would lead to multi-line log messages. But it seems to me that the main
issue is simply: how can a client actually communicate a multi-line event
to an agent? I don't think this issue is about breaking apart and then
combining log events, but rather: how can I, as a client, hint to
loggregator that it should include multiple lines in a single event?

Could it be as simple as escaping newlines with a '\' to notify the
agent not to end that event?

This problem cannot be solved without some help from loggregator.

Mike

[0]
https://lists.cloudfoundry.org/archives/list/cf-dev%40lists.cloudfoundry.org/thread/O6NDVGV44IBMVKZQXWOFIYOIC6CDU27G/


Re: Unable to set CF api endpoint

Yitao Jiang
 

Setting CF_TRACE to true and pasting the detailed logs here would be more
helpful.
BTW, have you enabled the route to the bosh-lite VMs?
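
For example (api.bosh-lite.com is the default bosh-lite system domain):

CF_TRACE=true cf api --skip-ssl-validation https://api.bosh-lite.com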

On Tue, Oct 13, 2015 at 12:23 AM, Deepak Arn <arn.deepak1(a)gmail.com> wrote:

Hi,

I was able to set up a local cf instance (bosh-lite) on Ubuntu, but during
deployment, when I run the command "cf api --skip-ssl-validation
https://api.10.244.0.34.xip.io", it shows the following error message
every time:
"Error performing request: timeout error, FAILED"
I also tried with "api.bosh-lite.com", which is more reliable than
"xip.io":
cf api --skip-ssl-validation https://api.bosh-lite.com
The error is still the same.

Thanks,
--

Regards,

Yitao
jiangyt.github.io


Re: [abacus] Accepting delayed usage within a slack window

Benjamin Cheng
 

I'm leaning towards agreeing with you in terms of reducing the number of windows. I agree with what you've said on forever. The only case I can point out is years. The benefit of having the year window is only having to go to a single database as opposed to potentially 12 databases with month windows, but I think that probably doesn't outweigh having to duplicate the yearly data on every document.
