Re: Unable to set CF api endpoint

CF Runtime
 

Did you run the "bin/add-route" script from the bosh-lite repo? By default
that subnet does not have a route for it.
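(For reference, on Linux that script essentially adds a static route for
the bosh-lite container subnet through the Vagrant VM, along the lines of
"sudo ip route add 10.244.0.0/19 via 192.168.50.4", assuming the default
bosh-lite VM IP of 192.168.50.4.)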

Joseph
CF Release Integration Team

On Mon, Oct 12, 2015 at 9:23 AM, Deepak Arn <arn.deepak1(a)gmail.com> wrote:

Hi,

I was able to set up a local CF instance (bosh-lite) on Ubuntu, but after
deployment, when I run the command "cf api --skip-ssl-validation
https://api.10.244.0.34.xip.io", it shows the following error message
every time:
"Error performing request: timeout error, FAILED"
I also tried with "api.bosh-lite.com", which is more reliable than
"xip.io":
cf api --skip-ssl-validation https://api.bosh-lite.com
The error is still the same.

Thanks,



[abacus] authorization needed to call APIs defined in account management stub

Bharath Sekar
 

Sebastien, the account management stubs define APIs that retrieve the list of orgs for a given account, and that map an org back to the account it belongs to. The APIs implemented by an account management service will be authorized by a bearer token. What scopes are required in the token to use these APIs?


Re: [abacus] Accepting delayed usage within a slack window

Jean-Sebastien Delfino
 

The benefit of having the year window is only having to go to a single
database as opposed to a potential 12 databases with month windows

Correct, as long as your resource instance has incurred usage in the last
month. But if no usage has been submitted for a resource instance since
Jan, for example, then we still need to run a descending query back to Jan,
giving us a max of 12 database partitions to scan for old/inactive resource
instances when we do that in Dec (which is typically when people start to
get more interested in their yearly usage.)

but I think that probably doesn't outweigh having to duplicate the yearly
data on every document.

+1, that's what I was thinking.

- Jean-Sebastien

On Mon, Oct 12, 2015 at 5:57 PM, Benjamin Cheng <bscheng(a)us.ibm.com> wrote:

I'm leaning towards agreeing with you in terms of reducing the number of
windows. I agree with what you've said on forever. The only case I can
point out is in years. The benefit of having the year window is only having
to go to a single database as opposed to a potential 12 databases with
month windows, but I think that probably doesn't outweigh having to
duplicate the yearly data on every document.


Re: [abacus] Usage submission authorization

Jean-Sebastien Delfino
 

Also, resource id is an arbitrary identifier; making it part of the scope
may create quite complex names, e.g.
'abacus.runtimes/node/v12-07.revision-2-buildpack-guid-a3d7ff4d-3cb1-4cc3-a855-fae98e20cf57.write'.

Do you have a specific issue in mind with putting the resource uuid in the
scope name? We have uuids all over the place in CF, in most of the APIs,
the usage docs etc so I'm not sure why it'd be a problem to have one here.

Any naming convention may not be generic enough; for example, my UAA
instance requires the scope names to start with the component using them,
followed by a proper name - 'bss.runtimes.abacus.<resource id>.write'.

Like I said before, if you can't or don't want to use a specific scope per
resource, then you can use abacus.usage.write (with the same
disclaimers/warnings I gave in my previous post.)

I must be missing something though :) ... aren't you happily using
cloud_controller.write for example (or similar other CF scopes) without
renaming it to <your client component>.cloud_controller.write? Why would
you treat abacus.usage.write differently?

Also, I must admit I find it a bit surprising that a naming convention
would tie the scope name to the client that presents it. Isn't the scope
typically defined by the owner of the resource it protects instead of the
client? In that case the owner of the resource is not the client
component... it is the CF abacus project, hence <abacus>.usage.write.
Wouldn't that make more sense?

Finally, I'm also not quite sure how this would work at all if, for
example, Abacus needs to authorize resource access from multiple clients.
That would have to be really dynamic then, as each new client would require
Abacus to know about a new client-specific naming convention (or client
component name prefix in the example you gave...)

Now, all that being said, it looks like I'm not really following how
you're envisioning this to work, so do you think you could maybe submit a
pull request showing how you concretely propose to make that dynamic scope
naming work when it includes client component names, or follows client
component specific naming conventions?

Thanks!

- Jean-Sebastien

On Mon, Oct 12, 2015 at 5:22 PM, Piotr Przybylski <piotrp(a)us.ibm.com> wrote:

Hi Sebastien,
I am not sure why allowing a resource provider to explicitly specify the
scope with which a particular resource's usage will be submitted is a
problem. Just allowing it to pick a name would not compromise submission
security in any way. It could be done, for example, by adding the scope
name to the resource definition.

Any naming convention may not be generic enough; for example, my UAA
instance requires the scope names to start with the component using them,
followed by a proper name - 'bss.runtimes.abacus.<resource id>.write'.
Also, resource id is an arbitrary identifier; making it part of the scope
may create quite complex names, e.g.
'abacus.runtimes/node/v12-07.revision-2-buildpack-guid-a3d7ff4d-3cb1-4cc3-a855-fae98e20cf57.write'.


Piotr




From: Jean-Sebastien Delfino <jsdelfino(a)gmail.com>
To: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>
Date: 10/09/2015 09:38 PM
Subject: [cf-dev] Re: Re: Re: Re: Re: [abacus] Usage submission
authorization

------------------------------



Hey Piotr,

In some cases it may not be possible or viable to create a new scope for
each resource id, e.g. short-lived resources.

Why wouldn't that be possible? What type of short-lived resources did
you have in mind?

For example, an experimental (beta) service version replaced by a release
version, whose usage may be reported and metered but not necessarily
billed.

OK, that use case makes sense to me. So, your resource is going to be
available for a few hours or days. I'm assuming that to get it on board CF
and meter it with Abacus you're going to run a cf create-service-broker
command or cf update-service-broker, define the resource config specifying
how to meter it, and store that config where your Abacus provisioning
endpoint implementation can retrieve it.

To secure the submission of usage for it, if I understand correctly how
UAA works, you'll then need to do this:
uaac client update <your service provider's client id> --authorities "...
existing permissions... abacus.<your resource id>.write"

That's all...

If that's really too much of a burden (really?) compared to the other
steps, you're basically looking to do *nothing* to secure that resource.
You could just submit usage with the abacus.usage.write scope, but that's
the equivalent of the CF cloud_controller.write scope for Abacus, close to
all powers... I'd probably advise against it as that's a serious risk but
that may be what you're looking for.

The scope names may need to follow adopter-specific conventions, so
creating a scope with the predefined name 'abacus.usage....' may not fit
that scheme. Abacus should offer the ability to adjust the scope names,
otherwise submission may not be possible.

These are simple names that we expect in the token used to submit usage.
They're just constants like the names of our APIs, parameters, options,
fields in our JSON schemas... basically the contract/interface between the
Abacus user and its implementation. Not sure if there's a specific issue
with that abacus naming convention or if it's just a theoretical question,
but I'll be happy to discuss alternate naming conventions:

Do you have another naming convention in mind that you'd like to use?

Is there a specific issue with abacus.usage.write? Is the 'abacus' part in
the name a problem?

Would you prefer to submit usage with an existing CF scope like
cloud_controller.write or another of these high power scopes?
(again, I'd advise against it though...)

- Jean-Sebastien

On Thu, Oct 8, 2015 at 5:24 PM, Piotr Przybylski <piotrp(a)us.ibm.com> wrote:

Hi Sebastien,

>> In some cases it may not be possible or viable to create a new scope
for each resource id, e.g. short-lived resources.

>Why wouldn't that be possible? What type of short-lived resources did
you have in mind?

For example, an experimental (beta) service version replaced by a release
version, whose usage may be reported and metered but not necessarily
billed.
The scope names may need to follow adopter-specific conventions, so
creating a scope with the predefined name 'abacus.usage....' may not fit
that scheme. Abacus should offer the ability to adjust the scope names,
otherwise submission may not be possible.


> Another reason why I'm not sure about short lived resources, is that
although you may decide to stop offering a type of resource at some point,
once you've metered it, and sent a bill for it to a customer, I don't
think you can really 'forget' about its existence anymore... So in that
sense I'm not sure how it can be 'short lived'.

The short-lived resource is only for submission; once it is not
offered, its specific scope is not needed. That does not mean erasing
the history of its usage.


Piotr





From: Jean-Sebastien Delfino <jsdelfino(a)gmail.com>
To: "Discussions about Cloud Foundry projects and the system overall."
<cf-dev(a)lists.cloudfoundry.org>
Date: 10/08/2015 11:10 AM
Subject: [cf-dev] Re: Re: Re: [abacus] Usage submission authorization


------------------------------



Hi Piotr,

> In some cases it may not be possible or viable to create a new scope
for each resource id, e.g. short-lived resources.

Why wouldn't that be possible? What type of short-lived resources did
you have in mind?

The typical use case I've seen is for a Cloud platform to decide to
offer a new type of database or analytics or messaging service, or a new
type of runtime for example. Before that new resource is offered on the
platform, their resource provider needs to get on board, get a user id,
auth credentials defined in UAA etc... You probably also need to define how
you're going to meter that new resource and the pricing for it.

Couldn't a scope be created in UAA at that time, along with all these
other onboarding steps?

Another reason why I'm not sure about short lived resources, is that
although you may decide to stop offering a type of resource at some point,
once you've metered it, and sent a bill for it to a customer, I don't think
you can really 'forget' about its existence anymore... So in that sense I'm
not sure how it can be 'short lived'.

> Some flexibility would also help to accommodate changes related to
grouping resources by type as discussed in [1].

We discussed two options in [1]:
a) support a resource_type in addition to resource_id for grouping
many resource_ids under a single type
b) a common resource_id for several resources (something like 'node'
for all your versions of Node.js build packs for example)

Since option (a) is not implemented at this point and Issue #38 is
actually assigned to a 'future' milestone, AIUI resource providers need to
use option (b) with a common resource_id for multiple resources. Is
creating a scope for that common id still too much of a burden then?

[1] - https://github.com/cloudfoundry-incubator/cf-abacus/issues/38

Thoughts?

- Jean-Sebastien

On Wed, Oct 7, 2015 at 5:51 PM, Piotr Przybylski <piotrp(a)us.ibm.com> wrote:
Hi Sebastien,

> That OAuth token should include:
> - a user id uniquely identifying that resource provider;
> - an OAuth scope named like abacus.usage.<resource_id>.write

What kind of customization of the above do you plan to expose?
In some cases it may not be possible or viable to create a new scope for
each resource id, e.g. short-lived resources. The ability to either
configure the scope to use for validation or provide a scope 'mapping'
would help adapt it to existing deployments. Some flexibility would also
help to accommodate changes related to grouping resources by type as
discussed in [1].

[1] - https://github.com/cloudfoundry-incubator/cf-abacus/issues/38


Piotr




From: Jean-Sebastien Delfino <jsdelfino(a)gmail.com>
To: "Discussions about Cloud Foundry projects and the system overall."
<cf-dev(a)lists.cloudfoundry.org>
Date: 10/07/2015 12:30 AM
Subject: [cf-dev] Re: [abacus] Usage submission authorization
------------------------------




Hi Piotr,

> what kind of authorization is required to submit usage to
Abacus ?
> Is the oauth token used for submission [1] required to have a
particular scope, specific to a resource or resource provider ?

A resource provider is expected to present an OAuth token with
the usage it submits for a (service or runtime) resource.

That OAuth token should include:
- a user id uniquely identifying that resource provider;
- an OAuth scope named like abacus.usage.<resource_id>.write.

The precise naming syntax for that scope may still evolve in the
next few days as we progress with the implementation of user story
101703426 [1].
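
To illustrate, here is a minimal sketch in JavaScript of the kind of check
the usage submission endpoint could perform, assuming an already decoded
and verified token payload with a scope array (the field names and
validation details here are illustrative only, not the final
implementation):

    // does this decoded token authorize writing usage for resourceId?
    var authorized = function(token, resourceId) {
      return Array.isArray(token.scope) &&
        token.scope.indexOf('abacus.usage.' + resourceId + '.write') >= 0;
    };

    authorized({ scope: ['abacus.usage.node.write'] }, 'node'); // => true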

> Is there a different scope required to submit runtimes usage
(like cf bridge) versus other services, or is it possible to use a single
scope for all the submissions

I'd like to handle runtimes and services consistently as they're
basically just different types of resources, i.e. one scope per 'service'
resource, one scope per 'runtime' resource.

We're still working on the detailed design and implementation,
but I'm not sure we'd want to share scopes across (service and runtime)
resource providers as that'd allow a resource provider to submit usage for
resources owned by another...

@assk / @sasrin, anything I missed? Thoughts?

-- Jean-Sebastien


On Tue, Oct 6, 2015 at 6:29 PM, Piotr Przybylski <piotrp(a)us.ibm.com> wrote:
Hi,
what kind of authorization is required to submit usage to Abacus ?
Is the oauth token used for submission [1] required to have a
particular scope, specific to a resource or resource provider ? Is
there a different scope required to submit runtimes usage (like cf bridge)
versus other services, or is it possible to use a single scope for all the
submissions ?


[1] - https://www.pivotaltracker.com/story/show/101703426

Piotr





Re: Multi-Line Loggregator events and the new Splunk "HTTP Event Collector" API

Rohit Kumar
 

We have thrown around one approach which solves the problem but would
require changes in the runtime. That solution would expose a socket to the
container where the application could emit logs. The application would now
have control over what delimits a message.

The implementation of this solution though would need coordination with the
runtime, as the socket would need to be plumbed from the container all the
way to metron. The messages would also need to be associated with the
application ID when they reach metron.

Rohit

On Fri, Oct 9, 2015 at 1:53 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Another possible idea: allow an application to send a single log line
with the newline characters escaped, e.g. "Some Log Line1\\nSome Log Line2".
Then Loggregator could either remove the escape in the logging agent or,
if that is too processor-expensive, make it a standard responsibility of
clients to unescape these lines.

I can get fairly far myself with this approach by simply unescaping in our
Splunk processor. The problem is other aspects of CF don't expect this so
cf logs doesn't work correctly for example.
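
For illustration, the unescaping a log consumer would do could be as
simple as this minimal JavaScript sketch; it is naive in that it ignores
escaping of literal backslashes in app output:

    // turn literal "\n" sequences back into real newlines
    var unescapeLine = function(line) {
      return line.split('\\n').join('\n');
    };

A real scheme would also need to escape backslashes themselves so that a
literal "\n" in a log message survives the round trip.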

Mike

On Thu, Oct 8, 2015 at 11:31 AM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Thanks for the response Rohit. I hope this is the beginning of a good
long discussion on the topic. :)

Before going too deep with the '\' proposal, are you aware whether the
loggregator team considered any other possible ways an application could
hint to the agent that this line should wait for future lines before
sending the event? I'm not necessarily in love with the '\' approach; I'm
just throwing an idea out to start a discussion.

Mike

On Wed, Oct 7, 2015 at 7:58 PM, Rohit Kumar <rokumar(a)pivotal.io> wrote:

Hi Mike,

As Erik mentioned in the last thread, multi-line logging is something
which the loggregator team would like to solve. But there are a few
questions to answer before we can come up with a clean solution. We want a
design which solves the problem while not breaking existing apps which do
not require this functionality. Before implementing a solution we would
also want to decide whether we want to do it for both runtimes or just
Diego, since the way log lines are sent to Metron differs based on the
runtime.

If we were to implement the solution which you described, where newlines
are escaped with a '\', I guess the expectation is that loggregator would
internally remove the escape character. This has performance implications
because now some part of loggregator will need to inspect the log message
and coalesce the message with the succeeding ones. We will need to do this
in a way which respects multi-tenancy. That means now we are storing
additional state related to log lines per app. We will also need to decide
how long loggregator needs to wait for the next lines in a multi-line log,
before deciding to send the line which it received. To me that's not a
simple change.

I am happy to continue this discussion and hear your thoughts on the
existing proposal or any other design alternatives.

Thanks,
Rohit

On Wed, Oct 7, 2015 at 10:45 AM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Splunk recently released its new "HTTP Event Collector" that greatly
simplifies how data can be streamed directly into Splunk without going to
an intermediate log file. It would be great to utilize this to efficiently
stream Loggregator information into Splunk.

For the most part loggregator appears to be very compatible with this
API with the exception of multi-line log messages.

The problem is that, using this API, Splunk treats every request as an
independent Splunk event. This completely eliminates anything Splunk did
in the past to attempt to detect multi-line log messages.

Wouldn't it be great if a single loggregator event could contain
multiple log lines? Then these events could be easily streamed directly
into Splunk using this new API, multiple lines preserved and all.

The previous attempt to bring up this topic fizzled [0]. With a new
LAMB PM coming I thought I'd ask my previous questions again.

In the previous thread [0] Erik mentioned a lot of work that he thought
would lead to multi-line log messages. But it seems to me that the main
issue is simply: how can a client actually communicate a multi-line event
to an agent? I don't think this issue is about breaking apart and then
combining log events, but rather: how can I, as a client, hint to
loggregator that it should include multiple lines in a single event?

Could it be as simple as escaping new lines with a '\' to notify the
agent to not end that event?

This problem cannot be solved without some help from loggregator.

Mike

[0]
https://lists.cloudfoundry.org/archives/list/cf-dev%40lists.cloudfoundry.org/thread/O6NDVGV44IBMVKZQXWOFIYOIC6CDU27G/


Re: Unable to set CF api endpoint

Yitao Jiang
 

Setting CF_TRACE to true and pasting the detailed logs here would be more
helpful.
BTW, have you enabled the route to the bosh-lite VMs?
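(For example, re-running the failing command with tracing enabled:

    CF_TRACE=true cf api --skip-ssl-validation https://api.bosh-lite.com

will print the raw HTTP requests and responses the cf CLI makes.)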

On Tue, Oct 13, 2015 at 12:23 AM, Deepak Arn <arn.deepak1(a)gmail.com> wrote:

Hi,

I was able to set up a local CF instance (bosh-lite) on Ubuntu, but after
deployment, when I run the command "cf api --skip-ssl-validation
https://api.10.244.0.34.xip.io", it shows the following error message
every time:
"Error performing request: timeout error, FAILED"
I also tried with "api.bosh-lite.com", which is more reliable than
"xip.io":
cf api --skip-ssl-validation https://api.bosh-lite.com
The error is still the same.

Thanks,
--

Regards,

Yitao
jiangyt.github.io


Re: [abacus] Accepting delayed usage within a slack window

Benjamin Cheng
 

I'm leaning towards agreeing with you in terms of reducing the number of windows. I agree with what you've said on forever. The only case I can point out is in years. The benefit of having the year window is only having to go to a single database as opposed to a potential 12 databases with month windows, but I think that probably doesn't outweigh having to duplicate the yearly data on every document.


Re: [abacus] Usage submission authorization

Piotr Przybylski <piotrp@...>
 

Hi Sebastien,
I am not sure why allowing a resource provider to explicitly specify the
scope with which a particular resource's usage will be submitted is a
problem. Just allowing it to pick a name would not compromise submission
security in any way. It could be done, for example, by adding the scope
name to the resource definition.

Any naming convention may not be generic enough; for example, my UAA
instance requires the scope names to start with the component using them,
followed by a proper name - 'bss.runtimes.abacus.<resource id>.write'.
Also, resource id is an arbitrary identifier; making it part of the scope
may create quite complex names, e.g.
'abacus.runtimes/node/v12-07.revision-2-buildpack-guid-a3d7ff4d-3cb1-4cc3-a855-fae98e20cf57.write'.


Piotr





From: Jean-Sebastien Delfino <jsdelfino(a)gmail.com>
To: "Discussions about Cloud Foundry projects and the system
overall." <cf-dev(a)lists.cloudfoundry.org>
Date: 10/09/2015 09:38 PM
Subject: [cf-dev] Re: Re: Re: Re: Re: [abacus] Usage submission
authorization



Hey Piotr,

In some cases it may not be possible or viable to create a new scope for
each resource id, e.g. short-lived resources.

Why wouldn't that be possible? What type of short-lived resources did
you have in mind?

For example, an experimental (beta) service version replaced by a release
version, whose usage may be reported and metered but not necessarily
billed.

OK, that use case makes sense to me. So, your resource is going to be
available for a few hours or days. I'm assuming that to get it on board CF
and meter it with Abacus you're going to run a cf create-service-broker
command or cf update-service-broker, define the resource config specifying
how to meter it, and store that config where your Abacus provisioning
endpoint implementation can retrieve it.

To secure the submission of usage for it, if I understand correctly how UAA
works, you'll then need to do this:
uaac client update <your service provider's client id> --authorities "...
existing permissions... abacus.<your resource id>.write"

That's all...

If that's really too much of a burden (really?) compared to the other
steps, you're basically looking to do *nothing* to secure that resource.
You could just submit usage with the abacus.usage.write scope, but that's
the equivalent of the CF cloud_controller.write scope for Abacus, close to
all powers... I'd probably advise against it as that's a serious risk but
that may be what you're looking for.

The scope names may need to follow adopter-specific conventions, so
creating a scope with the predefined name 'abacus.usage....' may not fit
that scheme. Abacus should offer the ability to adjust the scope names,
otherwise submission may not be possible.

These are simple names that we expect in the token used to submit usage.
They're just constants like the names of our APIs, parameters, options,
fields in our JSON schemas... basically the contract/interface between the
Abacus user and its implementation. Not sure if there's a specific issue
with that abacus naming convention or if it's just a theoretical question,
but I'll be happy to discuss alternate naming conventions:

Do you have another naming convention in mind that you'd like to use?

Is there a specific issue with abacus.usage.write? Is the 'abacus' part in
the name a problem?

Would you prefer to submit usage with an existing CF scope like
cloud_controller.write or another of these high power scopes?
(again, I'd advise against it though...)

- Jean-Sebastien

On Thu, Oct 8, 2015 at 5:24 PM, Piotr Przybylski <piotrp(a)us.ibm.com> wrote:
Hi Sebastien,

>> In some cases it may not be possible or viable to create a new scope
for each resource id, e.g. short-lived resources.

>Why wouldn't that be possible? What type of short-lived resources did
you have in mind?

For example, an experimental (beta) service version replaced by a release
version, whose usage may be reported and metered but not necessarily
billed.
The scope names may need to follow adopter-specific conventions, so
creating a scope with the predefined name 'abacus.usage....' may not fit
that scheme. Abacus should offer the ability to adjust the scope names,
otherwise submission may not be possible.


> Another reason why I'm not sure about short lived resources, is that
although you may decide to stop offering a type of resource at some point,
once you've metered it, and sent a bill for it to a customer, I don't
think you can really 'forget' about its existence anymore... So in that
sense I'm not sure how it can be 'short lived'.

The short-lived resource is only for submission; once it is not offered,
its specific scope is not needed. That does not mean erasing the history
of its usage.


Piotr





From: Jean-Sebastien Delfino <jsdelfino(a)gmail.com>
To: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>
Date: 10/08/2015 11:10 AM
Subject: [cf-dev] Re: Re: Re: [abacus] Usage submission authorization






Hi Piotr,

> In some cases it may not be possible or viable to create a new scope for
each resource id, e.g. short-lived resources.

Why wouldn't that be possible? What type of short-lived resources did you
have in mind?

The typical use case I've seen is for a Cloud platform to decide to offer
a new type of database or analytics or messaging service, or a new type
of runtime for example. Before that new resource is offered on the
platform, their resource provider needs to get on board, get a user id,
auth credentials defined in UAA etc... You probably also need to define
how you're going to meter that new resource and the pricing for it.

Couldn't a scope be created in UAA at that time, along with all these
other onboarding steps?

Another reason why I'm not sure about short lived resources, is that
although you may decide to stop offering a type of resource at some point,
once you've metered it, and sent a bill for it to a customer, I don't
think you can really 'forget' about its existence anymore... So in that
sense I'm not sure how it can be 'short lived'.

> Some flexibility would also help to accommodate changes related to
grouping resources by type as discussed in [1].

We discussed two options in [1]:
a) support a resource_type in addition to resource_id for grouping many
resource_ids under a single type
b) a common resource_id for several resources (something like 'node' for
all your versions of Node.js build packs for example)

Since option (a) is not implemented at this point and Issue #38 is
actually assigned to a 'future' milestone, AIUI resource providers need
to use option (b) with a common resource_id for multiple resources. Is
creating a scope for that common id still too much of a burden then?

[1] - https://github.com/cloudfoundry-incubator/cf-abacus/issues/38

Thoughts?

- Jean-Sebastien

On Wed, Oct 7, 2015 at 5:51 PM, Piotr Przybylski <piotrp(a)us.ibm.com>
wrote:
Hi Sebastien,

> That OAuth token should include:
> - a user id uniquely identifying that resource provider;
> - an OAuth scope named like abacus.usage.<resource_id>.write

What kind of customization of the above do you plan to expose? In
some cases it may not be possible or viable to create a new scope for
each resource id, e.g. short-lived resources. The ability to either
configure the scope to use for validation or provide a scope 'mapping'
would help adapt it to existing deployments. Some flexibility
would also help to accommodate changes related to grouping
resources by type as discussed in [1].

[1] - https://github.com/cloudfoundry-incubator/cf-abacus/issues/38


Piotr




From: Jean-Sebastien Delfino <jsdelfino(a)gmail.com>
To: "Discussions about Cloud Foundry projects and the system
overall." <cf-dev(a)lists.cloudfoundry.org>
Date: 10/07/2015 12:30 AM
Subject: [cf-dev] Re: [abacus] Usage submission authorization




Hi Piotr,

> what kind of authorization is required to submit usage to
Abacus ?
> Is the oauth token used for submission [1] required to have a
particular scope, specific to a resource or resource provider ?

A resource provider is expected to present an OAuth token with the
usage it submits for a (service or runtime) resource.

That OAuth token should include:
- a user id uniquely identifying that resource provider;
- an OAuth scope named like abacus.usage.<resource_id>.write.

The precise naming syntax for that scope may still evolve in the
next few days as we progress with the implementation of user story
101703426 [1].

> Is there a different scope required to submit runtimes usage
(like cf bridge) versus other services, or is it possible to use a
single scope for all the submissions

I'd like to handle runtimes and services consistently as they're
basically just different types of resources, i.e. one scope per
'service' resource, one scope per 'runtime' resource.

We're still working on the detailed design and implementation, but
I'm not sure we'd want to share scopes across (service and runtime)
resource providers as that'd allow a resource provider to submit
usage for resources owned by another...

@assk / @sasrin, anything I missed? Thoughts?

-- Jean-Sebastien


On Tue, Oct 6, 2015 at 6:29 PM, Piotr Przybylski <piotrp(a)us.ibm.com> wrote:
Hi,
what kind of authorization is required to submit usage to Abacus ?
Is the oauth token used for submission [1] required to have a
particular scope, specific to a resource or resource provider ?
Is there a different scope required to submit runtimes usage
(like cf bridge) versus other services, or is it possible to
use a single scope for all the submissions ?


[1] -
https://www.pivotaltracker.com/story/show/101703426

Piotr


Re: Deploy cf: Error filling in template `config.json.erb' for `consul_z1/0'

Amit Kumar Gupta
 

Hey James,

Thanks for raising this issue. We do have a systemic issue with the fact
that our documentation is not versioned. The docs can become out of date
as things change; and since they're not versioned, once they're updated to
reflect the changes, there is no longer working documentation for older
versions of our software. I will re-raise this issue with our docs team.

As for the correct way to fill out the example stub to deploy cf using the
scripts/generate_deployment_manifest script in the cf-release repo, what
version of cf-release were you deploying, and what SHA of the repo do you
have checked out? Also, do you intend to have unencrypted traffic with
consul, and to the HA proxy? For instance, if for consul you intend to
have unencrypted traffic, then rather than setting the encrypt_keys
property, you should set the require_ssl property (to false). Otherwise,
you should set encrypt_keys to a non-empty array, and also provide several
SSL certs and keys.

Best,
Amit

On Mon, Oct 12, 2015 at 7:56 AM, James Leavers <james(a)cloudhelix.io> wrote:

Hi,

As the error message implies, there was something missing - I thought I
had double-checked everything before posting, but as usual, this was not
the case :-)

The encrypt_keys property had to be added - it is blank by default:

properties:
  consul:
    encrypt_keys: []

In case anyone else comes across this thread, the following also need to
be added:

** Environment config

Ensure that you have an environment name in your cf-stub.yml, e.g.

meta:
  environment: my-cf-env

Otherwise you will end up with this:

Failed: Error filling in template `metron_agent.json.erb'

https://github.com/cloudfoundry/cf-release/issues/690

** HAProxy config

By default it will be generated like this:

properties:
  ha_proxy: null

Which will result in this:

Error filling in template `haproxy.config.erb' for `ha_proxy_z1/0'

You can add a certificate as follows:

ha_proxy:
  ssl_pem: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
    -----BEGIN PRIVATE KEY-----
    -----END PRIVATE KEY-----

Regards
James


Re: `cf app` output

CF Runtime
 

Hey Dan,

Have you been able to confirm that this occurs on CF v219 and not on
earlier versions of CF? What's the earliest version of CF where you're
seeing this behavior?

Thanks,
Natalie & Mikhail

OSS Release & Integration

On Mon, Oct 5, 2015 at 2:08 PM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:

Thanks for those links. I'm using the DEA for now and it looks like
nothing has changed there for quite some time (Apr 2014). I'll see if I
can debug a bit more to see what's happening.

Thanks,

Dan

On Mon, Oct 5, 2015 at 4:36 PM, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

In the DEA:
https://github.com/cloudfoundry/dea_ng/blob/4b1a50ae5598b0c70cb3e5895ed800e0cff37722/lib/dea/stat_collector.rb#L93

In Diego/Garden:

https://github.com/cloudfoundry-incubator/garden-linux/blob/master/linux_container/metrics.go#L129

If you're using the DEA, stats come back in response to the
`find.droplet` message while in Diego they get updated on an interval.

On Mon, Oct 5, 2015 at 4:20 PM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:

Has something changed recently with the output of `cf app` and how it
reports memory usage?

I was deploying apps to CF v219 and noticed that the output of `cf app`
is not what I'm used to seeing.

I'm deploying a Java app with a 512M memory limit. The Java build pack
is setting the initial heap and metaspace sizes to be 373M and 64M
(-XX:MaxMetaspaceSize=64M -Xss995K -Xmx382293K -Xms382293K
-XX:MetaspaceSize=64M), which is a total of 437M. Since that's just the
heap and metaspace, I'd expect the app to start out using at least that
much memory.

The report from `cf app` is showing 366.3M of 512M.

Does anyone know how this is presently being calculated / have a link to
the source code where this is being calculated?

Thanks,

Dan


--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: [Proposal] Wanted a Babysitter for my application. ;-)

Amit Kumar Gupta
 

Does this mean Diego will be capable of provisioning workloads other than
garden-linux containers?

It can schedule to anything that runs something that acts like a "Rep" in
front of it: https://github.com/cloudfoundry-incubator/rep. This is how
Diego is able to support garden-windows running .NET apps, for example.

The idea was that the ‘babysitter’ should be able to fire up a HTTP POST
to such a system automatically when any of its threshold value such as cpu,
memory, disk exceeds, other times it simply collects and sends a
consolidated metrics report once a minute.

That's a great idea. I think there are a couple of distinct ideas here. The
first is custom healthchecking: some basic test you want to constantly run
against your app instances, shutting an instance down if the healthchecks
fail. Diego allows you to define such healthchecks; it's just not exposed
up at the level of the CC. I think there will need to be some discovery to
determine the main use cases and find an "opinionated" way to expose
this functionality through the API; exposing Diego's full Executor Action
DSL is probably not desirable.

The second idea here is the idea of streaming metrics out of the system,
and being able to set up monitoring, alerting, and hooks (e.g. autoscaling)
around these metrics. Things like request latency can be gleaned from the
gorouter, and memory and CPU metrics are available through the loggregator
firehose, as I believe this is how cf app is able to report those values.
I imagine this actually has a very large scope: gather metrics from
throughout the system relevant to your app instances, stream it out, have a
system ingest it, provide visualizations of the data, allow setting
alerting thresholds, allow configuring hooks like to kill an app instance,
scale an app up, roll back to a previous version of the droplet, etc, and
then building the piece that actually can convert these hooks into requests
that the CC will honour. This is more than what your original proposal
described, but I think it's sort of the logical conclusion. Quite
valuable, but also quite large in scope.

Best,
Amit

On Tue, Oct 6, 2015 at 1:21 AM, Dhilip Kumar S <dhilip.kumar.s(a)huawei.com>
wrote:

Thanks again Amit for the clarification on the executor part.



“The plan is that this can be solved within Diego's abstractions of tasks
and LRPs”

Does this mean Diego will be capable of provisioning workloads other than
garden-linux containers?



I'll add just one little point to clarify, but I'm not pushing on the idea
itself.



I should have been even more explicit. When I mentioned HM, what I meant
was the subsystem responsible for managing an application’s health; I did
not intend to point at HM9000 specifically. The idea was that the
‘babysitter’ should be able to fire an HTTP POST to such a system
automatically when any of its threshold values such as cpu, memory, or
disk is exceeded; at other times it simply collects and sends a
consolidated metrics report once a minute.



Say, for instance, a given app exceeds 90% CPU; then the babysitter
automatically sends a POST message to a specified (discoverable) endpoint:

{
  "GUID": "ABCD1234",
  "Time": "<time stamp>",
  "Index": 3,
  "CPU": 95,
  "Mem": 50,
  "Disk": 50
}



Regards,

Dhilip



*From:* Amit Gupta [mailto:agupta(a)pivotal.io]
*Sent:* Tuesday, October 06, 2015 12:00 PM
*To:* Discussions about Cloud Foundry projects and the system overall.
*Cc:* Vinay Murudi; Krishna M Kumar; Liangbiao; Jianhui Zhou; Srinivasch
ch
*Subject:* [cf-dev] Re: Re: Re: Re: [Proposal] Wanted a Babysitter for my
application. ;-)



Hey Dhilip,



To clarify, we don't have a copy of the executor for each garden-linux
container. A single "cell" VM has one executor, one garden-linux, and many
containers. The executor runs one monitor or "babysitter" process for each
container.



What would be the benefit of running a monitor inside external systems
which report to the HM? With Diego, there is no HM, so who exactly would
it report to? And whatever it reports to, what can it do with that
information? The Diego system components can take action when hearing
about a failed container running in a Diego cell, it can schedule the
process to be restarted, or whatever the right action may be given the
crash restart policies. How can Diego or any Cloud Foundry component take
action against an external system?



I think you highlight something valuable, that it would be nice for the
platform to support running things other than apps, e.g. a MySQL database.
The plan is that this can be solved within Diego's abstractions of tasks
and LRPs, and it's true for perhaps most stateless non-app workloads, but
things like databases are still hard, due to persistence being a hard
problem. If you have not already seen it, Ted Young and Caleb Miles talk
at the last CF Summit about this problem is a good one to watch:
https://www.youtube.com/watch?v=3Ut6Qdd2FHY



Not all containers run sshd. Typically, the CC is responsible for
requesting that an LRP have SSH access enabled; it's not conflated with
Diego's responsibilities. It's also optional for the CC: users and space
managers can opt to disable SSH (actually, I believe it's disabled by
default).



Cheers,

Amit



On Mon, Oct 5, 2015 at 10:39 PM, Dhilip Kumar S <dhilip.kumar.s(a)huawei.com>
wrote:

Hi All,



Thanks for the response.



Hi Amit,



Thanks for the info; I hadn’t realized that we run a monitor for
each ‘garden-linux’ container that we launch. We do have a ‘push’ based
container metrics collection and monitoring mechanism already in place
then. In this case I can think of only the following benefits:



1) This can become a unified health check approach, as this binary
can be packed within the container; it can even run inside a
docker container of an external system and keep pushing to a common HM. Or
we could run it in the same VM as a MySQL instance to get its health.

2) This can be part of the sshd, as we are running a daemon in
every container anyway.



Of course, the original intention is to see if we could slightly alter the
way Diego’s monitoring/metrics collection works. If this is already
implemented then I do not see a point in pursuing this idea.



Thanks for your time CF,

Dhilip





*From:* Amit Gupta [mailto:agupta(a)pivotal.io]
*Sent:* Tuesday, October 06, 2015 1:05 AM
*To:* Discussions about Cloud Foundry projects and the system overall.
*Cc:* Vinay Murudi; Krishna M Kumar; Liangbiao; Srinivasch ch
*Subject:* [cf-dev] Re: Re: [Proposal] Wanted a Babysitter for my
application. ;-)



I'm not sure I see the benefit here.



Diego, for instance, runs a customizable babysitter alongside each app
instance, and kills the container if the babysitter says things are going
bad. This triggers an event that the system can react to, and the system
also polls for container states because events can always be lost.



One thing to note is that in this case, "the system" is the Executor, not HM9k
(which doesn't exist in Diego), or the Converger (Diego's equivalent of
HM9k), or Firehose or Cloud Controller which are very far removed from the
container backend. In Diego, the pieces are loosely coupled, events/data
in the system don't have to be sent through several layers of abstraction.



Best,

Amit



On Mon, Oct 5, 2015 at 10:09 AM, Curry, Matthew <Matt.Curry(a)allstate.com>
wrote:

We have been talking about something similar that we have labeled the
Angry Farmer. I do not think you would need an agent. The firehose and
cloud controller should have everything that you need. Also an agent does
not give you the ability to really measure the performance of instances
relative to each other which is a good indicator of bad state or
performance.



Matt



*From: *Dhilip Kumar S <dhilip.kumar.s(a)huawei.com>
*Reply-To: *"Discussions about Cloud Foundry projects and the system
overall." <cf-dev(a)lists.cloudfoundry.org>
*Date: *Monday, October 5, 2015 at 9:31 AM
*To: *"Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>
*Cc: *Vinay Murudi <vinaym(a)huawei.com>, Krishna M Kumar <
krishna.m.kumar(a)huawei.com>, Liangbiao <rexxar.liang(a)huawei.com>,
Srinivasch ch <srinivasch.ch(a)huawei.com>
*Subject: *[cf-dev] [Proposal] Wanted a Babysitter for my application. ;-)



Hello CF,

Greetings from Huawei. Here is a quick idea that came to mind
recently. Honestly, we did not spend enormous time brainstorming this
internally, but we thought we could go ahead and ask the community
directly. It would be a great help to know if such an idea has already been
considered and dropped by the community.

*Proposal Motivations*

The way the health-check process is currently performed in Cloud Foundry
is to run a command
<https://github.com/cloudfoundry-incubator/healthcheck>
periodically; if the exit status is non-zero then it is assumed that the
application is non-responsive. We periodically repeat this process for all
the applications, which means that we frequently scan the entire data
center to find one or a few misbehaving apps.

Why can’t we change the way the health check is done? Can it reflect the
real world? Hospitals don’t periodically scan the entire community
looking for sick residents. Similarly, why can’t we report problems as and
when they occur – just like in the real world?

How about a lightweight process that constantly monitors the application’s
health and reports when an app is down or non-responsive, etc.? In a huge
datacenter thousands of apps are hosted, and each app has many instances.
Wouldn’t it be better to have the individual app/container come and tell
us (the health manager) that there is a problem, instead of scanning all
of them? *Push versus pull model* - something like a babysitter residing
within each container and taking care of the ‘app’ hosted by our
customers.

*How to accomplish this?*

Our proposal is for BabySitter (BS) – an agent residing within each
container, optionally deployed using app-specific configuration. This
agent sends the collected metrics to the health monitor in case of any
anomaly, along with periodic time-series information etc. The agent
remembers the configured threshold values that each app should not exceed,
and automatically triggers an alarm to the health monitor in case of any
threshold violation. The alarm could even be sent many times a second to
the health monitor depending on the severity of the event, while the
regular periodic ‘time-series’ information could be collected every second
but sent once a minute to the HM. The challenge is to design the ‘bs’
application to be as lightweight as possible.
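
A minimal sketch of the babysitter loop, in JavaScript purely for
illustration; the sample() and post() helpers, the thresholds, and the
report format are all hypothetical:

    // sample metrics every second; alarm immediately on a threshold
    // violation, otherwise send a consolidated report once a minute
    var thresholds = { CPU: 90, Mem: 90, Disk: 90 };
    var batch = [];
    setInterval(function() {
      var m = sample(); // hypothetical: returns { CPU: .., Mem: .., Disk: .. }
      batch.push(m);
      var violation = Object.keys(thresholds).some(function(k) {
        return m[k] > thresholds[k];
      });
      if (violation) post({ alarm: true, metrics: m }); // hypothetical POST to the HM
    }, 1000);
    setInterval(function() {
      post({ metrics: batch });
      batch = [];
    }, 60000);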

This is our primary idea. We also thought it would make sense to club a
few more capabilities into the babysitter, like sshd (as a goroutine) and
a fileserver (as a goroutine), but before we bore you with all those
details, we first want to understand what the CF community thinks about
this initial idea.

Thanks in advance,

Dhilip







Buildpacks PMC - 2015-10-12 Notes

Mike Dalessio
 

Hello CF community,

Here is an update from the Buildpacks PMC, as of 2015-10-12. The full notes
are available at

https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md

but I've reproduced them below in their entirety for your convenience.

*Please note* that there are three proposals below for which we're
requesting comments from the community. We invite comments and concerns, as
well as alternative solutions, in the Github Issues that are linked to
below.

Cheers,
-mike

-----

Buildpacks PMC Notes as of 2015-10-12

The last set of notes were sent out on 2015-09-09
<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#table-of-contents>
Table of Contents

1. Update on Stacks
2. Update on Buildpacks
   1. General
   2. java-buildpack
   3. go-buildpack
   4. php-buildpack
   5. python-buildpack
   6. nodejs-buildpack

<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#stacks>
Stacks
<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#releases>
Releases

Released cflinuxfs2 1.9.0
<https://github.com/cloudfoundry/stacks/releases/tag/1.9.0> and 1.8.0
<https://github.com/cloudfoundry/stacks/releases/tag/1.8.0>, which address
USN-2740-1 <http://www.ubuntu.com/usn/usn-2740-1>, "ICU vulnerabilities"
and USN-2739-1 <http://www.ubuntu.com/usn/usn-2739-1>, "FreeType
vulnerabilities".
<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#buildpacks>
Buildpacks
<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#general>
General

The Buildpacks team has been doing further experimentation with
*extensibility* for both developers and operators. Results can be seen in
the public Tracker under the "architecture" epic
<https://www.pivotaltracker.com/epic/show/1898760>.

The Buildpacks team has also been working on a feature track to allow end
users of the core buildpacks to *verify the origin* of all binaries
vendored in the buildpack. This "chain of custody" track is intended to
allow security-minded CF operators to trust the buildpack binaries being
run in their deployment (and to regenerate the binaries themselves as
needed). This work can be viewed in the public Tracker under the "chain of
custody" epic <https://www.pivotaltracker.com/epic/show/2077742>.
<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#java-buildpack>
java-buildpack
<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#releases-1>
Releases

Released java-buildpack 3.2
<https://github.com/cloudfoundry/java-buildpack/releases/tag/v3.2> and 3.3
<https://github.com/cloudfoundry/java-buildpack/releases/tag/v3.3>.

These buildpacks add Luna HA support, as well as deliver improvements to
the memory calculator.

Please view the release notes
<https://github.com/cloudfoundry/java-buildpack/releases> for full details.
<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#go-buildpack>
go-buildpack
<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#releases-2>
Releases

Released go-buildpack v1.6.1
<https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.6.1> and
v1.6.2 <https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.6.2>.

These releases add support for golang 1.5.1 and golang 1.4.3, which
addresses a number of CVEs in 1.4.2 and earlier. Support for golang 1.4.1
was dropped.

Please view the release notes
<https://github.com/cloudfoundry/go-buildpack/releases> for full details.
<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#proposals>
Proposals

*Proposal:* It is being proposed to drop support for golang 1.2.x and
1.3.x. We're using a Github Issue as an RFC, so please comment here:

https://github.com/cloudfoundry/go-buildpack/issues/22

<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#php-buildpack>
php-buildpack
<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#releases-3>
Releases

Released v4.1.5
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v4.1.5>, v4.1.4
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v4.1.4>, and
v4.1.3 <https://github.com/cloudfoundry/php-buildpack/releases/tag/v4.1.3>
which:

- updates to nginx 1.9.5,
- updates to PHP 5.6.14, 5.6.13, 5.5.30, 5.5.29, and 5.4.45.
- addresses USN-2740-1 "ICU vulnerabilities" (in combination with rootfs
1.9.0)

Please view the release notes
<https://github.com/cloudfoundry/php-buildpack/releases> for full details.
<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#end-of-life>
End of Life

*Please note that PHP 5.4 reached "End of Life" on 2015-09-14*. We intend
to remove support for this version of PHP in the next release of the
php-buildpack.
<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#proposals-1>
Proposals

*Proposal:* It is being proposed to drop support for nginx 1.6 (but keeping
support for nginx 1.8 and 1.9). We're using a Github Issue as an RFC, so
please comment here:

https://github.com/cloudfoundry/php-buildpack/issues/109

<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#python-buildpack>
python-buildpack
<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#releases-4>
Releases

Released v1.5.1
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.5.1> which
adds support for Python 3.5.0.

Please view the release notes
<https://github.com/cloudfoundry/python-buildpack/releases> for full
details.
<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#nodejs-buildpack>
nodejs-buildpack
<https://github.com/cloudfoundry/pmc-notes/blob/master/Buildpacks/2015-10-12-buildpacks.md#proposals-2>
Proposals

*Proposal:* It is being proposed to add Node 4.x support by statically
linking against openssl 1.0.2. This introduces a requirement to restage
applications running Node 4.x to address openssl CVEs. (Currently, only a
rootfs update is needed for this scenario.) We're using a Github Issue as
an RFC, so please comment here:

https://github.com/cloudfoundry/nodejs-buildpack/issues/32


Unable to set CF api endpoint

Deepak Arn <arn.deepak1@...>
 

Hi,

I was able to set up a local CF instance (bosh-lite) on Ubuntu, but after deployment, when I run the command "cf api --skip-ssl-validation https://api.10.244.0.34.xip.io", it shows the following error message every time:
"Error performing request: timeout error, FAILED"
I also tried with "api.bosh-lite.com", which is more reliable than "xip.io":
cf api --skip-ssl-validation https://api.bosh-lite.com
The error is still the same.

Thanks,


Re: [abacus] Accepting delayed usage within a slack window

Jean-Sebastien Delfino
 

Hi Ben,

That makes sense to me. What you've described will enable refinements of
accumulated usage for a month as we continue to receive delayed usage
during the first few days of the next month.

To illustrate this with an example: with a 48h time window, on Sept 30 you
can retrieve the Sept 30 usage doc and find 'provisional' usage for Sept in
the 'month time window', not including usage that has not yet been
submitted to Abacus. Later, on Oct 2nd, you can retrieve the Oct 2nd usage
doc and find
the 'final usage' for Sept in the 'month - 1 time window'. I think this is
better than waiting for Oct 2nd to 'close the Sept window', as our users
typically want to see both their *real time* usage for Sept before Oct 2nd
and their final usage later once it has settled for sure.

I also like that with that approach you don't need to go back to your Sept
30 usage doc to patch it up with delayed usage, as that way you're also
keeping a record of the Sept usage that was really known to us on Sept 30.

Another interesting aspect of this is that the history you're going to
maintain will allow us to write 'marker' usage docs when we transition from
one time window to another. Since a usage doc contains both the usage for
the day and the previous day, you can write the first document you process
each day, as a marker, in a reporting db and that'll give you an easy and
efficient way to retrieve the accumulated usage for the previous day. For
example, to retrieve the usage accumulated at the end of Oct 11, just
retrieve the 'marker' usage doc for Oct 12 and get the usage in its 'day -
1 time window'. That could help us implement the kind of query that Georgi
mentioned on the chat last week when he was looking for an efficient way to
retrieve daily usage for all the days of the month.
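
To illustrate with a minimal JavaScript sketch, assuming hypothetical
field names where a usage doc carries a windows array ordered
[month, day, hour, minute, second] and each window's entries are ordered
[current, -1, -2, ...]:

    // usage accumulated at the end of Oct 11, read from the Oct 12
    // 'marker' usage doc: the 'day - 1' entry of the day window
    var DAY = 1;
    var endOfOct11 = markerDocForOct12.windows[DAY][1];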

Finally, looking at the array of numbers/objects currently used to maintain
our time windows, I'm wondering if keeping the 'yearly' and 'forever' usage
time windows is not a bit overkill (and could actually become a problem).

That data is going to be duplicated in all individual usage docs for little
value IMO as the yearly usage at least is easy to reconstruct at reporting
time with a query over 12 monthly usage docs. Also, maintaining that
'forever' usage will require us to keep usage docs around for resource
instances that may have been deleted a long time ago, and will complicate our
database partitioning scheme as these old resource instances will cause the
databases to grow forever. So, I'd prefer to let old usage data sit in old
monthly database partitions instead of having to carry that old data over
each month forever just to maintain these 'forever' time windows.

In other words, I'm suggesting to change our current array of 7 time
windows [Forever, Y, M, D, h, m, s] to 5 windows [M, D, h, m, s]. Combined
with your slack window proposal, with a 2D slack time we'll be looking at
an array like follows: [[M, M-1], [D, D-1, D-2], [h], [m], [s]]. With a 48h
slack time the array will have 49 hourly entries [h, h-1, h-2, h-3, etc]
instead of one.
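
For illustration, a minimal JavaScript sketch of the windows array a
usage doc could carry under this proposal with a 2D slack window (the
quantities and field names are purely illustrative):

    // [month, day, hour, minute, second] windows, oldest entries last
    var windows = [
      [{ quantity: 250 }, { quantity: 3100 }],                  // [M, M-1]
      [{ quantity: 10 }, { quantity: 120 }, { quantity: 115 }], // [D, D-1, D-2]
      [{ quantity: 1 }],                                        // [h]
      [{ quantity: 1 }],                                        // [m]
      [{ quantity: 1 }]                                         // [s]
    ];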

Thoughts?


- Jean-Sebastien

On Sun, Oct 11, 2015 at 6:04 AM, Benjamin Cheng <bscheng(a)us.ibm.com> wrote:

One of the things that need to be supported in abacus is the handling of
delayed usage submissions within a particular slack window after the usage
has happened. For example, given a slack window of 48 hours, a service
provider will be able to submit usage back to September 30th on October 2nd.

An idea that we were discussing for this was augmenting the quantity
from an array of numbers/objects to an array of arrays of numbers/objects
and using an environmental variable that is currently going to be called
SLACK to hold the configuration of the slack window. SLACK would follow a
format of [0-9]+[YMDhms] with the width of the slack window and to what
precision the slack window should be maintained. 2D and 48h both are the
same time, but 48h will keep track of the history to the hour level while
2D will only keep it to the day level. If this environment variable isn't
configured, the current idea is to have no slack window as the default.

The general formula for the length of each array in a time window would be
as follows: 1 (for usage covered in the current window) + (the number of
windows needed to cover the configured slack window at that particular
time granularity).
I.e., given a slack of 48h: the year time window would be 1 + 1, month
would be 1 + 1, day would be 1 + 2, hours would be 1 + 48, and
minutes/seconds would stay at 1.
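
A minimal sketch of that computation in JavaScript, assuming the SLACK
value has already been parsed into a width and a unit, and approximating
a month as its 744-hour (31-day) maximum:

    // number of entries in each time window's array, given a slack window
    var order = ['s', 'm', 'h', 'D', 'M', 'Y'];
    var hoursPer = { s: 1 / 3600, m: 1 / 60, h: 1, D: 24, M: 744, Y: 8760 };
    var windowLength = function(win, slack) {
      // windows finer than the slack unit keep no history
      if (order.indexOf(win) < order.indexOf(slack.unit)) return 1;
      var slackHours = slack.width * hoursPer[slack.unit];
      return 1 + Math.ceil(slackHours / hoursPer[win]);
    };

    // SLACK=48h: Y -> 2, M -> 2, D -> 3, h -> 49, m and s -> 1
    windowLength('D', { width: 48, unit: 'h' }); // => 3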

Thoughts on this idea?


Re: [abacus] End-to-end test coverage for secured Abacus

Jean-Sebastien Delfino
 

+1 to your proposal. That'll also help us test the performance impacts of
enabling that security.

- Jean-Sebastien

On Sat, Oct 10, 2015 at 8:38 PM, Saravanakumar A Srinivasan <
sasrin(a)us.ibm.com> wrote:


Now that we have secured Abacus using OAuth bearer access token based
authentication (see [1] for more details), we need end-to-end test
coverage that tests Abacus with security enabled at Travis.

I am looking at adding this test as a variation of one of our existing
end-to-end tests and/or all of our integration tests, by simply adding
security environment variables before starting Abacus and the tests.

Since our integration tests start a subset of the Abacus processing steps
inside the test, it would be simpler to cover this in the integration
tests; however, integration tests do not flow to the next processing step
in the pipeline, so we would not be able to test the flow from one
processing step to another.

The Performance Test and Demo Client, on the other hand, expect a running
Abacus, so any variation requires all of them to be started with the same
set of environment variables to complete a meaningful test. One option
could be an npm script/command that runs this variation as one step and is
integrated with our continuous integration build at Travis.

Adding a security variation to Demo Client would raise the entry bar for
anybody trying to get familiar with Abacus. So I am moving towards having
an npm script to 1. set security environment variables, 2. start Abacus,
3. run Performance Test with OAuth bearer token, and 4. stop Abacus.

That would leave us with Demo Client and all integration tests to run only
on an unsecured Abacus environment and the Performance Test could be run
with either secured or unsecured Abacus.

Any opinions/comments/thoughts?

[1] https://github.com/cloudfoundry-incubator/cf-abacus/issues/35


Thanks,
Saravanakumar Srinivasan (Assk)


Re: Deploy cf: Error filling in template `config.json.erb' for `consul_z1/0'

James Leavers
 

Hi,

As the error message implies, there was something missing - I thought I had double-checked everything before posting, but as usual, this was not the case :-)

The encrypt_keys property had to be added - it is blank by default:

properties:
  consul:
    encrypt_keys: []

In case anyone else comes across this thread, the following also need to be added:

** Environment config

Ensure that you have an environment name in your cf-stub.yml, e.g.

meta:
  environment: my-cf-env

Otherwise you will end up with this:

Failed: Error filling in template `metron_agent.json.erb'

https://github.com/cloudfoundry/cf-release/issues/690

** HAProxy config

By default it will be generated like this:

properties:
  ha_proxy: null

Which will result in this:

Error filling in template `haproxy.config.erb' for `ha_proxy_z1/0'

You can add a certificate as follows:

ha_proxy:
  ssl_pem: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
    -----BEGIN PRIVATE KEY-----
    -----END PRIVATE KEY-----

Regards
James


Re: Deploy cf: Error filling in template `config.json.erb' for `consul_z1/0'

Chad Malfait
 

Hi James.

I was curious whether you've found a solution; I am seeing the same issue.

Thanks!


Re: Deploy CF on OpenStack, api and uaa failing

Bruce Yang
 

Hi Mike,
Did you manage to find the root cause of the issue?
How did you get it fixed?

I am trying to deploy CF onto vSphere, and now I am stuck at the same point.

It would be very kind of you to help me out.

Best Regards
Bruce Yang


Re: R: Re: R: Re: Deploy CF on OpenStack, api and uaa failing

Bruce Yang
 

Hi, I am facing the exact same issue.

Could you please tell me how I can solve this problem?
