
Re: How to deploy a Web application using HTTPS

Juan Antonio Breña Moral <bren at juanantonio.info...>
 

Hi James,

I have just tested and I received this message:

"502 Bad Gateway: Registered endpoint failed to handle the request."

Source:
https://github.com/jabrena/CloudFoundryLab/tree/master/Node_HelloWorld_ssl

I think this is a very important feature. In the example, I use a local certificate to offer an HTTPS connection to an API, but CF doesn't seem to have any support for it.

My question is: how can I deploy a secure application on Pivotal if the platform doesn't support that?

Juan Antonio


Re: So many hard-coded dropsonde destinations to metrons

Warren Fernandes
 

The LAMB team added a chore to discuss how we can better manage the dropsonde_incoming_port on the metron_agent here: https://www.pivotaltracker.com/story/show/102935222

We'll update this thread once we decide how to proceed.


Re: CAB September Call on 9/9/2015 @ 8a PDT

Michael Maximilien
 

Final reminder for the CAB call tomorrow. See you at Pivotal SF and talk to you all then.

Best,

dr.max
ibm cloud labs
silicon valley, ca

Sent from my iPhone

On Sep 2, 2015, at 6:04 PM, Michael Maximilien <maxim(a)us.ibm.com> wrote:

Hi, all,

Quick reminder that the CAB call for September is next week Wednesday September 9th @ 8a PDT.

Please add any project updates to Agenda here: https://docs.google.com/document/d/1SCOlAquyUmNM-AQnekCOXiwhLs6gveTxAcduvDcW_xI/edit#heading=h.o44xhgvum2we

If you have something else to share, please also add an entry at the end.

Best,

Chip, James, and Max

PS: Dr.Nic this is one week in advance, so no excuses ;) phone info listed on agenda.
PPS: Have a great labor day weekend---if you are in the US.


Re: Generic data points for dropsonde

Johannes Tuchscherer
 

Ben,

I guess I am working under the assumption that the current upstream schema
is not going to see a terrible amount of change. The StatsD protocol has
been very stable for over four years, so I don't understand why we would
add more and more metric types. (I already struggle with the decision to
have container metrics as their own data type. I am not quite sure why that
was done vs just expressing them as ValueMetrics).

I am also not following your argument about multiple implementations of a
redis export. Why would you have multiple implementations of a redis info
export? Also, why does the downstream consumer have to know about the
schema? Neither the datadog nozzle nor the graphite nozzle cares about any
type of schema right now.

But to answer your question, I think as a downstream developer I am not as
interested in whether you are sending me a uint32 or uint64, but the
meaning (e.g. counter vs value) is much more important to me. So, if you
were to do nested metrics, I think I would rather like to see having nested
counters or values in there plus maybe one type that we are missing which
is a generic event with just a string.

Generally, I would try to avoid falling into the trap of creating an overly
generic system at the cost of making consumers unnecessarily complicated.
Maybe it would help if you outlined a few use cases that might benefit from
a system like this and how specifically you would implement a downstream
consumer (e.g. is there a common place where I can fetch the schema for the
generic data point?).

On Sat, Sep 5, 2015 at 6:57 AM, James Bayer <jbayer(a)pivotal.io> wrote:

after understanding ben's proposal of what i would call an extensible
generic point versus the status quo of metrics that are hard-coded in
software by both the metric producer and the metric consumer, i
immediately gravitated toward ben's approach.

cloud foundry has really benefited from extensibility in these examples:

* diego lifecycles
* app buildpacks
* app docker images
* app as windows build artifact
* service brokers
* cf cli plugins
* collector plugins
* firehose nozzles
* diego route emitters
* garden backends
* bosh cli plugins
* bosh releases
* external bosh CPIs
* bosh health monitor plugins

let me know if there are other points of extension i'm missing.

in most cases, the initial implementations required cloud foundry system
components to change software to support additional extensibility. some of
the examples above still require that, and it's a source of frustration, as
someone with an idea to explore needs to persuade the cf maintaining team
to process a pull request or complete work on an area. i see ben's proposal
as adding an extremely valuable point of extension for creating application
and system metrics that benefits the entire cloud foundry ecosystem.

i am sympathetic to the question raised by dwayne around how large the
messages will be. it would seem that we could consider an upper bound on
the number of attributes supported by looking at the types of metrics that
would be expressed. the redis info point is already 84 attributes for
example.

all of the following seem related to scaling considerations off the top of
my head:
* how large an individual metric may be
* at what rate the platform should support producers sending metrics
* what platform quality of service to provide (lossiness or not, back
pressure, rate limiting, etc)
* what types of metrics clients are supported and any limitations
related to that
* whether there is tenant variability in some of the dimensions above. for
example a system metric might have a higher SLA than an app metric

should we consider putting a boundary on "how large an individual metric
may be" by limiting the initial implementation to a number of attributes
(that we could change later or make configurable)?

i'm personally really excited about this new kind of extensibility being
proposed and the creative things people will do with it. having loggregator
as a built-in system component versus a bolt-on is already such a great
capability compared with other platforms, and i see investments to make it
more extensible and applicable to more scenarios as making cloud foundry
more valuable and more fun to use.

On Fri, Sep 4, 2015 at 10:52 AM, Benjamin Black <bblack(a)pivotal.io> wrote:

johannes,

the problem of upstream schema changes causing downstream change or
breakage is the current situation: every addition of a metric type implies
a change to the dropsonde-protocol requiring everything downstream to be
updated.

the schema concerns are similar. currently there is no schema whatsoever
beyond the very fine grained "this is a name and this is a value". this
means every implementation of redis info export, for example, can, and
almost certainly will, be different. this results in every downstream
consumer having to know every possible variant or to only support specific
variants, both exactly the problem you are looking to avoid.

i share the core concern regarding ensuring points are "sufficiently"
self describing. however, there is no clear line delineating what is
sufficient. the current proposal pushes almost everything out to schema. we
could imagine a change to the attributes that includes what an attribute is
(gauge, counter, etc), what the units are for the attribute, and so on.

it is critical that we balance the complexity of the points against
complexity of the consumers as there is no free lunch here. which specific
functionality would you want to see in the generic points to achieve the
balance you prefer?


b



On Wed, Sep 2, 2015 at 2:07 PM, Johannes Tuchscherer <
jtuchscherer(a)pivotal.io> wrote:

The current way of sending metrics as either Values or Counters through
the pipeline makes the development of a downstream consumer (=nozzle)
pretty easy. If you look at the datadog nozzle[0], it just takes all
ValueMetrics and Counters and sends them off to datadog. The nozzle does
not have to know anything about these metrics (e.g. their origin, name, or
layout).

Adding a new way to send metrics as a nested object would make the
downstream implementation certainly more complicated. In that case, the
nozzle developer has to know what metrics are included inside the generic
point (basically the schema of the metric) and treat each point
accordingly. For example, if I were to write a nozzle that emits metrics to
Graphite with a StatsD client (like it is done here[1]), I need to know if
my int64 value is a Gauge or a Counter. Also, my consumer is under constant
risk of breaking when the upstream schema changes.
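As a concrete illustration, today's typed envelopes let a nozzle dispatch without any schema knowledge. A rough sketch, with the caveat that the envelope layout and statsd client API below are simplified illustrations, not the actual nozzle code:

```javascript
// Rough sketch of how a nozzle maps typed dropsonde envelopes onto StatsD
// concepts today. Envelope fields and the statsd client are illustrative.
function forwardEnvelope(envelope, statsd) {
  switch (envelope.eventType) {
    case 'ValueMetric':
      // A point-in-time measurement maps naturally to a gauge.
      statsd.gauge(envelope.valueMetric.name, envelope.valueMetric.value);
      break;
    case 'CounterEvent':
      // A monotonically increasing count maps to a counter increment.
      statsd.increment(envelope.counterEvent.name, envelope.counterEvent.delta);
      break;
    // A generic nested point carries no such type tag, so this dispatch
    // would instead need to consult a schema for every attribute.
  }
}
```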

We are already facing this problem with the container metrics. But at
least the container metrics are in a defined format that is well documented
and not likely to change.

I agree with you, though, that the dropsonde protocol could use a
mechanism for easier extension. Having a GenericPoint and/or GenericEvent
seems like a good idea that I whole-heartedly support. I would just like to
stay away from nested metrics. I think the cost of adding more logic to the
downstream consumer (and making it harder to maintain) is not worth the
benefit of a more concise metric transport.


[0] https://github.com/cloudfoundry-incubator/datadog-firehose-nozzle
[1] https://github.com/CloudCredo/graphite-nozzle

On Tue, Sep 1, 2015 at 5:52 PM, Benjamin Black <bblack(a)pivotal.io>
wrote:

great questions, dwayne.

1) the partition key is intended to be used in a similar manner to
partitioners in distributed systems like cassandra or kafka. the specific
behavior i would like to make part of the contract is two-fold: that all
data with the same key is routed to the same partition and that all data in
a partition is FIFO (meaning no ordering guarantees beyond arrival time).

this could help with the multi-line log problem by ensuring a single
consumer will receive all lines for a given log entry in order, allowing
simple reassembly. however, the lines might be interleaved with other lines
with the same key or even other keys that happen to map to the same
partition, so the consumer does require a bit of intelligence. this was
actually one of the driving scenarios for adding the key.
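the contract above can be sketched as a stable key-to-partition mapping. this is illustrative only; the hash function and partition count are my choices, not part of the proposal:

```javascript
// Illustrative partitioner: any stable hash works, as long as the same key
// always lands on the same partition. FNV-1a 32-bit is used here as an example.
function partitionFor(key, numPartitions) {
  let h = 0x811c9dc5; // FNV-1a offset basis
  for (let i = 0; i < key.length; i++) {
    h ^= key.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept in 32 bits
  }
  return h % numPartitions;
}
```

all lines of a multi-line log entry emitted with the same key then reach the same partition in FIFO order, so a consumer of that partition can reassemble them.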

2) i expect typical points to be in the hundreds of bytes to a few KB.
if we find ourselves regularly needing much larger points, especially near
that 64KB limit, i'd look to the JSON representation as the hierarchical
structure is more efficiently managed there.


b




On Tue, Sep 1, 2015 at 4:42 PM, <dschultz(a)pivotal.io> wrote:

Hi Ben,

I was wondering if you could give a concrete use case for the
partition key functionality.

In particular I am interested in how we solve multi line log entries.
I think it would be better to solve it by keeping all the data (the
multiple lines) together throughout the logging/metrics pipeline, but could
see how something like a partition key might help keep the data together as
well.

Second question: how large do you see these point messages getting
(average and max)? There are still several stages of the logging/metrics
pipeline that use UDP with a standard 64K size limit.

Thanks,
Dwayne

On Aug 28, 2015, at 4:54 PM, Benjamin Black <bblack(a)pivotal.io> wrote:

All,

The existing dropsonde protocol uses a different message type for each
event type. HttpStart, HttpStop, ContainerMetrics, and so on are all
distinct types in the protocol definition. This requires protocol changes
to introduce any new event type, making such changes very expensive. We've
been working for the past few weeks on an addition to the dropsonde
protocol to support easier future extension to new types of events and to
make it easier for users to define their own events.

The document linked below [1] describes a generic data point message
capable of carrying multi-dimensional, multi-metric points as sets of
name/value pairs. This new message is expected to be added as an additional
entry in the existing dropsonde protocol metric type enum. Things are now
at a point where we'd like to get feedback from the community before moving
forward with implementation.
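To make that concrete, a multi-dimensional, multi-metric point along the lines described might look like the sketch below. The field names are invented for illustration; the proposal's actual schema is in the linked document:

```javascript
// Hypothetical shape of one generic data point: a set of dimensions (tags)
// plus many name/value pairs in a single message. Field names are guesses
// for illustration, not the proposal's schema.
const redisInfoPoint = {
  timestamp: 1441500000000000000,   // nanoseconds, like other dropsonde events
  partitionKey: 'redis/node-0',
  tags: { deployment: 'cf', job: 'redis', index: '0' },  // dimensions
  metrics: [                        // multi-metric: many values in one point
    { name: 'connected_clients', value: 42 },
    { name: 'used_memory', value: 1048576 },
  ],
};
```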

Please contribute your thoughts on the document in whichever way you
are most comfortable: comments on the document, email here, or email
directly to me. If you comment on the document, please make sure you are
logged in so we can keep track of who is asking for what. Your views are
not just appreciated, but critical to the continued health and success of
the Cloud Foundry community. Thank you!


b

[1]
https://docs.google.com/document/d/1SzvT1BjrBPqUw6zfSYYFfaW9vX_dTZZjn5sl2nxB6Bc/edit?usp=sharing





--
Thank you,

James Bayer


Re: How to deploy a Web application using HTTPS

James Bayer
 

this related story is in the routing team tracker, not currently scheduled:
https://www.pivotaltracker.com/story/show/80674008

On Tue, Sep 8, 2015 at 4:30 AM, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

There isn't a way to tell CF that you want https only at this time. You'll
have to check the x-forwarded-proto header in your application and redirect
to the secure endpoint if needed.

On Tue, Sep 8, 2015 at 6:16 AM, Juan Antonio Breña Moral <
bren(a)juanantonio.info> wrote:

Hi,

I would like to deploy an app, but I would like to access it using HTTPS only.

What is the way to indicate to CF that application X will use HTTPS only?

Juan Antonio


--
Matthew Sykes
matthew.sykes(a)gmail.com


--
Thank you,

James Bayer


Re: How to execute multiple CF REST methods with a unique authentication

James Bayer
 

* access tokens have a short time to live, usually measured in minutes,
and generally are not revokable by the issuer, as endpoints do not check
in with the issuer when making decisions
* refresh tokens have a longer time to live, usually hours or days, and can
be used to get new access tokens. refresh tokens are revokable.

use base64 to decode the token and you'll see the attributes.
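for example, the payload segment of the access token (a JWT) can be decoded in Node to inspect attributes such as exp, the expiry timestamp. a sketch; the helper names are mine, not part of any CF library:

```javascript
// Sketch: decode the middle (payload) segment of a UAA access token (a JWT).
// The payload is base64url-encoded JSON; "exp" is a unix timestamp.
function decodeTokenPayload(accessToken) {
  const payload = accessToken.split('.')[1];
  // Convert base64url to plain base64 before decoding.
  const normalized = payload.replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(Buffer.from(normalized, 'base64').toString('utf8'));
}

// Seconds until the token expires (negative means already expired).
function secondsRemaining(accessToken) {
  return decodeTokenPayload(accessToken).exp - Math.floor(Date.now() / 1000);
}
```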

On Mon, Sep 7, 2015 at 11:40 PM, Juan Antonio Breña Moral <
bren(a)juanantonio.info> wrote:

Hi,

you were right. I stored the token the right way, and now it is
possible to reuse a token for multiple operations.

Example:

it.only("Using An unique Login, it is possible to execute 3 REST operations", function () {
    this.timeout(2500);

    CloudFoundry.setEndPoint(endPoint);

    var token_endpoint = null;
    var refresh_token = null;
    var token_type = null;
    var access_token = null;
    return CloudFoundry.getInfo().then(function (result) {
        token_endpoint = result.token_endpoint;
        return CloudFoundry.login(token_endpoint, username, password);
    }).then(function (result) {
        token_type = result.token_type;
        access_token = result.access_token;
        return CloudFoundryApps.getApps(token_type, access_token);
    }).then(function (result) {
        return CloudFoundryApps.getApps(token_type, access_token);
    }).then(function (result) {
        return CloudFoundryApps.getApps(token_type, access_token);
    }).then(function (result) {
        expect(true).to.equal(true);
    });
});

What is the usage of refresh_token?
How to check the pending time for current token?

Juan Antonio


--
Thank you,

James Bayer


Re: When will dea be replaced by diego?

Amit Kumar Gupta
 

Done, anyone with the link should be able to comment now.

Best,
Amit

On Tuesday, September 8, 2015, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

Hi Guillaume. The proposal document was created by Amit, and I had assumed
it was public. I'll try to make sure he sees this chain today so he can
address it. Sorry for sending an unusable link.

On Tue, Sep 8, 2015 at 3:02 AM, Guillaume Berche <bercheg(a)gmail.com
<javascript:_e(%7B%7D,'cvml','bercheg(a)gmail.com');>> wrote:

Thanks Matthew for the additional details and pointers. It seems that the
deployment strategy proposal mentioned in [2] is lacking read/comment
permissions. Any chance to fix that?

Guillaume.

On Tue, Sep 8, 2015 at 2:07 AM, Matthew Sykes <matthew.sykes(a)gmail.com
<javascript:_e(%7B%7D,'cvml','matthew.sykes(a)gmail.com');>> wrote:

The notes you're pointing to were a straw man proposal; many of the
dates no longer seem relevant.

With that, I'm not in product management but, in my opinion, the
definitions of "done" and "ready" are relative.

The current bar that the development team is focusing on is data and API
versioning. We feel it's necessary to maintain continuous operation across
deployments. In particular, we want to be sure that operators can perform
forward migration with minimal down time before it becomes the default
backend in production. We're currently referring to that target as v 0.9.

That said, the current path towards that goal has us going to a single
API server Diego[1]. With this change in architecture, the scaling and
performance characteristics will probably change. While it's likely these
changes won't have measurable impact to smaller environments, it remains to
be seen what will happen with the larger deployments operated by public
providers. This is where the whole notion of "replacement" starts to get a
bit murky.

As for "merging into cf-release," again, I'm not product management
(James and Amit are in a better position to comment) but the current
direction appears to be to break down Cloud Foundry into a number of
smaller releases. We already have a cf-release, garden-release, and
diego-release as part of a diego deployment but there are others like an
etcd-release that the MEGA team is managing and a uaa-release that the
identity team have done. These are all pieces of a new deployment strategy
that was proposed[2] a few months ago.

Given that path, I don't know that diego-release will ever be merged
into cf-release; it's more likely that it will be stitched into the
"cf-deployment" described in that proposal.

So, to your question, the 0.9 release may be cut in September. That's
the first release that operators will be able to roll forward from without
downtime. If you want Diego to be the default backend without having to
mess with plugins and configuration, you can already do that today via
configuration[3].

[1]: https://github.com/onsi/migration-proposal
[2]:
https://docs.google.com/document/d/1Viga_TzUB2nLxN_ILqksmUiILM1hGhq7MBXxgLaUOkY/edit#heading=h.qam414rpl0xe
[3]:
https://github.com/cloudfoundry/cloud_controller_ng/blob/aea2a53b123dc5104c11eb53b81a09a4c4eaba55/bosh-templates/cloud_controller_api.yml.erb#L287

On Mon, Sep 7, 2015 at 2:08 PM, Layne Peng <layne.peng(a)emc.com
<javascript:_e(%7B%7D,'cvml','layne.peng(a)emc.com');>> wrote:

I think what he asks is when diego-release will be merged into
cf-release, with no need to install the cf CLI Diego plugin and no need to
enable-diego for your app before starting it. Per
https://github.com/cloudfoundry-incubator/diego-design-notes/blob/master/migrating-to-diego.md#a-detailed-transition-timeline
it is said to be mid-September; is that right?


--
Matthew Sykes
matthew.sykes(a)gmail.com
<javascript:_e(%7B%7D,'cvml','matthew.sykes(a)gmail.com');>

--
Matthew Sykes
matthew.sykes(a)gmail.com
<javascript:_e(%7B%7D,'cvml','matthew.sykes(a)gmail.com');>


Re: Security group rules to allow HTTP communication between 2 apps deployed on CF

Matthew Sykes <matthew.sykes@...>
 

I'm afraid I don't really understand your questions or what you're trying
to accomplish.

Security groups are intended to be managed by platform administrators, so
unless you have admin access to your target environment, you will not be
able to create security groups.

If you're trying to access the cloud controller api or other applications,
you should be going through the front door (the external host names). The
security group rules should not be preventing you from doing that.

If you're trying to access something internal to the cloud foundry
deployment, you will need explicit support from the administrators.

On Tue, Sep 8, 2015 at 5:20 AM, Naveen Asapu <asapu.naveen(a)gmail.com> wrote:

How do I get the destination address for bluemix.net? Can you suggest a
command for getting the destination address?

Actually, I'm creating a security group for abacus, and it needs a
destination address. How can I get it?


command:
cf create-security-group abacus abacus_group.json

error:
Creating security group abacus as xxxx(a)xxxx.in
FAILED
Server error, status code: 403, error code: 10003, message: You are not
authorized to perform the requested action
--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: So many hard-coded dropsonde destinations to metrons

Noburou TANIGUCHI
 

Thank you, Warren.

So "localhost" is ok, but what about port numbers?


Warren Fernandes wrote
Dropsonde is a Go library that allows the CF components using it to emit
logs and metrics. The current flow is for CF components to emit their logs
and metrics to their local metron agent, which then forwards them to the
Doppler servers in Loggregator. The metron agents only listen on the local
interface and immediately sign the messages before sending them off to
the Dopplers. So for now, the destination parameter for dropsonde will
always point to the local metron agent.

Here is some more info on Metron
https://github.com/cloudfoundry/loggregator/tree/develop/src/metron
Here is some more info on Dropsonde
https://github.com/cloudfoundry/dropsonde




-----
I'm not a ...
noburou taniguchi


Re: Public access to Pivotal Tracker stories for BOSH and CF.

Christopher B Ferris <chrisfer@...>
 

Look in the right-hand margin of the wiki [1] for the list of CFF public trackers.
 
 
Cheers,

Christopher Ferris
IBM Distinguished Engineer, CTO Open Technology
IBM Cloud, Open Technologies
email: chrisfer@...
twitter: @christo4ferris
blog: http://thoughtsoncloud.com/index.php/author/cferris/
phone: +1 508 667 0402
 
 

----- Original message -----
From: Lomov Alexander <alexander.lomov@...>
To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev@...>
Cc:
Subject: [cf-dev] Public access to Pivotal Tracker stories for BOSH and CF.
Date: Tue, Sep 8, 2015 7:08 AM
 
Hi, all.
 
Over the last few months I have found more and more extremely interesting developments in BOSH and CF, for instance BOSH AZs [1] or Garden OCS support [2]. I would like to somehow follow these changes, and I'm sure that Pivotal Tracker can be the tool to do so. Still, I found only these Pivotal Tracker instructions in cf-docs-contrib [3], which were discussed in the BOSH Users group some time ago [4].
 
Still, links from the cf-docs-contrib page are missing (or I don't have access to them) [5].
 
Could you please tell me if there is any public access to Pivotal Tracker to follow these changes?
 
Thank you,
Alex L.
 
 
 
 


Re: How to deploy a Web application using HTTPS

Matthew Sykes <matthew.sykes@...>
 

There isn't a way to tell CF that you want https only at this time. You'll
have to check the x-forwarded-proto header in your application and redirect
to the secure endpoint if needed.
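That check can be sketched as a small piece of Node middleware. Illustrative only; the function name is mine, and the assumption is that the CF router sets x-forwarded-proto on every request it proxies to the app:

```javascript
// Minimal sketch of the x-forwarded-proto check for a plain Node handler.
// If the original request was not HTTPS, redirect to the HTTPS URL.
function requireHttps(req, res, next) {
  if (req.headers['x-forwarded-proto'] !== 'https') {
    // Send the client back to the same URL over https.
    res.writeHead(301, { Location: 'https://' + req.headers.host + req.url });
    res.end();
    return;
  }
  next(); // already secure; continue to the app
}
```

Wired in as the first middleware (e.g. app.use(requireHttps) in Express), plain-HTTP requests get a 301 to the HTTPS URL before reaching the app.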

On Tue, Sep 8, 2015 at 6:16 AM, Juan Antonio Breña Moral <
bren(a)juanantonio.info> wrote:

Hi,

I would like to deploy an app, but I would like to access it using HTTPS only.

What is the way to indicate to CF that application X will use HTTPS only?

Juan Antonio


--
Matthew Sykes
matthew.sykes(a)gmail.com


Public access to Pivotal Tracker stories for BOSH and CF.

Alexander Lomov <alexander.lomov@...>
 

Hi, all.

Over the last few months I have found more and more extremely interesting developments in BOSH and CF, for instance BOSH AZs [1] or Garden OCS support [2]. I would like to somehow follow these changes, and I'm sure that Pivotal Tracker can be the tool to do so. Still, I found only these Pivotal Tracker instructions in cf-docs-contrib [3], which were discussed in the BOSH Users group some time ago [4].

Still, links from the cf-docs-contrib page are missing (or I don't have access to them) [5].

Could you please tell me if there is any public access to Pivotal Tracker to follow these changes?

Thank you,
Alex L.

[1] https://github.com/cloudfoundry/bosh-notes/blob/master/availability-zones.md
[2] https://docs.google.com/document/d/1SCOlAquyUmNM-AQnekCOXiwhLs6gveTxAcduvDcW_xI/edit#
[3] https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/Pivotal-Tracker-Instructions#pivotal-trackers
[4] https://groups.google.com/a/cloudfoundry.org/forum/#!topic/bosh-users/kSwYfQNwO54
[5] https://www.evernote.com/shard/s108/sh/d322f0a4-39e8-4825-9f3c-ae242aaa39d6/64a83b76dcb0b4d7/res/d2792e9a-3763-4d77-833c-0855d3cb25f5/skitch.png?resizeSmall&width=832


Re: When will dea be replaced by diego?

Matthew Sykes <matthew.sykes@...>
 

Hi Guillaume. The proposal document was created by Amit, and I had assumed
it was public. I'll try to make sure he sees this chain today so he can
address it. Sorry for sending an unusable link.

On Tue, Sep 8, 2015 at 3:02 AM, Guillaume Berche <bercheg(a)gmail.com> wrote:

Thanks Matthew for the additional details and pointers. It seems that the
deployment strategy proposal mentioned in [2] is lacking read/comment
permissions. Any chance to fix that?

Guillaume.

On Tue, Sep 8, 2015 at 2:07 AM, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

The notes you're pointing to were a straw man proposal; many of the dates
no longer seem relevant.

With that, I'm not in product management but, in my opinion, the
definitions of "done" and "ready" are relative.

The current bar that the development team is focusing on is data and API
versioning. We feel it's necessary to maintain continuous operation across
deployments. In particular, we want to be sure that operators can perform
forward migration with minimal down time before it becomes the default
backend in production. We're currently referring to that target as v 0.9.

That said, the current path towards that goal has us going to a single
API server Diego[1]. With this change in architecture, the scaling and
performance characteristics will probably change. While it's likely these
changes won't have measurable impact to smaller environments, it remains to
be seen what will happen with the larger deployments operated by public
providers. This is where the whole notion of "replacement" starts to get a
bit murky.

As for "merging into cf-release," again, I'm not product management
(James and Amit are in a better position to comment) but the current
direction appears to be to break down Cloud Foundry into a number of
smaller releases. We already have a cf-release, garden-release, and
diego-release as part of a diego deployment but there are others like an
etcd-release that the MEGA team is managing and a uaa-release that the
identity team have done. These are all pieces of a new deployment strategy
that was proposed[2] a few months ago.

Given that path, I don't know that diego-release will ever be merged into
cf-release; it's more likely that it will be stitched into the
"cf-deployment" described in that proposal.

So, to your question, the 0.9 release may be cut in September. That's the
first release that operators will be able to roll forward from without
downtime. If you want Diego to be the default backend without having to
mess with plugins and configuration, you can already do that today via
configuration[3].

[1]: https://github.com/onsi/migration-proposal
[2]:
https://docs.google.com/document/d/1Viga_TzUB2nLxN_ILqksmUiILM1hGhq7MBXxgLaUOkY/edit#heading=h.qam414rpl0xe
[3]:
https://github.com/cloudfoundry/cloud_controller_ng/blob/aea2a53b123dc5104c11eb53b81a09a4c4eaba55/bosh-templates/cloud_controller_api.yml.erb#L287

On Mon, Sep 7, 2015 at 2:08 PM, Layne Peng <layne.peng(a)emc.com> wrote:

I think what he asks is when diego-release will be merged into cf-release,
with no need to install the cf CLI Diego plugin and no need to enable-diego
for your app before starting it. Per
https://github.com/cloudfoundry-incubator/diego-design-notes/blob/master/migrating-to-diego.md#a-detailed-transition-timeline
it is said to be mid-September; is that right?


--
Matthew Sykes
matthew.sykes(a)gmail.com
--
Matthew Sykes
matthew.sykes(a)gmail.com


How to deploy a Web application using HTTPS

Juan Antonio Breña Moral <bren at juanantonio.info...>
 

Hi,

I would like to deploy an app, but I would like to access it using HTTPS only.

What is the way to indicate to CF that application X will use HTTPS only?

Juan Antonio


Re: Security group rules to allow HTTP communication between 2 apps deployed on CF

Naveen Asapu
 

How do I get the destination address for bluemix.net? Can you suggest a command for getting the destination address?

Actually, I'm creating a security group for abacus, and it needs a destination address. How can I get it?


command:
cf create-security-group abacus abacus_group.json

error:
Creating security group abacus as xxxx(a)xxxx.in
FAILED
Server error, status code: 403, error code: 10003, message: You are not authorized to perform the requested action


Re: When will dea be replaced by diego?

Guillaume Berche
 

Thanks Matthew for the additional details and pointers. It seems that the
deployment strategy proposal mentioned in [2] is lacking read/comment
permissions. Any chance to fix that?

Guillaume.

On Tue, Sep 8, 2015 at 2:07 AM, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

The notes you're pointing to were a straw man proposal; many of the dates
no longer seem relevant.

With that, I'm not in product management but, in my opinion, the
definitions of "done" and "ready" are relative.

The current bar that the development team is focusing on is data and API
versioning. We feel it's necessary to maintain continuous operation across
deployments. In particular, we want to be sure that operators can perform
forward migration with minimal down time before it becomes the default
backend in production. We're currently referring to that target as v 0.9.

That said, the current path towards that goal has us going to a single API
server Diego[1]. With this change in architecture, the scaling and
performance characteristics will probably change. While it's likely these
changes won't have measurable impact to smaller environments, it remains to
be seen what will happen with the larger deployments operated by public
providers. This is where the whole notion of "replacement" starts to get a
bit murky.

As for "merging into cf-release," again, I'm not product management (James
and Amit are in a better position to comment) but the current direction
appears to be to break down Cloud Foundry into a number of smaller
releases. We already have a cf-release, garden-release, and diego-release
as part of a diego deployment but there are others like an etcd-release
that the MEGA team is managing and a uaa-release that the identity team
have done. These are all pieces of a new deployment strategy that was
proposed[2] a few months ago.

Given that path, I don't know that diego-release will ever be merged into
cf-release; it's more likely that it will be stitched into the
"cf-deployment" described in that proposal.

So, to your question, the 0.9 release may be cut in September. That's the
first release that operators will be able to roll forward from without
downtime. If you want Diego to be the default backend without having to
mess with plugins and configuration, you can already do that today via
configuration[3].

[1]: https://github.com/onsi/migration-proposal
[2]:
https://docs.google.com/document/d/1Viga_TzUB2nLxN_ILqksmUiILM1hGhq7MBXxgLaUOkY/edit#heading=h.qam414rpl0xe
[3]:
https://github.com/cloudfoundry/cloud_controller_ng/blob/aea2a53b123dc5104c11eb53b81a09a4c4eaba55/bosh-templates/cloud_controller_api.yml.erb#L287

On Mon, Sep 7, 2015 at 2:08 PM, Layne Peng <layne.peng(a)emc.com> wrote:

I think what he asks is when diego-release will be merged into cf-release,
with no need to install the cf CLI Diego plugin and no need to enable-diego
for your app before starting it. Per
https://github.com/cloudfoundry-incubator/diego-design-notes/blob/master/migrating-to-diego.md#a-detailed-transition-timeline
it is said to be mid-September; is that right?


--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: How to execute multiple CF REST methods with a unique authentication

Juan Antonio Breña Moral <bren at juanantonio.info...>
 

Hi,

you were right. Once I stored the token correctly, it became possible to reuse it for multiple operations.

Example:

it.only("Using An unique Login, it is possible to execute 3 REST operations", function () {
    this.timeout(2500);

    CloudFoundry.setEndPoint(endPoint);

    var token_endpoint = null;
    var refresh_token = null;
    var token_type = null;
    var access_token = null;
    return CloudFoundry.getInfo().then(function (result) {
        token_endpoint = result.token_endpoint;
        return CloudFoundry.login(token_endpoint, username, password);
    }).then(function (result) {
        token_type = result.token_type;
        access_token = result.access_token;
        return CloudFoundryApps.getApps(token_type, access_token);
    }).then(function (result) {
        return CloudFoundryApps.getApps(token_type, access_token);
    }).then(function (result) {
        return CloudFoundryApps.getApps(token_type, access_token);
    }).then(function (result) {
        expect(true).to.equal(true);
    });
});

What is refresh_token used for?
How can I check how much time the current token has left before it expires?
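
A sketch of one way to answer both questions (an assumption on my side: the UAA access token is a JWT, so its payload carries an "exp" claim in Unix seconds; the refresh token would be exchanged at the token endpoint with grant_type=refresh_token):

```javascript
// Sketch, not part of cf-nodejs-client: read the remaining lifetime of a
// JWT access token from its "exp" claim (Unix seconds). The refresh token
// would be POSTed to <token_endpoint>/oauth/token with
// grant_type=refresh_token to obtain a fresh access token.
function tokenSecondsRemaining(accessToken) {
    var payload = JSON.parse(
        Buffer.from(accessToken.split(".")[1], "base64").toString("utf8"));
    return payload.exp - Math.floor(Date.now() / 1000);
}

// Demo with a hand-built (unsigned) token that expires in ~60 seconds:
var exp = Math.floor(Date.now() / 1000) + 60;
var fakeToken = "header." +
    Buffer.from(JSON.stringify({ exp: exp })).toString("base64") +
    ".signature";
console.log(tokenSecondsRemaining(fakeToken) > 0); // true
```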

Juan Antonio


Re: How to execute multiple CF REST methods with an unique authentication

CF Runtime
 

A token should be valid for any number of requests until the expiration
time is reached.

In your code example, is the "result" passed to your second call to
"getApps" the result from the login attempt, or the result from the first
"getApps" call? You might try console.log(result) before that second
getApps call.
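
To make the pitfall concrete, here is a minimal, self-contained sketch (plain Promises, no Cloud Foundry calls): each .then() handler receives only what the previous handler returned, so the login credentials are gone by the second step unless they are captured in an outer variable.

```javascript
// Minimal illustration: stand-ins for login() and getApps().
var seen = [];

Promise.resolve({ access_token: "abc", token_type: "bearer" }) // "login"
    .then(function (result) {
        seen.push(result.access_token); // "abc" -- still the login result
        return { resources: [] };       // "getApps" response
    })
    .then(function (result) {
        // "result" is now the getApps response; the credentials are gone,
        // which is why a second call here fails with CF-NotAuthenticated.
        seen.push(result.access_token); // undefined
    })
    .then(function () {
        console.log(seen); // [ 'abc', undefined ]
    });
```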

Joseph
OSS Release Integration Team

On Mon, Sep 7, 2015 at 3:05 AM, Juan Antonio Breña Moral <
bren(a)juanantonio.info> wrote:

Currently,

If I execute 2 operations with the same token, I receive the following
message:

it.only("Using Login to execute 2 REST operations", function () {
    this.timeout(2500);

    CloudFoundry.setEndPoint(endPoint);

    var token_endpoint = null;
    var refresh_token = null;
    return CloudFoundry.getInfo().then(function (result) {
        token_endpoint = result.token_endpoint;
        return CloudFoundry.login(token_endpoint, username, password);
    }).then(function (result) {
        return CloudFoundryApps.getApps(result.token_type, result.access_token);
    }).then(function (result) {
        return CloudFoundryApps.getApps(result.token_type, result.access_token);
    }).then(function (result) {
        console.log(result);
        expect(true).to.equal(true);
    });
});

Tests Response:

1) Cloud Foundry Using Login to execute 2 REST operations:
   Error: the string "{\n \"code\": 10002,\n \"description\": \"Authentication error\",\n \"error_code\": \"CF-NotAuthenticated\"\n}\n" was thrown, throw an Error :)



Re: v3 cc api style guide feedback requested

Guillaume Berche
 

Thanks for sharing this great spec.

I'm not sure whether you prefer feedback on the mailing list or as GitHub issues.
Let me know.

General feedback:

+1 for a formal schema for the v3 API (e.g. in Swagger format) to ease
automatic client generation (API explorer, Java SDK, Go SDK, ...).
Automated tests against the formal schema could also help verify that the
style guide is respected. https://www.pivotaltracker.com/story/show/99237980
seems to consider only the documentation benefits so far, and not yet the
client-generation benefits (e.g. https://github.com/swagger-api/swagger-codegen
and https://github.com/swagger-api/swagger-codegen/issues/325 ).

It would be nice to clarify support for non-ASCII characters in query params,
such as support for IRIs
(https://en.wikipedia.org/wiki/Internationalized_resource_identifier), so as
to avoid mojibake bugs such as the one presumed in
https://github.com/cloudfoundry/cli/issues/560

It would be nice to consider supporting gzip encoding for the JSON payload
responses (via the 'Accept-Encoding' header) so as to speed up responses
over slow internet connections.

In general, it may make sense to clarify the supported HTTP headers (+1 for
the etag/if-modified-since support suggested at
https://github.com/cloudfoundry/cc-api-v3-style-guide/issues/2 ).

https://github.com/cloudfoundry/cc-api-v3-style-guide#pagination
"order_by: a field on the resource to order the collection by; each
collection may choose a subset of fields that it can be sorted by"

It would be nice to clarify (with an example) whether ordering by multiple
fields is supported, e.g. order_by=-state,-created
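
For illustration only (this is not the Cloud Controller implementation), a multi-field order_by value such as -state,-created could be parsed into a comparator like this:

```javascript
// Parse "field" / "-field" segments into { key, dir } pairs, then compare
// records field by field, falling through to the next field on ties.
function comparatorFor(orderBy) {
    var fields = orderBy.split(",").map(function (f) {
        return f[0] === "-" ? { key: f.slice(1), dir: -1 }
                            : { key: f, dir: 1 };
    });
    return function (a, b) {
        for (var i = 0; i < fields.length; i++) {
            var k = fields[i].key, d = fields[i].dir;
            if (a[k] < b[k]) return -d;
            if (a[k] > b[k]) return d;
        }
        return 0;
    };
}

var apps = [{ state: "STARTED", created: 2 },
            { state: "STOPPED", created: 1 },
            { state: "STOPPED", created: 3 }];
apps.sort(comparatorFor("-state,-created"));
console.log(apps[0]); // { state: 'STOPPED', created: 3 }
```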

https://github.com/cloudfoundry/cc-api-v3-style-guide#query-parameters
It would help to specify character escaping for query param values that
contain special characters such as a comma, e.g. filtering on name="a,b"
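
For example, with standard percent-encoding (encodeURIComponent), a value containing a comma stays distinguishable from the list separator (the parameter name here is illustrative):

```javascript
// Encode each filter value, then join with the (unencoded) list separator,
// so a comma inside a value ("a,b") is not read as two values.
var names = ["a,b", "simple"];
var query = "names=" + names.map(encodeURIComponent).join(",");
console.log(query); // names=a%2Cb,simple
```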

https://github.com/cloudfoundry/cc-api-v3-style-guide#pagination-of-related-resources

GET /v3/apps/:guid?include=space,organization

with pluralized resource names this should be GET
/v3/apps/:guid?include=spaces,organizations

https://github.com/cloudfoundry/cc-api-v3-style-guide#pagination-of-related-resources
It would be nice to include an example of a pagination request on a related
resource inclusion request (e.g.
/v2/spaces/ab09cd29-9420-f021-g20d-123431420768?include=apps&include_apps_order_by=-state,-date)

https://github.com/cloudfoundry/cc-api-v3-style-guide#proposal
It would be useful to consider I18N of user-facing messages; cf. the related
thread on service broker error messages at
http://cf-dev.70369.x6.nabble.com/cf-dev-Announcing-Experimental-support-for-Asynchronous-Service-Operations-tp287p1471.html
Maybe the CC API could accept an "Accept-Language: zh_Hans" header and try
to return localized messages when available in the requested locale.

Thanks,

Guillaume.

On Wed, Sep 2, 2015 at 6:44 PM, Zach Robinson <zrobinson(a)pivotal.io> wrote:

Thanks James, I've just corrected the three issues you've noted so far
