Re: Deploying a shell script driven java application to cf

Daniel Mikusa
 

I haven't tried this, but I think it should work.

1.) Make a directory. In that directory, put your JAR file, your start
script, and anything else the app needs to run.
2.) From that directory, run `cf push <app-name> -b java_buildpack -c
'$PWD/start-script.sh'`. This will upload your script, the JAR file, and
everything else in the current directory. It will also tell CF that you
specifically want to use the Java buildpack (which will install Java) and
that you want to use your script to start your app.

What could be tricky about this is your start script. It's going to need
to reference JAVA_HOME as `/home/vcap/app/.java-buildpack/open_jdk_jre`,
and `java` as `$JAVA_HOME/bin/java` since `java` is not going to be on the
$PATH.

You're also going to need to handle some of the things that the JBP would
normally do, like setting -Xmx and other JVM memory settings to keep the
JVM from exceeding the container's MEMORY_LIMIT. Note, *all* memory needs
to fit under the limit, not just the JVM's heap. In other words, setting
-Xmx == MEMORY_LIMIT is 100% wrong.
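
A minimal start script along those lines might look like this (a sketch
only, untested; the JAR name and the 70% heap split are my assumptions,
not the buildpack's actual memory calculation):

```
#!/bin/bash
# Sketch of a start script for this setup (untested).
# Assumes the java-buildpack installed its JRE at the path below;
# "my-app.jar" is a placeholder.
export JAVA_HOME=/home/vcap/app/.java-buildpack/open_jdk_jre

# MEMORY_LIMIT typically looks like "512m"; strip the unit to get megabytes.
LIMIT_MB=$(echo "${MEMORY_LIMIT}" | tr -dc '0-9')

# Leave headroom below the container limit for non-heap memory (metaspace,
# thread stacks, direct buffers). 70% is a rough guess, not the JBP's math.
HEAP_MB=$(( LIMIT_MB * 70 / 100 ))

exec "$JAVA_HOME/bin/java" -Xmx"${HEAP_MB}m" -jar my-app.jar
```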

Beyond that, you'd need to make sure the app is listening on $PORT, or if
it's not taking web requests, disable that health check (`cf push
--no-route` & `cf set-health-check <app-name> none`).

Dan

On Fri, Nov 13, 2015 at 12:47 AM, dammina sahabandu <dammina(a)adroitlogic.com>
wrote:
Hi All,
I have a java application which is a composition of several jars in the
running environment. The main jar is invoked by a shell script. Is it
possible to push such an application into Cloud Foundry? And if it is
possible, can you please help me with how to achieve that? A simple guide
would be really helpful.

Thank you in advance,
Dammina


Re: regarding using public key to verify client

Noburou TANIGUCHI
 

Hi ankit,

First of all, who do you think is responsible for verifying the signature?
Your application? Or (one of) the components of Cloud Foundry? I assume the
former is your answer. I think there is no functionality in Cloud Foundry
to verify client signatures.

Then, if you use the Cloud Foundry java-buildpack to deploy your
application, I think there is only one way to send key files with your
app on deployment: add your key files to your app's war / jar / zip
file, primitively like:

```
jar uvf your-war-jar-zip-file path-to-your-key-files-or-directories
```

But you may add a Maven / Gradle task to do such a thing.

This is because the Cloud Foundry java-buildpack accepts only one
zip-format file per deployment.
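
A concrete invocation might look like this (file and directory names are
placeholders; `jar tvf` just lists the archive to verify the result):

```
# Add keys kept under a local "keys/" directory into the archive.
jar uvf my-app.war keys/client-public.pem

# Verify the entry landed where the app expects to read it.
jar tvf my-app.war | grep keys/
```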

# Please correct this post if I am wrong. Thank you.



ankit wrote
Suppose my application is deployed on Cloud Foundry and my client sends
a POST request that contains some message, but that message is digitally
signed by the client's private key. So, I need the client's public key
(digital ID of the client) to verify my client for inbound calls in the
Cloud Foundry environment where the application is running. So, can you
tell me where I can put these public keys (digital IDs of clients): in the
Java buildpack or some other place?
Similarly, for outbound calls I want my message to be digitally signed, and
for that I need a private key to be used. So, where can I put that as well?




-----
I'm not a ...
noburou taniguchi
--
View this message in context: http://cf-dev.70369.x6.nabble.com/regarding-using-public-key-to-verify-client-tp2711p2719.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: Deploying a shell script driven java application to cf

Noburou TANIGUCHI
 

Hi Dammina,

Your case doesn't seem to fit the standard Cloud Foundry Java
buildpack.

One thing I can suggest is to use heroku-buildpack-runnable-jar [1]. With
this buildpack, you can start your app with a shell script.

But you will probably have to modify it to fit your purpose. You should
also calculate and specify appropriate memory settings for your app in the
start script yourself.

Or, if you can use Diego, creating and using a Docker image for your app
may be a solution. But I don't know much about Diego, so this may be wrong.

[1] https://github.com/energizedwork/heroku-buildpack-runnable-jar
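
For example, the push might look like this (a sketch; the app name is a
placeholder, and any modifications to the buildpack itself are up to you):

```
# Push using the buildpack straight from GitHub.
cf push my-app -b https://github.com/energizedwork/heroku-buildpack-runnable-jar
```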


dammina sahabandu wrote
Hi All,
I have a java application which is a composition of several jars in the
running environment. The main jar is invoked by a shell script. Is it
possible to push such an application into Cloud Foundry? And if it is
possible, can you please help me with how to achieve that? A simple guide
would be really helpful.

Thank you in advance,
Dammina




-----
I'm not a ...
noburou taniguchi
--
View this message in context: http://cf-dev.70369.x6.nabble.com/cf-dev-Deploying-a-shell-script-driven-java-application-to-cf-tp2697p2717.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: how to get cf authentication token programmatically

Noburou TANIGUCHI
 

Hi zooba,

Though I haven't tried them myself, I can think of the following two methods:

(a) Push your app with an auth token and a refresh token, and refresh the
auth token using the refresh token

or,

(b) Create another user only for your app and push the app with the
credentials of that user


For (a), I'm not sure how an auth token is refreshed, but it seems it can
be done via the /oauth/token endpoint in UAA [1].
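
For instance, a rough sketch of (a) with curl (untested; the UAA URL is a
placeholder, and I'm assuming the `cf` client with an empty secret, which
is what the cf CLI itself uses):

```
# Exchange a refresh token for a new access token at the UAA token endpoint.
curl -s -X POST "https://uaa.example.com/oauth/token" \
  -H "Accept: application/json" \
  -u "cf:" \
  -d "grant_type=refresh_token" \
  -d "refresh_token=${REFRESH_TOKEN}"
```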

For (b), it seems that you just need revocable credentials, so I think you
can use a user dedicated to the app as revocable credentials.


[1]
https://github.com/cloudfoundry/uaa/blob/master/docs/UAA-APIs.rst#oauth2-token-endpoint-post-oauth-token


I hope this helps.


zooba Sir wrote
Actually my app needs to get info about other apps running in the same cf
api domain. To get this info, I'm calling the APIs of other apps using an
HTTP GET request with an authentication token in the request header. And to
get this authentication token, I'm using an HTTP POST request to
'AuthorizationURL/oauth/token' with the username and password in the
request header.
1. CF environment variables don't have all the info needed, so I'm calling
the app's API directly and getting the needed info from the returned JSON.
2. There is a config.json file on my local system at "C:\Users\MyUser\.cf"
that has the cf target api and auth token, but it gets populated only when
I'm logged into cf on the command line. My app, once deployed and running
in cf, should get the info of other apps even if I'm not logged in. So
getting the auth token from the local system's config.json is not a good
idea.

Please suggest another way to get the auth token.




-----
I'm not a ...
noburou taniguchi
--
View this message in context: http://cf-dev.70369.x6.nabble.com/cf-dev-how-to-get-cf-authentication-token-programmatically-tp2668p2716.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: Does CloudFoundry have the quota of QPS for an app?

Zhou Yu
 

Hey,

Can you elaborate on what qps is? Queries per second?

Zhou Yu
Software Engineer
Pivotal.io

On Mon, Nov 16, 2015 at 7:35 AM, yancey0623 <yancey0623(a)163.com> wrote:

Dear all!

Do you have a solution to limit the qps of an app?


Does CloudFoundry have the quota of QPS for an app?

Yancey
 

Dear all!
Do you have a solution to limit the qps of an app?


Inline-relations-depth: deprecation and its consequences

Ponraj E
 

Hi,

I am using cf version 211 and CC API version 2.28.0. I am curious to know why "inline-relations-depth" is going to be deprecated. It seems to be a useful feature.

For instance, I have a use case where, for an app, I need to display the service binding details. The details to be displayed are: instance name, plan name, service name, dashboard url, credentials, etc.

The calls that have to be fired to achieve this are:
1. GET /v2/apps/0f27ab6c-c132-413d-8d6a-64551dcb73fc/service_bindings
2. GET /v2/service_instances/fbd24d3e-3fe5-4d89-9ef1-5f43b8bc3767
3. GET /v2/service_plans/32bd0e93-e856-4c89-9f97-ba5c09c84ac6
4. GET /v2/services/ffc81a4b-98e0-4aff-9901-399ef98638e0

Without this feature, a performance delay is introduced by the multiple calls, especially if the data is quite large. And it is not only this use case; we have other use cases where the "inline-relational-data" has to be retrieved and displayed.
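
For comparison, the single-call version we rely on today looks roughly like this (a sketch; if I read the v2 API correctly, depth 3 is needed to reach the service through service_instance and service_plan):

```
# One request inlines the related service_instance, service_plan and service.
cf curl "/v2/apps/0f27ab6c-c132-413d-8d6a-64551dcb73fc/service_bindings?inline-relations-depth=3"
```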

Is any other API that replaces this particular feature going to be introduced? Any other solution would also help.

Thanks.
---------
Ponraj


Re: BOSH-Lite New install: Failure setting SPI endpoint

Chandra Narayanasamy
 

We had the exact same issue. We also tried using https://api.bosh-lite.com, but it did not work; it gave an "unexpected EOF" error.

Can someone help us?

Thanks in advance.


regarding using public key to verify client

ankit <ankit.ankit@...>
 

Suppose my application is deployed on Cloud Foundry and my client sends
a POST request that contains some message, but that message is digitally
signed by the client's private key. So, I need the client's public key
(digital ID of the client) to verify my client for inbound calls in the
Cloud Foundry environment where the application is running. So, can you
tell me where I can put these public keys (digital IDs of clients): in the
Java buildpack or some other place?
Similarly, for outbound calls I want my message to be digitally signed, and
for that I need a private key to be used. So, where can I put that as well?




--
View this message in context: http://cf-dev.70369.x6.nabble.com/regarding-using-public-key-to-verify-client-tp2711.html
Sent from the CF Dev mailing list archive at Nabble.com.


add package to container during staging?

Eric Poelke
 

Is it possible to add commands to be run during staging? I am using the python buildpack, but one of my dependencies is PyNaCl, which requires libffi-dev to build properly. Since that package is not part of the cflinuxfs2 stack, my push fails. So is there any way to add staging steps? I have googled around a bit but not really come up with anything.


Re: Pluggable Resource Scheduling

Deepak Vij
 

Hi Idit, good to hear from you. In that case, we have covered all bases and are good to go on this. We will touch base as we discussed earlier. Thanks.


- Deepak

From: Gmail [mailto:idit.levine(a)gmail.com]
Sent: Friday, November 13, 2015 1:43 PM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Re: Re: Re: Re: Pluggable Resource Scheduling

We can add native support for Garden containers to Mesosphere. That is very easy to do ....

Sent from my iPhone

On Nov 13, 2015, at 2:51 PM, Deepak Vij (A) <deepak.vij(a)huawei.com> wrote:
If Mesosphere can do it for Kubernetes, it is doable for CF/Diego too. I am not worried about that; it is a solved problem.

The only concern I have is regarding deployment of the Garden Container environment in a Mesos Slave/Executor. This is not a CF/Diego issue, though.

Because Mesos is not aware of the Garden Container environment, its underlying DRF scheduling algorithm may not have visibility into the resources being consumed within the Garden Container, unless we wrap the Garden Container within a Docker container, since Mesos supports the Docker container environment. This may not be the right approach, though, as it opens up another can of worms: a Garden Container nested within a Docker Container. For the Kubernetes environment this is not an issue, as it uses Docker containers to begin with.


- Deepak

From: resouer(a)163.com [mailto:resouer(a)163.com] On Behalf Of Zhang Lei
Sent: Thursday, November 12, 2015 8:24 PM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Re: Re: Pluggable Resource Scheduling

You can add a different scheduling strategy to Diego by implementing a scheduler plugin.


But not Mesos; that would be a huge task and another story.

The reason Kubernetes can integrate Mesos as a scheduler (it can work, though not perfectly) is that Mesosphere is doing that part, I'm afraid ...

On 2015-11-13 03:57:52, "Deepak Vij (A)" <deepak.vij(a)huawei.com> wrote:


I did not mean to replace the whole “Diego” environment itself. What I was thinking was more in terms of pluggability within Diego itself, so that the “Auctioneer” component can be turned into a “Mesos Framework” as one of the scheduling options. By doing that, the “Auctioneer” can start accepting “Mesos Offers” instead of the native “Auctioning based Diego Resource Scheduling”. The rest of the runtime environment, including Garden, Rep, etc., stays the same. Nothing else changes. I hope this makes sense.


- Deepak

From: Gwenn Etourneau [mailto:getourneau(a)pivotal.io]
Sent: Wednesday, November 11, 2015 5:10 PM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Pluggable Resource Scheduling

Hi,

Interesting proposition; I am wondering whether it makes sense to hook into Diego or CF.
Diego is connected to CF by the CC-Bridge (big picture), so why not create a CC-Bridge for other scheduling systems?



Thanks

On Thu, Nov 12, 2015 at 5:13 AM, Deepak Vij (A) <deepak.vij(a)huawei.com> wrote:
Hi folks, I would like to start a discussion thread and get community thoughts regarding the availability of Pluggable Resource Scheduling within CF/Diego. Just as Kubernetes does, wouldn’t it be nice to have the option of choosing Diego native scheduling or other uber/global resource management environments, specifically Mesos?

Look forward to comments and feedback from the community. Thanks.

Regards,
Deepak Vij
(Huawei Software Lab., Santa Clara)


Re: Pluggable Resource Scheduling

Idit Levine
 

We can add native support for Garden containers to Mesosphere. That is very easy to do ....

Sent from my iPhone

On Nov 13, 2015, at 2:51 PM, Deepak Vij (A) <deepak.vij(a)huawei.com> wrote:

If Mesosphere can do it for Kubernetes, it is doable for CF/Diego too. I am not worried about that; it is a solved problem.

The only concern I have is regarding deployment of the Garden Container environment in a Mesos Slave/Executor. This is not a CF/Diego issue, though.

Because Mesos is not aware of the Garden Container environment, its underlying DRF scheduling algorithm may not have visibility into the resources being consumed within the Garden Container, unless we wrap the Garden Container within a Docker container, since Mesos supports the Docker container environment. This may not be the right approach, though, as it opens up another can of worms: a Garden Container nested within a Docker Container. For the Kubernetes environment this is not an issue, as it uses Docker containers to begin with.

- Deepak

From: resouer(a)163.com [mailto:resouer(a)163.com] On Behalf Of Zhang Lei
Sent: Thursday, November 12, 2015 8:24 PM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Re: Re: Pluggable Resource Scheduling

You can add a different scheduling strategy to Diego by implementing a scheduler plugin.


But not Mesos; that would be a huge task and another story.

The reason Kubernetes can integrate Mesos as a scheduler (it can work, though not perfectly) is that Mesosphere is doing that part, I'm afraid ...

On 2015-11-13 03:57:52, "Deepak Vij (A)" <deepak.vij(a)huawei.com> wrote:

I did not mean to replace the whole “Diego” environment itself. What I was thinking was more in terms of pluggability within Diego itself, so that the “Auctioneer” component can be turned into a “Mesos Framework” as one of the scheduling options. By doing that, the “Auctioneer” can start accepting “Mesos Offers” instead of the native “Auctioning based Diego Resource Scheduling”. The rest of the runtime environment, including Garden, Rep, etc., stays the same. Nothing else changes. I hope this makes sense.

- Deepak

From: Gwenn Etourneau [mailto:getourneau(a)pivotal.io]
Sent: Wednesday, November 11, 2015 5:10 PM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Pluggable Resource Scheduling

Hi,

Interesting proposition; I am wondering whether it makes sense to hook into Diego or CF.
Diego is connected to CF by the CC-Bridge (big picture), so why not create a CC-Bridge for other scheduling systems?



Thanks

On Thu, Nov 12, 2015 at 5:13 AM, Deepak Vij (A) <deepak.vij(a)huawei.com> wrote:
Hi folks, I would like to start a discussion thread and get community thoughts regarding the availability of Pluggable Resource Scheduling within CF/Diego. Just as Kubernetes does, wouldn’t it be nice to have the option of choosing Diego native scheduling or other uber/global resource management environments, specifically Mesos?

Look forward to comments and feedback from the community. Thanks.

Regards,
Deepak Vij
(Huawei Software Lab., Santa Clara)


Missing component licenses in bosh final release

Aaron L <aaron.lefkowitz@...>
 

It seems like the specs for the packages don't include the LICENSE and NOTICE files for each package.

Example: https://github.com/cloudfoundry/cf-release/blob/master/packages/cloud_controller_ng/spec

The end result is that the bosh release tarballs don't include licensing information for each of these packages.
I think it would be best if they were included, but I'm also asking to see if there's some reason that they're not.

If no reason exists I'd like to open a PR to add them to the packaging specs.


Re: Pluggable Resource Scheduling

Deepak Vij
 

If Mesosphere can do it for Kubernetes, it is doable for CF/Diego too. I am not worried about that; it is a solved problem.

The only concern I have is regarding deployment of the Garden Container environment in a Mesos Slave/Executor. This is not a CF/Diego issue, though.

Because Mesos is not aware of the Garden Container environment, its underlying DRF scheduling algorithm may not have visibility into the resources being consumed within the Garden Container, unless we wrap the Garden Container within a Docker container, since Mesos supports the Docker container environment. This may not be the right approach, though, as it opens up another can of worms: a Garden Container nested within a Docker Container. For the Kubernetes environment this is not an issue, as it uses Docker containers to begin with.


- Deepak

From: resouer(a)163.com [mailto:resouer(a)163.com] On Behalf Of Zhang Lei
Sent: Thursday, November 12, 2015 8:24 PM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Re: Re: Pluggable Resource Scheduling

You can add a different scheduling strategy to Diego by implementing a scheduler plugin.


But not Mesos; that would be a huge task and another story.

The reason Kubernetes can integrate Mesos as a scheduler (it can work, though not perfectly) is that Mesosphere is doing that part, I'm afraid ...

On 2015-11-13 03:57:52, "Deepak Vij (A)" <deepak.vij(a)huawei.com> wrote:

I did not mean to replace the whole “Diego” environment itself. What I was thinking was more in terms of pluggability within Diego itself, so that the “Auctioneer” component can be turned into a “Mesos Framework” as one of the scheduling options. By doing that, the “Auctioneer” can start accepting “Mesos Offers” instead of the native “Auctioning based Diego Resource Scheduling”. The rest of the runtime environment, including Garden, Rep, etc., stays the same. Nothing else changes. I hope this makes sense.


- Deepak

From: Gwenn Etourneau [mailto:getourneau(a)pivotal.io]
Sent: Wednesday, November 11, 2015 5:10 PM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Pluggable Resource Scheduling

Hi,

Interesting proposition; I am wondering whether it makes sense to hook into Diego or CF.
Diego is connected to CF by the CC-Bridge (big picture), so why not create a CC-Bridge for other scheduling systems?



Thanks

On Thu, Nov 12, 2015 at 5:13 AM, Deepak Vij (A) <deepak.vij(a)huawei.com> wrote:
Hi folks, I would like to start a discussion thread and get community thoughts regarding the availability of Pluggable Resource Scheduling within CF/Diego. Just as Kubernetes does, wouldn’t it be nice to have the option of choosing Diego native scheduling or other uber/global resource management environments, specifically Mesos?

Look forward to comments and feedback from the community. Thanks.

Regards,
Deepak Vij
(Huawei Software Lab., Santa Clara)


Re: Changing CF Encryption Keys (was Re: Re: Re: Re: Cloud Controller - s3 encryption for droplets)

Dieu Cao <dcao@...>
 

Hi Sandy,

Yes, I'm happy to help work through requirements on these via a GitHub
issue, in support of PRs to follow through on implementation.

-Dieu
CF CAPI PM

On Fri, Nov 13, 2015 at 6:44 AM, Sandy Cash Jr <lhcash(a)us.ibm.com> wrote:

Hi,

I'm not sure what strategies exist either. This same topic came up
partially in the context of my resubmitted FIPS proposal, and I was curious
- is it worth creating an issue (or even a separate feature
proposal/blueprint) for tooling to rotate encryption keys? It's nontrivial
to do (unless there is tooling of which I am unaware), and a good
solution in this space would IMHO fill a significant operational need.

Thoughts?

-Sandy


--
Sandy Cash
Certified Senior IT Architect/Senior SW Engineer
IBM BlueMix
lhcash(a)us.ibm.com
(919) 543-0209

"I skate to where the puck is going to be, not to where it has been.” -
Wayne Gretzky


From: Dieu Cao <dcao(a)pivotal.io>
To: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>
Date: 11/12/2015 02:19 PM
Subject: [cf-dev] Re: Re: Re: Cloud Controller - s3 encryption for
droplets
------------------------------



Hi William,

Thanks for the links.
We don't have support for client side encryption currently.
Cloud Controller and Diego's blobstore clients would need to be modified
to encrypt and decrypt for client side encryption and I'm not clear what
strategies exist for rotation of keys in these scenarios.

If you're very interested in this feature and are open to working through
requirements with me and submitting a PR, please open up an issue on github
and we can discuss this further.

-Dieu

On Tue, Nov 10, 2015 at 4:16 PM, William C Penrod <wcpenrod(a)gmail.com>
wrote:

I first ran across it here:

http://cloudfoundryjp.github.io/docs/running/bosh/components/blobstore.html

and checked here for additional info:

https://github.com/cloudfoundry/bosh/blob/master/blobstore_client/lib/blobstore_client/s3_blobstore_client.rb





Re: [abacus-perf] Persisting Metrics performance

Jean-Sebastien Delfino
 

In my opinion, writing to a database at the source of the collected data
would drastically reduce the programming complexity and would help make
the data more consistent with the source.

That approach would also create another path for these stats (apps
proactively pushing stats to a DB, then a monitor pulling from the DB) on
top of the current one (a monitor pulling from apps returning the stats).
So, I'd argue that having two significantly different paths for these stats
instead of one is actually adding complexity rather than reducing it :).

However, I always wonder why one would need to persist this data. Any
reasons?

I have the same question. I would like to understand Kevin's use case a bit
better.


- Jean-Sebastien

On Thu, Nov 12, 2015 at 3:26 PM, Saravanakumar A Srinivasan <
sasrin(a)us.ibm.com> wrote:

I would like to add one more item to the list of possible solutions for
further discussion:

How about extending abacus-perf to optionally persist collected
performance metrics into a database?
In my opinion, writing to a database at the source of the collected data
would drastically reduce the programming complexity and would help make
the data more consistent with the source.

However, I always wonder why one would need to persist this data. Any
reasons?


Thanks,
Saravanakumar Srinivasan (Assk),


-----KRuelY <kevinyudhiswara(a)gmail.com> wrote: -----
To: cf-dev(a)lists.cloudfoundry.org
From: KRuelY <kevinyudhiswara(a)gmail.com>
Date: 11/12/2015 02:45PM
Subject: [cf-dev] [abacus-perf] Persisting Metrics performance


Hi,

One of the things that I want to do is to persist the performance metrics
collected by abacus-perf. What would be the best way to do this? I've been
through some solutions, but none of them seems to be the "correct"
solution.

The scenario is this: I have an application running, and there are 2
instances of this application currently running.

To collect the performance metrics of my application, I need to aggregate
the metrics data collected by each instance's abacus-perf and store them in
a database.

The first solution is to use Turbine. Using Eureka to keep track of each
instance's IP address, I can configure Turbine to use Eureka instance
discovery. This way Turbine will have the aggregated metrics data collected
by each instance's abacus-perf. The next thing to do is to have a separate
application 'peek' at the Turbine stream at some interval and post the data
to the database. The problem with this is that Turbine persists the metrics
data when there is no activity in the application, and it flushes the
metrics data when new stats come in. This means that every time I peek into
the Turbine stream, I have to check whether I have already posted the data
to the database.

The second solution is to have each instance post independently. By using
abacus-perf's 'all()', I can set an interval that would call all(), check
the time window, and post accordingly. The restriction is that I can only
post the previous time window (since the current window is not yet done),
and I need to filter out 0 data. Another restriction is that my interval
cannot exceed perf's interval. The problem with this is that I am playing
with the time interval. There would be some occasions where I might lose
some data. I'm not sure that this would cover the time when perf flushes
out old metrics as a new one comes in. I need to make sure that I save the
data before perf flushes.

Another solution is to mimic what the hystrix module is doing: instead of
streaming the metrics to the hystrix dashboard, I would post to the
database. I have yet to try this solution.

Currently I'm not sure what the best way is to accurately persist the
performance metrics collected by abacus-perf, and I would like to have some
input/suggestions on how to persist the metrics. Thanks!





--
View this message in context:
http://cf-dev.70369.x6.nabble.com/abacus-perf-Persisting-Metrics-performance-tp2693.html
Sent from the CF Dev mailing list archive at Nabble.com.



Re: [abacus-perf] Persisting Metrics performance

Jean-Sebastien Delfino
 

Hi Kevin,

One of the things that I want to do is to persist the performance metrics
collected by abacus-perf.

Interesting feature! It'd be good to understand what you're trying to do
with that data (I think Assk asked a similar question), as that'll help
us provide better implementation suggestions.

The first solution is to use Turbine...
...
The problem with this is that Turbine persists the metrics data
when there is no activity in the application, and it flushes the metrics
data when new stats come in.

Not sure I'm following here. Can you give a bit more detail to help us
understand how Turbine alters the stats and what problems that causes in
your collection + store logic?

Another solution is to mimic what the hystrix module is doing: instead of
streaming the metrics to the hystrix dashboard, I would post to the database.

The abacus-hystrix module responds to GET /hystrix.stream requests and
doesn't do anything unless a monitor requests the stats. I'm not sure that
proactively posting the stats to a DB from each app instance will work so
well... as IMO that'll generate a lot of DB traffic from all these app
instances, will slow down these apps, and won't give you aggregated
stats at the app level anyway (more on that below, however).

Here are a few more suggestions for you:

a) Give us a bit more context on how you intend to use the data you're
storing... if this is for use with Graphite, for example, there are already
a number of blog posts out there that cover that; if you'd like to store
the data in ELK for searching, then you might want to log these metrics and
flow them to ELK as part of your logs; if you'd like to store the data in
your own DB and render it using custom-made dashboards later, then we can
explore other solutions...

b) Try to leverage the current flow (with app instances providing stats on
demand at a /hystrix.stream endpoint and an external monitoring app
collecting these stats) rather than creating yet another completely
different flow; looking at the Hystrix wiki, it looks like that's what most
Hystrix integrations do (including the ones used to collect and store stats
in Graphite, for example). See the sketch after this list.

c) Decide whether you want to store aggregations of stats from multiple app
instances (in that case, understand how you can configure or 'fix' Turbine
to not alter the semantics of the original instance-level stats, or
understand how/when to store the aggregated Turbine stats), or whether it's
actually better to store stats from individual app instances... I'd
probably favor the latter: collect and store the individual instance data
in a DB and aggregate/interpret at rendering time later.

d) Investigate the CF firehose to see if it could help flow the metrics
you've collected to consumers that'll store them in your DBs; that firehose
will definitely be in the loop if you decide to flow the metrics with your
logs; then you can probably just connect a firehose nozzle to it that will
store the selected metrics in your DB.
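
For option (b), a rough sketch of what an external collector could do per
app instance (untested; the route is a placeholder, and pinning a specific
instance behind the router is left open):

```
# Read the server-sent events from one instance's /hystrix.stream endpoint
# and append each "data:" payload to a file for later bulk-loading into a DB.
curl -s -N "https://my-app.example.com/hystrix.stream" \
  | grep --line-buffered '^data:' \
  | sed -u 's/^data: //' >> hystrix-metrics.ndjson
```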

HTH

- Jean-Sebastien

On Thu, Nov 12, 2015 at 2:36 PM, KRuelY <kevinyudhiswara(a)gmail.com> wrote:

Hi,

One of the things that I want to do is to persist the performance metrics
collected by abacus-perf. What would be the best way to do this? I've been
through some solutions, but none of them seems to be the "correct"
solution.

The scenario is this: I have an application running, and there are 2
instances of this application currently running.

To collect the performance metrics of my application, I need to aggregate
the metrics data collected by each instance's abacus-perf and store them in
a database.

The first solution is to use Turbine. Using Eureka to keep track of each
instance's IP address, I can configure Turbine to use Eureka instance
discovery. This way Turbine will have the aggregated metrics data collected
by each instance's abacus-perf. The next thing to do is to have a separate
application 'peek' at the Turbine stream at some interval and post the data
to the database. The problem with this is that Turbine persists the metrics
data when there is no activity in the application, and it flushes the
metrics data when new stats come in. This means that every time I peek into
the Turbine stream, I have to check whether I have already posted the data
to the database.

The second solution is to have each instance post independently. By using
abacus-perf's 'all()', I can set an interval that would call all(), check
the time window, and post accordingly. The restriction is that I can only
post the previous time window (since the current window is not yet done),
and I need to filter out 0 data. Another restriction is that my interval
cannot exceed perf's interval. The problem with this is that I am playing
with the time interval. There would be some occasions where I might lose
some data. I'm not sure that this would cover the time when perf flushes
out old metrics as a new one comes in. I need to make sure that I save the
data before perf flushes.

Another solution is to mimic what the hystrix module is doing: instead of
streaming the metrics to the hystrix dashboard, I would post to the
database. I have yet to try this solution.

Currently I'm not sure what the best way is to accurately persist the
performance metrics collected by abacus-perf, and I would like to have some
input/suggestions on how to persist the metrics. Thanks!





--
View this message in context:
http://cf-dev.70369.x6.nabble.com/abacus-perf-Persisting-Metrics-performance-tp2693.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: Acceptance tests assume a multi level wildcard ssl cert

Felix Friedrich
 

Hello Christopher,

Thanks for your reply. We are stumbling over the very same test again.
Just to confirm, the tests haven't been fixed according to [1], have
they? Can I help you in any way with fixing this test?


Best regards from Berlin,


Felix


[1] https://www.pivotaltracker.com/n/projects/1358110/stories/105340048

On Mon, 19 Oct 2015, at 17:46, Christopher Piraino wrote:
Hi Felix,

You are right; we have found this issue in one of our own environments as
well. We have a story here
<https://www.pivotaltracker.com/story/show/105340048> to address it by
skipping verification explicitly for this test only. Previously, I believe
that test only used an http URL when curling; recent updates allowing
configuration of the protocol exposed this issue. We do not assume
multi-level wildcard certs.

The curl helper was also changed recently to set SSL verification
internally
for all curl commands
<https://github.com/cloudfoundry/cf-acceptance-tests/commit/06c83fa5641785ebca1c6dedb36c2370415e3005>,
so the skip_ssl_validation configuration should still be working
correctly.

If you want to see the tests pass, you could either set
"skip_ssl_validation" to false or "use_http" to true, and the test should
work as intended. In any case, we are sorry for the failures, and hopefully
we can get a fix out soon. For reference, a minimal sketch of a config
along those lines follows below.
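
This is untested; the domain and credentials are placeholders, and the
exact key set depends on your cf-acceptance-tests version:

```
# Write a minimal CATs config and point the test runner at it.
cat > integration_config.json <<'EOF'
{
  "api": "api.test.cf.example.com",
  "apps_domain": "test.cf.example.com",
  "admin_user": "admin",
  "admin_password": "secret",
  "use_http": true,
  "skip_ssl_validation": false
}
EOF
export CONFIG=$PWD/integration_config.json
./bin/test
```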

- Chris

On Mon, Oct 19, 2015 at 7:32 AM, Felix Friedrich <felix(a)fri.edri.ch>
wrote:

Hello,

we've just upgraded our CF deployment from v215 to v220. Unfortunately
the acceptance tests fail: http://pastebin.com/rWrXX1HA
They fail for an understandable reason: the test expects a valid SSL cert,
but our cert is only valid for *.test.cf.springer-sbm.com, not for
*.*.test.cf.springer-sbm.com. The test seems to expect a multi-level SSL
cert; I am not sure whether that's reasonable or not.

However, I wondered why this exact test did not fail in v215. I
suspected that the way curl gets executed in the v220 tests changed, and
it seems that I am right [1]. Thus I assume (!) that before, curl's
return codes did not get propagated, while they do now. (Return
code 51 is "The peer's SSL certificate or SSH MD5 fingerprint was not
OK," according to the man page.)

Also, the new way of executing ("curlCmd := runner.Curl(uri)") does not
look like it gets the skipSslValidation value. In fact, running the
acceptance tests with the skip_ssl_validation option still leads to
this test failing. However, the library used looks like it is able to
skip SSL validation:

https://github.com/cloudfoundry-incubator/cf-test-helpers/blob/master/runner/run.go

Even if skip_ssl_validation worked, I am not very keen on activating
that option, since it also applies to all other tests, which do not
use multi-level wildcard certs.

Besides the fact that curl seems to validate SSL certs no matter whether
skip_ssl_validation is true or false, did you intentionally assume that
CF uses a multi-level wildcard cert?


Felix



[1]

https://github.com/cloudfoundry/cf-acceptance-tests/compare/353e06565a6a1a0d6b4c417f57b00eeecec604fa...72496c6fabd1c8ec51ae932d13a597a62ccf30dd


Re: [vcap-dev] Addressing buildpack size

Jack Cai
 

Thanks JT. Except the Java buildpack though :-)

Jack

On Fri, Nov 13, 2015 at 8:42 AM, JT Archie <jarchie(a)pivotal.io> wrote:

Jack,

This is correct.

The online version and the offline version of the buildpack differ in only
one way: the offline version has the dependencies, defined in the
`manifest.yml`, packaged with it.

They both keep the same contract: we'll only support certain versions
of runtimes (Ruby, Python, etc.).


-- JT

On Thu, Nov 12, 2015 at 2:46 PM, Jack Cai <greensight(a)gmail.com> wrote:

Thanks Mike. Is it true that even for the online version (i.e., when doing
"cf push -b https://github.com/cloudfoundry/nodejs-buildpack.git"),
users are now limited to using the runtime versions defined in the
manifest.yml?

Jack


On Thu, Nov 12, 2015 at 12:05 PM, Mike Dalessio <mdalessio(a)pivotal.io>
wrote:

This is a bug in the nodejs-buildpack v1.5.1, which we should have a fix
for later today.

Github issue is here:
https://github.com/cloudfoundry/nodejs-buildpack/issues/35

Tracker story is here:
https://www.pivotaltracker.com/story/show/107946000

Apologies for the inconvenience.

On Thu, Nov 12, 2015 at 12:03 PM, Jack Cai <greensight(a)gmail.com> wrote:


For the cached package of the buildpacks, I thought it would refuse to
provide a runtime version that's not cached. Yesterday I was playing with
the node.js buildpack and found it actually will download a non-cached
node.js runtime. Does that mean we have in effect moved to the "hybrid"
model I suggested earlier in this thread? Does it work the same way for the
java/go/php/ruby/python buildpacks as well?

Jack




On Mon, Apr 13, 2015 at 3:08 PM, Mike Dalessio <mdalessio(a)pivotal.io>
wrote:

Hi Jack,

Thanks so much for your feedback!

Based on my conversations with CF users to date, this is definitely
something that we would want to be "opt-in" behavior; the consensus-desired
default appears to be to disallow the downloading of old/deprecated
versions.

Notably, though, what we'll continue to support is the specification
of a buildpack using the `cf push` `-b` option:

```
-b Custom buildpack by name (e.g. my-buildpack) or GIT URL
```

Buildpacks used in this manner will behave in "online" mode, meaning
they'll attempt to download dependencies from the public internet. Does
that satisfy your needs, at least in the short term?

-m


On Mon, Apr 13, 2015 at 1:59 PM, Jack Cai <greensight(a)gmail.com>
wrote:

We will no longer provide "online" buildpack releases, which
download dependencies from the public internet.

I think it would make sense to retain the ability to download
additional runtime versions on demand (ones not packaged in the
buildpack) if the user explicitly requests them. So basically it would be a
hybrid model, where the most recent versions are "cached", while old
versions are still available.

Jack


On Wed, Apr 8, 2015 at 11:36 PM, Onsi Fakhouri <ofakhouri(a)pivotal.io>
wrote:

Hey Patrick,

Sorry about that - the diego-dev-notes is an internal documentation
repo that the Diego team uses to stay on the same page and toss ideas
around.

There isn't much that's terribly interesting at that link - just
some ideas on how to extend Diego's existing caching capabilities to avoid
copying cached artifacts into containers (we'd mount them in
directly instead).

Happy to share more detail if there is interest.

Onsi

On Wednesday, April 8, 2015, Patrick Mueller <pmuellr(a)gmail.com>
wrote:

I got a 404 on
https://github.com/pivotal-cf-experimental/diego-dev-notes/blob/master/proposals/better-buildpack-caching.md

On Wed, Apr 8, 2015 at 11:10 AM, Mike Dalessio <
mdalessio(a)pivotal.io> wrote:

Hello vcap-dev!

This email details a proposed change to how Cloud Foundry
buildpacks are packaged, with respect to the ever-increasing number of
binary dependencies being cached within them.

This proposal's permanent residence is here:


https://github.com/cloudfoundry-incubator/buildpack-packager/issues/4

Feel free to comment there or reply to this email.
------------------------------
Buildpack Sizes

Where we are today

Many of you have seen, and possibly been challenged by, the
enormous sizes of some of the buildpacks that are currently shipping with
cf-release.

Here's the state of the world right now, as of v205:

php-buildpack: 1.1G
ruby-buildpack: 922M
go-buildpack: 675M
python-buildpack: 654M
nodejs-buildpack: 403M
----------------------
total: 3.7G

These enormous sizes are the result of the current policy of
packaging every-version-of-everything-ever-supported ("EVOEES") within the
buildpack.

Most recently, this problem was exacerbated by the fact that
buildpacks now contain binaries for two rootfses.

Why this is a problem

If continued, buildpacks will only continue to increase in size,
leading to longer and longer build and deploy times, longer test times,
slacker feedback loops, and therefore less frequent buildpack releases.

Additionally, this also means that we're shipping versions of
interpreters, web servers, and libraries that are deprecated, insecure, or
both. Feedback from CF users has made it clear that many companies view
this as an unnecessary security risk.

This policy is clearly unsustainable.

What we can do about it

There are many things being discussed to ameliorate the impact
that buildpack size is having on the operations of CF.

Notably, Onsi has proposed a change to buildpack caching, to
improve Diego staging times (link to proposal
<https://github.com/pivotal-cf-experimental/diego-dev-notes/blob/master/proposals/better-buildpack-caching.md>
).

However, there is an immediate solution available, which addresses
both the size concerns as well as the security concern: packaging fewer
binary dependencies within the buildpack.

The proposal

I'm proposing that we reduce the binary dependencies in each
buildpack in a very specific way.

Aside on terms I'll use below:

- Versions of the form "1.2.3" are broken down as:
MAJOR.MINOR.TEENY. Many language ecosystems refer to the "TEENY" as "PATCH"
interchangeably, but we're going to use "TEENY" in this proposal.
- We'll assume that TEENY gets bumped for API/ABI compatible
changes.
- We'll assume that MINOR and MAJOR get bumped when there are
API/ABI *incompatible* changes.

I'd like to move forward soon with the following changes:

1. For language interpreters/compilers, we'll package the two
most-recent TEENY versions on each MAJOR.MINOR release.
2. For all other dependencies, we'll package only the single
most-recent TEENY version on each MAJOR.MINOR release.
3. We will discontinue packaging versions of dependencies that
have been deprecated.
4. We will no longer provide "EVOEES" buildpack releases.
5. We will no longer provide "online" buildpack releases,
which download dependencies from the public internet.
6. We will document the process, and provide tooling, for CF
operators to build their own buildpacks, choosing the dependencies that
their organization wants to support or creating "online" buildpacks at
operators' discretion.

An example for #1 is that we'll go from packaging 34 versions of
node v0.10.x to only packaging two: 0.10.37 and 0.10.38.

An example for #2 is that we'll go from packaging 3 versions of
nginx 1.5 in the PHP buildpack to only packaging one: 1.5.12.

An example for #3 is that we'll discontinue packaging ruby 1.9.3
in the ruby-buildpack, which reached end-of-life in February 2015.

Outcomes

With these changes, the total buildpack size will be reduced
greatly. As an example, we expect the ruby-buildpack size to go from 922M
to 338M.

We also want to set the expectation that, as new interpreter
versions are released, either for new features or (more urgently) for
security fixes, we'll release new buildpacks much more quickly than we do
today. My hope is that we'll be able to do it within 24 hours of a new
release.

Planning

These changes will be relatively easy to make, since all the
buildpacks are now using a manifest.yml file to declare what's
being packaged. We expect to be able to complete this work within the next
two weeks.

Stories are in the Tracker backlog under the Epic named
"skinny-buildpacks", which you can see here:

https://www.pivotaltracker.com/epic/show/1747328

------------------------------

Please let me know how these changes will impact you and your
organizations, and let me know of any counter-proposals or variations you'd
like to consider.

Thanks,

-mike


--
You received this message because you are subscribed to the Google
Groups "Cloud Foundry Developers" group.
To view this discussion on the web visit
https://groups.google.com/a/cloudfoundry.org/d/msgid/vcap-dev/CAGeQLZwDbON2B6cAynyJY12tCWXO8XPKSCmhCc%3D%3DBu4KsHe%3DhA%40mail.gmail.com
<https://groups.google.com/a/cloudfoundry.org/d/msgid/vcap-dev/CAGeQLZwDbON2B6cAynyJY12tCWXO8XPKSCmhCc%3D%3DBu4KsHe%3DhA%40mail.gmail.com?utm_medium=email&utm_source=footer>
.

To unsubscribe from this group and stop receiving emails from it,
send an email to vcap-dev+unsubscribe(a)cloudfoundry.org.


--
Patrick Mueller
http://muellerware.org



Changing CF Encryption Keys (was Re: Re: Re: Re: Cloud Controller - s3 encryption for droplets)

Sandy Cash Jr <lhcash@...>
 

Hi,

I'm not sure what strategies exist either. This same topic came up
partially in the context of my resubmitted FIPS proposal, and I was curious
- is it worth creating an issue (or even a separate feature
proposal/blueprint) for tooling to rotate encryption keys? It's nontrivial
to do (unless there is tooling of which I am unaware), and a good
solution in this space would IMHO fill a significant operational need.

Thoughts?

-Sandy


--
Sandy Cash
Certified Senior IT Architect/Senior SW Engineer
IBM BlueMix
lhcash(a)us.ibm.com
(919) 543-0209

"I skate to where the puck is going to be, not to where it has been.” -
Wayne Gretzky



From: Dieu Cao <dcao(a)pivotal.io>
To: "Discussions about Cloud Foundry projects and the system
overall." <cf-dev(a)lists.cloudfoundry.org>
Date: 11/12/2015 02:19 PM
Subject: [cf-dev] Re: Re: Re: Cloud Controller - s3 encryption for
droplets



Hi William,

Thanks for the links.
We don't have support for client-side encryption currently.
Cloud Controller's and Diego's blobstore clients would need to be modified
to encrypt and decrypt for client-side encryption, and I'm not clear what
strategies exist for rotation of keys in these scenarios.

If you're very interested in this feature and are open to working through
requirements with me and submitting a PR, please open up an issue on github
and we can discuss this further.

-Dieu

On Tue, Nov 10, 2015 at 4:16 PM, William C Penrod <wcpenrod(a)gmail.com>
wrote:
I first ran across it here:
http://cloudfoundryjp.github.io/docs/running/bosh/components/blobstore.html


and checked here for additional info:
https://github.com/cloudfoundry/bosh/blob/master/blobstore_client/lib/blobstore_client/s3_blobstore_client.rb
