
Re: "Can't find property `[etcd.cluster]' when deploy cfv233

王小锋 <zzuwxf at gmail.com...>
 

Please ignore this thread; I just figured out the problem.

2016-03-25 13:13 GMT+08:00 王小锋 <zzuwxf(a)gmail.com>:

Hi, there

I am trying to deploy cf deployment v233 and used the script to generate
the deployment manifest, but it failed with the following error. I am not
sure what value should be given to the "cluster" property; your help is
greatly appreciated. Thanks.

Started preparing configuration > Binding configuration. Failed: Error
filling in template `etcd_bosh_utils.sh.erb' for `etcd_z1/0' (line 33:
Can't find property `["etcd.cluster"]') (00:00:01)

*Error 100: Error filling in template `etcd_bosh_utils.sh.erb' for
`etcd_z1/0' (line 33: Can't find property `["etcd.cluster"]')*

Task 38 error

The corresponding etcd job looks like:

- instances: 1
  name: etcd_z1
  networks:
  - name: cf1
    static_ips:
    - 10.10.16.20
  persistent_disk: 10024
  properties:
    metron_agent:
      zone: z1
  resource_pool: medium_z1
  templates:
  - name: etcd
    release: cf
  - name: etcd_metrics_server
    release: cf
  - name: metron_agent
    release: cf
  update:
    max_in_flight: 1
    serial: true
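For readers who hit the same error: the job above defines no `etcd` properties at all, so the `etcd_bosh_utils.sh.erb` template cannot resolve `etcd.cluster`. Below is a hedged sketch of the kind of properties block the template might expect; the exact schema of `etcd.cluster` (here assumed to be a list of `name`/`instances` pairs) depends on the etcd job shipped in cf v233, so verify it against that release's job spec before using it:

```yaml
properties:
  etcd:
    # Assumed schema: one entry per etcd job, naming the job and the
    # number of instances that form the cluster.
    cluster:
    - name: etcd_z1
      instances: 1
  metron_agent:
    zone: z1
```

The manifest-generation script normally fills this in from the templates it merges, which is why hand-rolled or partially generated manifests are where the property most often goes missing.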




Re: Upcoming extraction of cflinuxfs2 rootfs release from diego-release

Benjamin Gandon
 

And here is the README!

https://github.com/bgandon/rootfses-boshrelease/blob/master/README.md

I took the chance while upgrading to cflinuxfs2 v1.48.0 to document how people can do that themselves.
It's a pretty comprehensive 15-step workflow!

/Benjamin

On 25 March 2016 at 00:47, Benjamin Gandon <benjamin(a)gandon.org> wrote:

Ok, sorry for the delay; I was listening and chatting with Josh here at the Paris Spring Meetup. That was really cool.

Anyway, here it is: https://github.com/bgandon/rootfses-boshrelease

Here is what I did:

1. Patch Diego deployment manifests with the deployment-samples/diego-manifests.yml.patch <https://github.com/bgandon/rootfses-boshrelease/blob/master/deployment-samples/diego-manifests.yml.patch>

2. Add the deployment-samples/property-overrides.yml <https://github.com/bgandon/rootfses-boshrelease/blob/master/deployment-samples/property-overrides.yml> to the Diego deployment

3. If needed, customize the deployment-samples/rootfses-properties.yml <https://github.com/bgandon/rootfses-boshrelease/blob/master/deployment-samples/rootfses-properties.yml>

4. Create the release tarball and upload it to the director

bosh create release --final --name rootfses --version 1.43.0
bosh upload release

5. Deploy

And it worked like a charm.

I shall add a README soon with those instructions.
Have fun!

/Benjamin


On 24 March 2016 at 19:28, Eric Malm <emalm(a)pivotal.io> wrote:

Thanks, Benjamin! The Buildpacks team just started the extraction story (https://www.pivotaltracker.com/story/show/115888335) yesterday and is continuing it today, so now would be an ideal time for you to weigh in with your extraction efforts.

Best,
Eric

On Thu, Mar 24, 2016 at 3:29 AM, Benjamin Gandon <benjamin(a)gandon.org> wrote:
Hi Eric,

Given the pace of cflinuxfs2 updates, I've built and successfully deployed such a BOSH release. It also ships with example manifests. I'll share it with you this afternoon.

I've been using a "cheap" local blobstore, but it would be useful if you published the blobs on a public bucket.

/Benjamin

On 18 March 2016 at 00:01, Eric Malm <emalm(a)pivotal.io> wrote:

Dear CF Community,

Over the next few weeks, the Diego and Buildpacks teams will be working together to extract a new BOSH release for the cflinuxfs2 rootfs/stack out of the Diego BOSH release. The Buildpacks team has been doing an amazing job of publishing new rootfs images in response to CVEs, and this separation will make it easier for all Diego deployment operators to update to those latest rootfs images without having to update their other releases. We've already taken advantage of the same kind of separation between Garden-Linux and Diego when addressing some recent Garden CVEs, and we're looking forward to having that flexibility with the rootfs image as well.

Once completed, the release extraction will mean a couple of minor changes for Diego deployment operators:

- You'll have one more release to upload alongside the Diego, Garden-Linux, and etcd BOSH releases to deploy your Diego cluster. The Diego release tarball itself will be much smaller, as it will no longer include the rootfs image that accounts for about 70% of its current size.
- If you use the spiff-based manifest-generation script in the diego-release repo to produce your manifest, that's all you'll have to do! If you're hand-rolling your manifests, you will have one or two BOSH properties to add or move, and an entry to change in the list of job templates on the Diego Cell VMs.
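For hand-rolled manifests, the job-template change described above would look roughly like the sketch below. The `cflinuxfs2-rootfs-setup` job and `cflinuxfs2-rootfs` release names are assumptions for illustration; use whatever names ship with the extracted release:

```yaml
releases:
- name: diego
  version: latest
- name: garden-linux
  version: latest
# The extracted rootfs release, uploaded alongside the others
# (name is illustrative).
- name: cflinuxfs2-rootfs
  version: latest

jobs:
- name: cell_z1
  templates:
  - name: rep
    release: diego
  - name: garden
    release: garden-linux
  # Previously part of diego-release; now assumed to come from the
  # extracted release instead.
  - name: cflinuxfs2-rootfs-setup
    release: cflinuxfs2-rootfs
```

Any rootfs-path property the rep or garden jobs consume would move or be re-pointed in the same way; the release notes are the authoritative source for the exact property names.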

We'll call out these changes explicitly in the Diego release notes on GitHub when the time comes.

Just as we do with the Garden-Linux and etcd releases today, the Diego team will also attach a recent final cflinuxfs2-rootfs release tarball to each final Diego release we publish on GitHub, so it will be easy for consumers to get a validated set of default 'batteries' to plug into Diego. We'll also work with the CF Release Integration team to make sure that when the most recent rootfs image passes tests against Diego in their integration environments, its release version is recorded in the Diego/CF compatibility record, at https://github.com/cloudfoundry-incubator/diego-cf-compatibility/blob/master/compatibility-v2.csv.

If you would like to track our progress, please follow the 'cflinuxfs2-release-extraction' epic in the Diego tracker (https://www.pivotaltracker.com/epic/show/2395419) and the 'bosh-release' label in the Buildpacks tracker (https://www.pivotaltracker.com/n/projects/1042066/search?q=label%3A%22bosh-release%22).

Thanks,
Eric Malm, CF Runtime Diego PM


Re: CC BUILDPACK_SET App Usage Events

Matthew Sykes <matthew.sykes@...>
 

Can you explain how staging tasks will be recorded? We have already seen a
number of custom buildpacks that use the 15 minutes of staging time in
interesting ways that consume significant resources. This is mitigated
today by the fact that the staging only occurs when the app transitions
from stopped to started so only the staging instance can execute; during
that period the desired state of the application is `started` so it's
billable time.

While I'm happy we won't be stopping apps when a new version is pushed, I
do think providers need a way to track the additional resource consumption
of staging to avoid misuse and abuse - especially given staging tasks are
typically executed with memory and disk limits that exceed those associated
with an app instance.

Thanks.

On Thu, Mar 24, 2016 at 5:20 PM, Nicholas Calugar <ncalugar(a)pivotal.io>
wrote:

Hi Matthew,

Thanks for the response. When we start an app in V3, we will always know
the buildpack because we are starting the app's current droplet. The
droplet was staged with a particular buildpack, so we can record the
buildpack in the STARTED app usage event.

As promised, a couple follow up questions:

If we record the buildpack_guid in the STARTED event, can we omit the
BUILDPACK_SET event in V3?

Currently, a V3 app must be stopped to change the current droplet. We have
an upcoming story [1] to enable changing the droplet on a running
application. Would we want to add something like a DROPLET_CHANGED app
usage event to indicate a running app is now using a different buildpack?


Thanks,

-Nick

Nicholas Calugar
CAPI Product Manager
Pivotal Software, Inc.

[1] https://www.pivotaltracker.com/story/show/111166678

On Wed, Mar 23, 2016 at 7:34 PM Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

The buildpack set event was implemented to enable usage based billing for
buildpack applications where the rates differed by the buildpack used to
stage the application.

When staging an application in /v2, if a buildpack is not specified, we
don't know which buildpack will stage the application until after the
detect phase of staging has occurred. That means at the time the usage
event was captured for the transition to start, the buildpack was
unavailable.

The buildpack set event makes that information available to billing
systems after staging completes.

On Tue, Mar 22, 2016 at 8:03 PM, Nicholas Calugar <ncalugar(a)pivotal.io>
wrote:

Hi CF-Dev,

We are continuing work on the Cloud Controller V3 API and had a question
about a particular App Usage Event we write as part of V2, the
BUILDPACK_SET event. The test[1] around this indicates that we write the
app usage event when staging completes and the app is still started. In V2,
apps, packages, and droplets are very tightly coupled, so writing this
event here makes sense.

In V3, apps, packages, and droplets are first class resources and we
don't stage apps, we stage packages to create droplets. Furthermore,
staging with a particular buildpack does not affect the app until the
droplet is assigned to the app as the current droplet and the current
droplet can be changed to any valid droplet for the app. Staging completion
and the app being started no longer seem to correlate to the buildpack
being "set".

With the above differences, we are hoping to understand the use-case
around the BUILDPACK_SET event so we can correctly preserve the desired
behavior for V3. I'll likely have follow up questions, but the first thing
I'd like to know is what BUILDPACK_SET indicates to downstream billing
engines.

Thanks,

-Nick

Nicholas Calugar
CAPI Product Manager
Pivotal Software, Inc.

[1]
https://github.com/cloudfoundry/cloud_controller_ng/blob/45b311f18d8ad1184dcb647081b19eca6f1eaf83/spec/unit/models/runtime/app_spec.rb#L1345-L1369


--
Matthew Sykes
matthew.sykes(a)gmail.com

--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: Announcement: BOSH for Windows

Gwenn Etourneau
 

That's really good news.
Do you have any BOSH release examples for Windows?

Thanks

On Fri, Mar 25, 2016 at 1:18 AM, Mike Youngstrom <youngm(a)gmail.com> wrote:

This is great!

Mike

On Thu, Mar 24, 2016 at 9:06 AM, Steven Benario <sbenario(a)pivotal.io>
wrote:

Hello everyone,

We wanted to take this opportunity to announce a new incubating project
under the BOSH umbrella.

In the fall, we launched full support for Windows and .NET applications
in Cloud Foundry with Project Greenhouse (composed of Garden-Windows [1]
and Diego-Windows [2]). Since then, we've seen a lot of interest in
bringing the magic of CF to Windows developers.

One of the most common requests along those lines has been for BOSH to
support Windows cells. I'm pleased to announce that we have recently begun
adding support for Windows servers to Bosh (aka "Bosh for Windows" or
"BoshWin"). This project can be tracked at its public tracker [3], and will
be an open source contribution to the Foundation.

To date, all contributions have been committed directly to the Bosh-Agent
repo [4], under "windows" branches. If appropriate, we may commit code to
other repos in the future.

We are really excited to announce this, and would love to hear from you
if you are a potential user or contributor to Bosh-Windows!

Cheers,
Steven Benario
PM for Bosh-Windows



[1] - https://github.com/cloudfoundry/garden-windows
[2] - https://github.com/cloudfoundry/diego-windows-release
[3] - https://www.pivotaltracker.com/n/projects/1479998
[4] - https://github.com/cloudfoundry/bosh-agent


Re: Proposal to Change CATs Ownership

Matthew Sykes <matthew.sykes@...>
 

This has as much to do with finding the "right" place for the CATS as it
does anything else. The CATS are the end-to-end acceptance tests that are
associated with what used to be called `cf-release` - the thing that
(before we spread our bits across 20+ releases) actually represented what
Cloud Foundry was. Not only does it reflect the target environment, it
tries to ensure that the primary developer experience, the cf cli,
continues to function correctly.

Now that all of the things that used to be part of the coherent, versioned,
collection of components have been spread to the wind, it's probably best
to make the CATS an independent release too.

Basically, I also believe that it's inappropriate to pull them into the
CAPI releases; they have as much to do with the CLI, Diego, LAM, and
Routing teams as they do with CAPI.

On Thu, Mar 24, 2016 at 11:48 AM, Utako Ueda <uueda(a)pivotal.io> wrote:

To address your concerns, Mike:

I think there's a solution here that won't hinder people from contributing
to CATs. Contributing to CATs won't necessarily change from the perspective
of those who already have push access. Your changes would make their way
through the CAPI pipeline first, get bumped within capi-release in a manner
similar to how we currently bump cloud_controller_ng, before getting bumped
to cf-release develop. This means you could potentially fork just the CATs
repo and push to it. Our CI pipeline would take the responsibility of
pointing capi-release to the correct SHA of its submoduled CATs.



On Wed, Mar 23, 2016 at 12:03 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

I don't have any show stopping cases. But, my 2 main concerns are:

* Conceptually speaking if the CATS are for all will making them part of
capi hinder other teams from contributing to it?

* Having the CATs part of CAPI will require me to fork CAPI when I need
to fix/customize a test specific to our environment, something we
frequently need to do. Long term I'd prefer to fork a smaller more single
purpose release when these situations arise.

Mike


On Wed, Mar 23, 2016 at 12:43 PM, Utako Ueda <uueda(a)pivotal.io> wrote:

We had a series of meetings to figure out a good path for CATs that
ranged from multiple teams to smaller groups. We met most recently to
address concerns that include the following:

• Workflow and Release Bumping Issues: CATs relies on calls to specific
versions of CC API to run successfully. The CAPI CI currently bumps
cloud_controller_ng within capi-release, and on a successful run of CATs
(which lives on cf-release develop), capi-release then gets bumped in
cf-release develop. This leads to a "chicken and egg" scenario, in that
when changes are made to both CATs and CC, both cf-release and capi
pipelines are broken until a manual fix is made.
• In a world where cf-release no longer exists, how do we ensure CATs
uses the correct versions of the CF CLI and CC API?

tl;dr: Though not all of the CATs test the CC API explicitly, they all
make use of the CC API and therefore make CC a choke point. Having CATs as
a separate release may be a good idea, but doesn't necessarily address the
strong coupling issues.

We'd like to know if there are specific cases in which this will be
detrimental to other projects' workflows before moving forward.


On Wed, Mar 23, 2016 at 4:08 AM, Michael Fraenkel <
michael.fraenkel(a)gmail.com> wrote:

Can you explain why CATs should live under CAPI?
CATs now represents the acceptance tests for all of the Cloud Foundry
functionality, driven mainly via the CLI.
Tying this to any one release seems a bit artificial.
CATs is about testing the aggregation of all the various releases not
just CAPI.

I am more surprised that it wasn't suggested to be its own release
which dictated the versions for all releases that had passed.

- Michael


On 3/22/16 3:05 AM, Utako Ueda wrote:

Hi all,

The CAPI team would like to take ownership of the CF acceptance tests
<https://github.com/cloudfoundry/cf-acceptance-tests/> and include
them as part of CAPI release
<http://github.com/cloudfoundry/capi-release>.

This solves several pain points we've experienced over the last few
months, mainly due to the strong coupling between CATs and the Cloud
Controller API.

Our plan is to submodule the CATs repo into CAPI release, and bump
CATs on a successful run through the CAPI CI pipeline. At this point, CAPI
release will be bumped in cf-release develop, which other teams will
consume for their own testing purposes.

We hope to make these changes in the near future as we wrap up our
release extraction from cf-release. We'd like to know if any teams have any
concerns about this before we proceed, so do let us know as soon as
possible so we can address them.

Thanks,
Utako, CF CAPI Team


--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: CC BUILDPACK_SET App Usage Events

Nicholas Calugar
 

Hi Piotr,

You are correct, the next start would simply contain the new buildpack used
to stage the droplet. Thanks for the feedback on the new event, as long as
everyone is ok with DROPLET_CHANGED, I'll add that to the story.

Thanks,

Nick

On Thu, Mar 24, 2016 at 4:40 PM Piotr Przybylski <piotrp(a)us.ibm.com> wrote:

Hi Nick,
If the STARTED event would guarantee to always include buildpack name and
guid, then BUILDPACK_SET seems redundant and could be omitted. If the
droplet change in running application is possible, we would like to know
that - so the new event would be needed. And if the droplet changes in the
stopped app, the next start would simply contain new buildpack guid/name -
correct ?

Thank you,

Piotr

Piotr Przybylski / IBM Bluemix



From: Nicholas Calugar <ncalugar(a)pivotal.io>
To: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>
Date: 03/24/2016 02:21 PM
Subject: [cf-dev] Re: Re: CC BUILDPACK_SET App Usage Events
------------------------------




Hi Matthew,

Thanks for the response. When we start an app in V3, we will always know
the buildpack because we are starting the app's current droplet. The
droplet was staged with a particular buildpack, so we can record the
buildpack in the STARTED app usage event.

As promised, a couple follow up questions:

If we record the buildpack_guid in the STARTED event, can we omit the
BUILDPACK_SET event in V3?

Currently, a V3 app must be stopped to change the current droplet. We have
an upcoming story [1] to enable changing the droplet on a running
application. Would we want to add something like a DROPLET_CHANGED app
usage event to indicate a running app is now using a different buildpack?


Thanks,

-Nick

Nicholas Calugar
CAPI Product Manager
Pivotal Software, Inc.

[1] https://www.pivotaltracker.com/story/show/111166678

On Wed, Mar 23, 2016 at 7:34 PM Matthew Sykes <matthew.sykes(a)gmail.com> wrote:

The buildpack set event was implemented to enable usage based billing
for buildpack applications where the rates differed by the buildpack used
to stage the application.

When staging an application in /v2, if a buildpack is not specified,
we don't know which buildpack will stage the application until after the
detect phase of staging has occurred. That means at the time the usage
event was captured for the transition to start, the buildpack was
unavailable.

The buildpack set event makes that information available to billing
systems after staging completes.

On Tue, Mar 22, 2016 at 8:03 PM, Nicholas Calugar <ncalugar(a)pivotal.io> wrote:
Hi CF-Dev,

We are continuing work on the Cloud Controller V3 API and had a
question about a particular App Usage Event we write as part of V2, the
BUILDPACK_SET event. The test[1] around this indicates that we write the
app usage event when staging completes and the app is still started. In V2,
apps, packages, and droplets are very tightly coupled, so writing this
event here makes sense.

In V3, apps, packages, and droplets are first class resources and we
don't stage apps, we stage packages to create droplets. Furthermore,
staging with a particular buildpack does not affect the app until the
droplet is assigned to the app as the current droplet and the current
droplet can be changed to any valid droplet for the app. Staging completion
and the app being started no longer seem to correlate to the buildpack
being "set".

With the above differences, we are hoping to understand the use-case
around the BUILDPACK_SET event so we can correctly preserve the desired
behavior for V3. I'll likely have follow up questions, but the first thing
I'd like to know is what BUILDPACK_SET indicates to downstream billing
engines.

Thanks,

-Nick

Nicholas Calugar
CAPI Product Manager
Pivotal Software, Inc.

[1]
https://github.com/cloudfoundry/cloud_controller_ng/blob/45b311f18d8ad1184dcb647081b19eca6f1eaf83/spec/unit/models/runtime/app_spec.rb#L1345-L1369



--
Matthew Sykes
matthew.sykes(a)gmail.com





Re: Reply: Re: Incubation request: App Auto-Scaling service

Mike Youngstrom
 

Great! I'd love a mysql backend for this service.

Mike

On Thu, Mar 24, 2016 at 1:16 PM, Yang Bo <boyang9527(a)hotmail.com> wrote:

I guess not. We are working on a BOSH release, but only for single-node
CouchDB as the first step.

We plan to implement a database layer with MySQL or Postgres.
------------------------------
*From:* Mike Youngstrom <youngm(a)gmail.com>
*Sent:* 24 March 2016, 16:25
*To:* Discussions about Cloud Foundry projects and the system overall.
*Subject:* [cf-dev] Re: Incubation request: App Auto-Scaling service

Is anyone aware of a good CouchDB Bosh release?

Mike

On Mon, Mar 21, 2016 at 2:54 PM, Koper, Dies <diesk(a)fast.au.fujitsu.com>
wrote:

Hi Shannon,



Our App Auto-Scaling proposal has received overwhelming support from the
community, and we have received and addressed various feedback.



Now, Fujitsu and IBM would like to put the proposal forward to the
Services PMC as an incubation project.



Project name: App Auto-Scaling

Project proposal:
https://docs.google.com/document/d/1HHhj9ZK-trI_VVDR34bwOnWem83UAq5_Pjr9imRTazY/edit?usp=sharing

Proposed Project Lead: Michael Fraenkel (IBM)

Proposed Scope: Please refer to the Goals and Non-goals sections in the
proposal

Development Operating Model: Distributed Committer Model

Technical Approach: Please refer to the Deliverables section in the
proposal

Initial team committed: 7 engineers: 2 from Fujitsu, 2 from SAP, 3 from
IBM (not including the Lead)



As IBM's recently open-sourced app auto-scaling solution (see
https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/message/QRBQKDDG7PWS6EMEBWI4ARFQ3OGSSSWB)
seems functionally a good match for our proposal, the team will look at using that code.

Michael can be contacted to answer any IP concerns about this code.

Michael can be contacted to answer any IP concerns about this code.



Please let me know if you have any questions.



Regards,

Dies Koper

Fujitsu





*From:* Koper, Dies [mailto:diesk(a)fast.au.fujitsu.com]
*Sent:* Friday, February 19, 2016 11:53 AM
*To:* cf-dev(a)lists.cloudfoundry.org
*Subject:* [cf-dev] Proposal: App Auto-Scaling service



Dear community,



We have relaunched this initiative and put together a proposal for an App
Auto-Scaling service, ready for your review:




https://docs.google.com/document/d/1HHhj9ZK-trI_VVDR34bwOnWem83UAq5_Pjr9imRTazY/edit?usp=sharing



The proposal includes feedback from SAP, as well as some of the
suggestions you have shared with me.



The intent is to start with an MVP, really a minimal yet useful solution,
that can serve as a foundation to build upon to address the various
problems raised.



We welcome your feedback, as well as your participation in this exciting
project!



Regards,

Dies Koper

Fujitsu


Re: Reply: Re: Incubation request: App Auto-Scaling service

Marco Nicosia
 

On Thu, Mar 24, 2016 at 12:16 PM, Yang Bo <boyang9527(a)hotmail.com> wrote:

I guess not. We are working on a BOSH release, but only for single-node
CouchDB as the first step.

We plan to implement a database layer with MySQL or Postgres.

I'm sure there are many strong opinions about which is the best way to go,
MySQL or Postgres.

But I suppose it's OK for me to mention that the CF Foundation project,
cf-mysql <https://github.com/cloudfoundry/cf-mysql-release>, has an
HA-cluster option out of the box. :)

--
Marco Nicosia
Product Manager
Pivotal Software, Inc.

------------------------------
*From:* Mike Youngstrom <youngm(a)gmail.com>
*Sent:* 24 March 2016, 16:25
*To:* Discussions about Cloud Foundry projects and the system overall.
*Subject:* [cf-dev] Re: Incubation request: App Auto-Scaling service

Is anyone aware of a good CouchDB Bosh release?

Mike

On Mon, Mar 21, 2016 at 2:54 PM, Koper, Dies <diesk(a)fast.au.fujitsu.com>
wrote:

Hi Shannon,



Our App Auto-Scaling proposal has received overwhelming support from the
community, and we have received and addressed various feedback.



Now, Fujitsu and IBM would like to put the proposal forward to the
Services PMC as an incubation project.



Project name: App Auto-Scaling

Project proposal:
https://docs.google.com/document/d/1HHhj9ZK-trI_VVDR34bwOnWem83UAq5_Pjr9imRTazY/edit?usp=sharing

Proposed Project Lead: Michael Fraenkel (IBM)

Proposed Scope: Please refer to the Goals and Non-goals sections in the
proposal

Development Operating Model: Distributed Committer Model

Technical Approach: Please refer to the Deliverables section in the
proposal

Initial team committed: 7 engineers: 2 from Fujitsu, 2 from SAP, 3 from
IBM (not including the Lead)



As IBM's recently open-sourced app auto-scaling solution (see
https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/message/QRBQKDDG7PWS6EMEBWI4ARFQ3OGSSSWB)
seems functionally a good match for our proposal, the team will look at using that code.

Michael can be contacted to answer any IP concerns about this code.

Michael can be contacted to answer any IP concerns about this code.



Please let me know if you have any questions.



Regards,

Dies Koper

Fujitsu





*From:* Koper, Dies [mailto:diesk(a)fast.au.fujitsu.com]
*Sent:* Friday, February 19, 2016 11:53 AM
*To:* cf-dev(a)lists.cloudfoundry.org
*Subject:* [cf-dev] Proposal: App Auto-Scaling service



Dear community,



We have relaunched this initiative and put together a proposal for an App
Auto-Scaling service, ready for your review:




https://docs.google.com/document/d/1HHhj9ZK-trI_VVDR34bwOnWem83UAq5_Pjr9imRTazY/edit?usp=sharing



The proposal includes feedback from SAP, as well as some of the
suggestions you have shared with me.



The intent is to start with an MVP, really a minimal yet useful solution,
that can serve as a foundation to build upon to address the various
problems raised.



We welcome your feedback, as well as your participation in this exciting
project!



Regards,

Dies Koper

Fujitsu


Re: CC BUILDPACK_SET App Usage Events

Piotr Przybylski <piotrp@...>
 

Hi Nick,
If the STARTED event is guaranteed to always include the buildpack name
and guid, then BUILDPACK_SET seems redundant and could be omitted. If a
droplet change in a running application is possible, we would like to know
that, so the new event would be needed. And if the droplet changes in a
stopped app, the next start would simply contain the new buildpack
guid/name - correct?

Thank you,

Piotr

Piotr Przybylski / IBM Bluemix




From: Nicholas Calugar <ncalugar(a)pivotal.io>
To: "Discussions about Cloud Foundry projects and the system
overall." <cf-dev(a)lists.cloudfoundry.org>
Date: 03/24/2016 02:21 PM
Subject: [cf-dev] Re: Re: CC BUILDPACK_SET App Usage Events



Hi Matthew,

Thanks for the response. When we start an app in V3, we will always know
the buildpack because we are starting the app's current droplet. The
droplet was staged with a particular buildpack, so we can record the
buildpack in the STARTED app usage event.

As promised, a couple of follow-up questions:

If we record the buildpack_guid in the STARTED event, can we omit the
BUILDPACK_SET event in V3?

Currently, a V3 app must be stopped to change the current droplet. We have
an upcoming story [1] to enable changing the droplet on a running
application. Would we want to add something like a DROPLET_CHANGED app
usage event to indicate a running app is now using a different buildpack?


Thanks,

-Nick

Nicholas Calugar
CAPI Product Manager
Pivotal Software, Inc.

[1] https://www.pivotaltracker.com/story/show/111166678

On Wed, Mar 23, 2016 at 7:34 PM Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:
The buildpack set event was implemented to enable usage based billing for
buildpack applications where the rates differed by the buildpack used to
stage the application.

When staging an application in /v2, if a buildpack is not specified, we
don't know which buildpack will stage the application until after the
detect phase of staging has occurred. That means at the time the usage
event was captured for the transition to start, the buildpack was
unavailable.

The buildpack set event makes that information available to billing
systems after staging completes.

On Tue, Mar 22, 2016 at 8:03 PM, Nicholas Calugar <ncalugar(a)pivotal.io>
wrote:
Hi CF-Dev,

We are continuing work on the Cloud Controller V3 API and had a question
about a particular App Usage Event we write as part of V2, the
BUILDPACK_SET event. The test[1] around this indicates that we write the
app usage event when staging completes and the app is still started. In
V2, apps, packages, and droplets are very tightly coupled, so writing
this event here makes sense.

In V3, apps, packages, and droplets are first class resources and we
don't stage apps, we stage packages to create droplets. Furthermore,
staging with a particular buildpack does not affect the app until the
droplet is assigned to the app as the current droplet and the current
droplet can be changed to any valid droplet for the app. Staging
completion and the app being started no longer seem to correlate to the
buildpack being "set".

With the above differences, we are hoping to understand the use-case
around the BUILDPACK_SET event so we can correctly preserve the desired
behavior for V3. I'll likely have follow up questions, but the first
thing I'd like to know is what BUILDPACK_SET indicates to downstream
billing engines.

Thanks,

-Nick

Nicholas Calugar
CAPI Product Manager
Pivotal Software, Inc.

[1]
https://github.com/cloudfoundry/cloud_controller_ng/blob/45b311f18d8ad1184dcb647081b19eca6f1eaf83/spec/unit/models/runtime/app_spec.rb#L1345-L1369



--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: CC BUILDPACK_SET App Usage Events

Nicholas Calugar
 

Hi Matthew,

Thanks for the response. When we start an app in V3, we will always know
the buildpack because we are starting the app's current droplet. The
droplet was staged with a particular buildpack, so we can record the
buildpack in the STARTED app usage event.

As promised, a couple of follow-up questions:

If we record the buildpack_guid in the STARTED event, can we omit the
BUILDPACK_SET event in V3?

Currently, a V3 app must be stopped to change the current droplet. We have
an upcoming story [1] to enable changing the droplet on a running
application. Would we want to add something like a DROPLET_CHANGED app
usage event to indicate a running app is now using a different buildpack?


Thanks,

-Nick

Nicholas Calugar
CAPI Product Manager
Pivotal Software, Inc.

[1] https://www.pivotaltracker.com/story/show/111166678

On Wed, Mar 23, 2016 at 7:34 PM Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

The buildpack set event was implemented to enable usage based billing for
buildpack applications where the rates differed by the buildpack used to
stage the application.

When staging an application in /v2, if a buildpack is not specified, we
don't know which buildpack will stage the application until after the
detect phase of staging has occurred. That means at the time the usage
event was captured for the transition to start, the buildpack was
unavailable.

The buildpack set event makes that information available to billing
systems after staging completes.

On Tue, Mar 22, 2016 at 8:03 PM, Nicholas Calugar <ncalugar(a)pivotal.io>
wrote:

Hi CF-Dev,

We are continuing work on the Cloud Controller V3 API and had a question
about a particular App Usage Event we write as part of V2, the
BUILDPACK_SET event. The test[1] around this indicates that we write the
app usage event when staging completes and the app is still started. In V2,
apps, packages, and droplets are very tightly coupled, so writing this
event here makes sense.

In V3, apps, packages, and droplets are first class resources and we
don't stage apps, we stage packages to create droplets. Furthermore,
staging with a particular buildpack does not affect the app until the
droplet is assigned to the app as the current droplet and the current
droplet can be changed to any valid droplet for the app. Staging completion
and the app being started no longer seem to correlate to the buildpack
being "set".

With the above differences, we are hoping to understand the use-case
around the BUILDPACK_SET event so we can correctly preserve the desired
behavior for V3. I'll likely have follow up questions, but the first thing
I'd like to know is what BUILDPACK_SET indicates to downstream billing
engines.

Thanks,

-Nick

Nicholas Calugar
CAPI Product Manager
Pivotal Software, Inc.

[1]
https://github.com/cloudfoundry/cloud_controller_ng/blob/45b311f18d8ad1184dcb647081b19eca6f1eaf83/spec/unit/models/runtime/app_spec.rb#L1345-L1369


--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: Incubation request: App Auto-Scaling service

Yang Bo
 

I guess no. We are working on a bosh release, but only for single-node Couch as the first step.

We have a plan to implement a database layer with MySQL or Postgres.

________________________________
From: Mike Youngstrom <youngm(a)gmail.com>
Sent: March 24, 2016 16:25
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Incubation request: App Auto-Scaling service

Is anyone aware of a good CouchDB Bosh release?

Mike

On Mon, Mar 21, 2016 at 2:54 PM, Koper, Dies <diesk(a)fast.au.fujitsu.com<mailto:diesk(a)fast.au.fujitsu.com>> wrote:
Hi Shannon,

Our App Auto-Scaling proposal has received overwhelming support from the community, and we have received and addressed various feedback.

Now, Fujitsu and IBM would like to put the proposal forward to the Services PMC as an incubation project.

Project name: App Auto-Scaling
Project proposal: https://docs.google.com/document/d/1HHhj9ZK-trI_VVDR34bwOnWem83UAq5_Pjr9imRTazY/edit?usp=sharing
Proposed Project Lead: Michael Fraenkel (IBM)
Proposed Scope: Please refer to the Goals and Non-goals sections in the proposal
Development Operating Model: Distributed Committer Model
Technical Approach: Please refer to the Deliverables section in the proposal
Initial team committed: 7 engineers: 2 from Fujitsu, 2 from SAP, 3 from IBM (not including the Lead)

As IBM’s recently open-sourced app auto-scaling solution (see https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/message/QRBQKDDG7PWS6EMEBWI4ARFQ3OGSSSWB) seems functionally a good match for our proposal, the team will look at using that code.
Michael can be contacted to answer any IP concerns about this code.

Please let me know if you have any questions.

Regards,
Dies Koper
Fujitsu


From: Koper, Dies [mailto:diesk(a)fast.au.fujitsu.com<mailto:diesk(a)fast.au.fujitsu.com>]
Sent: Friday, February 19, 2016 11:53 AM
To: cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Proposal: App Auto-Scaling service

Dear community,

We have relaunched this initiative and put together a proposal for an App Auto-Scaling service, ready for your review:

https://docs.google.com/document/d/1HHhj9ZK-trI_VVDR34bwOnWem83UAq5_Pjr9imRTazY/edit?usp=sharing

The proposal includes feedback from SAP, as well as some of the suggestions you have shared with me.

The intent is to start with an MVP, really a minimal yet useful solution, that can serve as a foundation to build upon to address the various problems raised.

We welcome your feedback, as well as your participation in this exciting project!

Regards,
Dies Koper
Fujitsu


npm install zmq fails during cf push

av V
 

Steps to reproduce:
1.) Create an empty folder.
2.) Add this package.json file.

{
  "name": "zmq-test",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "author": "",
  "license": "",
  "dependencies": {
    "zmq": "2.8.0"
  },
  "engines": {
    "node": "0.10.33"
  }
}
3.) Add this index.js file.

var zmq = require('zmq');
var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  for (var env in process.env) {
    res.write(env + '=' + process.env[env] + '\n');
  }
  res.end();
}).listen(process.env.PORT || 3000);
4.) Run npm install. That should build the library locally.
5.) Push the application to a CF

When the application starts, I get the below error
Prebuild detected (node_modules already exists)
Rebuilding any native modules
> zmq(a)2.8.0 install /tmp/staged/app/node_modules/zmq
> node-gyp rebuild
make: Entering directory `/tmp/staged/app/node_modules/zmq/build'
CXX(target) Release/obj.target/zmq/binding.o
../binding.cc:28:17: fatal error: zmq.h: No such file or directory
#include <zmq.h>
^
compilation terminated.
make: *** [Release/obj.target/zmq/binding.o] Error 1
make: Leaving directory `/tmp/staged/app/node_modules/zmq/build'
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/tmp/staged/app/.heroku/node/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:276:23)
gyp ERR! stack at ChildProcess.emit (events.js:98:17)
gyp ERR! stack at Process.ChildProcess._handle.onexit (child_process.js:820:12)
gyp ERR! System Linux 3.19.0-49-generic
gyp ERR! command "node" "/tmp/staged/app/.heroku/node/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /tmp/staged/app/node_modules/zmq
gyp ERR! node -v v0.10.33
gyp ERR! node-gyp -v v3.3.1
gyp ERR! not ok
npm ERR! Linux 3.19.0-49-generic
npm ERR! argv "node" "/tmp/staged/app/.heroku/node/bin/npm" "rebuild"
npm ERR! node v0.10.33
npm ERR! npm v1.4.28
npm ERR! code ELIFECYCLE
npm ERR! zmq(a)2.14.0 install: `node-gyp rebuild`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the zmq(a)2.8.0 install script 'node-gyp rebuild'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the zmq package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node-gyp rebuild
npm ERR! You can get information on how to open an issue for this project with:
npm ERR! npm bugs zmq
npm ERR! Or if that isn't available, you can get their info via:
npm ERR! npm owner ls zmq
npm ERR! There is likely additional logging output above.
npm ERR! Please include the following file with any support request:
npm ERR! /tmp/staged/app/npm-debug.log
-----> Build failed
We're sorry this build is failing! You can troubleshoot common issues here:
https://devcenter.heroku.com/articles/troubleshooting-node-deploys
Some possible problems:
- node_modules checked into source control
https://docs.npmjs.com/misc/faq#should-i-check-my-node-modules-folder-into-git
Love,
Heroku
Staging failed: Buildpack compilation step failed
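
The build output's own hint ("node_modules checked into source control") points at one common mitigation: exclude the locally built node_modules from the push, so the buildpack fetches and compiles the modules during staging instead of rebuilding binaries produced on a different OS. A minimal .cfignore for that (assuming the cf CLI's standard ignore-file behavior) would be:

```
node_modules/
```

Note that staging can still fail at the same point if the zmq.h development header is not available in the staging root filesystem; this only removes the locally prebuilt modules from the equation.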


Re: Upcoming extraction of cflinuxfs2 rootfs release from diego-release

Eric Malm <emalm@...>
 

Thanks, Benjamin! The Buildpacks team just started the extraction story (
https://www.pivotaltracker.com/story/show/115888335) yesterday and is
continuing it today, so now would be an ideal time for you to weigh in with
your extraction efforts.

Best,
Eric

On Thu, Mar 24, 2016 at 3:29 AM, Benjamin Gandon <benjamin(a)gandon.org>
wrote:

Hi Eric,

Given the pace of cflinuxfs2 updates, I've built and successfully deployed
such a BOSH release. It also ships with example manifests. I'll share it
with you this afternoon.

I've been using a "cheap" local blobstore, but it would be helpful if you
published the blobs to a public bucket.

/Benjamin

On 18 March 2016 at 00:01, Eric Malm <emalm(a)pivotal.io> wrote:

Dear CF Community,

Over the next few weeks, the Diego and Buildpacks teams will be working
together to extract a new BOSH release for the cflinuxfs2 rootfs/stack out
of the Diego BOSH release. The Buildpacks team has been doing an amazing
job of publishing new rootfs images in response to CVEs, and this
separation will make it easier for all Diego deployment operators to update
to those latest rootfs images without having to update their other
releases. We've already taken advantage of the same kind of separation
between Garden-Linux and Diego when addressing some recent Garden CVEs, and
we're looking forward to having that flexibility with the rootfs image as
well.

Once completed, the release extraction will mean a couple of minor changes
for Diego deployment operators:

- You'll have one more release to upload alongside the Diego,
Garden-Linux, and etcd BOSH releases to deploy your Diego cluster. The
Diego release tarball itself will be much smaller, as it will no longer
include the rootfs image that accounts for about 70% of its current size.
- If you use the spiff-based manifest-generation script in the
diego-release repo to produce your manifest, that's all you'll have to do!
If you're hand-rolling your manifests, you will have one or two BOSH
properties to add or move, and an entry to change in the list of job
templates on the Diego Cell VMs.

We'll call out these changes explicitly in the Diego release notes on
GitHub when the time comes.

Just as we do with the Garden-Linux and etcd releases today, the Diego
team will also attach a recent final cflinuxfs2-rootfs release tarball to
each final Diego release we publish on GitHub, so it will be easy for
consumers to get a validated set of default 'batteries' to plug into Diego.
We'll also work with the CF Release Integration team to make sure that when
the most recent rootfs image passes tests against Diego in their
integration environments, its release version is recorded in the Diego/CF
compatibility record, at
https://github.com/cloudfoundry-incubator/diego-cf-compatibility/blob/master/compatibility-v2.csv
.

If you would like to track our progress, please follow the
'cflinuxfs2-release-extraction' epic in the Diego tracker (
https://www.pivotaltracker.com/epic/show/2395419) and the 'bosh-release'
label in the Buildpacks tracker (
https://www.pivotaltracker.com/n/projects/1042066/search?q=label%3A%22bosh-release%22
).

Thanks,
Eric Malm, CF Runtime Diego PM
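
The operator-facing changes Eric lists can be sketched as a manifest delta. The release and job template names below are assumptions (the extraction was still in progress when this was written); check the Diego release notes for the final names.

```yaml
# Hypothetical manifest delta: names are placeholders, not final.
releases:
- name: diego
  version: latest
- name: garden-linux
  version: latest
- name: etcd
  version: latest
- name: cflinuxfs2-rootfs   # the newly extracted rootfs release
  version: latest

jobs:
- name: cell_z1
  templates:
  - name: rep
    release: diego
  - name: cflinuxfs2-rootfs-setup   # moved out of the diego release
    release: cflinuxfs2-rootfs
```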


Re: Incubation request: App Auto-Scaling service

Mike Youngstrom
 

Is anyone aware of a good CouchDB Bosh release?

Mike

On Mon, Mar 21, 2016 at 2:54 PM, Koper, Dies <diesk(a)fast.au.fujitsu.com>
wrote:

Hi Shannon,



Our App Auto-Scaling proposal has received overwhelming support from the
community, and we have received and addressed various feedback.



Now, Fujitsu and IBM would like to put the proposal forward to the
Services PMC as an incubation project.



Project name: App Auto-Scaling

Project proposal:
https://docs.google.com/document/d/1HHhj9ZK-trI_VVDR34bwOnWem83UAq5_Pjr9imRTazY/edit?usp=sharing

Proposed Project Lead: Michael Fraenkel (IBM)

Proposed Scope: Please refer to the Goals and Non-goals sections in the
proposal

Development Operating Model: Distributed Committer Model

Technical Approach: Please refer to the Deliverables section in the
proposal

Initial team committed: 7 engineers: 2 from Fujitsu, 2 from SAP, 3 from
IBM (not including the Lead)



As IBM’s recently open-sourced app auto-scaling solution (see
https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/message/QRBQKDDG7PWS6EMEBWI4ARFQ3OGSSSWB)
seems functionally a good match for our proposal, the team will look at using that code.

Michael can be contacted to answer any IP concerns about this code.



Please let me know if you have any questions.



Regards,

Dies Koper

Fujitsu





*From:* Koper, Dies [mailto:diesk(a)fast.au.fujitsu.com]
*Sent:* Friday, February 19, 2016 11:53 AM
*To:* cf-dev(a)lists.cloudfoundry.org
*Subject:* [cf-dev] Proposal: App Auto-Scaling service



Dear community,



We have relaunched this initiative and put together a proposal for an App
Auto-Scaling service, ready for your review:




https://docs.google.com/document/d/1HHhj9ZK-trI_VVDR34bwOnWem83UAq5_Pjr9imRTazY/edit?usp=sharing



The proposal includes feedback from SAP, as well as some of the
suggestions you have shared with me.



The intent is to start with an MVP, really a minimal yet useful solution,
that can serve as a foundation to build upon to address the various
problems raised.



We welcome your feedback, as well as your participation in this exciting
project!



Regards,

Dies Koper

Fujitsu


Re: Announcement: BOSH for Windows

Mike Youngstrom
 

This is great!

Mike

On Thu, Mar 24, 2016 at 9:06 AM, Steven Benario <sbenario(a)pivotal.io> wrote:

Hello everyone,

We wanted to take this opportunity to announce a new incubating project
under the BOSH umbrella.

In the fall, we launched full support for Windows and .NET applications in
Cloud Foundry with Project Greenhouse (composed of Garden-Windows [1] and
Diego-Windows [2]). Since then, we've seen a lot of interest in bringing
the magic of CF to Windows developers.

One of the most common requests along those lines has been for BOSH to
support Windows cells. I'm pleased to announce that we have recently begun
adding support for Windows servers to Bosh (aka "Bosh for Windows" or
"BoshWin"). This project can be tracked at its public tracker [3], and will
be an open source contribution to the Foundation.

To date, all contributions have been committed directly to the Bosh-Agent
repo [4], under "windows" branches. If appropriate, we may commit code to
other repos in the future.

We are really excited to announce this, and would love to hear from you if
you are a potential user or contributor to Bosh-Windows!

Cheers,
Steven Benario
PM for Bosh-Windows



[1] - https://github.com/cloudfoundry/garden-windows
[2] - https://github.com/cloudfoundry/diego-windows-release
[3] - https://www.pivotaltracker.com/n/projects/1479998
[4] - https://github.com/cloudfoundry/bosh-agent


Re: Adding previous_instances and previous_memory fields to cf_event

Hristo Iliev
 

Hi Nick,

Adding previous state sounds good. Will add it in the PR as well.

Thanks,
Hristo Iliev

2016-03-24 17:29 GMT+02:00 Nicholas Calugar <ncalugar(a)pivotal.io>:

Hi Hristo,

I'm fine with a PR to add these two fields. Would it make sense to add
previous state as well?

Thanks,

Nick

On Thu, Mar 24, 2016 at 12:59 AM Dieu Cao <dcao(a)pivotal.io> wrote:

Hi Hristo,

I think a PR to add them would be fine, but I would defer to Nick
Calugar, who's taking over as PM of CAPI, to make that call.

-Dieu

On Wed, Mar 23, 2016 at 2:12 PM, Hristo Iliev <hsiliev(a)gmail.com> wrote:

Hi again,

Would you consider a PR that adds previous memory & instances to the app
usage events? Do these two additional fields make sense?

Regards,
Hristo Iliev
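
The fields discussed in this thread can be illustrated with a hypothetical scale event. This is a sketch under the assumption that the PR adds previous_instances and previous_memory alongside the existing instance_count and memory_in_mb_per_instance fields, plus the previous_state field Nick suggested; the exact shape is not confirmed anywhere in this thread.

```python
# Hypothetical app usage event for a scale operation, extended with the
# fields proposed in this thread. Existing field names mirror
# /v2/app_usage_events; the previous_* additions are the proposal, not
# shipped behavior.

def scale_event(app_guid, old, new):
    """Emit one event carrying both old and new values, so billing
    systems need not replay earlier events to compute deltas."""
    return {
        "app_guid": app_guid,
        "state": new["state"],
        "previous_state": old["state"],          # Nick's suggestion
        "instance_count": new["instances"],
        "previous_instances": old["instances"],  # proposed field
        "memory_in_mb_per_instance": new["memory"],
        "previous_memory": old["memory"],        # proposed field
    }

event = scale_event(
    "app-1",
    {"state": "STARTED", "instances": 2, "memory": 512},
    {"state": "STARTED", "instances": 4, "memory": 512},
)
```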


Re: Proposal to Change CATs Ownership

Utako Ueda
 

To address your concerns, Mike:

I think there's a solution here that won't hinder people from contributing
to CATs. Contributing to CATs won't necessarily change from the perspective
of those who already have push access. Your changes would make their way
through the CAPI pipeline first, get bumped within capi-release in a manner
similar to how we currently bump cloud_controller_ng, before getting bumped
to cf-release develop. This means you could potentially fork just the CATs
repo and push to it. Our CI pipeline would take the responsibility of
pointing capi-release to the correct SHA of its submoduled CATs.

On Wed, Mar 23, 2016 at 12:03 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

I don't have any show stopping cases. But, my 2 main concerns are:

* Conceptually speaking, if the CATs are for everyone, will making them
part of CAPI hinder other teams from contributing to them?

* Having the CATs part of CAPI will require me to fork CAPI when I need to
fix/customize a test specific to our environment, something we frequently
need to do. Long term I'd prefer to fork a smaller more single purpose
release when these situations arise.

Mike


On Wed, Mar 23, 2016 at 12:43 PM, Utako Ueda <uueda(a)pivotal.io> wrote:

We had a series of meetings to figure out a good path for CATs that
ranged from multiple teams to smaller groups. We met most recently to
address concerns that include the following:

• Workflow and Release Bumping Issues: CATs relies on calls to specific
versions of CC API to run successfully. The CAPI CI currently bumps
cloud_controller_ng within capi-release, and on a successful run of CATs
(which lives on cf-release develop), capi-release then gets bumped in
cf-release develop. This leads to a "chicken and egg" scenario, in that
when changes are made to both CATs and CC, both cf-release and capi
pipelines are broken until a manual fix is made.
• In a world where cf-release no longer exists, how do we ensure CATs
uses the correct versions of the CF CLI and CC API?

tl;dr: Though not all of the CATs test the CC API explicitly, they all
make use of the CC API and therefore make CC a choke point. Having CATs as
a separate release may be a good idea, but doesn't necessarily address the
strong coupling issues.

We'd like to know if there are specific cases in which this will be
detrimental to other projects' workflows before moving forward.


On Wed, Mar 23, 2016 at 4:08 AM, Michael Fraenkel <
michael.fraenkel(a)gmail.com> wrote:

Can you explain why CATs should live under CAPI?
CATs now represents acceptance test for all of the Cloud Foundry function
driven mainly via the CLI.
Tying this to any one release seems a bit artificial.
CATs is about testing the aggregation of all the various releases not
just CAPI.

I am more surprised that it wasn't suggested to be its own release which
dictated the versions for all releases that had passed.

- Michael


On 3/22/16 3:05 AM, Utako Ueda wrote:

Hi all,

The CAPI team would like to take ownership of the CF acceptance tests
<https://github.com/cloudfoundry/cf-acceptance-tests/> and include them
as part of CAPI release <http://github.com/cloudfoundry/capi-release>.

This solves several pain points we've experienced over the last few
months, mainly due to the strong coupling between CATs and the Cloud
Controller API.

Our plan is to submodule the CATs repo into CAPI release, and bump CATs
on a successful run through the CAPI CI pipeline. At this point, CAPI
release will be bumped in cf-release develop, which other teams will
consume for their own testing purposes.

We hope to make these changes in the near future as we wrap up our
release extraction from cf-release. We'd like to know if any teams have any
concerns about this before we proceed, so do let us know as soon as
possible so we can address them.

Thanks,
Utako, CF CAPI Team



Re: Adding previous_instances and previous_memory fields to cf_event

Nicholas Calugar
 

Hi Hristo,

I'm fine with a PR to add these two fields. Would it make sense to add
previous state as well?

Thanks,

Nick

On Thu, Mar 24, 2016 at 12:59 AM Dieu Cao <dcao(a)pivotal.io> wrote:

Hi Hristo,

I think a PR to add them would be fine, but I would defer to Nick Calugar,
who's taking over as PM of CAPI, to make that call.

-Dieu

On Wed, Mar 23, 2016 at 2:12 PM, Hristo Iliev <hsiliev(a)gmail.com> wrote:

Hi again,

Would you consider a PR that adds previous memory & instances to the app
usage events? Do these two additional fields make sense?

Regards,
Hristo Iliev
