
Re: Adding previous_instances and previous_memory fields to cf_event

Dieu Cao <dcao@...>
 

Hi Hristo,

Correct me if I'm wrong, but it sounds like you are calling purge multiple
times. Am I misunderstanding the workflow you are describing?

Purge should only be called one time EVER on any deployment. It should not
be called on each billing cycle. The Cloud Controller will purge older
billing events itself based on the configured
cc.app_usage_events.cutoff_age_in_days and
cc.service_usage_events.cutoff_age_in_days, which both default to 31 days.
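
For reference, a minimal sketch of how those properties look in a
deployment manifest (assuming the defaults mentioned above; exact placement
depends on how your manifest is structured):

    properties:
      cc:
        app_usage_events:
          cutoff_age_in_days: 31       # app usage events older than this get pruned
        service_usage_events:
          cutoff_age_in_days: 31       # same for service usage events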

-Dieu

On Wed, Mar 9, 2016 at 11:57 PM, Hristo Iliev <hsiliev(a)gmail.com> wrote:

Hi Dieu,

We are polling app-usage-events with Abacus, but because of the purge the
events may be out of order right after the billing epoch has started. And
that's only part of the problem.

To consume app-usage-events, every integrator needs to build additional
infrastructure like:
- a simple filter, load balancer, or API management product to disable
purging once the billing epoch has started
- DB replication software that pulls data and deals with wrongly ordered
events after a purge (we use abacus-cf-bridge)
- the data warehouse described in the doc you sent

Introducing the previous values in the usage events will help us get rid
of most of the infrastructure we need just to be able to deal with usage
events before they even reach a billing system. We won't need to care
about purge calls or an additional DB, but can instead simply pull events.
The previous values help us to:
- use formulas that do not care about the order of events (solves the
purge problem)
- get the info about a billing-relevant change (we don't have to cache,
access a DB, or scan a stream to know what changed)
- simplify the processing logic in Abacus (or any other
metering/aggregation solution)

We currently pull the usage events, but we would like to be notified
instead, to offload the CC from the constant /v2/app_usage_events calls.
That alone, however, will not solve any of the problems we have now, and
may in fact mess up the ordering of the events.
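
For context, the polling our bridge does today essentially boils down to
repeating a call like the one below and remembering the GUID of the last
event it processed (a simplified sketch, not the actual abacus-cf-bridge
code; CC_URL, TOKEN and LAST_SEEN_GUID are placeholders):

    # fetch the next ordered batch of app usage events after the last seen one
    curl -s -H "Authorization: bearer $TOKEN" \
      "$CC_URL/v2/app_usage_events?results-per-page=50&after_guid=$LAST_SEEN_GUID"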

Regards,
Hristo Iliev

2016-03-10 6:32 GMT+02:00 Dieu Cao <dcao(a)pivotal.io>:

We don't advise using /v2/events for metering/billing for precisely the
reason you mention, that order of events is not guaranteed.

You can find more information about app usage events and service usage
events which are guaranteed to be in order here:
http://docs.cloudfoundry.org/running/managing-cf/usage-events.html

-Dieu
CF Runtime PMC Lead

On Wed, Mar 9, 2016 at 10:27 AM, KRuelY <kevinyudhiswara(a)gmail.com>
wrote:

Hi,

I am currently working on metering runtime usage, and one issue I'm facing
is that usage submissions may come in out of order (due to network errors
or other causes). Before getting to the issue: the way metering runtime
usage works is quite simple. There is an app that will look at cf_events
and submit usage to
[cf-abacus](https://github.com/cloudfoundry-incubator/cf-abacus).


{
  "metadata": {
    "guid": "40afe01a-b15a-4b8d-8bd1-e36a0ba2f6f5",
    "url": "/v2/app_usage_events/40afe01a-b15a-4b8d-8bd1-e36a0ba2f6f5",
    "created_at": "2016-03-02T09:48:09Z"
  },
  "entity": {
    "state": "STARTED",
    "memory_in_mb_per_instance": 512,
    "instance_count": 1,
    "app_guid": "a2ab1b5a-94c0-4344-9a71-a1d2b11f483a",
    "app_name": "abacus-usage-collector",
    "space_guid": "d34d770d-4cd0-4bdc-8c83-8fdfa5f0b3cb",
    "space_name": "dev",
    "org_guid": "238a3e78-3fc8-4542-928a-88ee99643732",
    "buildpack_guid": "b77d0ef8-da1f-4c0a-99cc-193449324706",
    "buildpack_name": "nodejs_buildpack",
    "package_state": "STAGED",
    "parent_app_guid": null,
    "parent_app_name": null,
    "process_type": "web"
  }
}


The way this app works is by looking at the state.
If the state is STARTED, it will submit usage to Abacus with
instance_memory = memory_in_mb_per_instance, running_instances =
instance_count, and since = created_at.
If the state is STOPPED, it will submit usage to Abacus with
instance_memory = 0, running_instances = 0, and since = created_at.
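
In code, that mapping is roughly the following (an illustrative
TypeScript-style sketch; the submission plumbing is omitted and the field
names are just the ones described above, not the exact cf-abacus API):

    // Shape of the relevant fields of an app usage event (see JSON above).
    interface AppUsageEvent {
      state: 'STARTED' | 'STOPPED';
      memory_in_mb_per_instance: number;
      instance_count: number;
      created_at: string;
    }

    // Map an event to the usage values that get submitted.
    function toSubmission(e: AppUsageEvent) {
      if (e.state === 'STARTED') {
        return {
          instance_memory: e.memory_in_mb_per_instance,
          running_instances: e.instance_count,
          since: e.created_at
        };
      }
      // STOPPED: report zero memory and zero instances from this point on
      return { instance_memory: 0, running_instances: 0, since: e.created_at };
    }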

In an ideal situation, where there are no out-of-order submissions, this is
fine.
'Simple, but exaggerated' example:
Usage instance_memory = 1GB, running_instances = 1, since = 3/9 00:00 comes
in (STARTED).
Usage instance_memory = 0GB, running_instances = 0, since = 3/10 00:00 comes
in (STOPPED).
Then Abacus knows that the app consumed 1GB * (3/10 - 3/9 = 24 hours) = 24
GB-hours.

But when the usage comes in out of order:
Usage instance_memory = 0GB, running_instances = 0, since = 3/10 00:00 comes
in (STOPPED).
Usage instance_memory = 1GB, running_instances = 1, since = 3/9 00:00 comes
in (STARTED).
The formula that Abacus currently has would not work.

Abacus has another formula that would take care of this out-of-order
submission, but it only works if we have previous_instance_memory and
previous_running_instances.
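
To illustrate why previous values make the computation order-independent,
here is a sketch of one way such a formula can work (illustrative only,
not necessarily the exact formula Abacus uses): each event contributes a
signed delta from its own timestamp to the end of the billing window, and
signed deltas can be summed in any order.

    // GB-hour contribution of a single event, given the end of the billing
    // window (both times in milliseconds). Summing these deltas over all
    // events gives the total, regardless of the order they arrive in.
    function gbHourDelta(e: { since: number; gb: number; previous_gb: number },
                         windowEnd: number): number {
      return (e.gb - e.previous_gb) * (windowEnd - e.since) / 3600000;
    }

    // STARTED on 3/9 00:00 (0 -> 1 GB) and STOPPED on 3/10 00:00 (1 -> 0 GB)
    // sum to 24 GB-hours for any windowEnd at or after 3/10 00:00, in any order.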

When looking for a way to have these fields, we concluded that the cleanest
way would be to add previous_memory_in_mb_per_instance and
previous_instance_count to the cf_event. It would also make app
reconfiguration or cf scale make more sense, because currently cf scale is
reported as a STOP and a START.

To sum up, the cf_event state submitted would include this information:

// Starting
{
  "state": "STARTED",
  "memory_in_mb_per_instance": 512,
  "instance_count": 1,
  "previous_memory_in_mb_per_instance": 0,
  "previous_instance_count": 0
}

// Scaling up
{
  "state": "SCALE"?,
  "memory_in_mb_per_instance": 512,
  "instance_count": 2,
  "previous_memory_in_mb_per_instance": 512,
  "previous_instance_count": 1
}

// Scale down
{
  "state": "SCALE"?,
  "memory_in_mb_per_instance": 512,
  "instance_count": 1,
  "previous_memory_in_mb_per_instance": 512,
  "previous_instance_count": 2
}

// Stopping
{
  "state": "STOPPED",
  "memory_in_mb_per_instance": 0,
  "instance_count": 0,
  "previous_memory_in_mb_per_instance": 512,
  "previous_instance_count": 1
}


Any thoughts/feedback/guidance?








--
View this message in context:
http://cf-dev.70369.x6.nabble.com/cf-dev-Adding-previous-instances-and-previous-memory-fields-to-cf-event-tp4100.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: Required manifest changes for Cloud Foundry

Benjamin Gandon
 

Indeed, I scripted a couple of « bosh create release » / « bosh upload release » runs, and now bosh-workspace is happy with it.

It worked like a charm. Buildpacks just end up being there, automatically updated. That’s great!
I really look forward to being able to update the java-buildpack in the same way!

I’m not familiar with the inner workflow of bosh-workspace either. I use it because what you type just makes sense.
I suppose bosh-workspace uploads the release manifest and tells the director to recreate the release tarball from it. It’s time-effective when your director has much wider bandwidth than your BOSH CLI.

By the way, the config/final.yml files of all the buildpacks contain the same settings:

blobstore:
  file_name: stacks
  provider: s3
  options:
    bucket_name: pivotal-buildpacks
    folder: tmp/builpacks-release-blobs

Is it normal that the file names are all « stacks » for all buildpacks? I’m afraid these settings might not have been properly set, which would explain the whole thing.

/Benjamin

On 11 March 2016 at 00:04, Amit Gupta <agupta(a)pivotal.io> wrote:

I did

cd cf-release
git fetch origin
git checkout develop
git pull --ff-only
./scripts/update
bosh create release --with-tarball

And also

cd src/buildpacks/binary-buildpack-release/
git fetch origin
(confirmed HEAD was pointed at origin/master)
bosh create release

Everything worked fine for me, meaning it was able to sync blobs down from the remote blobstores. I'm not familiar with bosh-workspace, and it's not clear to me why it's trying to upload anything (e.g. Uploading 'ruby-buildpack-release/1.6.14').

On Thu, Mar 10, 2016 at 2:48 PM, Benjamin Gandon <benjamin(a)gandon.org> wrote:
And btw Amit, it looks like the java-buildpack v3.6 is here with its fellows:
https://github.com/cloudfoundry/buildpack-releases/blob/master/java-buildpack-release/releases/java-buildpack-release/java-buildpack-release-3.6.yml
For example the « e6ff7d79e50f0aaafa92f100f346e648c503ab17 » SHA in the error below (when recreating the java-buildpack-release) is the one of the first blob in the release manifest above.


On 10 March 2016 at 23:24, Benjamin Gandon <benjamin(a)gandon.org> wrote:

No no no, these are not SHAs of cf-release, but those of all the buildpack-releases indeed.
Looks like no blobs of these releases are actually available online, are they?

I'm running the standard middle step "bosh prepare deployment" provided by bosh-workspace.
(See <https://github.com/cloudfoundry-incubator/bosh-workspace>)

/Benjamin


On 10 March 2016 at 21:14, Amit Gupta <agupta(a)pivotal.io> wrote:

At the time of the email, the java buildpack hadn't been extracted into a separate release yet. I believe it has now, and that will be reflected in CF v232.

What command did you run?
What SHA of cf-release have you checked out?

On Thu, Mar 10, 2016 at 12:10 PM, Benjamin Gandon <benjamin(a)gandon.org> wrote:
Amit, just for me to be sure, why didn’t you list the java-buildpack?

Also, have the blobs properly been uploaded?
I copy below the BOSH errors I get:

With binary-buildpack:

Uploading 'binary-buildpack-release/1.0.1'
Recreating release from the manifest
MISSING
Cannot find package with checksum `413ce11236f87273ba8a9249b6e3bebb3d0db92b'

With go-buildpack:

Uploading 'go-buildpack-release/1.7.3'
Recreating release from the manifest
MISSING
Cannot find package with checksum `300760637ee0babd5fddd474101dfa634116d9c4'

With java-buildpack:

Uploading 'java-buildpack-release/3.6'
Recreating release from the manifest
MISSING
Cannot find package with checksum `e6ff7d79e50f0aaafa92f100f346e648c503ab17'

With nodejs-buildpack:

Uploading 'nodejs-buildpack-release/1.5.7'
Recreating release from the manifest
MISSING
Cannot find package with checksum `b3edbcfb9435892749dffcb99f06d00fb4c59c5b'

With php-buildpack:

Uploading 'php-buildpack-release/4.3.6'
Recreating release from the manifest
MISSING
Cannot find package with checksum `fbc784608ffa3ceafed1810b69c12a7277c86ee0'

With python-buildpack:

Uploading 'python-buildpack-release/1.5.4'
Recreating release from the manifest
MISSING
Cannot find package with checksum `7e2377ccd9df10b21aba49c8e95338a0b1b3b92e'

With ruby-buildpack:

Uploading 'ruby-buildpack-release/1.6.14'
Recreating release from the manifest
MISSING
Cannot find package with checksum `362282d45873634db888a609cd64d7d70e9f4be2'

With staticfile-buildpack:

Uploading 'staticfile-buildpack-release/1.3.2'
Recreating release from the manifest
MISSING
Cannot find package with checksum `06382f7c804cc7f01a8dc78ca9c91e9b7f4712cc'

Are these on a specific blobstore I should point my deployment manifest at?

/Benjamin


On 18 February 2016 at 19:31, Amit Gupta <agupta(a)pivotal.io> wrote:

Hey developers,

The buildpacks team has recently extracted the buildpacks as separate releases. As we transition to deploying CF via a bunch of composed releases, for now we're making the change more transparent, by submoduling and symlinking the buildpacks releases back into cf-release. This requires some manifest changes: buildpacks are now colocated with cloud controller, rather than package dependencies of cloud controller.

If you are using spiff to generate manifests, and are not overriding the templates/jobs colocated on the api_zN jobs, you can ignore this email. If you are overriding the api_zN templates in your stub, or if you are not using spiff, you will need to add the following:

templates:
- name: consul_agent
  release: cf
+ - name: go-buildpack
+   release: cf
+ - name: binary-buildpack
+   release: cf
+ - name: nodejs-buildpack
+   release: cf
+ - name: ruby-buildpack
+   release: cf
+ - name: php-buildpack
+   release: cf
+ - name: python-buildpack
+   release: cf
+ - name: staticfile-buildpack
+   release: cf
- name: cloud_controller_ng
  release: cf

Please see this commit (https://github.com/cloudfoundry/cf-release/commit/549e5a8271bbf0d30efdb84f381f38c8bf22099d) for more details.

Best,
Amit


Re: Required manifest changes for Cloud Foundry

Amit Kumar Gupta
 

I did

cd cf-release
git fetch origin
git checkout develop
git pull --ff-only
./scripts/update
bosh create release --with-tarball

And also

cd src/buildpacks/binary-buildpack-release/
git fetch origin
(confirmed HEAD was pointed at origin/master)
bosh create release

Everything worked fine for me, meaning it was able to sync blobs down from
the remote blobstores. I'm not familiar with bosh-workspace, and it's not
clear to me why it's trying to upload anything (e.g. Uploading '
ruby-buildpack-release/1.6.14').

On Thu, Mar 10, 2016 at 2:48 PM, Benjamin Gandon <benjamin(a)gandon.org>
wrote:

And btw Amit, it looks like the java-buildpack v3.6 is here with its
fellows:

https://github.com/cloudfoundry/buildpack-releases/blob/master/java-buildpack-release/releases/java-buildpack-release/java-buildpack-release-3.6.yml
For example the « e6ff7d79e50f0aaafa92f100f346e648c503ab17 » SHA in the
error below (when recreating the java-buildpack-release) is the one of the
first blob in the release manifest above.


On 10 March 2016 at 23:24, Benjamin Gandon <benjamin(a)gandon.org> wrote:

No no no, these are not SHAs of cf-release, but those of all the
buildpack-releases indeed.
Looks like no blobs of these releases are actually available online, are
they?

I'm running the standard middle step "bosh prepare deployment" provided by
bosh-workspace.
(See <https://github.com/cloudfoundry-incubator/bosh-workspace>)

/Benjamin


On 10 March 2016 at 21:14, Amit Gupta <agupta(a)pivotal.io> wrote:

At the time of the email, the java buildpack hadn't been extracted into a
separate release yet. I believe it has now, and that will be reflected in
CF v232.

What command did you run?
What SHA of cf-release have you checked out?

On Thu, Mar 10, 2016 at 12:10 PM, Benjamin Gandon <benjamin(a)gandon.org>
wrote:

Amit, just for me to be sure, why didn’t you list the java-buildpack?

Also, have the blobs properly been uploaded?
I copy below the BOSH errors I get:

With binary-buildpack:

Uploading 'binary-buildpack-release/1.0.1'
Recreating release from the manifest
MISSING
Cannot find package with checksum
`413ce11236f87273ba8a9249b6e3bebb3d0db92b'


With go-buildpack:

Uploading 'go-buildpack-release/1.7.3'
Recreating release from the manifest
MISSING
Cannot find package with checksum
`300760637ee0babd5fddd474101dfa634116d9c4'


With java-buildpack:

Uploading 'java-buildpack-release/3.6'
Recreating release from the manifest
MISSING
Cannot find package with checksum
`e6ff7d79e50f0aaafa92f100f346e648c503ab17'


With nodejs-buildpack:

Uploading 'nodejs-buildpack-release/1.5.7'
Recreating release from the manifest
MISSING
Cannot find package with checksum
`b3edbcfb9435892749dffcb99f06d00fb4c59c5b'


With php-buildpack:

Uploading 'php-buildpack-release/4.3.6'
Recreating release from the manifest
MISSING
Cannot find package with checksum
`fbc784608ffa3ceafed1810b69c12a7277c86ee0'


With python-buildpack:

Uploading 'python-buildpack-release/1.5.4'
Recreating release from the manifest
MISSING
Cannot find package with checksum
`7e2377ccd9df10b21aba49c8e95338a0b1b3b92e'


With ruby-buildpack:

Uploading 'ruby-buildpack-release/1.6.14'
Recreating release from the manifest
MISSING
Cannot find package with checksum
`362282d45873634db888a609cd64d7d70e9f4be2'


With staticfile-buildpack:

Uploading 'staticfile-buildpack-release/1.3.2'
Recreating release from the manifest
MISSING
Cannot find package with checksum
`06382f7c804cc7f01a8dc78ca9c91e9b7f4712cc'


Are these on a specific blobstore I should point my deployment manifest
at?

/Benjamin


On 18 February 2016 at 19:31, Amit Gupta <agupta(a)pivotal.io> wrote:

Hey developers,

The buildpacks team has recently extracted the buildpacks as separate
releases. As we transition to deploying CF via a bunch of composed
releases, for now we're making the change more transparent, by submoduling
and symlinking the buildpacks releases back into cf-release. This requires
some manifest changes: buildpacks are now colocated with cloud controller,
rather than package dependencies of cloud controller.

If you are using spiff to generate manifests, and are not overriding the
templates/jobs colocated on the api_zN jobs, you can ignore this email. If
you are overriding the api_zN templates in your stub, or if you are not
using spiff, you will need to add the following:

templates:
- name: consul_agent
  release: cf
+ - name: go-buildpack
+   release: cf
+ - name: binary-buildpack
+   release: cf
+ - name: nodejs-buildpack
+   release: cf
+ - name: ruby-buildpack
+   release: cf
+ - name: php-buildpack
+   release: cf
+ - name: python-buildpack
+   release: cf
+ - name: staticfile-buildpack
+   release: cf
- name: cloud_controller_ng
  release: cf

Please see this commit (
https://github.com/cloudfoundry/cf-release/commit/549e5a8271bbf0d30efdb84f381f38c8bf22099d)
for more details.

Best,
Amit



Re: Required manifest changes for Cloud Foundry

Benjamin Gandon
 

And btw Amit, it looks like the java-buildpack v3.6 is here with its fellows:
https://github.com/cloudfoundry/buildpack-releases/blob/master/java-buildpack-release/releases/java-buildpack-release/java-buildpack-release-3.6.yml
For example the « e6ff7d79e50f0aaafa92f100f346e648c503ab17 » SHA in the error below (when recreating the java-buildpack-release) is the one of the first blob in the release manifest above.

On 10 March 2016 at 23:24, Benjamin Gandon <benjamin(a)gandon.org> wrote:

No no no, these are not SHAs of cf-release, but those of all the buildpack-releases indeed.
Looks like no blobs of these releases are actually available online, are they?

I'm running the standard middle step "bosh prepare deployment" provided by bosh-workspace.
(See <https://github.com/cloudfoundry-incubator/bosh-workspace>)

/Benjamin


On 10 March 2016 at 21:14, Amit Gupta <agupta(a)pivotal.io> wrote:

At the time of the email, the java buildpack hadn't been extracted into a separate release yet. I believe it has now, and that will be reflected in CF v232.

What command did you run?
What SHA of cf-release have you checked out?

On Thu, Mar 10, 2016 at 12:10 PM, Benjamin Gandon <benjamin(a)gandon.org> wrote:
Amit, just for me to be sure, why didn’t you list the java-buildpack?

Also, have the blobs properly been uploaded?
I copy below the BOSH errors I get:

With binary-buildpack:

Uploading 'binary-buildpack-release/1.0.1'
Recreating release from the manifest
MISSING
Cannot find package with checksum `413ce11236f87273ba8a9249b6e3bebb3d0db92b'

With go-buildpack:

Uploading 'go-buildpack-release/1.7.3'
Recreating release from the manifest
MISSING
Cannot find package with checksum `300760637ee0babd5fddd474101dfa634116d9c4'

With java-buildpack:

Uploading 'java-buildpack-release/3.6'
Recreating release from the manifest
MISSING
Cannot find package with checksum `e6ff7d79e50f0aaafa92f100f346e648c503ab17'

With nodejs-buildpack:

Uploading 'nodejs-buildpack-release/1.5.7'
Recreating release from the manifest
MISSING
Cannot find package with checksum `b3edbcfb9435892749dffcb99f06d00fb4c59c5b'

With php-buildpack:

Uploading 'php-buildpack-release/4.3.6'
Recreating release from the manifest
MISSING
Cannot find package with checksum `fbc784608ffa3ceafed1810b69c12a7277c86ee0'

With python-buildpack:

Uploading 'python-buildpack-release/1.5.4'
Recreating release from the manifest
MISSING
Cannot find package with checksum `7e2377ccd9df10b21aba49c8e95338a0b1b3b92e'

With ruby-buildpack:

Uploading 'ruby-buildpack-release/1.6.14'
Recreating release from the manifest
MISSING
Cannot find package with checksum `362282d45873634db888a609cd64d7d70e9f4be2'

With staticfile-buildpack:

Uploading 'staticfile-buildpack-release/1.3.2'
Recreating release from the manifest
MISSING
Cannot find package with checksum `06382f7c804cc7f01a8dc78ca9c91e9b7f4712cc'

Are these on a specific blobstore I should point my deployment manifest at?

/Benjamin


On 18 February 2016 at 19:31, Amit Gupta <agupta(a)pivotal.io> wrote:

Hey developers,

The buildpacks team has recently extracted the buildpacks as separate releases. As we transition to deploying CF via a bunch of composed releases, for now we're making the change more transparent, by submoduling and symlinking the buildpacks releases back into cf-release. This requires some manifest changes: buildpacks are now colocated with cloud controller, rather than package dependencies of cloud controller.

If you are using spiff to generate manifests, and are not overriding the templates/jobs colocated on the api_zN jobs, you can ignore this email. If you are overriding the api_zN templates in your stub, or if you are not using spiff, you will need to add the following:

templates:
- name: consul_agent
  release: cf
+ - name: go-buildpack
+   release: cf
+ - name: binary-buildpack
+   release: cf
+ - name: nodejs-buildpack
+   release: cf
+ - name: ruby-buildpack
+   release: cf
+ - name: php-buildpack
+   release: cf
+ - name: python-buildpack
+   release: cf
+ - name: staticfile-buildpack
+   release: cf
- name: cloud_controller_ng
  release: cf

Please see this commit (https://github.com/cloudfoundry/cf-release/commit/549e5a8271bbf0d30efdb84f381f38c8bf22099d) for more details.

Best,
Amit


Re: Required manifest changes for Cloud Foundry

Benjamin Gandon
 

No no no, these are not SHAs of cf-release, but those of all the buildpack-releases indeed.
Looks like no blobs of these releases are actually available online, are they?

I'm running the standard middle step "bosh prepare deployment" provided by bosh-workspace.
(See <https://github.com/cloudfoundry-incubator/bosh-workspace>)

/Benjamin

On 10 March 2016 at 21:14, Amit Gupta <agupta(a)pivotal.io> wrote:

At the time of the email, the java buildpack hadn't been extracted into a separate release yet. I believe it has now, and that will be reflected in CF v232.

What command did you run?
What SHA of cf-release have you checked out?

On Thu, Mar 10, 2016 at 12:10 PM, Benjamin Gandon <benjamin(a)gandon.org> wrote:
Amit, just for me to be sure, why didn’t you list the java-buildpack?

Also, have the blobs properly been uploaded?
I copy below the BOSH errors I get:

With binary-buildpack:

Uploading 'binary-buildpack-release/1.0.1'
Recreating release from the manifest
MISSING
Cannot find package with checksum `413ce11236f87273ba8a9249b6e3bebb3d0db92b'

With go-buildpack:

Uploading 'go-buildpack-release/1.7.3'
Recreating release from the manifest
MISSING
Cannot find package with checksum `300760637ee0babd5fddd474101dfa634116d9c4'

With java-buildpack:

Uploading 'java-buildpack-release/3.6'
Recreating release from the manifest
MISSING
Cannot find package with checksum `e6ff7d79e50f0aaafa92f100f346e648c503ab17'

With nodejs-buildpack:

Uploading 'nodejs-buildpack-release/1.5.7'
Recreating release from the manifest
MISSING
Cannot find package with checksum `b3edbcfb9435892749dffcb99f06d00fb4c59c5b'

With php-buildpack:

Uploading 'php-buildpack-release/4.3.6'
Recreating release from the manifest
MISSING
Cannot find package with checksum `fbc784608ffa3ceafed1810b69c12a7277c86ee0'

With python-buildpack:

Uploading 'python-buildpack-release/1.5.4'
Recreating release from the manifest
MISSING
Cannot find package with checksum `7e2377ccd9df10b21aba49c8e95338a0b1b3b92e'

With ruby-buildpack:

Uploading 'ruby-buildpack-release/1.6.14'
Recreating release from the manifest
MISSING
Cannot find package with checksum `362282d45873634db888a609cd64d7d70e9f4be2'

With staticfile-buildpack:

Uploading 'staticfile-buildpack-release/1.3.2'
Recreating release from the manifest
MISSING
Cannot find package with checksum `06382f7c804cc7f01a8dc78ca9c91e9b7f4712cc'

Are these on a specific blobstore I should point my deployment manifest at?

/Benjamin


On 18 February 2016 at 19:31, Amit Gupta <agupta(a)pivotal.io> wrote:

Hey developers,

The buildpacks team has recently extracted the buildpacks as separate releases. As we transition to deploying CF via a bunch of composed releases, for now we're making the change more transparent, by submoduling and symlinking the buildpacks releases back into cf-release. This requires some manifest changes: buildpacks are now colocated with cloud controller, rather than package dependencies of cloud controller.

If you are using spiff to generate manifests, and are not overriding the templates/jobs colocated on the api_zN jobs, you can ignore this email. If you are overriding the api_zN templates in your stub, or if you are not using spiff, you will need to add the following:

templates:
- name: consul_agent
  release: cf
+ - name: go-buildpack
+   release: cf
+ - name: binary-buildpack
+   release: cf
+ - name: nodejs-buildpack
+   release: cf
+ - name: ruby-buildpack
+   release: cf
+ - name: php-buildpack
+   release: cf
+ - name: python-buildpack
+   release: cf
+ - name: staticfile-buildpack
+   release: cf
- name: cloud_controller_ng
  release: cf

Please see this commit (https://github.com/cloudfoundry/cf-release/commit/549e5a8271bbf0d30efdb84f381f38c8bf22099d) for more details.

Best,
Amit


Re: `api_z1/0' is not running after update to CF v231

Wayne Ha <wayne.h.ha@...>
 

Sorry for the late response. I didn't get a chance to try again until today. It turned out that setting require_https to false lets me run "cf login".

Properties
uaa
+ require_https: false
Meta
No changes
Deploying
---------
Director task 10
Started preparing deployment
Started preparing deployment > Binding deployment. Done (00:00:00)
Started preparing deployment > Binding releases. Done (00:00:00)
Started preparing deployment > Binding existing deployment. Done (00:00:00)
Started preparing deployment > Binding resource pools. Done (00:00:00)
Started preparing deployment > Binding stemcells. Done (00:00:00)
Started preparing deployment > Binding templates. Done (00:00:00)
Started preparing deployment > Binding properties. Done (00:00:00)
Started preparing deployment > Binding unallocated VMs. Done (00:00:00)
Started preparing deployment > Binding instance networks. Done (00:00:00)
Done preparing deployment (00:00:00)
Started preparing package compilation > Finding packages to compile. Done (00:00:00)
Started preparing dns > Binding DNS. Done (00:00:00)
Started preparing configuration > Binding configuration. Done (00:00:03)
Started updating job uaa_z1 > uaa_z1/0. Done (00:01:09)
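
For anyone hitting the same thing, the corresponding bit of the deployment
manifest looks roughly like this (a sketch of the property shown in the
diff above; whether disabling HTTPS for UAA is acceptable depends on your
environment):

    properties:
      uaa:
        require_https: false   # allow logins over plain HTTP to UAA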


Re: Required manifest changes for Cloud Foundry

Amit Kumar Gupta
 

At the time of the email, the java buildpack hadn't been extracted into a
separate release yet. I believe it has now, and that will be reflected in
CF v232.

What command did you run?
What SHA of cf-release have you checked out?

On Thu, Mar 10, 2016 at 12:10 PM, Benjamin Gandon <benjamin(a)gandon.org>
wrote:

Amit, just for me to be sure, why didn’t you list the java-buildpack?

Also, have the blobs properly been uploaded?
I copy below the BOSH errors I get:

With binary-buildpack:

Uploading 'binary-buildpack-release/1.0.1'
Recreating release from the manifest
MISSING
Cannot find package with checksum
`413ce11236f87273ba8a9249b6e3bebb3d0db92b'


With go-buildpack:

Uploading 'go-buildpack-release/1.7.3'
Recreating release from the manifest
MISSING
Cannot find package with checksum
`300760637ee0babd5fddd474101dfa634116d9c4'


With java-buildpack:

Uploading 'java-buildpack-release/3.6'
Recreating release from the manifest
MISSING
Cannot find package with checksum
`e6ff7d79e50f0aaafa92f100f346e648c503ab17'


With nodejs-buildpack:

Uploading 'nodejs-buildpack-release/1.5.7'
Recreating release from the manifest
MISSING
Cannot find package with checksum
`b3edbcfb9435892749dffcb99f06d00fb4c59c5b'


With php-buildpack:

Uploading 'php-buildpack-release/4.3.6'
Recreating release from the manifest
MISSING
Cannot find package with checksum
`fbc784608ffa3ceafed1810b69c12a7277c86ee0'


With python-buildpack:

Uploading 'python-buildpack-release/1.5.4'
Recreating release from the manifest
MISSING
Cannot find package with checksum
`7e2377ccd9df10b21aba49c8e95338a0b1b3b92e'


With ruby-buildpack:

Uploading 'ruby-buildpack-release/1.6.14'
Recreating release from the manifest
MISSING
Cannot find package with checksum
`362282d45873634db888a609cd64d7d70e9f4be2'


With staticfile-buildpack:

Uploading 'staticfile-buildpack-release/1.3.2'
Recreating release from the manifest
MISSING
Cannot find package with checksum
`06382f7c804cc7f01a8dc78ca9c91e9b7f4712cc'


Are these on a specific blobstore I should point my deployment manifest at?

/Benjamin


On 18 February 2016 at 19:31, Amit Gupta <agupta(a)pivotal.io> wrote:

Hey developers,

The buildpacks team has recently extracted the buildpacks as separate
releases. As we transition to deploying CF via a bunch of composed
releases, for now we're making the change more transparent, by submoduling
and symlinking the buildpacks releases back into cf-release. This requires
some manifest changes: buildpacks are now colocated with cloud controller,
rather than package dependencies of cloud controller.

If you are using spiff to generate manifests, and are not overriding the
templates/jobs colocated on the api_zN jobs, you can ignore this email. If
you are overriding the api_zN templates in your stub, or if you are not
using spiff, you will need to add the following:

templates:
- name: consul_agent
  release: cf
+ - name: go-buildpack
+   release: cf
+ - name: binary-buildpack
+   release: cf
+ - name: nodejs-buildpack
+   release: cf
+ - name: ruby-buildpack
+   release: cf
+ - name: php-buildpack
+   release: cf
+ - name: python-buildpack
+   release: cf
+ - name: staticfile-buildpack
+   release: cf
- name: cloud_controller_ng
  release: cf

Please see this commit (
https://github.com/cloudfoundry/cf-release/commit/549e5a8271bbf0d30efdb84f381f38c8bf22099d)
for more details.

Best,
Amit



Re: Required manifest changes for Cloud Foundry

Benjamin Gandon
 

Amit, just for me to be sure, why didn’t you list the java-buildpack?

Also, have the blobs properly been uploaded?
I copy below the BOSH errors I get:

With binary-buildpack:

Uploading 'binary-buildpack-release/1.0.1'
Recreating release from the manifest
MISSING
Cannot find package with checksum `413ce11236f87273ba8a9249b6e3bebb3d0db92b'

With go-buildpack:

Uploading 'go-buildpack-release/1.7.3'
Recreating release from the manifest
MISSING
Cannot find package with checksum `300760637ee0babd5fddd474101dfa634116d9c4'

With java-buildpack:

Uploading 'java-buildpack-release/3.6'
Recreating release from the manifest
MISSING
Cannot find package with checksum `e6ff7d79e50f0aaafa92f100f346e648c503ab17'

With nodejs-buildpack:

Uploading 'nodejs-buildpack-release/1.5.7'
Recreating release from the manifest
MISSING
Cannot find package with checksum `b3edbcfb9435892749dffcb99f06d00fb4c59c5b'

With php-buildpack:

Uploading 'php-buildpack-release/4.3.6'
Recreating release from the manifest
MISSING
Cannot find package with checksum `fbc784608ffa3ceafed1810b69c12a7277c86ee0'

With python-buildpack:

Uploading 'python-buildpack-release/1.5.4'
Recreating release from the manifest
MISSING
Cannot find package with checksum `7e2377ccd9df10b21aba49c8e95338a0b1b3b92e'

With ruby-buildpack:

Uploading 'ruby-buildpack-release/1.6.14'
Recreating release from the manifest
MISSING
Cannot find package with checksum `362282d45873634db888a609cd64d7d70e9f4be2'

With staticfile-buildpack:

Uploading 'staticfile-buildpack-release/1.3.2'
Recreating release from the manifest
MISSING
Cannot find package with checksum `06382f7c804cc7f01a8dc78ca9c91e9b7f4712cc'

Are these on a specific blobstore I should point my deployment manifest at?

/Benjamin

On 18 February 2016 at 19:31, Amit Gupta <agupta(a)pivotal.io> wrote:

Hey developers,

The buildpacks team has recently extracted the buildpacks as separate releases. As we transition to deploying CF via a bunch of composed releases, for now we're making the change more transparent, by submoduling and symlinking the buildpacks releases back into cf-release. This requires some manifest changes: buildpacks are now colocated with cloud controller, rather than package dependencies of cloud controller.

If you are using spiff to generate manifests, and are not overriding the templates/jobs colocated on the api_zN jobs, you can ignore this email. If you are overriding the api_zN templates in your stub, or if you are not using spiff, you will need to add the following:

templates:
- name: consul_agent
  release: cf
+ - name: go-buildpack
+   release: cf
+ - name: binary-buildpack
+   release: cf
+ - name: nodejs-buildpack
+   release: cf
+ - name: ruby-buildpack
+   release: cf
+ - name: php-buildpack
+   release: cf
+ - name: python-buildpack
+   release: cf
+ - name: staticfile-buildpack
+   release: cf
- name: cloud_controller_ng
  release: cf

Please see this commit (https://github.com/cloudfoundry/cf-release/commit/549e5a8271bbf0d30efdb84f381f38c8bf22099d) for more details.

Best,
Amit


Re: how to debug "BuildpackCompileFailed" issue?

Noburou TANIGUCHI
 

Hi Ning,

Does anyone know how to debug "BuildpackCompileFailed" issue?
1) I think it's best to open another terminal and run `cf logs (appname)`
while pushing your app (example commands below the notes).

2) If you know something about buildpacks (especially about the 3 major
steps -- detect, compile, and release), I think this Anynines blog might
help:

http://blog.anynines.com/debug-cloud-foundry-java-buildpack/

Although it is old and deals with the java-buildpack, it is still useful,
and you can apply it almost just by replacing "java" with "ruby".

*NOTE* There is no `.java-buildpack.log` file and no `JBP_LOG_LEVEL`
environment variable for the ruby-buildpack.

*NOTE 2* The "Start the server" section is part of the app starting
process, not the app staging process, so that part of the blog post does
not apply.
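
For 1), something like this, in two terminals (using the app name from
your example):

    # terminal 1: start streaming logs before/while pushing
    cf logs happy

    # terminal 2: push the app; staging output shows up in terminal 1
    cf push happy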



Ning Fu wrote
Hi,

Does anyone know how to debug "BuildpackCompileFailed" issue?
When I push a ruby app:
========================
...
Done uploading
OK
Starting app happy in org funorg / space development as funcloud...
FAILED
BuildpackCompileFailed
TIP: use 'cf logs happy --recent' for more information
Pivotals-iMac:happy-root-route-app pivotal$ cf logs happy --recent
Connected, dumping recent logs for app happy in org funorg / space
development as funcloud...
========================
But I got nothing from "cf logs happy --recent".
The Gemfile says "ruby '2.2.2'", and my ruby buildpack is cached-1.6.11.
I've also tried "bundle package --all" before I push.

Any suggestions? Doesn't the ruby buildpack provide any logs?

Thanks,
Ning




-----
I'm not a ...
noburou taniguchi
--
View this message in context: http://cf-dev.70369.x6.nabble.com/cf-dev-how-to-debug-BuildpackCompileFailed-issue-tp4079p4119.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: Can resources of a IDLE application be shared by others?

Stanley Shen <meteorping@...>
 

Yes, that's one way, but it's not flexible, and scaling the app requires restarting it as well.
As I said, I may have some heavy operations which will definitely need more than 2G.

In my opinion, the ideal way is to just set a maximum value for each process, but not pre-allocate the memory specified as the maximum in the deployment while the process is running.

I suggest you manually “cf scale -m 2G“ after your app has booted.
Type “cf scale --help” for more info.

On 9 March 2016 at 04:09, Stanley Shen <meteorping(a)gmail.com> wrote:

Hello, all

When pushing an application to CF, we need to define its disk/memory limits.
The memory limit is just the maximum value that may possibly be needed by the
application, but most of the time we don't need so much memory.
For example, I have one application which needs at most 5G memory at startup
or during some specific operations, but most of the time it just needs 2G.
So right now I need to specify 5G in the deployment manifest, and 5G of
memory is allocated.

Take an m3.large VM for example: it has 7.5G.
Right now we can only push one application onto it, but ideally we should be
able to push more applications, like 3, since only 2G is needed for each
application.

Can the resources of an IDLE application be shared by other applications?
It seems that right now all the resources are pre-allocated when pushing the
application, and they will not be released even if I stop the application.


Re: Announcing cf-mysql-release v26, now with a slimmer VM footprint!

Benjamin Gandon
 

Thanks for the answer, Marco! (I realize I'm much influenced by the Percona vision, because they are active here in the Paris dev communities.)

I finally upgraded to v26 and well.. congratulations on the deep review of the manifests, but.. what a tremendous change!

My feedback: starting from a mere mistake of forgetting the "http://" prefix in the "api_url" prop, I arrived at a half-finished bosh deploy :(
By the time I realized the issue was not about the broker database migration (which was failing and prevented the az1 broker from booting), I had already wiped the cluster out.
Fortunately it was not in production! But the manifest change is so big that such mistakes are easy to make.
Anyway, I wish we had some basic syntax checks on BOSH props that would have warned me with something like: “Hey, you need an 'http://' prefix here!”

Btw, white labeling is not finished: there are still a bunch of Pivotal references out there. Didn't you know that the "p" in "p-mysql" stands for "Pivotal"? ;)
I might push a PR about that within the next few days.

/Benjamin

On 2 March 2016 at 21:16, Marco Nicosia <mnicosia(a)pivotal.io> wrote:

Hi Benjamin,

Sorry for the delayed response!

There are technically no trade-offs when using an Arbitrator, with the sole exception that you're sacrificing an extra level of data redundancy by keeping only two copies of the data, not three.

For that reason, a three node cluster is still a "standard" deployment option, you just use the no-arbitrator example during manifest generation.

Galera doesn't do GTID re-sync in the normal way that MySQL does. GTID has a slightly different context. There's a blog that describes some of the differences if you want to dive in. In the case of whole-cluster failure, we describe how to bootstrap the cluster in the documentation.

Finally, 10.1 is very definitely something we're excited to begin to support. It's something we want to do in a way that will allow users to migrate as they feel comfortable. So, we have some challenges to figure out how to make it an easy option while still allowing conservative users 10.0 as an option.

--
Marco Nicosia
Product Manager
Pivotal Software, Inc.


On Fri, Feb 26, 2016 at 2:34 AM, Benjamin Gandon <benjamin(a)gandon.org> wrote:
Hi,

Congratulations for this v26 release!

Do you have any documentation on the benefits and trade-offs introduced by this new arbitrator, compared to the standard 3-node setup? What happens to the "quorum" principle in such an arbitrator setup?

Are there any consequences or benefits in terms of managing possible sync failures and manual GTID re-sync?

And a last question : do you plan upgrading to MariaDB 10.1.x in future releases?

/Benjamin

On 25 February 2016 at 17:53, Mike Youngstrom <youngm(a)gmail.com> wrote:

In our case we'll use the arbitrator because we only have 2 AZs in some datacenters. The Arbitrator allows us to place the 3rd member of the cluster in another datacenter with minimal performance impact. Nice feature!

Mike

On Thu, Feb 25, 2016 at 9:49 AM, Duncan Winn <dwinn(a)pivotal.io> wrote:
+1.. The arbitrator is a fantastic feature. Great job MySQL team.

On Thu, 25 Feb 2016 at 00:39, James Bayer <jbayer(a)pivotal.io> wrote:
congrats mysql team! the arbitrator is a nice touch to save resources yet still result in high availability even when losing an entire AZ.

On Wed, Feb 24, 2016 at 10:43 AM, Marco Nicosia <mnicosia(a)pivotal.io> wrote:
Hello Cloud Foundry folks,

For those of you who are tired of deploying web apps without easy integration with data services, cf-mysql is a great place to start. With this release, I'm happy to tell you that it takes even less commitment than ever to give cf-mysql a spin!

The theme for this release is the First Rule in Government Spending. Wanna take a ride?

A single MySQL node is a Single Point of Failure. With the introduction of the Arbitrator, we allow you to buy two at twice the price. Previously, we made you buy three for the same sense of security. Upgrading to cf-mysql v26 will save you money, with no sacrifice in performance!


If you like what we're doing with cf-mysql, click the thumbs up: 👍
If we're messing up your game, click the thumbs down: 👎
And if this just isn't your thing, at least give me a fist bump! 👊

Highlights

- We've updated to MariaDB 10.0.23, enabled some important behind-the-scenes features, and fixed several important bugs.
- For those of you who require audit access to all data, we've given you the option to enable a Read Only admin user.
- We've introduced a new HA deployment option, 2+1: two MySQL nodes and an Arbitrator.
- And we've made significant updates to the generate-deployment-manifest script and stubs.


You'll have to update your stubs to use the new release, but we hope with the new Examples and Arbitrator feature, it'll be worth your time to upgrade. Make sure to read the release notes for important information about the upgrade process.

As always, for full disclosure, and links beyond that, please check out the Release Notes.

Introducing the Arbitrator

With cf-mysql v26, we've replaced one of the MySQL nodes with a lightweight Arbitrator node. Previously, the minimal HA configuration required three full-size MySQL nodes.

For cf-mysql administrators who are careful with their infrastructure resources, the Arbitrator feature is a new deployment topology that uses a smaller VM footprint while maintaining high availability guarantees. Unlike the old three node topology, the Arbitrator decreases spend with no impact on performance.

Thanks, and make sure to give me your feedback to influence what we do for future releases!

--
Marco Nicosia
Product Manager
Pivotal Software, Inc.
mnicosia(a)pivotal.io


--
Thank you,

James Bayer
--
Duncan Winn
Cloud Foundry PCF Services


Re: Can resources of a IDLE application be shared by others?

Benjamin Gandon
 

I suggest you manually “cf scale -m 2G“ after your app has booted.
Type “cf scale --help” for more info.

On 9 March 2016 at 04:09, Stanley Shen <meteorping(a)gmail.com> wrote:

Hello, all

When pushing an application to CF, we need to define its disk/memory limits.
The memory limit is just the maximum value that may possibly be needed by the application, but most of the time we don't need so much memory.
For example, I have one application which needs at most 5G memory at startup or during some specific operations, but most of the time it just needs 2G.
So right now I need to specify 5G in the deployment manifest, and 5G of memory is allocated.

Take an m3.large VM for example: it has 7.5G.
Right now we can only push one application onto it, but ideally we should be able to push more applications, like 3, since only 2G is needed for each application.

Can the resources of an IDLE application be shared by other applications?
It seems that right now all the resources are pre-allocated when pushing the application, and they will not be released even if I stop the application.


Re: CF deployment with Diego support only ?

Benjamin Gandon
 

That's right Amit, but it was just a typo on my part. I meant setting instance counts to zero for “runner_z*” and “hm9000_z*”.

I saw in a-detailed-transition-timeline that these two properties also help (see the manifest sketch after the list):
- cc.default_to_diego_backend=true
- cc.users_can_select_backend=false
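
A sketch of how those would look in the manifest (assuming they sit under
the usual cc properties block):

    properties:
      cc:
        default_to_diego_backend: true
        users_can_select_backend: false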

So all in all, is that really all that needs to be done?

/Benjamin

On 10 March 2016 at 09:07, Amit Gupta <agupta(a)pivotal.io> wrote:

You need the api jobs, those are the cloud controllers! Set the runner and hm9000 jobs to 0 instances, or even remove them from your deployment manifest altogether.

On Wed, Mar 9, 2016 at 11:39 PM, Benjamin Gandon <benjamin(a)gandon.org> wrote:
Hi cf-dev,

For a fresh new deployment of cf-release, I wonder how the default manifest stubs and templates should be modified to remove unnecessary support for DEAs in favor of Diego?

Indeed, I’m starting with a working deployment of cf+diego. And now I want to wipe out those ancient DEA and HM9000 jobs I don’t need.

I tried to draw inspiration from the MicroPCF main deployment manifest. (Are there any other sources for Diego-only CF deployments BTW?)
At the moment, all I see in this example is that I need to set « instances: » counts to zero for both « api_z* » and « hm9000_z* » jobs.

Is this sufficient ? Should I perform some more adaptations ?
Thanks for your guidance.

/Benjamin


Re: Update Parallelization in Cloud Foundry

Omar Elazhary <omazhary@...>
 

Thanks everyone. What I understood from Amit's response is that I can parallelize certain components. What I also understood from both Amit's and Dieu's responses is that some components have hard dependencies, while others only have soft ones, and some have no dependencies at all. My question is: how can I figure out these dependencies? Are they listed somewhere? The Cloud Foundry docs do a great job of describing each component separately, but they do not explain which should be up before which. That is what I need in order to work out an execution plan that minimizes update time while keeping CF 100% available.

Thanks.

Regards,
Omar


Re: cf ssh APP_NAME doesn't work in AWS environment

Balamurugan.J@...
 

Hi,

In which file do I have to add the properties below?

After adding the properties below, it works now:
app_ssh:
  host_key_fingerprint: a6:d1:08:0b:b0:cb:9b:5f:c4:ba:44:2a:97:26:19:8a
  oauth_client_id: ssh-proxy
cc:
  allow_app_ssh_access: true


Thanks,
Bala




Re: CF deployment with Diego support only ?

Amit Kumar Gupta
 

You need the api jobs, those are the cloud controllers! Set the runner and
hm9000 jobs to 0 instances, or even remove them from your deployment
manifest altogether.

On Wed, Mar 9, 2016 at 11:39 PM, Benjamin Gandon <benjamin(a)gandon.org>
wrote:

Hi cf-dev,

For a fresh new deployment of cf-release
<https://github.com/cloudfoundry/cf-release>, I wonder how the default
manifest stubs and templates should be modified to remove unnecessary
support for DEAs in favor of Diego?

Indeed, I’m starting with a working deployment of cf+diego. And now I want
to wipe out those ancient DEA and HM9000 I don’t need.

I tried to draw inspiration from the MicroPCF main deployment manifest
<https://github.com/pivotal-cf/micropcf/blob/master/images/manifest.yml>.
(Are there any other sources for Diego-only CF deployments BTW?)
At the moment, all I see in this example is that I need to set «
instances: » counts to zero for both « api_z* » and « hm9000_z* » jobs.

Is this sufficient ? Should I perform some more adaptations ?
Thanks for your guidance.

/Benjamin


Re: Adding previous_instances and previous_memory fields to cf_event

Hristo Iliev
 

Hi Dieu,

We are polling app-usage-events with Abacus, but because of the purge the
events may be out of order right after the billing epoch has started. And
that's only part of the problem.

To consume app-usage-events, every integrator needs to build additional
infrastructure like:
- a simple filter, load balancer, or API management product to disable
purging once the billing epoch has started
- DB replication software that pulls data and deals with wrongly ordered
events after a purge (we use abacus-cf-bridge)
- the data warehouse described in the doc you sent

Introducing the previous values in the usage events will help us get rid of
most of the infrastructure we need just to be able to deal with usage
events before they even reach a billing system. We won't need to care about
purge calls or an additional DB, but can instead simply pull events. The
previous values help us to:
- use formulas that do not care about the order of events (solves the purge
problem)
- get the info about a billing-relevant change (we don't have to cache,
access a DB, or scan a stream to know what changed)
- simplify the processing logic in Abacus (or any other metering/aggregation
solution)

We currently pull the usage events, but we would like to be notified
instead, to offload the CC from the constant /v2/app_usage_events calls.
That alone, however, will not solve any of the problems we have now, and may
in fact mess up the ordering of the events.

Regards,
Hristo Iliev

2016-03-10 6:32 GMT+02:00 Dieu Cao <dcao(a)pivotal.io>:

We don't advise using /v2/events for metering/billing for precisely the
reason you mention, that order of events is not guaranteed.

You can find more information about app usage events and service usage
events which are guaranteed to be in order here:
http://docs.cloudfoundry.org/running/managing-cf/usage-events.html

-Dieu
CF Runtime PMC Lead

On Wed, Mar 9, 2016 at 10:27 AM, KRuelY <kevinyudhiswara(a)gmail.com> wrote:

Hi,

I am currently working on metering runtime usage, and one issue I'm facing
is that usage submissions may come in out of order (due to network errors
or other causes). Before getting to the issue: the way metering runtime
usage works is quite simple. There is an app that will look at cf_events
and submit usage to
[cf-abacus](https://github.com/cloudfoundry-incubator/cf-abacus).


{
  "metadata": {
    "guid": "40afe01a-b15a-4b8d-8bd1-e36a0ba2f6f5",
    "url": "/v2/app_usage_events/40afe01a-b15a-4b8d-8bd1-e36a0ba2f6f5",
    "created_at": "2016-03-02T09:48:09Z"
  },
  "entity": {
    "state": "STARTED",
    "memory_in_mb_per_instance": 512,
    "instance_count": 1,
    "app_guid": "a2ab1b5a-94c0-4344-9a71-a1d2b11f483a",
    "app_name": "abacus-usage-collector",
    "space_guid": "d34d770d-4cd0-4bdc-8c83-8fdfa5f0b3cb",
    "space_name": "dev",
    "org_guid": "238a3e78-3fc8-4542-928a-88ee99643732",
    "buildpack_guid": "b77d0ef8-da1f-4c0a-99cc-193449324706",
    "buildpack_name": "nodejs_buildpack",
    "package_state": "STAGED",
    "parent_app_guid": null,
    "parent_app_name": null,
    "process_type": "web"
  }
}


The way this app works is by looking at the state.
If the state is STARTED, it will submit usage to Abacus with
instance_memory = memory_in_mb_per_instance, running_instances =
instance_count, and since = created_at.
If the state is STOPPED, it will submit usage to Abacus with
instance_memory = 0, running_instances = 0, and since = created_at.

In an ideal situation, where there are no out-of-order submissions, this is
fine.
'Simple, but exaggerated' example:
Usage instance_memory = 1GB, running_instances = 1, since = 3/9 00:00 comes
in (STARTED).
Usage instance_memory = 0GB, running_instances = 0, since = 3/10 00:00 comes
in (STOPPED).
Then Abacus knows that the app consumed 1GB * (3/10 - 3/9 = 24 hours) = 24
GB-hours.

But when the usage comes in out of order:
Usage instance_memory = 0GB, running_instances = 0, since = 3/10 00:00 comes
in (STOPPED).
Usage instance_memory = 1GB, running_instances = 1, since = 3/9 00:00 comes
in (STARTED).
The formula that Abacus currently has would not work.

Abacus has another formula that would take care of this out-of-order
submission, but it only works if we have previous_instance_memory and
previous_running_instances.

When looking for a way to have these fields, we concluded that the cleanest
way would be to add previous_memory_in_mb_per_instance and
previous_instance_count to the cf_event. It would also make app
reconfiguration or cf scale make more sense, because currently cf scale is
reported as a STOP and a START.

To sum up, the cf_event state submitted would include this information:

// Starting
{
  "state": "STARTED",
  "memory_in_mb_per_instance": 512,
  "instance_count": 1,
  "previous_memory_in_mb_per_instance": 0,
  "previous_instance_count": 0
}

// Scaling up
{
  "state": "SCALE"?,
  "memory_in_mb_per_instance": 512,
  "instance_count": 2,
  "previous_memory_in_mb_per_instance": 512,
  "previous_instance_count": 1
}

// Scale down
{
  "state": "SCALE"?,
  "memory_in_mb_per_instance": 512,
  "instance_count": 1,
  "previous_memory_in_mb_per_instance": 512,
  "previous_instance_count": 2
}

// Stopping
{
  "state": "STOPPED",
  "memory_in_mb_per_instance": 0,
  "instance_count": 0,
  "previous_memory_in_mb_per_instance": 512,
  "previous_instance_count": 1
}


Any thoughts/feedbacks/guidance?








--
View this message in context:
http://cf-dev.70369.x6.nabble.com/cf-dev-Adding-previous-instances-and-previous-memory-fields-to-cf-event-tp4100.html
Sent from the CF Dev mailing list archive at Nabble.com.


CF deployment with Diego support only ?

Benjamin Gandon
 

Hi cf-dev,

For a fresh new deployment of cf-release <https://github.com/cloudfoundry/cf-release>, I wonder how the default manifests stubs and templates should be modified to remove unnecessary support for DEA in favor of Diego ?

Indeed, I’m starting with a working deployment of cf+diego. And now I want to wipe out those ancient DEA and HM9000 I don’t need.

I tried to draw inspiration from the MicroPCF main deployment manifest <https://github.com/pivotal-cf/micropcf/blob/master/images/manifest.yml>. (Are there any other sources for Diego-only CF deployments BTW?)
At the moment, all I see in this example is that I need to set « instances: » counts to zero for both « api_z* » and « hm9000_z* » jobs.

Is this sufficient ? Should I perform some more adaptations ?
Thanks for your guidance.

/Benjamin


Re: Update Parallelization in Cloud Foundry

Dieu Cao <dcao@...>
 

It should also be considered that the recommended serial deployment order
is, in some scenarios, the most tested in terms of ensuring backwards
compatibility of code changes during deployment.

For example, a new endpoint might be added to the Cloud Controller to be
used by DEAs/Cells. Because of the serial deployment order, it is assumed
that all Cloud Controllers will have completed updating, and thus the new
endpoint will be available, before the DEAs/Cells update. Code changes to
the DEAs/Cells can then simply switch over to the new endpoints as they
update, and there is no need to keep the code on the DEAs/Cells that used
the older endpoints.

-Dieu
CF Runtime PMC Lead

On Wed, Mar 9, 2016 at 2:34 AM, Voelz, Marco <marco.voelz(a)sap.com> wrote:

Thanks for clarifying this for me, Amit.

Warm regards
Marco

On 09/03/16 07:43, "Amit Gupta" <agupta(a)pivotal.io> wrote:

You can probably try to start everything in parallel, and either set very
long update timeouts, or allow the deployment to fail with the expectation
that it will eventually correct itself. Or you can start things in a
strict order, and have stronger constraints on the possible failure
scenarios, and be able to debug the root cause of a failure better.

Certain things do depend on NATS, and thus won't work until NATS is up.
The main thing I can currently think of is registering routes with
gorouter, which is done both for apps and for system components (e.g. the
route-registrar registers api.SYSTEM_DOMAIN on behalf of the CC).

Best,
Amit

On Tue, Mar 8, 2016 at 2:14 AM, Voelz, Marco <marco.voelz(a)sap.com> wrote:

Does NATS also need to come up before any of the other components?

On 07/03/16 21:16, "Amit Gupta" <agupta(a)pivotal.io> wrote:

Hey Omar,

You can set the "serial" property at the global level of a deployment
(you can think of it as setting a default for all jobs), and then override
it at the individual job levels. You will want the consul server jobs to
be deployed first, with serial: true, and max_in_flight: 1. The important
thing here is, if you have more than one server in your consul cluster,
they need to come up one at a time to ensure the cluster orchestration goes
smoothly. The same is true if your etcd cluster has more than one server
in it. If you're using the postgres job for CCDB and/or UAADB (instead of
some external database), then you will want the postgres job to come up
before CC and/or UAA. Similarly, if you're using the provided blobstore
job instead of an external blobstore, you'll want it up before CC comes up.
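
Concretely, that looks roughly like this in the manifest (a sketch; job
names and the remaining update settings depend on your deployment):

    update:
      serial: false        # global default: jobs may update in parallel
      canaries: 1
      max_in_flight: 1
      canary_watch_time: 30000-600000
      update_watch_time: 30000-600000

    jobs:
    - name: consul_z1
      instances: 3
      update:
        serial: true       # consul servers must be updated one at a time
        max_in_flight: 1
      # templates, networks, etc. as usual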

You might be able to get away with parallelizing some of the things
above. E.g. if you bring the CC and blobstore up at the same time, CC
might fail to start for a while until Blobstore comes up, and then CC might
successfully start up. Monit also generally keeps retrying even after BOSH
gives up. So your deploy might fail but later on, you might see everything
up and running.

Cheers,
Amit

On Mon, Mar 7, 2016 at 5:54 AM, Omar Elazhary <omazhary(a)gmail.com> wrote:

Hello everyone,

I know it is possible to update and redeploy components in parallel in
cloud foundry by setting the "serial" property in the deployment manifest
to "false". However, is such a thing recommended? Are there particular job
dependencies that I need to pay attention to?

Regards,
Omar




Re: Adding previous_instances and previous_memory fields to cf_event

Dieu Cao <dcao@...>
 

We don't advise using /v2/events for metering/billing for precisely the
reason you mention, that order of events is not guaranteed.

You can find more information about app usage events and service usage
events which are guaranteed to be in order here:
http://docs.cloudfoundry.org/running/managing-cf/usage-events.html

-Dieu
CF Runtime PMC Lead

On Wed, Mar 9, 2016 at 10:27 AM, KRuelY <kevinyudhiswara(a)gmail.com> wrote:

Hi,

I am currently working on metering runtime usage, and one issue I'm facing
is that usage submissions may come in out of order (due to network errors
or other causes). Before getting to the issue: the way metering runtime
usage works is quite simple. There is an app that will look at cf_events
and submit usage to
[cf-abacus](https://github.com/cloudfoundry-incubator/cf-abacus).


{
  "metadata": {
    "guid": "40afe01a-b15a-4b8d-8bd1-e36a0ba2f6f5",
    "url": "/v2/app_usage_events/40afe01a-b15a-4b8d-8bd1-e36a0ba2f6f5",
    "created_at": "2016-03-02T09:48:09Z"
  },
  "entity": {
    "state": "STARTED",
    "memory_in_mb_per_instance": 512,
    "instance_count": 1,
    "app_guid": "a2ab1b5a-94c0-4344-9a71-a1d2b11f483a",
    "app_name": "abacus-usage-collector",
    "space_guid": "d34d770d-4cd0-4bdc-8c83-8fdfa5f0b3cb",
    "space_name": "dev",
    "org_guid": "238a3e78-3fc8-4542-928a-88ee99643732",
    "buildpack_guid": "b77d0ef8-da1f-4c0a-99cc-193449324706",
    "buildpack_name": "nodejs_buildpack",
    "package_state": "STAGED",
    "parent_app_guid": null,
    "parent_app_name": null,
    "process_type": "web"
  }
}


The way this app works is by looking at the state.
If the state is STARTED, it will submit usage to Abacus with
instance_memory = memory_in_mb_per_instance, running_instances =
instance_count, and since = created_at.
If the state is STOPPED, it will submit usage to Abacus with
instance_memory = 0, running_instances = 0, and since = created_at.

In an ideal situation, where there are no out-of-order submissions, this is
fine.
'Simple, but exaggerated' example:
Usage instance_memory = 1GB, running_instances = 1, since = 3/9 00:00 comes
in (STARTED).
Usage instance_memory = 0GB, running_instances = 0, since = 3/10 00:00 comes
in (STOPPED).
Then Abacus knows that the app consumed 1GB * (3/10 - 3/9 = 24 hours) = 24
GB-hours.

But when the usage comes in out of order:
Usage instance_memory = 0GB, running_instances = 0, since = 3/10 00:00 comes
in (STOPPED).
Usage instance_memory = 1GB, running_instances = 1, since = 3/9 00:00 comes
in (STARTED).
The formula that Abacus currently has would not work.

Abacus has another formula that would take care of this out-of-order
submission, but it only works if we have previous_instance_memory and
previous_running_instances.

When looking for a way to have these fields, we concluded that the cleanest
way would be to add previous_memory_in_mb_per_instance and
previous_instance_count to the cf_event. It would also make app
reconfiguration or cf scale make more sense, because currently cf scale is
reported as a STOP and a START.

To sum up, the cf_event state submitted would include this information:

// Starting
{
  "state": "STARTED",
  "memory_in_mb_per_instance": 512,
  "instance_count": 1,
  "previous_memory_in_mb_per_instance": 0,
  "previous_instance_count": 0
}

// Scaling up
{
  "state": "SCALE"?,
  "memory_in_mb_per_instance": 512,
  "instance_count": 2,
  "previous_memory_in_mb_per_instance": 512,
  "previous_instance_count": 1
}

// Scale down
{
  "state": "SCALE"?,
  "memory_in_mb_per_instance": 512,
  "instance_count": 1,
  "previous_memory_in_mb_per_instance": 512,
  "previous_instance_count": 2
}

// Stopping
{
  "state": "STOPPED",
  "memory_in_mb_per_instance": 0,
  "instance_count": 0,
  "previous_memory_in_mb_per_instance": 512,
  "previous_instance_count": 1
}


Any thoughts/feedbacks/guidance?








--
View this message in context:
http://cf-dev.70369.x6.nabble.com/cf-dev-Adding-previous-instances-and-previous-memory-fields-to-cf-event-tp4100.html
Sent from the CF Dev mailing list archive at Nabble.com.