
CF CLI Release v6.13.0

Koper, Dies <diesk@...>
 

The CF CLI team cut 6.13.0. Release notes and binaries are available at:


https://github.com/cloudfoundry/cli#downloads


Note that we have simplified the download matrix, and filenames now include the release version.

Let us know what you think!


Highlights of this release include:


Diego GA


In alignment with the effort to reach a GA version of Diego [0] in CF-Release, this version of the CLI includes new commands specific to the Diego runtime. These commands have been pulled into the core CLI from the two existing plugins [1] [2]. Highlights include:


· A user can now SSH into an app container

· `cf push` includes a new flag to specify a Docker image
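
For example, a minimal sketch (the app name and image are placeholders, the app must be running on the Diego backend, and the exact flags can be checked with `cf ssh -h` and `cf push -h`):

cf ssh my-app                        # open an SSH session into the app's container
cf push my-app -o my-org/my-image    # push using a Docker image instead of a buildpack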


[0] https://github.com/cloudfoundry-incubator/diego-design-notes/blob/master/migrating-to-diego.md#installing-the-diego-enabler-cli-plugin

[1] https://github.com/cloudfoundry-incubator/diego-ssh

[2] https://github.com/cloudfoundry-incubator/diego-cli-plugin


Other Features:

· Plugin install now prompts interactively and warns the user of the risk

· `cf scale` can now scale an app to zero instances
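
For example, a minimal sketch ("my-app" is a placeholder):

cf scale my-app -i 0   # keep the app and its routes, but run zero instances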


Bug Fixes:

· Fixed an issue where a password containing a double-quote or backtick could expose part of the password in cleartext in CF_TRACE output

· `cf login` with the --sso flag was providing a link with an http URL; it now provides an https URL


Improved User Experience/Error Messages:

· An attempt to delete a shared domain with `cf delete-domain` will now fail early

· Improved the error message when a `cf curl` request is not properly formed

· Improved the message when no users are found by `cf org-users` or `cf space-users`

· Improved the message when an app push times out due to an incorrect port specification


New Plugins:

· Firehose Nozzle Plugin http://github.com/pivotal-cf-experimental/nozzle-plugin

· Cloud Deployment Plugin http://github.com/xchapter7x/deploycloud


Also notable:


Updated the CLI to Go 1.5.1 and added a `--build` flag to display this version.
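
A minimal sketch of checking it (the exact output wording may differ):

cf --build   # prints the Go version this cf binary was built with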


Greg Oehmen & Dies Köper
Cloud Foundry CLI Product Managers


Re: Cloud Foundry DEA to Diego switch - when?

Amit Kumar Gupta
 

I'd encourage anyone wanting to switch to Diego to track the following
release marker in our project tracker:
https://www.pivotaltracker.com/story/show/76376202. When this marker is
delivered, it means the core teams have confidence that Diego can replace
the DEAs. Note that while the tracker currently shows this release occurring
this week, there are actually several unpointed placeholder stories above the
line that will expand. Those stories will be broken down and pointed soon, at
which point a more realistic estimate will be possible.

After it's deemed that Diego can replace the DEAs, there will be some time
before the DEAs will be end-of-life'd, but I would not recommend waiting
that long.

On Wed, Oct 21, 2015 at 11:07 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com>
wrote:

Hi, Amit



Our team is also planning the timeline of replacing the DEA with Diego. Would
you please let me know the approximate estimate of when the final iteration
will come? Will it be in 2016 or 2017?



Thanks,

Maggie



*From:* Amit Gupta [mailto:agupta(a)pivotal.io]
*Sent:* October 22, 2015 2:59
*To:* Discussions about Cloud Foundry projects and the system overall.
*Subject:* [cf-dev] Re: Cloud Foundry DEA to Diego switch - when?



Hi Rishi,



Thanks for your question. Let's first clarify the distinction between
what you deploy -- bosh releases (versioned packages of source code and
binaries) -- and how you deploy things -- bosh deployment (a manifest of
which releases to use, what code/binaries from those releases to place on
nodes in your deployment cluster, property/credential configuration,
networking and compute resources, etc.).



diego-release may not change, although it may be split into smaller
releases, e.g. the cc-bridge part consisting of the components which talk
to CC, and the diego runtime part consisting of components responsible for
scheduling, running, and health-monitoring containerized workloads.



cf-release will undergo heavy changes. We are currently breaking it apart
entirely, into separate releases: consul, etcd, logging-and-metrics,
identity, routing, API, nats, postgres, and existing runtime backend (DEA,
Warden, HM9k).



In addition to breaking up cf-release, we are working on cf-deployment[1],
which will give you the same ability to deploy the Cloud Foundry PaaS as you
know it today, but composed of multiple releases rather than the monolithic
cf-release. We will ensure that cf-deployment has versioning and tooling
to make it easy to deploy everything at versions that are known to work
together.



For the first major iteration of cf-deployment, it will deploy all the
existing components of cf-release, but coming from separate releases. You
can still deploy diego separately (configured to talk to the CC) as you do
today.



The second major iteration will be to leverage new BOSH features[2], such
as links, AZs, cloud config, and global networking to simplify the manifest
generation for cf-deployment. Again, you will still be able to deploy
diego separately alongside your cf deployment.



The third iteration is to fold the diego-release deployment strategies
into cf-deployment itself, so you'll have a single manifest deploying DEAs
and Diego side-by-side.



The final iteration will be to remove the DEAs from cf-deployment and stop
supporting the release that contains them.



As to your question of defaults, there are several definitions of
"default". You can set Diego to be the default backend today[3]. You have
to opt in to this, but then anyone using the platform you deployed will
have their apps run on Diego by default. Pivotal Web Services, for
example, now defaults to Diego as the backend. At some point, Diego will be
the true default backend, and you will have to opt-out of it (either at the
CC configuration level, or at the individual app level). Finally, at a
later point in time, DEAs will no longer be supported and Diego will be the
only backend option.



We are actively working on a timeline for all these things. You can see
the Diego team's public tracker has a release marker[4] for when Diego will
be capable of replacing the DEAs. After reaching that release marker,
there will be some time given for the rest of the community to switch over
before declaring end-of-life for the DEAs.



[1] https://github.com/cloudfoundry/cf-deployment

[2] https://github.com/cloudfoundry/bosh-notes/

[3]
https://github.com/cloudfoundry/cf-release/blob/v222/jobs/cloud_controller_ng/spec#L396-L398

[4] https://www.pivotaltracker.com/story/show/76376202



Thanks,

Amit, OSS Release Integration PM



On Wed, Oct 21, 2015 at 10:31 AM, R M <rishi.investigate(a)gmail.com> wrote:

I am trying to understand when Diego will become the default runtime of Cloud
Foundry. The latest cf-release is still using the DEA and, if my understanding is
correct, at some stage, a new cf-release version will come out with Diego
and perhaps change to v3. Do we have any ideas on when/if this will
happen? Is it safe to assume that diego-release on github will slowly
transition to cf-release?

Thanks.



Re: CF-RELEASE v202 UPLOAD ERROR

Amit Kumar Gupta
 

Try running "bosh cck" and recreating VMs from last known apply spec. You
should also make sure that the IPs you're allocating to your jobs are
accessible from the BOSH director VM.
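
A minimal sketch (assuming the director is targeted and the deployment is set):

bosh cck   # cloudcheck scans the deployment and offers fixes, including
           # recreating a VM from its last known apply spec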

On Thu, Oct 22, 2015 at 5:27 AM, Parthiban Annadurai <senjiparthi(a)gmail.com>
wrote:

Yaa sure Amit. I have attached both the files with this mail. Could you
please take a look? Thanks.



On 21 October 2015 at 19:49, Amit Gupta <agupta(a)pivotal.io> wrote:

Can you share the output of "bosh vms" and "bosh task 51 --debug". It's
preferable if you copy the terminal outputs and paste them to Gists or
Pastebins and share the links.

On Tue, Oct 20, 2015 at 6:18 AM, James Bayer <jbayer(a)pivotal.io> wrote:

sometimes a message like that is due to networking issues. does the bosh
director and the VM it is creating have an available network path to reach
each other? sometimes ssh'ing in to the VM that is identified can yield
more debug clues.

On Tue, Oct 20, 2015 at 5:09 AM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Thanks Bharath and Amit for the helpful solutions. I have got past
that error. Now, bosh deploy gets stuck as shown in the attached image. Could
anyone please help?

Regards

Parthiban A



On 20 October 2015 at 11:57, Amit Gupta <agupta(a)pivotal.io> wrote:

Bharath, I think you mean to increase the *disk* size on the
compilation VMs, not the memory size.

Parthiban, the error message is happening during compiling, saying "No
space left on device". This means your compilation VMs are running out of
space on disk. This means you need to increase the allocated disk for your
compilation VMs. In the "compilation" section of your deployment manifest,
you can specify "cloud_properties". This is where you will specify disk
size. These "cloud_properties" look the same as the could_properties
specified for a resource pool. Depending on your IaaS, the structure of
the cloud_properties section differs. See here:
https://bosh.io/docs/deployment-manifest.html#resource-pools-cloud-properties
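
If you can get a shell on a compilation VM while it is up, a quick sketch to confirm the disk pressure (path assumes the standard stemcell layout):

df -h /var/vcap/data   # ephemeral data disk where packages are compiled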

On Mon, Oct 19, 2015 at 11:13 PM, Bharath Posa <bharathp(a)vedams.com>
wrote:

Hi Parthiban,

It seems you are running out of space in the VM in which you are
compiling. Try to increase the size of memory in your compilation VM.

regards
Bharath



On Mon, Oct 19, 2015 at 7:39 PM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Hello All,
Thanks all for the helpful suggestions. Actually, now
we are facing the following issue while kicking off bosh deploy:

Done compiling packages >
nats/d3a1f853f4980682ed8b48e4706b7280e2b7ce0e (00:01:07)
Failed compiling packages >
buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157: Action Failed
get_task: Task aba21e6a-2031-4a69-5b72-f238ecd07051 result: Compiling
package buildpack_php: Compressing compiled package: Shelling out to tar:
Running command: 'tar czf
/var/vcap/data/tmp/bosh-platform-disk-TarballCompressor-CompressFilesInDir762165297
-C
/var/vcap/data/packages/buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157.1-
.', stdout: '', stderr: '
gzip: stdout: No space left on device
': signal: broken pipe (00:02:41)
Failed compiling packages (00:02:41)

Error 450001: Action Failed get_task: Task
aba21e6a-2031-4a69-5b72-f238ecd07051 result: Compiling package
buildpack_php: Compressing compiled package: Shelling out to tar: Running
command: 'tar czf
/var/vcap/data/tmp/bosh-platform-disk-TarballCompressor-CompressFilesInDir762165297
-C
/var/vcap/data/packages/buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157.1-
.', stdout: '', stderr: '
gzip: stdout: No space left on device
': signal: broken pipe

Could anyone help with this issue?

Regards

Parthiban A

On 19 October 2015 at 14:30, Bharath Posa <bharathp(a)vedams.com>
wrote:

Hi Parthiban,

Can you do a checksum of the tar file?


It should come out like this: *sha1:
b6f596eaff4c7af21cc18a52ef97e19debb00403*

example:

*sha1sum {file}*

regards
Bharath

On Mon, Oct 19, 2015 at 1:12 PM, Eric Poelke <epoelke(a)gmail.com>
wrote:

You actually do not need to download it. If you just run:

`bosh upload release
https://bosh.io/d/github.com/cloudfoundry/cf-release?v=202`

The director will pull in the release directly from bosh.io.

--
Thank you,

James Bayer


Re: REST API endpoint for accessing application logs

Ponraj E
 

Hi Warren,

Thanks. Regarding #1, does the Loggregator clear the logs after a certain period of time? If yes, how long is that period and where do we configure it?


--
Ponraj


Re: [abacus] Configuring Abacus applications

Jean-Sebastien Delfino
 

Hi Piotr,

The answers to your questions really depend on the performance of the
environment and database you're integrating Abacus with, but let me try to
give you pointers to some of the factors you'll want to watch in your
performance tuning. Sorry for such a long email, but you had several
questions bundled in there and the answers are not just yes/no type of
answers.

- is there a recommended minimal number of instances of Abacus
applications

I recommend 2 of each as a minimum for availability if instances crash or
if you need to restart them individually.

- how would above depend on expected number of submissions or documents
to be processed

This really depends on the performance of your deployment environment and
database cluster. More instances will allow you to process more docs
faster, scaling linearly up to the load that your database can take.

- is there a dependency between number of instances of applications i.e.
do they have to match

You should be able to tune each application with a different number of
instances (see note *** below for additional info).

Here are some of the key factors to consider for tuning:

Collector service
- stateless, receives batches of submitted usage over HTTP, does 1 db write
per batch, 1 db write per usage doc;
- increase to provide better response time to resource providers as they
submit usage.

Metering service
- stateless, receives individual submitted usage docs from collector, does
2 db writes per usage doc;
- you can probably size it the same or a bit more than the collector app as
it's processing more (individual) docs than the submitted batches.

Accumulator service
- stateful as it accumulates usage per resource instance, does 2 db writes
per usage doc, 1 read per approx 100 usage docs;
- serializes updates to the accumulated usage per resource instance, so
increase if your individual resource instances are getting a lot of usage;
- resource instances are distributed to db partitions, one partition per
instance, and that instance is the only reader/writer from/to that
partition;
- I've seen the performance of the accumulator scale linearly from 1 to 16
instances, recommend to test its performance in your environment.

Aggregator service
- stateful as it aggregates usage per organization, does 2 db writes per
usage doc, 1 read per approx 100 usage docs;
- same performance characteristics and observations as for the accumulator,
except that the write serialization is on an organization basis.

Rating service
- stateless, just adds rated usage to input aggregated usage, no
serialization here, 2 db writes per usage doc;
- since there's no serialization you may be OK with fewer instances than the
accumulator and aggregator;
- on the other hand you don't want 16 aggregators to overload 2 instances
of the rating service, so look for a middle ground.

Reporting
- stateless, one db read per report per org;
- scales like a regular Web app, gated by the query performance on your db;
- recommend 2 instances minimum for availability then increase as your
reporting load increases;
- delegates org lookups to your account info service so include the
performance of that service in your analysis as well.
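
Once deployed, instance counts can then be adjusted per app, for example (a sketch; the app names are assumptions and depend on how you named the Abacus apps when pushing them):

cf scale abacus-usage-collector -i 2
cf scale abacus-usage-accumulator -i 4
cf scale abacus-usage-reporting -i 2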

- what is the default and recommended number of DB partitions and how can
they be configured (time based as well as key based)

Time-based
- one per month, as most db writes and reads target the current month, and
sometimes the previous month;
- with that, monthly dbs can be archived once they're not needed anymore.

Key based
- depends on how many resource instances and organizations you have and the
performance of your database as its volume increases;
- for the accumulator and aggregator services, you need one db partition
per app instance, reserved to that instance.

- how would above depend on expected number of documents
Same as your 2nd question, if I understood it correctly.

[***] While researching this I found that although you can configure each
app with a different number of instances, it's not very convenient to do
right now as we're currently using a single environment variable to
configure the number of db partitions a service uses and the number of
instances configured for the next service in the Abacus processing
pipeline. I'll open a Github issue to change that and use different env
variables to configure these two different aspects, as that'll make it
easier for you to use different numbers of db partitions and instances in
the accumulator and the aggregator services for example.

HTH


- Jean-Sebastien

On Wed, Oct 21, 2015 at 9:04 AM, Piotr Przybylski <piotrp(a)us.ibm.com> wrote:

Hi,
couple of questions about configuring Abacus, specifically the recommended
settings and how to configure them

- is there a recommended minimal number of instances of Abacus applications
- how would above depend on expected number of submissions or documents to
be processed
- is there a dependency between number of instances of applications i.e.
do they have to match
- what is the default and recommended number of DB partitions and how can
they be configured (time based as well as key based)
- how would above depend on expected number of documents

Thank you,

Piotr



Re: REST API endpoint for accessing application logs

Warren Fernandes
 

For #3,

The Loggregator team currently doesn't manage the cf-java-client library. There seems to be another post in the community here (https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/message/JWESJLSWV44KKLP7LTMSLB3L5N2I62BG/)
talking about a new v2 java client that will be more useful. If you see that there is some unexpected truncation, I'd suggest creating an issue on that repo so that they can fix it in v2.


Re: CF-RELEASE v202 UPLOAD ERROR

Parthiban Annadurai <senjiparthi@...>
 

Yaa sure Amit. I have attached both the files with this mail. Could you
please take a look? Thanks.

On 21 October 2015 at 19:49, Amit Gupta <agupta(a)pivotal.io> wrote:

Can you share the output of "bosh vms" and "bosh task 51 --debug". It's
preferable if you copy the terminal outputs and paste them to Gists or
Pastebins and share the links.

On Tue, Oct 20, 2015 at 6:18 AM, James Bayer <jbayer(a)pivotal.io> wrote:

sometimes a message like that is due to networking issues. does the bosh
director and the VM it is creating have an available network path to reach
each other? sometimes ssh'ing in to the VM that is identified can yield
more debug clues.

On Tue, Oct 20, 2015 at 5:09 AM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Thanks Bharath and Amit for the helpful solutions. I have got past that
error. Now, bosh deploy gets stuck as shown in the attached image. Could
anyone please help?

Regards

Parthiban A



On 20 October 2015 at 11:57, Amit Gupta <agupta(a)pivotal.io> wrote:

Bharath, I think you mean to increase the *disk* size on the
compilation VMs, not the memory size.

Parthiban, the error message is happening during compiling, saying "No
space left on device". This means your compilation VMs are running out of
space on disk. This means you need to increase the allocated disk for your
compilation VMs. In the "compilation" section of your deployment manifest,
you can specify "cloud_properties". This is where you will specify disk
size. These "cloud_properties" look the same as the could_properties
specified for a resource pool. Depending on your IaaS, the structure of
the cloud_properties section differs. See here:
https://bosh.io/docs/deployment-manifest.html#resource-pools-cloud-properties

On Mon, Oct 19, 2015 at 11:13 PM, Bharath Posa <bharathp(a)vedams.com>
wrote:

Hi Parthiban,

It seems you are running out of space in the VM in which you are
compiling. Try to increase the size of memory in your compilation VM.

regards
Bharath



On Mon, Oct 19, 2015 at 7:39 PM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Hello All,
Thanks all for the helpful suggestions. Actually, now we
are facing the following issue while kicking off bosh deploy:

Done compiling packages >
nats/d3a1f853f4980682ed8b48e4706b7280e2b7ce0e (00:01:07)
Failed compiling packages >
buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157: Action Failed
get_task: Task aba21e6a-2031-4a69-5b72-f238ecd07051 result: Compiling
package buildpack_php: Compressing compiled package: Shelling out to tar:
Running command: 'tar czf
/var/vcap/data/tmp/bosh-platform-disk-TarballCompressor-CompressFilesInDir762165297
-C
/var/vcap/data/packages/buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157.1-
.', stdout: '', stderr: '
gzip: stdout: No space left on device
': signal: broken pipe (00:02:41)
Failed compiling packages (00:02:41)

Error 450001: Action Failed get_task: Task
aba21e6a-2031-4a69-5b72-f238ecd07051 result: Compiling package
buildpack_php: Compressing compiled package: Shelling out to tar: Running
command: 'tar czf
/var/vcap/data/tmp/bosh-platform-disk-TarballCompressor-CompressFilesInDir762165297
-C
/var/vcap/data/packages/buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157.1-
.', stdout: '', stderr: '
gzip: stdout: No space left on device
': signal: broken pipe

Could anyone help with this issue?

Regards

Parthiban A

On 19 October 2015 at 14:30, Bharath Posa <bharathp(a)vedams.com>
wrote:

Hi Parthiban,

Can you do a checksum of the tar file?


It should come out like this: *sha1:
b6f596eaff4c7af21cc18a52ef97e19debb00403*

example:

*sha1sum {file}*

regards
Bharath

On Mon, Oct 19, 2015 at 1:12 PM, Eric Poelke <epoelke(a)gmail.com>
wrote:

You actually do not need to download it. If you just run:

`bosh upload release
https://bosh.io/d/github.com/cloudfoundry/cf-release?v=202`

The director will pull in the release directly from bosh.io.

--
Thank you,

James Bayer


Re: How to detect this case: CF-AppMemoryQuo taExceeded

Juan Antonio Breña Moral <bren at juanantonio.info...>
 

Hi,

Using this method, I receive the memory used by the organization:

{ memory_usage_in_mb: 576 }

If I use this method:
http://apidocs.cloudfoundry.org/222/organizations/get_organization_summary.html

I receive the same information:

{ guid: '2fcae642-b4b9-4393-89dc-509ece372f7d',
name: 'DevBox',
status: 'active',
spaces:
[ { guid: 'e558b66a-1b9c-4c66-a779-5cf46e3b060c',
name: 'dev',
service_count: 4,
app_count: 2,
mem_dev_total: 576,
mem_prod_total: 0 } ] }

I think that the limit is defined in a Quota definition for Space or an Organization. Using a local instance, I was doing some tests with the methods:
http://apidocs.cloudfoundry.org/222/organization_quota_definitions/delete_a_particular_organization_quota_definition.html

but an organization doesn't require a quota, so I suppose a default quota exists. Is that correct?
In my case, the only quota is:
http://apidocs.cloudfoundry.org/222/organization_quota_definitions/list_all_organization_quota_definitions.html

[ { metadata:
{ guid: '59ce5f9d-8914-4783-a3dc-8f5f89cf023a',
url: '/v2/quota_definitions/59ce5f9d-8914-4783-a3dc-8f5f89cf023a',
created_at: '2015-07-15T12:32:30Z',
updated_at: null },
entity:
{ name: 'default',
non_basic_services_allowed: true,
total_services: 100,
total_routes: 1000,
memory_limit: 10240,
trial_db_allowed: false,
instance_memory_limit: -1 } } ]
√ The platform returns Quota Definitions from Organizations (359ms)

In Pivotal for example, I suppose that free accounts use the default quota:

{ metadata:
{ guid: 'b72b1acb-ff4f-468d-99c0-05cd91012b62',
url: '/v2/quota_definitions/b72b1acb-ff4f-468d-99c0-05cd91012b62',
created_at: '2013-11-19T18:53:48Z',
updated_at: '2013-11-19T19:34:57Z' },
entity:
{ name: 'trial',
non_basic_services_allowed: false,
total_services: 10,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 2048,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },

But the method returns the following quotas.

[ { metadata:
{ guid: '8c4b4554-b43b-4673-ac93-3fc232896f0b',
url: '/v2/quota_definitions/8c4b4554-b43b-4673-ac93-3fc232896f0b',
created_at: '2013-11-19T18:53:48Z',
updated_at: '2013-11-19T19:34:57Z' },
entity:
{ name: 'free',
non_basic_services_allowed: false,
total_services: 0,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 0,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '7dbdcbb7-edb6-4246-a217-2031a75388f7',
url: '/v2/quota_definitions/7dbdcbb7-edb6-4246-a217-2031a75388f7',
created_at: '2013-11-19T18:53:48Z',
updated_at: '2013-11-19T19:34:57Z' },
entity:
{ name: 'paid',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 10240,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '2228e712-7b0c-4b65-899c-0fc599063e35',
url: '/v2/quota_definitions/2228e712-7b0c-4b65-899c-0fc599063e35',
created_at: '2013-11-19T18:53:48Z',
updated_at: '2014-05-07T18:33:19Z' },
entity:
{ name: 'runaway',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 204800,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: 'b72b1acb-ff4f-468d-99c0-05cd91012b62',
url: '/v2/quota_definitions/b72b1acb-ff4f-468d-99c0-05cd91012b62',
created_at: '2013-11-19T18:53:48Z',
updated_at: '2013-11-19T19:34:57Z' },
entity:
{ name: 'trial',
non_basic_services_allowed: false,
total_services: 10,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 2048,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '39d630ba-66d6-4f6d-ba4e-8d45a05e99c4',
url: '/v2/quota_definitions/39d630ba-66d6-4f6d-ba4e-8d45a05e99c4',
created_at: '2014-01-23T20:03:27Z',
updated_at: null },
entity:
{ name: '25GB',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 25600,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '81226624-9e5a-4616-9b9c-6ab14aac2a03',
url: '/v2/quota_definitions/81226624-9e5a-4616-9b9c-6ab14aac2a03',
created_at: '2014-03-11T00:13:21Z',
updated_at: '2014-03-19T17:36:32Z' },
entity:
{ name: '25GB:30free',
non_basic_services_allowed: false,
total_services: 30,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 25600,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '0e7e2da4-0c74-4039-bdda-5cb575bf3c85',
url: '/v2/quota_definitions/0e7e2da4-0c74-4039-bdda-5cb575bf3c85',
created_at: '2014-05-08T03:56:31Z',
updated_at: null },
entity:
{ name: '50GB',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 51200,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: 'e9473dc8-7c84-401c-88b2-ad61fc13e33d',
url: '/v2/quota_definitions/e9473dc8-7c84-401c-88b2-ad61fc13e33d',
created_at: '2014-05-08T03:57:42Z',
updated_at: null },
entity:
{ name: '100GB',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 102400,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '21577e73-0f16-48fc-9bb5-2b30a77731ae',
url: '/v2/quota_definitions/21577e73-0f16-48fc-9bb5-2b30a77731ae',
created_at: '2014-05-08T04:00:28Z',
updated_at: null },
entity:
{ name: '75GB',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 76800,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '6413dedd-5c1e-4b18-ac69-e87bbaf0bfdd',
url: '/v2/quota_definitions/6413dedd-5c1e-4b18-ac69-e87bbaf0bfdd',
created_at: '2014-05-13T18:18:18Z',
updated_at: null },
entity:
{ name: '100GB:50free',
non_basic_services_allowed: false,
total_services: 50,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 102400,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '9d078b97-0dab-4563-aea5-852b1fb50129',
url: '/v2/quota_definitions/9d078b97-0dab-4563-aea5-852b1fb50129',
created_at: '2014-09-11T02:32:49Z',
updated_at: null },
entity:
{ name: '10GB:30free',
non_basic_services_allowed: false,
total_services: 30,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 10240,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '851c99c6-7bb3-400f-80a0-a06962e0c5d3',
url: '/v2/quota_definitions/851c99c6-7bb3-400f-80a0-a06962e0c5d3',
created_at: '2014-10-31T17:10:53Z',
updated_at: '2014-11-04T23:53:50Z' },
entity:
{ name: '25GB:100free',
non_basic_services_allowed: false,
total_services: 100,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 25600,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '5ad22d2c-1519-4e17-b555-f702fb38417e',
url: '/v2/quota_definitions/5ad22d2c-1519-4e17-b555-f702fb38417e',
created_at: '2015-02-02T22:18:44Z',
updated_at: '2015-04-22T00:36:14Z' },
entity:
{ name: 'PCF-H',
non_basic_services_allowed: true,
total_services: 1000,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 204800,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: 'cf04086c-ccf9-442c-b89a-3f3fbcd365e3',
url: '/v2/quota_definitions/cf04086c-ccf9-442c-b89a-3f3fbcd365e3',
created_at: '2015-05-04T19:20:47Z',
updated_at: '2015-05-04T19:26:14Z' },
entity:
{ name: 'oreilly',
non_basic_services_allowed: true,
total_services: 10000,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 307200,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } } ]
√ The platform returns Quota Definitions from Organizations (720ms)

I suppose that the best practice is to assign each organization a specific quota.
How do I set a quota as the default?
How do I configure that?

Juan Antonio


Re: How to detect this case: CF-AppMemoryQuo taExceeded

Dieu Cao <dcao@...>
 

You can call this endpoint to retrieve the org memory usage:
http://apidocs.cloudfoundry.org/222/organizations/retrieving_organization_memory_usage.html

You would then need to check this against the org quota.
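
For example, a sketch using `cf curl` (the GUIDs are placeholders):

cf curl /v2/organizations/ORG_GUID/memory_usage   # current memory usage in MB
cf curl /v2/organizations/ORG_GUID                # entity includes quota_definition_url
cf curl /v2/quota_definitions/QUOTA_GUID          # memory_limit to compare against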

There's a story further down in the backlog for a similar endpoint for
space.

There was a previous PR to add endpoints that would more clearly show
quota usage for org and space, but it fell through.

-Dieu

On Wed, Oct 21, 2015 at 7:15 AM, Juan Antonio Breña Moral <
bren(a)juanantonio.info> wrote:

Hi,

doing some tests, I detected in my testing environment the following
scenario:

Error: the string "{\n \"code\": 100005,\n \"description\": \"You
have ex
ceeded your organization's memory limit.\",\n \"error_code\":
\"CF-AppMemoryQuo
taExceeded\"\n}\n" was thrown, throw an Error :)

Does exist some REST Call to know if the org/space has reached the limit?

Many thanks in advance

Juan Antonio


Re: cf": error=2, No such file or directory and error=2

Varsha Nagraj
 

Hello Mathew,

Can you please let me know how I can add this to my PATH? Previously I would run the same commands on a Windows system from Eclipse, and as far as I remember I have not set any PATH environment variable on Windows.
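
On Linux or macOS, a minimal sketch (the install directory is an assumption; point it at wherever the cf binary actually lives):

export PATH="$PATH:$HOME/cf-cli"   # add the directory containing the cf binary for this shell
# append the same line to ~/.bashrc or ~/.profile to make it permanent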


Re: Cloud Foundry DEA to Diego switch - when?

MaggieMeng
 

Hi, Amit

Our team is also planning the timeline of replacing the DEA with Diego. Would you please let me know the approximate estimate of when the final iteration will come? Will it be in 2016 or 2017?

Thanks,
Maggie

From: Amit Gupta [mailto:agupta(a)pivotal.io]
Sent: October 22, 2015 2:59
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Cloud Foundry DEA to Diego switch - when?

Hi Rishi,

Thanks for your question. Let's first clarify the distinction between what you deploy -- bosh releases (versioned packages of source code and binaries) -- and how you deploy things -- bosh deployment (a manifest of which releases to use, what code/binaries from those releases to place on nodes in your deployment cluster, property/credential configuration, networking and compute resources, etc.).

diego-release may not change, although it may be split into smaller releases, e.g. the cc-bridge part consisting of the components which talk to CC, and the diego runtime part consisting of components responsible for scheduling, running, and health-monitoring containerized workloads.

cf-release will undergo heavy changes. We are currently breaking it apart entirely, into separate releases: consul, etcd, logging-and-metrics, identity, routing, API, nats, postgres, and existing runtime backend (DEA, Warden, HM9k).

In addition to breaking up cf-release, we are working on cf-deployment[1], which will give you the same ability to deploy the Cloud Foundry PaaS as you know it today, but composed of multiple releases rather than the monolithic cf-release. We will ensure that cf-deployment has versioning and tooling to make it easy to deploy everything at versions that are known to work together.

For the first major iteration of cf-deployment, it will deploy all the existing components of cf-release, but coming from separate releases. You can still deploy diego separately (configured to talk to the CC) as you do today.

The second major iteration will be to leverage new BOSH features[2], such as links, AZs, cloud config, and global networking to simplify the manifest generation for cf-deployment. Again, you will still be able to deploy diego separately alongside your cf deployment.

The third iteration is to fold the diego-release deployment strategies into cf-deployment itself, so you'll have a single manifest deploying DEAs and Diego side-by-side.

The final iteration will be to remove the DEAs from cf-deployment and stop supporting the release that contains them.

As to your question of defaults, there are several definitions of "default". You can set Diego to be the default backend today[3]. You have to opt in to this, but then anyone using the platform you deployed will have their apps run on Diego by default. Pivotal Web Services, for example, now defaults to Diego as the backend. At some point, Diego will be the true default backend, and you will have to opt-out of it (either at the CC configuration level, or at the individual app level). Finally, at a later point in time, DEAs will no longer be supported and Diego will be the only backend option.

We are actively working on a timeline for all these things. You can see the Diego team's public tracker has a release marker[4] for when Diego will be capable of replacing the DEAs. After reaching that release marker, there will be some time given for the rest of the community to switch over before declaring end-of-life for the DEAs.

[1] https://github.com/cloudfoundry/cf-deployment
[2] https://github.com/cloudfoundry/bosh-notes/
[3] https://github.com/cloudfoundry/cf-release/blob/v222/jobs/cloud_controller_ng/spec#L396-L398
[4] https://www.pivotaltracker.com/story/show/76376202

Thanks,
Amit, OSS Release Integration PM

On Wed, Oct 21, 2015 at 10:31 AM, R M <rishi.investigate(a)gmail.com<mailto:rishi.investigate(a)gmail.com>> wrote:
I am trying to understand when Diego will become the default runtime of Cloud Foundry. The latest cf-release is still using the DEA and, if my understanding is correct, at some stage a new cf-release version will come out with Diego and perhaps change to v3. Do we have any ideas on when/if this will happen? Is it safe to assume that diego-release on github will slowly transition to cf-release?

Thanks.


Re: REST API endpoint for accessing application logs

Gianluca Volpe <gvolpe1968@...>
 

This is the maximum number of log lines Doppler can buffer while draining messages to remote syslog (a count of messages, not megabytes).

Gianluca

Il giorno 21/ott/2015, alle ore 09:23, Ponraj E <ponraj.e(a)gmail.com> ha scritto:

Hi,

Short update regarding question #2 above: I came to know from http://docs.cloudfoundry.org/loggregator/ops.html that the number/size of log messages drained to Doppler can be controlled by a BOSH deployment
manifest configuration: doppler.message_drain_buffer_size

It is specified that the doppler.message_drain_buffer_size default value is 100.

Is it 100MB?


Re: [abacus] Configuring Abacus applications

Jean-Sebastien Delfino
 

Hi Piotr,

We'll be looking for usage from Sept as well, since we'll want to show it as
'previous month usage' in Oct, but we shouldn't *require* usage from Sept and
shouldn't fail (we should just show zero) if we can't find usage from Sept.

Like Assk said, if that fails open a Github issue. Thanks!

P.S. I will answer your initial question with some instance tuning
recommendations a bit later this evening.

- Jean-Sebastien

On Wed, Oct 21, 2015 at 5:26 PM, Saravanakumar A Srinivasan <
sasrin(a)us.ibm.com> wrote:

I would expect it to start with 0, if the previous month DB does not
exist. Could you open a github issue with more details(+ logs if possible)
about what you are seeing there?


Thanks,
Saravanakumar Srinivasan (Assk),



-----Piotr Przybylski/Burlingame/IBM(a)IBMUS wrote: -----
To: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>
From: Piotr Przybylski/Burlingame/IBM(a)IBMUS
Date: 10/21/2015 04:58PM
Subject: [cf-dev] Re: [abacus] Configuring Abacus applications


The follow up question - if we initialize abacus and start submissions
this month (October) the accumulator seems to require previous month DB
partition. Is that expected ?

Piotr

-----Piotr Przybylski/Burlingame/IBM wrote: -----
To: "Discussions about Cloud Foundry projects and the system overall." <
cf-dev(a)lists.cloudfoundry.org>
From: Piotr Przybylski/Burlingame/IBM
Date: 10/21/2015 09:04AM
Subject: [abacus] Configuring Abacus applications

Hi,
couple of questions about configuring Abacus, specifically the recommended
settings and how to configure them

- is there a recommended minimal number of instances of Abacus applications
- how would above depend on expected number of submissions or documents to
be processed
- is there a dependency between number of instances of applications i.e.
do they have to match
- what is the default and recommended number of DB partitions and how can
they be configured (time based as well as key based)
- how would above depend on expected number of documents

Thank you,

Piotr




Re: [abacus] Configuring Abacus applications

Saravanakumar A. Srinivasan
 

I would expect it to start with 0, if the previous month DB does not exist. Could you open a github issue with more details(+ logs if possible) about what you are seeing there? 


Thanks,
Saravanakumar Srinivasan (Assk),



-----Piotr Przybylski/Burlingame/IBM@IBMUS wrote: -----
To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev@...>
From: Piotr Przybylski/Burlingame/IBM@IBMUS
Date: 10/21/2015 04:58PM
Subject: [cf-dev] Re: [abacus] Configuring Abacus applications

The follow up question - if we initialize abacus and start submissions this month (October) the accumulator seems to require previous month DB partition. Is that expected ? 

Piotr

-----Piotr Przybylski/Burlingame/IBM wrote: -----
To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev@...>
From: Piotr Przybylski/Burlingame/IBM
Date: 10/21/2015 09:04AM
Subject: [abacus] Configuring Abacus applications

Hi, 
couple of questions about configuring Abacus, specifically the recommended settings and how to configure them

- is there a recommended minimal number of instances of Abacus applications
- how would above depend on expected number of submissions or documents to be processed
- is there a dependency between number of instances of applications i.e. do they have to match 
- what is the default and recommended number of DB partitions and how can they be configured (time based as well as key based)
- how would above depend on expected number of documents

Thank you,

Piotr




Re: [abacus] Configuring Abacus applications

Piotr Przybylski <piotrp@...>
 

The follow up question - if we initialize abacus and start submissions this month (October) the accumulator seems to require previous month DB partition. Is that expected ? 

Piotr

-----Piotr Przybylski/Burlingame/IBM wrote: -----
To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev@...>
From: Piotr Przybylski/Burlingame/IBM
Date: 10/21/2015 09:04AM
Subject: [abacus] Configuring Abacus applications

Hi, 
couple of questions about configuring Abacus, specifically the recommended settings and how to configure them

- is there a recommended minimal number of instances of Abacus applications
- how would above depend on expected number of submissions or documents to be processed
- is there a dependency between number of instances of applications i.e. do they have to match 
- what is the default and recommended number of DB partitions and how can they be configured (time based as well as key based)
- how would above depend on expected number of documents

Thank you,

Piotr



Re: how to get the CF endpoint API in my program

Scott Frederick <scottyfred@...>
 

There is no way for an app running on CF to detect the CC API URL for the
platform the app is running on. The API domain can be different from the
app domain, so you can’t reliably derive the domain of any system endpoints
(api, uaa, login, etc) from any routes bound to the app. You can set the
API endpoint in an environment variable or system property and have your
app read that.
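
A minimal sketch (the variable name and URL are assumptions, not a platform convention):

cf set-env my-app CF_API_ENDPOINT https://api.my-cf.example.com
cf restage my-app
# the app then reads the CF_API_ENDPOINT environment variable at runtime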

Scott

On Wed, Oct 21, 2015 at 6:33 PM, zooba Sir <myfakename90(a)gmail.com> wrote:

I need to get the CF endpoint API programmatically (not using cf cli). My
java spring boot project should get this endpoint url while running in cf.


how to get the CF endpoint API in my program

Zuba Al <myfakename90@...>
 

I need to get the CF endpoint API programmatically (not using cf cli). My java spring boot project should get this endpoint url while running in cf.


Re: region qualifier for organizations

Bharath Sekar
 

Sounds good. Thanks Sebastien. I'll watch the thread [1] for updates

[1] https://github.com/cloudfoundry-incubator/cf-abacus/issues/110


Re: HM9000 gets stuck in bad state

Amit Kumar Gupta
 

This is not a known issue. Can you copy full log lines (including
timestamps, other metadata, error message, etc.) and paste them into a Gist
or Pastebin, and share the link?

Also, can you get a session on one of your etcd nodes (so that etcd is
reachable via localhost) and share the output of the following queries?

curl http://localhost:4001/v2/keys/hm/v4/actual-fresh
curl http://localhost:4001/v2/keys/hm/v4/desired-fresh

Note, you might need a different version than v4. You can figure out the
correct version by querying:

curl http://localhost:4001/v2/keys/hm

When I do so, I get the output:

{"action":"get","node":{"key":"/hm","dir":true,"nodes":[{"key":"/hm/locks","dir":true,"modifiedIndex":5,"createdIndex":5},{"key":"/hm/v4","dir":true,"modifiedIndex":15,"createdIndex":15}],"modifiedIndex":5,"createdIndex":5}}

Note the part that says "key":"/hm/v4", that's how you can determine
whether you need to query the v4 API or some other version.

On Wed, Oct 21, 2015 at 9:16 AM, kyle havlovitz <kylehav(a)gmail.com> wrote:

I'm having an issue where, after a day or two of running, the health
manager gets stuck in a bad state and doesn't display the state of
apps correctly. The logs show messages like this: "Store is not fresh -
Error:Actual and desired state are not fresh" and "Daemon returned an
error. Continuining... - Error:Actual and desired state are not fresh".

Restarting the process fixes the issue, but I'm wondering how to avoid
this problem altogether. Is this a known issue?


Re: region qualifier for organizations

Jean-Sebastien Delfino
 

So, since we're not sure about that region field, I'll wait for the
discussion to settle before making any additions to the API.

In the meantime, Bharath, the convention you've described (or another one
like us:86d0482c-7208-4f2f-8606-935c080cad41) should just work.

HTH

- Jean-Sebastien

On Wed, Oct 21, 2015 at 12:41 PM, Jean-Sebastien Delfino <
jsdelfino(a)gmail.com> wrote:

I could argue that region is actually a pretty generic term, as it's used
in a wide range of domains from geography (parts of the world or parts of a
country) to networking and datacenters (a region as a group of IP
addresses) and even memory management in garbage collectors for example :)
... but I was also wondering about a potential confusion where our users
could just assume that region always has to be a part of the world.

So we could either make clear that region is used in its generic sense and
not necessarily a geographical region in an Abacus doc on that topic, or
attempt to find another term like cluster for example, or zone or something
else... I've created Github issue #110 [1] to help folks submit their ideas
on this.

[1] https://github.com/cloudfoundry-incubator/cf-abacus/issues/110

- Jean-Sebastien

On Wed, Oct 21, 2015 at 10:32 AM, Dieu Cao <dcao(a)pivotal.io> wrote:

I'm curious if region is perhaps too specific?
Perhaps some other generic word would be better so that it's not
prescriptive.

-Dieu

On Wed, Oct 21, 2015 at 10:10 AM, Jean-Sebastien Delfino <
jsdelfino(a)gmail.com> wrote:

Hi all,

If there's no objection and nobody comes up with a better idea, I'll
start to work on adding a GET /v1/regions/us/orgs/ path as discussed
here sometime today.

- Jean-Sebastien

On Tue, Oct 20, 2015 at 10:11 PM, Jean-Sebastien Delfino <
jsdelfino(a)gmail.com> wrote:

Hi Bharath,

Sorry for the delay, I didn't realize this was a question for the
Abacus project as it didn't have the [abacus] subject tag we've been using
for Abacus discussions recently. I guess from now on I'll just check all
threads just in case :)

This is a good question. With independent deployments of CF in multiple
datacenters or regions you may need to distinguish between organization
86d0482c-7208-4f2f-8606-935c080cad41 in region 'us' and the same
organization id in region 'eu' for example.

We could add another path to the API for the cases where you care about
the region with GET /v1/regions/us/orgs/86d0482c-7208-4f2f-8606-935c080cad41/...
if that helps.

I could also sympathize with another approach, where we'd say that the
organization id being a guid should truly be *globally unique*. It looks
like the current CF guid generation algorithm doesn't *guarantee*
uniqueness across deployments [1] but combining the region with it would
make it unique. IIUC I think that's what you're suggesting.

What do others think?

[1]
http://cf-dev.70369.x6.nabble.com/cf-dev-Pointer-to-the-CF-code-that-generates-org-and-service-instance-guids-tp2192.html

- Jean-Sebastien

On Tue, Oct 20, 2015 at 11:42 AM, Bharath Sekar <bsekar14(a)gmail.com>
wrote:

Hi,
Account service implementations could need additional qualifiers to
uniquely identify an organization. For example, the implementation I'm
working on needs a region along with the guid of the org.
The API to get an account given org information looks like this

GET /v1/orgs/:org_id/account

How do we want to support the additional qualifier in abacus? One
solution that I can think of is including the region in the guid. org_id
could be 'guid_region'. ex:
GET /v1/orgs/86d0482c-7208-4f2f-8606-935c080cad41_us/account
Thoughts?