
Re: Open sourcing our S3 service broker

Eric Poelke
 

Yeah, I agree, but I'm in the same boat: I don't really have much free time. It would be awesome to have a service broker marketplace of some sort. I didn't even know your S3 service broker existed, although that probably would not have stopped me from writing mine ;). It was actually the first one I did as a "how do I make a service broker" project. The RDS one we open sourced was actually built after our S3 one, but once we put the RDS one out there I figured we may as well release this one too. We have some others around AWS services that we will get out there at some point. I really like the idea of a marketplace with some kind of review system as well; I think it would really help the ecosystem as a whole.


Re: CF-RELEASE v202 UPLOAD ERROR

Amit Kumar Gupta
 

How did you create your manifest in the first place?

On Fri, Oct 23, 2015 at 8:17 AM, Parthiban Annadurai <senjiparthi(a)gmail.com>
wrote:

After trying the suggestions, it now throws the following error:

Started preparing configuration > Binding configuration. Failed: Error
filling in template `metron_agent.json.erb' for `ha_proxy_z1/0' (line 5:
Can't find property `["metron_agent.deployment"]') (00:00:00)

Error 100: Error filling in template `metron_agent.json.erb' for
`ha_proxy_z1/0' (line 5: Can't find property `["metron_agent.deployment"]')

Could anyone help with this?

On 22 October 2015 at 18:08, Amit Gupta <agupta(a)pivotal.io> wrote:

Try running "bosh cck" and recreating VMs from last known apply spec.
You should also make sure that the IPs you're allocating to your jobs are
accessible from the BOSH director VM.
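For reference, a minimal sketch of that flow (the exact menu wording varies
by BOSH version):

bosh cck
# scans the deployment for problems, then prompts per problem VM;
# one of the offered resolutions is to recreate the VM from the
# last known apply spec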

On Thu, Oct 22, 2015 at 5:27 AM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Yes, sure Amit. I have attached both files to this mail. Could you please
take a look? Thanks.



On 21 October 2015 at 19:49, Amit Gupta <agupta(a)pivotal.io> wrote:

Can you share the output of "bosh vms" and "bosh task 51 --debug".
It's preferable if you copy the terminal outputs and paste them to Gists or
Pastebins and share the links.

On Tue, Oct 20, 2015 at 6:18 AM, James Bayer <jbayer(a)pivotal.io> wrote:

Sometimes a message like that is due to networking issues. Do the BOSH
director and the VM it is creating have an available network path to
reach each other? Sometimes SSH'ing in to the VM that is identified can
yield more debug clues.

On Tue, Oct 20, 2015 at 5:09 AM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Thanks Bharath and Amit for the helpful solutions. I have gotten past
that error. Now bosh deploy gets stuck, as shown in the attached image.
Could anyone help, please?

Regards

Parthiban A



On 20 October 2015 at 11:57, Amit Gupta <agupta(a)pivotal.io> wrote:

Bharath, I think you mean to increase the *disk* size on the
compilation VMs, not the memory size.

Parthiban, the error message is happening during compilation, saying
"No space left on device". This means your compilation VMs are running out
of disk space, so you need to increase the disk allocated to them. In the
"compilation" section of your deployment manifest you can specify
"cloud_properties"; this is where you will specify disk size. These
"cloud_properties" look the same as the cloud_properties specified for a
resource pool. Depending on your IaaS, the structure of the
cloud_properties section differs. See here:
https://bosh.io/docs/deployment-manifest.html#resource-pools-cloud-properties
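For example, on AWS the compilation block might look roughly like this (a
sketch; the instance type and disk size are illustrative, and the exact
cloud_properties keys depend on your CPI):

compilation:
  workers: 4
  network: cf1
  reusable_compilation_vms: true
  cloud_properties:
    instance_type: c3.large
    ephemeral_disk:
      size: 30000  # in MB
      type: gp2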

On Mon, Oct 19, 2015 at 11:13 PM, Bharath Posa <bharathp(a)vedams.com>
wrote:

Hi Parthiban,

It seems you are running out of space in the VM in which you are
compiling. Try to increase the memory size of your compilation VM.

regards
Bharath



On Mon, Oct 19, 2015 at 7:39 PM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Hello All,
Thanks all for the helpful suggestions. Actually, we are
now facing the following issue when kicking off bosh deploy:

Done compiling packages >
nats/d3a1f853f4980682ed8b48e4706b7280e2b7ce0e (00:01:07)
Failed compiling packages >
buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157: Action Failed
get_task: Task aba21e6a-2031-4a69-5b72-f238ecd07051 result: Compiling
package buildpack_php: Compressing compiled package: Shelling out to tar:
Running command: 'tar czf
/var/vcap/data/tmp/bosh-platform-disk-TarballCompressor-CompressFilesInDir762165297
-C
/var/vcap/data/packages/buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157.1-
.', stdout: '', stderr: '
gzip: stdout: No space left on device
': signal: broken pipe (00:02:41)
Failed compiling packages (00:02:41)

Error 450001: Action Failed get_task: Task
aba21e6a-2031-4a69-5b72-f238ecd07051 result: Compiling package
buildpack_php: Compressing compiled package: Shelling out to tar: Running
command: 'tar czf
/var/vcap/data/tmp/bosh-platform-disk-TarballCompressor-CompressFilesInDir762165297
-C
/var/vcap/data/packages/buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157.1-
.', stdout: '', stderr: '
gzip: stdout: No space left on device
': signal: broken pipe

Could anyone help with this issue?

Regards

Parthiban A

On 19 October 2015 at 14:30, Bharath Posa <bharathp(a)vedams.com>
wrote:

Hi Parthiban,

Can you do a checksum of the tar file? It should come out like this:

sha1: b6f596eaff4c7af21cc18a52ef97e19debb00403

For example:

sha1sum {file}

regards
Bharath

On Mon, Oct 19, 2015 at 1:12 PM, Eric Poelke <epoelke(a)gmail.com>
wrote:

You actually do not need to download it. Just run:

`bosh upload release
https://bosh.io/d/github.com/cloudfoundry/cf-release?v=202`

The director will pull in the release directly from bosh.io.

--
Thank you,

James Bayer


Re: CF-RELEASE v202 UPLOAD ERROR

Parthiban Annadurai <senjiparthi@...>
 

After trying the suggestions, it now throws the following error:

Started preparing configuration > Binding configuration. Failed: Error
filling in template `metron_agent.json.erb' for `ha_proxy_z1/0' (line 5:
Can't find property `["metron_agent.deployment"]') (00:00:00)

Error 100: Error filling in template `metron_agent.json.erb' for
`ha_proxy_z1/0' (line 5: Can't find property `["metron_agent.deployment"]')

Could anyone help with this?

On 22 October 2015 at 18:08, Amit Gupta <agupta(a)pivotal.io> wrote:

Try running "bosh cck" and recreating VMs from last known apply spec. You
should also make sure that the IPs you're allocating to your jobs are
accessible from the BOSH director VM.

On Thu, Oct 22, 2015 at 5:27 AM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Yes, sure Amit. I have attached both files to this mail. Could you please
take a look? Thanks.



On 21 October 2015 at 19:49, Amit Gupta <agupta(a)pivotal.io> wrote:

Can you share the output of "bosh vms" and "bosh task 51 --debug". It's
preferable if you copy the terminal outputs and paste them to Gists or
Pastebins and share the links.

On Tue, Oct 20, 2015 at 6:18 AM, James Bayer <jbayer(a)pivotal.io> wrote:

Sometimes a message like that is due to networking issues. Do the BOSH
director and the VM it is creating have an available network path to
reach each other? Sometimes SSH'ing in to the VM that is identified can
yield more debug clues.

On Tue, Oct 20, 2015 at 5:09 AM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Thanks Bharath and Amit for the helpful solutions. I have gotten past
that error. Now bosh deploy gets stuck, as shown in the attached image.
Could anyone help, please?

Regards

Parthiban A



On 20 October 2015 at 11:57, Amit Gupta <agupta(a)pivotal.io> wrote:

Bharath, I think you mean to increase the *disk* size on the
compilation VMs, not the memory size.

Parthiban, the error message is happening during compilation, saying
"No space left on device". This means your compilation VMs are running out
of disk space, so you need to increase the disk allocated to them. In the
"compilation" section of your deployment manifest you can specify
"cloud_properties"; this is where you will specify disk size. These
"cloud_properties" look the same as the cloud_properties specified for a
resource pool. Depending on your IaaS, the structure of the
cloud_properties section differs. See here:
https://bosh.io/docs/deployment-manifest.html#resource-pools-cloud-properties

On Mon, Oct 19, 2015 at 11:13 PM, Bharath Posa <bharathp(a)vedams.com>
wrote:

Hi Parthiban,

It seems you are running out of space in the VM in which you are
compiling. Try to increase the memory size of your compilation VM.

regards
Bharath



On Mon, Oct 19, 2015 at 7:39 PM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Hello All,
Thanks all for the helpful suggestions. Actually, we are
now facing the following issue when kicking off bosh deploy:

Done compiling packages >
nats/d3a1f853f4980682ed8b48e4706b7280e2b7ce0e (00:01:07)
Failed compiling packages >
buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157: Action Failed
get_task: Task aba21e6a-2031-4a69-5b72-f238ecd07051 result: Compiling
package buildpack_php: Compressing compiled package: Shelling out to tar:
Running command: 'tar czf
/var/vcap/data/tmp/bosh-platform-disk-TarballCompressor-CompressFilesInDir762165297
-C
/var/vcap/data/packages/buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157.1-
.', stdout: '', stderr: '
gzip: stdout: No space left on device
': signal: broken pipe (00:02:41)
Failed compiling packages (00:02:41)

Error 450001: Action Failed get_task: Task
aba21e6a-2031-4a69-5b72-f238ecd07051 result: Compiling package
buildpack_php: Compressing compiled package: Shelling out to tar: Running
command: 'tar czf
/var/vcap/data/tmp/bosh-platform-disk-TarballCompressor-CompressFilesInDir762165297
-C
/var/vcap/data/packages/buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157.1-
.', stdout: '', stderr: '
gzip: stdout: No space left on device
': signal: broken pipe

Could anyone help with this issue?

Regards

Parthiban A

On 19 October 2015 at 14:30, Bharath Posa <bharathp(a)vedams.com>
wrote:

Hi Parthiban,

Can you do a checksum of the tar file? It should come out like this:

sha1: b6f596eaff4c7af21cc18a52ef97e19debb00403

For example:

sha1sum {file}

regards
Bharath

On Mon, Oct 19, 2015 at 1:12 PM, Eric Poelke <epoelke(a)gmail.com>
wrote:

You actually do not need to download it. Just run:

`bosh upload release
https://bosh.io/d/github.com/cloudfoundry/cf-release?v=202`

The director will pull in the release directly from bosh.io.

--
Thank you,

James Bayer


Re: Error uploading application when pushing application

Daniel Mikusa
 

Jim,

When you're sending requests to `api.system-domain`, you're talking to the
Cloud Controller. I'd suggest you start by taking a look at the Cloud
Controller logs. You can grab them with `bosh logs` or by SSH'ing to the
VM and cd'ing to /var/vcap/sys/log. Hopefully that'll show you an error or
stack trace.
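For example (a sketch; the job name and index depend on your deployment):

bosh logs api_z1 0 --job
# or SSH in and inspect the logs directly:
bosh ssh api_z1 0
cd /var/vcap/sys/log/cloud_controller_ng
tail cloud_controller_ng.log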

Dan

On Fri, Oct 23, 2015 at 4:32 AM, Jim Lin <jimlintw922(a)gmail.com> wrote:

CF Version: 215

Description: My push command is `cf push myapp -p myapp.war -m 512m -t
120` and I got the error message "Error uploading application". The detailed
trace log is as follows:

============== Start of Log ==============
REQUEST: [2015-10-23T16:12:49+08:00]
GET /v2/jobs/a4866929-aff5-41bb-8891-0540ba45e97c HTTP/1.1
Host: api.140.92.27.254.xip.io
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: go-cli 6.12.0-8c65bbd / linux



RESPONSE: [2015-10-23T16:12:49+08:00]
HTTP/1.1 200 OK
Content-Length: 491
Content-Type: application/json;charset=utf-8
Date: Fri, 23 Oct 2015 08:14:56 GMT
Server: nginx
X-Cf-Requestid: 7cea1ef8-d14a-4260-4b3c-dcc387684911
X-Content-Type-Options: nosniff
X-Vcap-Request-Id:
244f6491-caae-43b4-69c8-9e80f4a61c83::38d83968-cd06-4ede-8531-1356d08cf38d

{
"metadata": {
"guid": "a4866929-aff5-41bb-8891-0540ba45e97c",
"created_at": "2015-10-23T08:14:51Z",
"url": "/v2/jobs/a4866929-aff5-41bb-8891-0540ba45e97c"
},
"entity": {
"guid": "a4866929-aff5-41bb-8891-0540ba45e97c",
"status": "failed",
"error": "Use of entity>error is deprecated in favor of
entity>error_details.",
"error_details": {
"error_code": "UnknownError",
"description": "An unknown error occurred.",
"code": 10001
}
}
}
FAILED
Error uploading application.
An unknown error occurred.
FAILED
Error uploading application.
An unknown error occurred.

============== End of Log ==============

How do I diagnose this to find the root cause?

Thanks all.

Sincerely,
Jim


Re: How to detect this case: CF-AppMemoryQuotaExceeded

Dieu Cao <dcao@...>
 

The default org quota you're seeing is defined here [1]
I believe you can configure it by specifying the name of the quota you
would like to have as the default quota in your manifest.
For example:

properties:
  cc:
    default_quota_definition: turtle
    quota_definitions:
      turtle:
        memory_limit: 10240
        total_services: -1
        total_routes: 1000
        non_basic_services_allowed: true


[1]
https://github.com/cloudfoundry/cloud_controller_ng/blob/master/config/cloud_controller.yml#L102-L107

On Thu, Oct 22, 2015 at 3:48 AM, Juan Antonio Breña Moral <
bren(a)juanantonio.info> wrote:

Hi,

Using this method, I receive the memory used by the organization:

{ memory_usage_in_mb: 576 }
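(presumably retrieved via something like the following; a sketch, with the
org GUID taken from the summary below)

cf curl /v2/organizations/2fcae642-b4b9-4393-89dc-509ece372f7d/memory_usage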

If I use this method:

http://apidocs.cloudfoundry.org/222/organizations/get_organization_summary.html

I receive the same information:

{ guid: '2fcae642-b4b9-4393-89dc-509ece372f7d',
name: 'DevBox',
status: 'active',
spaces:
[ { guid: 'e558b66a-1b9c-4c66-a779-5cf46e3b060c',
name: 'dev',
service_count: 4,
app_count: 2,
mem_dev_total: 576,
mem_prod_total: 0 } ] }

I think that the limit is defined in a quota definition for a Space or an
Organization. Using a local instance, I was doing some tests with these
methods:

http://apidocs.cloudfoundry.org/222/organization_quota_definitions/delete_a_particular_organization_quota_definition.html

but an organization doesn't require a quota, so I suppose a default quota
exists; is that correct?
In my case, the only quota is:
In my case, the unique quota is:

http://apidocs.cloudfoundry.org/222/organization_quota_definitions/list_all_organization_quota_definitions.html

[ { metadata:
{ guid: '59ce5f9d-8914-4783-a3dc-8f5f89cf023a',
url: '/v2/quota_definitions/59ce5f9d-8914-4783-a3dc-8f5f89cf023a',
created_at: '2015-07-15T12:32:30Z',
updated_at: null },
entity:
{ name: 'default',
non_basic_services_allowed: true,
total_services: 100,
total_routes: 1000,
memory_limit: 10240,
trial_db_allowed: false,
instance_memory_limit: -1 } } ]
√ The platform returns Quota Definitions from Organizations (359ms)

On Pivotal, for example, I suppose that free accounts use the default quota:

{ metadata:
{ guid: 'b72b1acb-ff4f-468d-99c0-05cd91012b62',
url: '/v2/quota_definitions/b72b1acb-ff4f-468d-99c0-05cd91012b62',
created_at: '2013-11-19T18:53:48Z',
updated_at: '2013-11-19T19:34:57Z' },
entity:
{ name: 'trial',
non_basic_services_allowed: false,
total_services: 10,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 2048,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },

But the method returns the following quotas.

[ { metadata:
{ guid: '8c4b4554-b43b-4673-ac93-3fc232896f0b',
url: '/v2/quota_definitions/8c4b4554-b43b-4673-ac93-3fc232896f0b',
created_at: '2013-11-19T18:53:48Z',
updated_at: '2013-11-19T19:34:57Z' },
entity:
{ name: 'free',
non_basic_services_allowed: false,
total_services: 0,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 0,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '7dbdcbb7-edb6-4246-a217-2031a75388f7',
url: '/v2/quota_definitions/7dbdcbb7-edb6-4246-a217-2031a75388f7',
created_at: '2013-11-19T18:53:48Z',
updated_at: '2013-11-19T19:34:57Z' },
entity:
{ name: 'paid',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 10240,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '2228e712-7b0c-4b65-899c-0fc599063e35',
url: '/v2/quota_definitions/2228e712-7b0c-4b65-899c-0fc599063e35',
created_at: '2013-11-19T18:53:48Z',
updated_at: '2014-05-07T18:33:19Z' },
entity:
{ name: 'runaway',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 204800,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: 'b72b1acb-ff4f-468d-99c0-05cd91012b62',
url: '/v2/quota_definitions/b72b1acb-ff4f-468d-99c0-05cd91012b62',
created_at: '2013-11-19T18:53:48Z',
updated_at: '2013-11-19T19:34:57Z' },
entity:
{ name: 'trial',
non_basic_services_allowed: false,
total_services: 10,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 2048,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '39d630ba-66d6-4f6d-ba4e-8d45a05e99c4',
url: '/v2/quota_definitions/39d630ba-66d6-4f6d-ba4e-8d45a05e99c4',
created_at: '2014-01-23T20:03:27Z',
updated_at: null },
entity:
{ name: '25GB',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 25600,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '81226624-9e5a-4616-9b9c-6ab14aac2a03',
url: '/v2/quota_definitions/81226624-9e5a-4616-9b9c-6ab14aac2a03',
created_at: '2014-03-11T00:13:21Z',
updated_at: '2014-03-19T17:36:32Z' },
entity:
{ name: '25GB:30free',
non_basic_services_allowed: false,
total_services: 30,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 25600,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '0e7e2da4-0c74-4039-bdda-5cb575bf3c85',
url: '/v2/quota_definitions/0e7e2da4-0c74-4039-bdda-5cb575bf3c85',
created_at: '2014-05-08T03:56:31Z',
updated_at: null },
entity:
{ name: '50GB',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 51200,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: 'e9473dc8-7c84-401c-88b2-ad61fc13e33d',
url: '/v2/quota_definitions/e9473dc8-7c84-401c-88b2-ad61fc13e33d',
created_at: '2014-05-08T03:57:42Z',
updated_at: null },
entity:
{ name: '100GB',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 102400,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '21577e73-0f16-48fc-9bb5-2b30a77731ae',
url: '/v2/quota_definitions/21577e73-0f16-48fc-9bb5-2b30a77731ae',
created_at: '2014-05-08T04:00:28Z',
updated_at: null },
entity:
{ name: '75GB',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 76800,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '6413dedd-5c1e-4b18-ac69-e87bbaf0bfdd',
url: '/v2/quota_definitions/6413dedd-5c1e-4b18-ac69-e87bbaf0bfdd',
created_at: '2014-05-13T18:18:18Z',
updated_at: null },
entity:
{ name: '100GB:50free',
non_basic_services_allowed: false,
total_services: 50,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 102400,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '9d078b97-0dab-4563-aea5-852b1fb50129',
url: '/v2/quota_definitions/9d078b97-0dab-4563-aea5-852b1fb50129',
created_at: '2014-09-11T02:32:49Z',
updated_at: null },
entity:
{ name: '10GB:30free',
non_basic_services_allowed: false,
total_services: 30,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 10240,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '851c99c6-7bb3-400f-80a0-a06962e0c5d3',
url: '/v2/quota_definitions/851c99c6-7bb3-400f-80a0-a06962e0c5d3',
created_at: '2014-10-31T17:10:53Z',
updated_at: '2014-11-04T23:53:50Z' },
entity:
{ name: '25GB:100free',
non_basic_services_allowed: false,
total_services: 100,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 25600,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '5ad22d2c-1519-4e17-b555-f702fb38417e',
url: '/v2/quota_definitions/5ad22d2c-1519-4e17-b555-f702fb38417e',
created_at: '2015-02-02T22:18:44Z',
updated_at: '2015-04-22T00:36:14Z' },
entity:
{ name: 'PCF-H',
non_basic_services_allowed: true,
total_services: 1000,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 204800,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: 'cf04086c-ccf9-442c-b89a-3f3fbcd365e3',
url: '/v2/quota_definitions/cf04086c-ccf9-442c-b89a-3f3fbcd365e3',
created_at: '2015-05-04T19:20:47Z',
updated_at: '2015-05-04T19:26:14Z' },
entity:
{ name: 'oreilly',
non_basic_services_allowed: true,
total_services: 10000,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 307200,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } } ]
√ The platform returns Quota Definitions from Organizations (720ms)

I suppose that the best practice is to assign each organization a
specific quota.
How do I set a quota as the default?
How do I configure it?

Juan Antonio


Error uploading application when pushing application

Jim Lin <jimlintw922@...>
 

CF Version: 215

Description: My push command is `cf push myapp -p myapp.war -m 512m -t 120` and I got the error message "Error uploading application". The detailed trace log is as follows:

============== Start of Log ==============
REQUEST: [2015-10-23T16:12:49+08:00]
GET /v2/jobs/a4866929-aff5-41bb-8891-0540ba45e97c HTTP/1.1
Host: api.140.92.27.254.xip.io
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: go-cli 6.12.0-8c65bbd / linux



RESPONSE: [2015-10-23T16:12:49+08:00]
HTTP/1.1 200 OK
Content-Length: 491
Content-Type: application/json;charset=utf-8
Date: Fri, 23 Oct 2015 08:14:56 GMT
Server: nginx
X-Cf-Requestid: 7cea1ef8-d14a-4260-4b3c-dcc387684911
X-Content-Type-Options: nosniff
X-Vcap-Request-Id: 244f6491-caae-43b4-69c8-9e80f4a61c83::38d83968-cd06-4ede-8531-1356d08cf38d

{
"metadata": {
"guid": "a4866929-aff5-41bb-8891-0540ba45e97c",
"created_at": "2015-10-23T08:14:51Z",
"url": "/v2/jobs/a4866929-aff5-41bb-8891-0540ba45e97c"
},
"entity": {
"guid": "a4866929-aff5-41bb-8891-0540ba45e97c",
"status": "failed",
"error": "Use of entity>error is deprecated in favor of entity>error_details.",
"error_details": {
"error_code": "UnknownError",
"description": "An unknown error occurred.",
"code": 10001
}
}
}
FAILED
Error uploading application.
An unknown error occurred.
FAILED
Error uploading application.
An unknown error occurred.

============== End of Log ==============

How do I diagnose this to find the root cause?

Thanks all.

Sincerely,
Jim


Re: REST API endpoint for accessing application logs

Warren Fernandes
 

Loggregator doesn't store any logs. The most it does is maintain a buffer, as mentioned above, which defaults to 100 log lines. If you wish to store logs, you can forward them to third-party syslog drains and other consumers.
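For example, forwarding logs via a user-provided syslog drain (a sketch; the
drain URL and app name are illustrative):

cf cups my-log-drain -l syslog://logs.example.com:514
cf bind-service myapp my-log-drain
cf restart myapp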


[lattice-release] v0.6.0

David Wadden
 

Phew! We spent the last few weeks reworking our cluster internals and build
process to bring you the best Lattice yet. Functionally, this is not a big
release, but the changes give us a noticeably more stable cluster (fewer
random errors around startup) and open up a lot of possibilities for
future functionality.

There are a number of breaking changes, most notably the loss of support for
DigitalOcean, Google Compute Engine, and OpenStack. These platforms don't
have the same ability to publish public base images like Vagrant boxes on
Atlas (Vagrant Cloud) or AMIs on AWS. We're currently prioritizing
whether/how soon we can bring back support for those platforms, and what it
might look like (packer null builder
<https://www.packer.io/docs/builders/null.html> or maybe bake-your-own
images since those platforms support private base images).

If you use Lattice (and especially if you use one of the "temporarily
discontinued" platforms), please take the time and fill out our survey at
http://goo.gl/forms/z33xBoLaeQ. We'd love your feedback on what Lattice
does for you and what platform(s) you're using (or would like to).

Quick rundown:

- Cluster
- Retooling of Lattice build and deployment to use packer-bosh
<https://github.com/cppforlife/packer-bosh>
- Diego 0.1434.0, Garden-Linux 0.307.0, CF v218, Routing 0.99.0
- Default vagrant up <http://lattice.cf/docs/vagrant/> target is now
local.lattice.cf
- Simpler setup for terraform apply
<http://lattice.cf/docs/terraform/>
- CLI
- Provided by cluster, then ltc sync updates itself from cluster
- Supports setting user context by USER directive for docker images
- Define HTTP routes to fully-qualified domains or context paths
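A rough sketch of the new CLI flags (app name, image, routes, and user are
illustrative; see the release notes below for exact syntax):

ltc create myapp cloudfoundry/lattice-app \
  --http-route myapp.mycluster.example.com \
  --http-route mycluster.example.com/myapp \
  --user nonroot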

We updated a lot of our documentation to go with the cluster changes, but
there are still some broken links and outdated contents. We're working
towards open-sourcing the website contents themselves, but for now if you
want to fix something on lattice.cf, we encourage you to open a GitHub
issue <https://github.com/cloudfoundry-incubator/lattice-release/issues/new>
.

Full release notes are included below.

As always:
- If you think you've found a bug, please file a GitHub issue.
- If you have a contribution, we'd be happy to give you feedback via a
Pull Request.
- You can track our prioritized queue of work at:
http://bit.ly/lattice-tracker

--
David Wadden
Product Manager
Pivotal Software, Inc.
dwadden(a)pivotal.io

---------- Forwarded message ---------
From: davidwadden <notifications(a)github.com>
Date: Thu, Oct 22, 2015 at 3:33 PM
Subject: [lattice-release] v0.6.0
To: cloudfoundry-incubator/lattice-release <
lattice-release(a)noreply.github.com>



*Help us help you, we'd love to hear your thoughts and suggestions on our
survey <http://goo.gl/forms/z33xBoLaeQ>! Your answers will help us
understand how you use Lattice and inform upcoming feature work.*

Breaking Changes

- *v0.6.0* does not work on DigitalOcean, Google Compute Engine,
Openstack
- Please continue to use the *v0.5.0*
<https://github.com/cloudfoundry-incubator/lattice-release/releases/tag/v0.5.0>
bundle to deploy to these platforms
- Lack of support for public user-created images (like AMIs) requires
different deployment strategy
- Additional discussions about this feature: DigitalOcean
<https://digitalocean.uservoice.com/forums/136585-digitalocean/suggestions/3249642-share-an-image-w-another-account>,
Google Compute Engine
<https://cloud.google.com/compute/docs/images#public_images>
- vagrant up expects ltc target local.lattice.cf by default [#102582770
<https://www.pivotaltracker.com/story/show/102582770>]
- Removed Terraform module
<https://terraform.io/docs/modules/create.html> definition to simplify
provisioning
- Configured using terraform.tfvars instead of lattice.<platform>.tf [
#104919576 <https://www.pivotaltracker.com/story/show/104919576>]
- terraform get -update no longer necessary
- ltc launch-droplet no longer accepts --working-dir [#104935318
<https://www.pivotaltracker.com/story/show/104935318>]
- Define multiple routes by passing the --http-route or --tcp-route flag
multiple times [#105631892
<https://www.pivotaltracker.com/story/show/105631892>]
- Retire ltc update-routes [#104177142
<https://www.pivotaltracker.com/story/show/104177142>]
- Working with the Lattice development environment
<https://github.com/cloudfoundry-incubator/lattice-release#development>
has changed (significantly) [#105305792
<https://www.pivotaltracker.com/story/show/105305792>]
- Lattice has been split into separate cluster and CLI repositories
- Moved cloudfoundry-incubator/lattice
<https://github.com/cloudfoundry-incubator/lattice/tree/legacy> to
cloudfoundry-incubator/lattice-release
<https://github.com/cloudfoundry-incubator/lattice-release>
- Forked cloudfoundry-incubator/lattice/ltc
<https://github.com/cloudfoundry-incubator/lattice/tree/legacy/ltc>
to cloudfoundry-incubator/ltc
<https://github.com/cloudfoundry-incubator/ltc>

Complete retooling of Lattice build and cluster deployment [##2124074
<https://www.pivotaltracker.com/epic/show/2124074>]

The Lattice build process has been completely retooled to create images
that are fully configured with all Lattice
<https://github.com/cloudfoundry-incubator/lattice-release> + Diego
<https://github.com/cloudfoundry-incubator/diego-release> microservices at
build time. We use packer-bosh <https://github.com/cppforlife/packer-bosh>
to bake the Diego, Loggregator, and Routing BOSH releases into the Lattice
base image. This ensures the configurations never get out of sync with the
mainline CF versions. Thus, we greatly improve cluster stability. Users do
not and will not need to understand or use BOSH to deploy Lattice.

New Features

*Δ* indicates a *breaking change*.

Cluster

- Diego upgraded from *0.1424.1*
<https://github.com/cloudfoundry-incubator/diego-release/releases/tag/0.1424.1>
to *0.1434.0*
<https://github.com/cloudfoundry-incubator/diego-release/releases/tag/0.1434.0>
- Garden-Linux upgraded from *0.295.0*
<https://github.com/cloudfoundry-incubator/garden-linux-release/releases/tag/v0.295.0>
to *0.307.0*
<https://github.com/cloudfoundry-incubator/garden-linux-release/releases/tag/v0.307.0>
- Fixes #187
<https://github.com/cloudfoundry-incubator/lattice-release/issues/187>:
Cells disk is full cloudfoundry-incubator/garden-linux-release#7
<https://github.com/cloudfoundry-incubator/garden-linux-release/issues/7>
[#102180368 <https://www.pivotaltracker.com/story/show/102180368>]
- CF upgraded from *v213-93-g8a4f752*
<https://github.com/cloudfoundry/cf-release/tree/v213-93-g8a4f752> to
*v218* <https://github.com/cloudfoundry/cf-release/releases/tag/v218> [
#100518218 <https://www.pivotaltracker.com/story/show/100518218>]
- Works on Vagrant
- VirtualBox [#104128040
<https://www.pivotaltracker.com/story/show/104128040>]
- VMWare Fusion [#104921036
<https://www.pivotaltracker.com/story/show/104921036>]
- AWS (all regions) [#104920976
<https://www.pivotaltracker.com/story/show/104920976>] [#105827024
<https://www.pivotaltracker.com/story/show/105827024>]
- Replace shared folders with file provisioners for Vagrant VMs [
#105732128 <https://www.pivotaltracker.com/story/show/105732128>]
- vagrant up --provider=aws works to all AWS regions [#105376966
<https://www.pivotaltracker.com/story/show/105376966>]
- Works on Terraform (AWS only) (*Δ*)
- AWS (all regions) [#104919576
<https://www.pivotaltracker.com/story/show/104919576>] [#105827024
<https://www.pivotaltracker.com/story/show/105827024>] [#105827024
<https://www.pivotaltracker.com/story/show/105827024>]
- Removed Terraform module
<https://terraform.io/docs/modules/create.html> to simplify
provisioning
- Configured using terraform.tfvars instead of
lattice.<platform>.tf (*Δ*)
- terraform get -update no longer necessary

CLI

- ltc should be downloadable from the Lattice cluster [#102877664
<https://www.pivotaltracker.com/story/show/102877664>]
- ltc sync updates itself from the cluster [#102877664
<https://www.pivotaltracker.com/story/show/102877664>] [#105668046
<https://www.pivotaltracker.com/story/show/105668046>] [#102482290
<https://www.pivotaltracker.com/story/show/102482290>]
- Vendor ltc dependencies with submodules instead of Godeps [#101770536
<https://www.pivotaltracker.com/story/show/101770536>]
- ltc launch-droplet no longer accepts --working-dir [#104935318
<https://www.pivotaltracker.com/story/show/104935318>] (*Δ*)
- ltc build-droplet and ltc launch-droplet no longer use privileged
containers [#104921458
<https://www.pivotaltracker.com/story/show/104921458>]
- ltc create --privileged starts a docker image with a privileged
container [#105355654
<https://www.pivotaltracker.com/story/show/105355654>]
- ltc supports improved user namespacing [#105324808
<https://www.pivotaltracker.com/story/show/105324808>] [#105328688
<https://www.pivotaltracker.com/story/show/105328688>]
- ltc create --user specifies the user context of a docker app [
#104917574 <https://www.pivotaltracker.com/story/show/104917574>]
- Next, uses the USER directive from docker metadata [#104917678
<https://www.pivotaltracker.com/story/show/104917678>]
- Lastly, defaults to "root" [#104918540
<https://www.pivotaltracker.com/story/show/104918540>]
- Routing enhancements for ltc
- The below changes apply to ltc create, ltc launch-droplet, and ltc
update
- Define multiple routes by passing the --http-route or --tcp-route
flag multiple times [#105631892
<https://www.pivotaltracker.com/story/show/105631892>] (*Δ*)
- HTTP/TCP routes determine default container port for single port
apps [#105635660 <https://www.pivotaltracker.com/story/show/105635660>
]
- #104
<https://github.com/cloudfoundry-incubator/lattice-release/issues/104>,
#137
<https://github.com/cloudfoundry-incubator/lattice-release/issues/137>:
Custom domains in routes [#93628052
<https://www.pivotaltracker.com/story/show/93628052>] [#96562554
<https://www.pivotaltracker.com/story/show/96562554>]
- #217
<https://github.com/cloudfoundry-incubator/lattice-release/issues/217>:
Use of context path routes with Lattice [#105301140
<https://www.pivotaltracker.com/story/show/105301140>]
- Retire ltc update-routes [#104177142
<https://www.pivotaltracker.com/story/show/104177142>] (*Δ*)

Bug Fixes

- Modify docker image examples so app(s) start properly [#105069548
<https://www.pivotaltracker.com/story/show/105069548>] [#105881880
<https://www.pivotaltracker.com/story/show/105881880>]
- Postgres docker image requires ltc create --privileged to start [
#105071050 <https://www.pivotaltracker.com/story/show/105071050>]

Interestings

- Configure local.lattice.cf to replace 192.168.11.11.xip.io [#102582770
<https://www.pivotaltracker.com/story/show/102582770>] (*Δ*)
- Default timeout on ltc test increased to 5m [#105622190
<https://www.pivotaltracker.com/story/show/105622190>]
- Longer timeout to adjust for AWS EBS volume no longer being
pre-warmed
- vagrant up works on Windows host with AWS provider [#98709384
<https://www.pivotaltracker.com/story/show/98709384>]

CI / Packaging

- Create pipeline that does docker build and docker push from
cloudfoundry-incubator/lattice-ci
- Create pipeline that deploys from
cloudfoundry-incubator/lattice-release [#104919732
<https://www.pivotaltracker.com/story/show/104919732>] [#105306942
<https://www.pivotaltracker.com/story/show/105306942>]
- CI builds and publishes vagrant boxes to Atlas
<https://atlas.hashicorp.com/>
- VirtualBox, VMWare Fusion [#105496810
<https://www.pivotaltracker.com/story/show/105496810>]
- AWS [#105496796
<https://www.pivotaltracker.com/story/show/105496796>]
- Lattice has been split into separate cluster and CLI repositories
- cloudfoundry-incubator/lattice
<https://github.com/cloudfoundry-incubator/lattice/tree/legacy> moved
to cloudfoundry-incubator/lattice-release
<https://github.com/cloudfoundry-incubator/lattice-release>
- cloudfoundry-incubator/lattice/ltc
<https://github.com/cloudfoundry-incubator/lattice/tree/legacy/ltc>
forked to cloudfoundry-incubator/ltc
<https://github.com/cloudfoundry-incubator/ltc>
- Consolidate architecture-specific bundles into single bundle [
#102485658 <https://www.pivotaltracker.com/story/show/102485658>]
- Bundle no longer includes ltc; this is now served by the cluster [
#102877664 <https://www.pivotaltracker.com/story/show/102877664>]

Documentation

- Update documentation for cluster changes [#105488088
<https://www.pivotaltracker.com/story/show/105488088>]
- Vagrant Platforms <http://lattice.cf/docs/vagrant/> [#105491060
<https://www.pivotaltracker.com/story/show/105491060>]
- Terraform Platforms <http://lattice.cf/docs/terraform/> [#95925124
<https://www.pivotaltracker.com/story/show/95925124>]
- Document how to vagrant up using AWS provider [#105491060
<https://www.pivotaltracker.com/story/show/105491060>]
- Replace '192.168.11.11.xip.io' with 'local.lattice.cf' as default
system domain [#102582848
<https://www.pivotaltracker.com/story/show/102582848>] (*Δ*)
- #220
<https://github.com/cloudfoundry-incubator/lattice-release/issues/220>,
#221 <https://github.com/cloudfoundry-incubator/lattice-release/pull/221>:
Upgrade Vagrant to support VirtualBox 5.x [#106054292
<https://www.pivotaltracker.com/story/show/106054292>] [#106052660
<https://www.pivotaltracker.com/story/show/106052660>]
- Document setting up the v0.6.0+ development environment on VirtualBox [
#105305792 <https://www.pivotaltracker.com/story/show/105305792>] (*Δ*)
- Update ltc syntax for user context and routing functionality [
#105069548 <https://www.pivotaltracker.com/story/show/105069548>] [
#105635874 <https://www.pivotaltracker.com/story/show/105635874>]

Known Issues

- TCP routes are not deleted when route is removed / app(s) are stopped
#208
<https://github.com/cloudfoundry-incubator/lattice-release/issues/208> [
##1940024 <https://www.pivotaltracker.com/epic/show/1940024>]
- Two apps with same TCP route defined will round-robin between two
separate apps [#105929084
<https://www.pivotaltracker.com/story/show/105929084>]


View it on GitHub
<https://github.com/cloudfoundry-incubator/lattice-release/releases/tag/v0.6.0>
.


CF CLI Release v6.13.0

Koper, Dies <diesk@...>
 

The CF CLI team cut 6.13.0. Release notes and binaries are available at:


https://github.com/cloudfoundry/cli#downloads


Note that we have simplified the download matrix and filenames are being updated to include the release version.

Let us know what you think!


Highlights of this release include:


Diego GA


In alignment with the effort to get to a GA version of Diego [0] in CF-Release, this version of the CLI includes new commands specific to the Diego component of runtime. These commands have been pulled into the core CLI from the 2 existing plugins [1] [2]. Among the features, the highlights are:


· A user can now ssh to an app container

· `cf push` includes a new flag to specify a docker image
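For example (a sketch; the app and image names are illustrative):

cf ssh myapp
cf push myapp -o cloudfoundry/lattice-app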


[0] https://github.com/cloudfoundry-incubator/diego-design-notes/blob/master/migrating-to-diego.md#installing-the-diego-enabler-cli-plugin

[1] https://github.com/cloudfoundry-incubator/diego-ssh

[2] https://github.com/cloudfoundry-incubator/diego-cli-plugin


Other Features:

· Plugin install now prompts interactively and provides a warning to inform the user of risk

· `cf scale` can now scale an app to zero instances
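For example (illustrative app name):

cf scale myapp -i 0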


Bug Fixes:

· Fixed an issue where a password containing a double-quote or backtick exposed part of the password in cleartext in CF_TRACE output

· `cf login` with the --sso flag was providing a link with an http URL; fixed so that it now provides an https URL.


Improved User Experience/Error Messages:

· Attempts to delete a shared domain with `cf delete-domain` now fail early

· Improved error message when a `cf curl` request is not properly formed

· Improved message when no users are found in `cf org-users` and `cf space-users`

· Improved message when an app push times out due to a wrong port specification


New Plugins:

· Firehose Nozzle Plugin http://github.com/pivotal-cf-experimental/nozzle-plugin

· Cloud Deployment Plugin http://github.com/xchapter7x/deploycloud


Also notable:


Updated CLI to Go 1.5.1, and added a --build flag to list this version.


Greg Oehmen & Dies Köper
Cloud Foundry CLI Product Managers


Re: Cloud Foundry DEA to Diego switch - when?

Amit Kumar Gupta
 

I'd encourage anyone wanting to switch to Diego to track the following
release marker in our project tracker:
https://www.pivotaltracker.com/story/show/76376202. When this marker is
delivered, it means the core teams have confidence that Diego can replace
the DEAs. Note that while the tracker shows the date for this release to
occur this week, there are actually several unpointed placeholder stories
above the line that will expand. Those stories will be broken down and
pointed soon, so it will be possible to get a more realistic estimate soon.

After it's deemed that Diego can replace the DEAs, there will be some time
before the DEAs will be end-of-life'd, but I would not recommend waiting
that long.

On Wed, Oct 21, 2015 at 11:07 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com>
wrote:

Hi, Amit



Our team is also planning the timeline for replacing the DEA with Diego.
Would you please let me know the approximate estimate of when the final
iteration would come? Will it be in 2016 or 2017?



Thanks,

Maggie



*From:* Amit Gupta [mailto:agupta(a)pivotal.io]
*Sent:* October 22, 2015 2:59
*To:* Discussions about Cloud Foundry projects and the system overall.
*Subject:* [cf-dev] Re: Cloud Foundry DEA to Diego switch - when?



Hi Rishi,



Thanks for your question. Let's first clarify the distinction between
what you deploy -- bosh releases (versioned packages of source code and
binaries) -- and how you deploy things -- bosh deployment (a manifest of
which releases to use, what code/binaries from those releases to place on
nodes in your deployment cluster, property/credential configuration,
networking and compute resources, etc.).



diego-release may not change, although it may be split into smaller
releases, e.g. the cc-bridge part consisting of the components which talk
to CC, and the diego runtime part consisting of components responsible for
scheduling, running, and health-monitoring containerized workloads.



cf-release will undergo heavy changes. We are currently breaking it apart
entirely, into separate releases: consul, etcd, logging-and-metrics,
identity, routing, API, nats, postgres, and existing runtime backend (DEA,
Warden, HM9k).



In addition to breaking up cf-release, we are working on cf-deployment[1],
this will give you the same ability to deploy the Cloud Foundry PaaS as you
know it today, but composed of multiple releases rather than the monolithic
cf-release. We will ensure that cf-deployment has versioning and tooling
to make it easy to deploy everything at versions that are known to work
together.



For the first major iteration of cf-deployment, it will deploy all the
existing components of cf-release, but coming from separate releases. You
can still deploy diego separately (configured to talk to the CC) as you do
today.



The second major iteration will be to leverage new BOSH features[2], such
as links, AZs, cloud config, and global networking to simplify the manifest
generation for cf-deployment. Again, you will still be able to deploy
diego separately alongside your cf deployment.



The third iteration is to fold the diego-release deployment strategies
into cf-deployment itself, so you'll have a single manifest deploying DEAs
and Diego side-by-side.



The final iteration will be to remove the DEAs from cf-deployment and stop
supporting the release that contains them.



As to your question of defaults, there are several definitions of
"default". You can set Diego to be the default backend today[3]. You have
to opt in to this, but then anyone using the platform you deployed will
have their apps run on Diego by default. Pivotal Web Services, for
example, now defaults to Diego as the backend. At some point, Diego will be
the true default backend, and you will have to opt-out of it (either at the
CC configuration level, or at the individual app level). Finally, at a
later point in time, DEAs will no longer be supported and Diego will be the
only backend option.
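For example, opting in today looks roughly like this in your CF manifest (a
sketch; the property is the one referenced in [3]):

properties:
  cc:
    default_to_diego_backend: true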



We are actively working on a timeline for all these things. You can see
the Diego team's public tracker has a release marker[4] for when Diego will
be capable of replacing the DEAs. After reaching that release marker,
there will be some time given for the rest of the community to switch over
before declaring end-of-life for the DEAs.



[1] https://github.com/cloudfoundry/cf-deployment

[2] https://github.com/cloudfoundry/bosh-notes/

[3]
https://github.com/cloudfoundry/cf-release/blob/v222/jobs/cloud_controller_ng/spec#L396-L398

[4] https://www.pivotaltracker.com/story/show/76376202



Thanks,

Amit, OSS Release Integration PM



On Wed, Oct 21, 2015 at 10:31 AM, R M <rishi.investigate(a)gmail.com> wrote:

I am trying to understand when Diego will become the default runtime of
Cloud Foundry. The latest cf-release is still using the DEA and, if my
understanding is correct, at some stage a new cf-release version will come
out with Diego and perhaps change to v3. Do we have any idea of when/if
this will happen? Is it safe to assume that diego-release on GitHub will
slowly transition into cf-release?

Thanks.



Re: CF-RELEASE v202 UPLOAD ERROR

Amit Kumar Gupta
 

Try running "bosh cck" and recreating VMs from last known apply spec. You
should also make sure that the IPs you're allocating to your jobs are
accessible from the BOSH director VM.

On Thu, Oct 22, 2015 at 5:27 AM, Parthiban Annadurai <senjiparthi(a)gmail.com>
wrote:

Yes, sure Amit. I have attached both files to this mail. Could you please
take a look? Thanks.



On 21 October 2015 at 19:49, Amit Gupta <agupta(a)pivotal.io> wrote:

Can you share the output of "bosh vms" and "bosh task 51 --debug". It's
preferable if you copy the terminal outputs and paste them to Gists or
Pastebins and share the links.

On Tue, Oct 20, 2015 at 6:18 AM, James Bayer <jbayer(a)pivotal.io> wrote:

Sometimes a message like that is due to networking issues. Do the BOSH
director and the VM it is creating have an available network path to reach
each other? Sometimes SSH'ing in to the VM that is identified can yield
more debug clues.

On Tue, Oct 20, 2015 at 5:09 AM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Thanks Bharath and Amit for the helpful solutions. I have gotten past
that error. Now bosh deploy gets stuck, as shown in the attached image.
Could anyone help, please?

Regards

Parthiban A



On 20 October 2015 at 11:57, Amit Gupta <agupta(a)pivotal.io> wrote:

Bharath, I think you mean to increase the *disk* size on the
compilation VMs, not the memory size.

Parthiban, the error message is happening during compilation, saying
"No space left on device". This means your compilation VMs are running out
of disk space, so you need to increase the disk allocated to them. In the
"compilation" section of your deployment manifest you can specify
"cloud_properties"; this is where you will specify disk size. These
"cloud_properties" look the same as the cloud_properties specified for a
resource pool. Depending on your IaaS, the structure of the
cloud_properties section differs. See here:
https://bosh.io/docs/deployment-manifest.html#resource-pools-cloud-properties

On Mon, Oct 19, 2015 at 11:13 PM, Bharath Posa <bharathp(a)vedams.com>
wrote:

Hi Parthiban,

It seems you are running out of space in the VM in which you are
compiling. Try to increase the memory size of your compilation VM.

regards
Bharath



On Mon, Oct 19, 2015 at 7:39 PM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Hello All,
Thanks all for the helpful suggestions. Actually, we are
now facing the following issue when kicking off bosh deploy:

Done compiling packages >
nats/d3a1f853f4980682ed8b48e4706b7280e2b7ce0e (00:01:07)
Failed compiling packages >
buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157: Action Failed
get_task: Task aba21e6a-2031-4a69-5b72-f238ecd07051 result: Compiling
package buildpack_php: Compressing compiled package: Shelling out to tar:
Running command: 'tar czf
/var/vcap/data/tmp/bosh-platform-disk-TarballCompressor-CompressFilesInDir762165297
-C
/var/vcap/data/packages/buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157.1-
.', stdout: '', stderr: '
gzip: stdout: No space left on device
': signal: broken pipe (00:02:41)
Failed compiling packages (00:02:41)

Error 450001: Action Failed get_task: Task
aba21e6a-2031-4a69-5b72-f238ecd07051 result: Compiling package
buildpack_php: Compressing compiled package: Shelling out to tar: Running
command: 'tar czf
/var/vcap/data/tmp/bosh-platform-disk-TarballCompressor-CompressFilesInDir762165297
-C
/var/vcap/data/packages/buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157.1-
.', stdout: '', stderr: '
gzip: stdout: No space left on device
': signal: broken pipe

Could anyone help with this issue?

Regards

Parthiban A

On 19 October 2015 at 14:30, Bharath Posa <bharathp(a)vedams.com>
wrote:

Hi Parthiban,

Can you do a checksum of the tar file? It should come out like this:

sha1: b6f596eaff4c7af21cc18a52ef97e19debb00403

For example:

sha1sum {file}

regards
Bharath

On Mon, Oct 19, 2015 at 1:12 PM, Eric Poelke <epoelke(a)gmail.com>
wrote:

You actually do not need to download it. Just run:

`bosh upload release
https://bosh.io/d/github.com/cloudfoundry/cf-release?v=202`

The director will pull in the release directly from bosh.io.

--
Thank you,

James Bayer


Re: REST API endpoint for accessing application logs

Ponraj E
 

Hi Warren,

Thanks. Regarding #1, does Loggregator clear the logs after a certain interval? If yes, how long is that interval and where do we configure it?


--
Ponraj


Re: [abacus] Configuring Abacus applications

Jean-Sebastien Delfino
 

Hi Piotr,

The answers to your questions really depend on the performance of the
environment and database you're integrating Abacus with, but let me try to
give you pointers to some of the factors you'll want to watch in your
performance tuning. Sorry for such a long email, but you had several
questions bundled in there and the answers are not simple yes/no answers.

- is there a recommended minimal number of instances of Abacus
applications

I recommend 2 of each as a minimum for availability if instances crash or
if you need to restart them individually.

- how would above depend on expected number of submissions or documents
to be processed

This really depends on the performance of your deployment environment and
database cluster. More instances will allow you to process more docs
faster, scaling linearly up to the load your database can take.

- is there a dependency between number of instances of applications i.e.
do they have to match

You should be able to tune each application with a different number of
instances (see note *** below for additional info).

Here are some of the key factors to consider for tuning:

Collector service
- stateless, receives batches of submitted usage over HTTP, does 1 db write
per batch, 1 db write per usage doc;
- increase to provide better response time to resource providers as they
submit usage.

Metering service
- stateless, receives individual submitted usage docs from collector, does
2 db writes per usage doc;
- you can probably size it the same or a bit more than the collector app as
it's processing more (individual) docs than the submitted batches.

Accumulator service
- stateful as it accumulates usage per resource instance, does 2 db writes
per usage doc, 1 read per approx 100 usage docs;
- serializes updates to the accumulated usage per resource instance, so
increase if your individual resource instances are getting a lot of usage;
- resource instances are distributed to db partitions, one partition per
instance, and that instance is the only reader/writer from/to that
partition;
- I've seen the performance of the accumulator scale linearly from 1 to 16
instances, recommend to test its performance in your environment.

Aggregator service
- stateful as it aggregates usage per organization, does 2 db writes per
usage doc, 1 read per approx 100 usage docs;
- same performance characteristics and observations as for the accumulator,
except that the write serialization is on an organization basis.

Rating service
- stateless, just adds rated usage to input aggregated usage, no
serialization here, 2 db writes per usage doc;
- since there's no serialization you may be OK with less instances than the
accumulator and aggregator;
- on the other hand you don't want 16 aggregators to overload 2 instances
of the rating service, so look for a middle ground.

Reporting
- stateless, one db read per report per org;
- scales like a regular Web app, gated by the query performance on your db;
- recommend 2 instances minimum for availability then increase as your
reporting load increases;
- delegates org lookups to your account info service so include the
performance of that service in your analysis as well.
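For example, bumping instance counts per app (a sketch; the app names are
assumptions based on a typical Abacus deployment):

cf scale abacus-usage-accumulator -i 4
cf scale abacus-usage-aggregator -i 4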

- what is the default and recommended number of DB partitions and how can
they be configured (time based as well as key based)

Time-based
- one per month, as most db writes and reads target the current month, and
sometimes the previous month;
- with that, monthly dbs can be archived once they're not needed anymore.

Key based
- depends how many resource instances and organizations you have and the
performance of your database as its volume increases;
- for the accumulator and aggregator services, you need one db partition
per app instance, reserved to that instance.

- how would above depend on expected number of documents
Same as your 2nd question, if I understood it correctly.

[***] While researching this I found that although you can configure each
app with a different number of instances, it's not very convenient to do
right now as we're currently using a single environment variable to
configure the number of db partitions a service uses and the number of
instances configured for the next service in the Abacus processing
pipeline. I'll open a Github issue to change that and use different env
variables to configure these two different aspects, as that'll make it
easier for you to use different numbers of db partitions and instances in
the accumulator and the aggregator services for example.

HTH


- Jean-Sebastien

On Wed, Oct 21, 2015 at 9:04 AM, Piotr Przybylski <piotrp(a)us.ibm.com> wrote:

Hi,
A couple of questions about configuring Abacus, specifically the recommended
settings and how to configure them

- is there a recommended minimal number of instances of Abacus applications
- how would above depend on expected number of submissions or documents to
be processed
- is there a dependency between number of instances of applications i.e.
do they have to match
- what is the default and recommended number of DB partitions and how can
they be configured (time based as well as key based)
- how would above depend on expected number of documents

Thank you,

Piotr



Re: REST API endpoint for accessing application logs

Warren Fernandes
 

For #3,

The Loggregator team currently doesn't manage the cf-java-client library. There seems to be another post in the community here (https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/message/JWESJLSWV44KKLP7LTMSLB3L5N2I62BG/)
talking about a new v2 java client that will be more useful. If you see that there is some unexpected truncation, I'd suggest creating an issue on that repo so that they can fix it in v2.


Re: CF-RELEASE v202 UPLOAD ERROR

Parthiban Annadurai <senjiparthi@...>
 

Yes, sure Amit. I have attached both files to this mail. Could you please
take a look? Thanks.

On 21 October 2015 at 19:49, Amit Gupta <agupta(a)pivotal.io> wrote:

Can you share the output of "bosh vms" and "bosh task 51 --debug". It's
preferable if you copy the terminal outputs and paste them to Gists or
Pastebins and share the links.

On Tue, Oct 20, 2015 at 6:18 AM, James Bayer <jbayer(a)pivotal.io> wrote:

Sometimes a message like that is due to networking issues. Do the BOSH
director and the VM it is creating have an available network path to reach
each other? Sometimes SSH'ing in to the VM that is identified can yield
more debug clues.

On Tue, Oct 20, 2015 at 5:09 AM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Thanks Bharath and Amit for the helpful solutions. I have gotten past that
error. Now bosh deploy gets stuck, as shown in the attached image. Could
anyone help, please?

Regards

Parthiban A



On 20 October 2015 at 11:57, Amit Gupta <agupta(a)pivotal.io> wrote:

Bharath, I think you mean to increase the *disk* size on the
compilation VMs, not the memory size.

Parthiban, the error message is happening during compilation, saying
"No space left on device". This means your compilation VMs are running out
of disk space, so you need to increase the disk allocated to them. In the
"compilation" section of your deployment manifest you can specify
"cloud_properties"; this is where you will specify disk size. These
"cloud_properties" look the same as the cloud_properties specified for a
resource pool. Depending on your IaaS, the structure of the
cloud_properties section differs. See here:
https://bosh.io/docs/deployment-manifest.html#resource-pools-cloud-properties

On Mon, Oct 19, 2015 at 11:13 PM, Bharath Posa <bharathp(a)vedams.com>
wrote:

Hi Parthiban,

It seems you are running out of space in the VM in which you are
compiling. Try to increase the memory size of your compilation VM.

regards
Bharath



On Mon, Oct 19, 2015 at 7:39 PM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Hello all,
Thanks, all, for the helpful suggestions. We are now
facing the following issue when running bosh deploy:

Done compiling packages >
nats/d3a1f853f4980682ed8b48e4706b7280e2b7ce0e (00:01:07)
Failed compiling packages >
buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157: Action Failed
get_task: Task aba21e6a-2031-4a69-5b72-f238ecd07051 result: Compiling
package buildpack_php: Compressing compiled package: Shelling out to tar:
Running command: 'tar czf
/var/vcap/data/tmp/bosh-platform-disk-TarballCompressor-CompressFilesInDir762165297
-C
/var/vcap/data/packages/buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157.1-
.', stdout: '', stderr: '
gzip: stdout: No space left on device
': signal: broken pipe (00:02:41)
Failed compiling packages (00:02:41)

Error 450001: Action Failed get_task: Task
aba21e6a-2031-4a69-5b72-f238ecd07051 result: Compiling package
buildpack_php: Compressing compiled package: Shelling out to tar: Running
command: 'tar czf
/var/vcap/data/tmp/bosh-platform-disk-TarballCompressor-CompressFilesInDir762165297
-C
/var/vcap/data/packages/buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157.1-
.', stdout: '', stderr: '
gzip: stdout: No space left on device
': signal: broken pipe

Could anyone help with this issue?

Regards

Parthiban A

On 19 October 2015 at 14:30, Bharath Posa <bharathp(a)vedams.com>
wrote:

Hi Parthiban,

Can you do a checksum of the tar file? It should come out like this:

sha1: b6f596eaff4c7af21cc18a52ef97e19debb00403

For example:

sha1sum {file}

regards
Bharath

On Mon, Oct 19, 2015 at 1:12 PM, Eric Poelke <epoelke(a)gmail.com>
wrote:

You actually do not need to download it. Just run:

`bosh upload release
https://bosh.io/d/github.com/cloudfoundry/cf-release?v=202`

The director will pull in the release directly from bosh.io.



Re: How to detect this case: CF-AppMemoryQuotaExceeded

Juan Antonio Breña Moral <bren at juanantonio.info...>
 

Hi,

Using this method, I receive the memory used by the organization:

{ memory_usage_in_mb: 576 }

If I use this method:
http://apidocs.cloudfoundry.org/222/organizations/get_organization_summary.html

I receive the same information:

{ guid: '2fcae642-b4b9-4393-89dc-509ece372f7d',
name: 'DevBox',
status: 'active',
spaces:
[ { guid: 'e558b66a-1b9c-4c66-a779-5cf46e3b060c',
name: 'dev',
service_count: 4,
app_count: 2,
mem_dev_total: 576,
mem_prod_total: 0 } ] }

I think that the limit is defined in a quota definition for a space or an
organization. Using a local instance, I did some tests with methods such as:
http://apidocs.cloudfoundry.org/222/organization_quota_definitions/delete_a_particular_organization_quota_definition.html

but an organization doesn't require a quota, so I suppose that a default
quota exists. Is that correct?
In my case, the only quota is:
http://apidocs.cloudfoundry.org/222/organization_quota_definitions/list_all_organization_quota_definitions.html

[ { metadata:
{ guid: '59ce5f9d-8914-4783-a3dc-8f5f89cf023a',
url: '/v2/quota_definitions/59ce5f9d-8914-4783-a3dc-8f5f89cf023a',
created_at: '2015-07-15T12:32:30Z',
updated_at: null },
entity:
{ name: 'default',
non_basic_services_allowed: true,
total_services: 100,
total_routes: 1000,
memory_limit: 10240,
trial_db_allowed: false,
instance_memory_limit: -1 } } ]
√ The platform returns Quota Definitions from Organizations (359ms)

On Pivotal, for example, I suppose that free accounts use the default quota:

{ metadata:
{ guid: 'b72b1acb-ff4f-468d-99c0-05cd91012b62',
url: '/v2/quota_definitions/b72b1acb-ff4f-468d-99c0-05cd91012b62',
created_at: '2013-11-19T18:53:48Z',
updated_at: '2013-11-19T19:34:57Z' },
entity:
{ name: 'trial',
non_basic_services_allowed: false,
total_services: 10,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 2048,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },

But the method returns the following quotas:

[ { metadata:
{ guid: '8c4b4554-b43b-4673-ac93-3fc232896f0b',
url: '/v2/quota_definitions/8c4b4554-b43b-4673-ac93-3fc232896f0b',
created_at: '2013-11-19T18:53:48Z',
updated_at: '2013-11-19T19:34:57Z' },
entity:
{ name: 'free',
non_basic_services_allowed: false,
total_services: 0,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 0,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '7dbdcbb7-edb6-4246-a217-2031a75388f7',
url: '/v2/quota_definitions/7dbdcbb7-edb6-4246-a217-2031a75388f7',
created_at: '2013-11-19T18:53:48Z',
updated_at: '2013-11-19T19:34:57Z' },
entity:
{ name: 'paid',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 10240,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '2228e712-7b0c-4b65-899c-0fc599063e35',
url: '/v2/quota_definitions/2228e712-7b0c-4b65-899c-0fc599063e35',
created_at: '2013-11-19T18:53:48Z',
updated_at: '2014-05-07T18:33:19Z' },
entity:
{ name: 'runaway',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 204800,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: 'b72b1acb-ff4f-468d-99c0-05cd91012b62',
url: '/v2/quota_definitions/b72b1acb-ff4f-468d-99c0-05cd91012b62',
created_at: '2013-11-19T18:53:48Z',
updated_at: '2013-11-19T19:34:57Z' },
entity:
{ name: 'trial',
non_basic_services_allowed: false,
total_services: 10,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 2048,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '39d630ba-66d6-4f6d-ba4e-8d45a05e99c4',
url: '/v2/quota_definitions/39d630ba-66d6-4f6d-ba4e-8d45a05e99c4',
created_at: '2014-01-23T20:03:27Z',
updated_at: null },
entity:
{ name: '25GB',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 25600,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '81226624-9e5a-4616-9b9c-6ab14aac2a03',
url: '/v2/quota_definitions/81226624-9e5a-4616-9b9c-6ab14aac2a03',
created_at: '2014-03-11T00:13:21Z',
updated_at: '2014-03-19T17:36:32Z' },
entity:
{ name: '25GB:30free',
non_basic_services_allowed: false,
total_services: 30,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 25600,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '0e7e2da4-0c74-4039-bdda-5cb575bf3c85',
url: '/v2/quota_definitions/0e7e2da4-0c74-4039-bdda-5cb575bf3c85',
created_at: '2014-05-08T03:56:31Z',
updated_at: null },
entity:
{ name: '50GB',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 51200,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: 'e9473dc8-7c84-401c-88b2-ad61fc13e33d',
url: '/v2/quota_definitions/e9473dc8-7c84-401c-88b2-ad61fc13e33d',
created_at: '2014-05-08T03:57:42Z',
updated_at: null },
entity:
{ name: '100GB',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 102400,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '21577e73-0f16-48fc-9bb5-2b30a77731ae',
url: '/v2/quota_definitions/21577e73-0f16-48fc-9bb5-2b30a77731ae',
created_at: '2014-05-08T04:00:28Z',
updated_at: null },
entity:
{ name: '75GB',
non_basic_services_allowed: true,
total_services: -1,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 76800,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '6413dedd-5c1e-4b18-ac69-e87bbaf0bfdd',
url: '/v2/quota_definitions/6413dedd-5c1e-4b18-ac69-e87bbaf0bfdd',
created_at: '2014-05-13T18:18:18Z',
updated_at: null },
entity:
{ name: '100GB:50free',
non_basic_services_allowed: false,
total_services: 50,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 102400,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '9d078b97-0dab-4563-aea5-852b1fb50129',
url: '/v2/quota_definitions/9d078b97-0dab-4563-aea5-852b1fb50129',
created_at: '2014-09-11T02:32:49Z',
updated_at: null },
entity:
{ name: '10GB:30free',
non_basic_services_allowed: false,
total_services: 30,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 10240,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '851c99c6-7bb3-400f-80a0-a06962e0c5d3',
url: '/v2/quota_definitions/851c99c6-7bb3-400f-80a0-a06962e0c5d3',
created_at: '2014-10-31T17:10:53Z',
updated_at: '2014-11-04T23:53:50Z' },
entity:
{ name: '25GB:100free',
non_basic_services_allowed: false,
total_services: 100,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 25600,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: '5ad22d2c-1519-4e17-b555-f702fb38417e',
url: '/v2/quota_definitions/5ad22d2c-1519-4e17-b555-f702fb38417e',
created_at: '2015-02-02T22:18:44Z',
updated_at: '2015-04-22T00:36:14Z' },
entity:
{ name: 'PCF-H',
non_basic_services_allowed: true,
total_services: 1000,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 204800,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } },
{ metadata:
{ guid: 'cf04086c-ccf9-442c-b89a-3f3fbcd365e3',
url: '/v2/quota_definitions/cf04086c-ccf9-442c-b89a-3f3fbcd365e3',
created_at: '2015-05-04T19:20:47Z',
updated_at: '2015-05-04T19:26:14Z' },
entity:
{ name: 'oreilly',
non_basic_services_allowed: true,
total_services: 10000,
total_routes: 1000,
total_private_domains: -1,
memory_limit: 307200,
trial_db_allowed: false,
instance_memory_limit: -1,
app_instance_limit: -1 } } ]
√ The platform returns Quota Definitions from Organizations (720ms)

I suppose that the best practice is to assign an organization a specific quota.
How do you set a quota as the default?
How is this configured?

Juan Antonio


Re: How to detect this case: CF-AppMemoryQuotaExceeded

Dieu Cao <dcao@...>
 

You can call this endpoint to retrieve the org's memory usage:
http://apidocs.cloudfoundry.org/222/organizations/retrieving_organization_memory_usage.html

You would then need to check this against the org quota.
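
For example, with the cf CLI (the GUIDs below are the ones from the earlier
post, purely for illustration):

    # current memory usage for the org
    cf curl /v2/organizations/2fcae642-b4b9-4393-89dc-509ece372f7d/memory_usage

    # find the org's quota, then compare usage against its memory_limit
    cf curl /v2/organizations/2fcae642-b4b9-4393-89dc-509ece372f7d | grep quota_definition_url
    cf curl /v2/quota_definitions/59ce5f9d-8914-4783-a3dc-8f5f89cf023a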

There's a story further down in the backlog for a similar endpoint for
space.

There was a previous PR to add endpoints that would more clearly show
quota usage for org and space, but it fell through.

-Dieu

On Wed, Oct 21, 2015 at 7:15 AM, Juan Antonio Breña Moral <
bren(a)juanantonio.info> wrote:

Hi,

Doing some tests, I detected the following scenario in my testing
environment:

Error: the string "{\n \"code\": 100005,\n \"description\": \"You have
exceeded your organization's memory limit.\",\n \"error_code\":
\"CF-AppMemoryQuotaExceeded\"\n}\n" was thrown, throw an Error :)

Does some REST call exist to know if the org/space has reached the limit?

Many thanks in advance

Juan Antonio


Re: cf": error=2, No such file or directory and error=2

Varsha Nagraj
 

Hello Mathew,

Can you please let me know how I add this to my PATH? Previously I ran the same commands on a Windows system from Eclipse, and as far as I remember I have never set any PATH environment variable on Windows.
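
For anyone hitting the same error, adding the directory that contains the
cf binary to PATH typically looks like the sketch below (the install
locations are illustrative):

    # Linux/macOS: add to ~/.bashrc or ~/.profile
    export PATH="$PATH:/usr/local/bin"

    # Windows (cmd.exe), for the current session only:
    set PATH=%PATH%;C:\Program Files\CloudFoundry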


Re: Cloud Foundry DEA to Diego switch - when?

MaggieMeng
 

Hi Amit,

Our team is also planning the timeline for replacing the DEAs with Diego. Would you please let me know your approximate estimate of when the final iteration will come? Will it be in 2016 or 2017?

Thanks,
Maggie

From: Amit Gupta [mailto:agupta(a)pivotal.io]
Sent: October 22, 2015, 2:59
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Cloud Foundry DEA to Diego switch - when?

Hi Rishi,

Thanks for your question. Let's first clarify the distinction between what you deploy -- bosh releases (versioned packages of source code and binaries) -- and how you deploy things -- bosh deployment (a manifest of which releases to use, what code/binaries from those releases to place on nodes in your deployment cluster, property/credential configuration, networking and compute resources, etc.).

diego-release may not change, although it may be split into smaller releases, e.g. the cc-bridge part consisting of the components which talk to CC, and the diego runtime part consisting of components responsible for scheduling, running, and health-monitoring containerized workloads.

cf-release will undergo heavy changes. We are currently breaking it apart entirely, into separate releases: consul, etcd, logging-and-metrics, identity, routing, API, nats, postgres, and existing runtime backend (DEA, Warden, HM9k).

In addition to breaking up cf-release, we are working on cf-deployment[1]. This will give you the same ability to deploy the Cloud Foundry PaaS as you know it today, but composed of multiple releases rather than the monolithic cf-release. We will ensure that cf-deployment has versioning and tooling to make it easy to deploy everything at versions that are known to work together.

For the first major iteration of cf-deployment, it will deploy all the existing components of cf-release, but coming from separate releases. You can still deploy diego separately (configured to talk to the CC) as you do today.

The second major iteration will be to leverage new BOSH features[2], such as links, AZs, cloud config, and global networking to simplify the manifest generation for cf-deployment. Again, you will still be able to deploy diego separately alongside your cf deployment.

The third iteration is to fold the diego-release deployment strategies into cf-deployment itself, so you'll have a single manifest deploying DEAs and Diego side-by-side.

The final iteration will be to remove the DEAs from cf-deployment and stop supporting the release that contains them.

As to your question of defaults, there are several definitions of "default". You can set Diego to be the default backend today[3]. You have to opt in to this, but then anyone using the platform you deployed will have their apps run on Diego by default. Pivotal Web Services, for example, now defaults to Diego as the backend. At some point, Diego will be the true default backend, and you will have to opt-out of it (either at the CC configuration level, or at the individual app level). Finally, at a later point in time, DEAs will no longer be supported and Diego will be the only backend option.
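
As an illustration of the per-app opt-in against the v2 API (the app name
is hypothetical):

    guid=$(cf app my-app --guid)
    cf curl "/v2/apps/$guid" -X PUT -d '{"diego": true}'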

We are actively working on a timeline for all these things. You can see the Diego team's public tracker has a release marker[4] for when Diego will be capable of replacing the DEAs. After reaching that release marker, there will be some time given for the rest of the community to switch over before declaring end-of-life for the DEAs.

[1] https://github.com/cloudfoundry/cf-deployment
[2] https://github.com/cloudfoundry/bosh-notes/
[3] https://github.com/cloudfoundry/cf-release/blob/v222/jobs/cloud_controller_ng/spec#L396-L398
[4] https://www.pivotaltracker.com/story/show/76376202

Thanks,
Amit, OSS Release Integration PM

On Wed, Oct 21, 2015 at 10:31 AM, R M <rishi.investigate(a)gmail.com> wrote:
I am trying to understand when Diego will become the default runtime of Cloud Foundry. The latest cf-release still uses the DEAs and, if my understanding is correct, at some stage a new cf-release version will come out with Diego and perhaps change to v3. Do we have any idea of when/if this will happen? Is it safe to assume that diego-release on GitHub will slowly transition into cf-release?

Thanks.


Re: REST API endpoint for accessing application logs

Gianluca Volpe <gvolpe1968@...>
 

This is the maximum number of log lines that Doppler can buffer while draining messages to remote syslog; it is a count of messages, not megabytes.
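
In the deployment manifest it is set along these lines (a sketch; see the
loggregator job spec for the exact placement):

    properties:
      doppler:
        message_drain_buffer_size: 100    # a count of messages, not MB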

Gianluca

On 21 Oct 2015, at 09:23, Ponraj E <ponraj.e(a)gmail.com> wrote:

Hi,

Short update regarding question #2 above: I learned from
http://docs.cloudfoundry.org/loggregator/ops.html that the number/size of
log messages drained to Doppler can be controlled by a BOSH deployment
manifest configuration: doppler.message_drain_buffer_size.

The documentation specifies that the default value of
doppler.message_drain_buffer_size is 100.

Is it 100 MB?