Re: Diego and Maven support
Dan,
It seems to me that there is more than one cause of this problem.
I have done the following:
1. Upgraded the cf CLI from 6.12 to 6.13.
2. Ran cf set-health-check MY_APP_NAME none.
3. Set <healthCheckTimeout>180</healthCheckTimeout> in pom.xml (see the sketch below).
4. Removed the env variables in order to allow Maven to set them again.
Still no luck.
However, I have read that there is intensive development of the Maven client plugin 2.x happening now. I will give it a try as soon as the first release is available.
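For readers unfamiliar with step 3 above, here is a minimal sketch of the pom.xml configuration in question; the plugin version, target URL, and app name are illustrative, assuming the org.cloudfoundry:cf-maven-plugin 1.x series:

```
<plugin>
  <groupId>org.cloudfoundry</groupId>
  <artifactId>cf-maven-plugin</artifactId>
  <version>1.1.3</version>
  <configuration>
    <target>https://api.example.com</target>
    <appname>my-app</appname>
    <memory>512</memory>
    <!-- seconds the platform waits for the app to report healthy before failing the push -->
    <healthCheckTimeout>180</healthCheckTimeout>
  </configuration>
</plugin>
```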
Best, Krzysztof
Doppler fails to emit logs via the syslog protocol on CF v212
Hi,
I am investigating a Doppler issue on CF v212 where sending logs to an external logging service via the syslog protocol fails. As I understand it, the following messages are supposed to be recorded in "doppler.stdout.log" if Doppler successfully got the syslog drain URL from etcd; however, they are actually missing.

Missing log messages which are expected to be shown:

{"timestamp":xxxxxx,"process_id":xxxx,"source":"doppler","log_level":"info","message":"Syslog Sink syslog://xx.xx.xx.xx:xxxx: Running.","data":null,"file":"/var/vcap/data/compile/doppler/loggregator/src/doppler/sinks/syslog/syslog_sink.go","line":56,"method":"doppler/sinks/syslog.(*SyslogSink).Run"}

{"timestamp":xxxxxx,"process_id":xxxx,"source":"doppler","log_level":"info","message":"Syslog Sink syslog://xxxxxx:xxxxxx: successfully connected.","data":null,"file":"/var/vcap/data/compile/doppler/loggregator/src/doppler/sinks/syslog/syslog_sink.go","line":112,"method":"doppler/sinks/syslog.(*SyslogSink).Run"}

Instead, there are a lot of etcd error events, such as:

{"timestamp":xxxxxxxxx,"process_id":xxx,"source":"doppler","log_level":"error","message":"AppStoreWatcher: Got error while waiting for ETCD events: store request timed out","data":null,"file":"/var/vcap/data/compile/doppler/loggregator/src/github.com/cloudfoundry/loggregatorlib/store/app_service_store_watcher.go","line":79,"method":"github.com/cloudfoundry/loggregatorlib/store.(*AppServiceStoreWatcher).Run"}

I have two questions about this.

Q1) Does anyone know what this event indicates and how it affects the CF environment? Is this event also triggered in a healthy environment (in other words, can we ignore these error messages)?

Q2) If etcd gets into trouble at some moment, which CF components are affected? I would guess at least the following. Is there anything else?
- router: to support the routing API
- hm9000: to support health checks
- doppler: to get syslog drain URLs from etcd
- syslog_drain_binder: to get syslog drain URLs from CC and then store them in etcd
- trafficcontroller, metron agents: to find healthy Dopplers to access

Note that I am also suspicious of the following errors in "syslog_drain_binder.stdout.log". This message indicates that syslog_drain_binder failed to get syslog drain URLs from CC:

{"timestamp":xxxx,"process_id":xxxx,"source":"syslog_drain_binder","log_level":"error","message":"Error when polling cloud controller: Remote server error: Unauthorized","data":null,"file":"/var/vcap/data/compile/syslog_drain_binder/loggregator/src/syslog_drain_binder/main.go","line":68,"method":"main.main"}

Therefore I have not yet concluded that etcd is the main cause of the issue; I first need to understand the exact meaning of the error event above, as well as the impact on the CF environment if something is wrong with etcd's behaviour.

Regards,
Masumi
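For anyone debugging similar symptoms, a quick first check is whether etcd itself is responsive from the Doppler VM. A sketch, with assumptions: the IP is a placeholder, port 4001 is the etcd v2 default used by cf-release of that era, and /loggregator/services is believed to be where the loggregator components keep drain state; verify both against your deployment:

```
# Is etcd answering at all?
curl -s http://ETCD_IP:4001/v2/stats/self

# What drain bindings (if any) has loggregator stored?
curl -s "http://ETCD_IP:4001/v2/keys/loggregator/services?recursive=true"
```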
Re: CF-RELEASE v202 UPLOAD ERROR
Parthiban Annadurai <senjiparthi@...>
Thanks, Amit, for your suggestions. I will let you know after regenerating the manifest.
On 24 October 2015 at 11:47, Amit Gupta <agupta(a)pivotal.io> wrote:
Regenerate your manifest.
On Fri, Oct 23, 2015 at 10:49 PM, Parthiban Annadurai < senjiparthi(a)gmail.com> wrote:
Okay, Amit. Yes, I changed my CF version from v202 to v210. Could you share the metron_agent.deployment property of the manifest that is required in v210? Thanks.
On 24 October 2015 at 10:57, Amit Gupta <agupta(a)pivotal.io> wrote:
Parthiban,
Your log.txt shows that you're using cf-release version 210, but your subject message says you're trying v202. Perhaps you've checked out v202 of cf-release and used the spiff tooling to generate the manifests from that version. v202 doesn't include the metron_agent.deployment property in its manifest, which is required in v210.
On Fri, Oct 23, 2015 at 10:07 PM, Parthiban Annadurai < senjiparthi(a)gmail.com> wrote:
I have created the manifest file using the SPIFF tool. Any issues with that?
On 23 October 2015 at 20:49, Amit Gupta <agupta(a)pivotal.io> wrote:
How did you create your manifest in the first place?
On Fri, Oct 23, 2015 at 8:17 AM, Parthiban Annadurai < senjiparthi(a)gmail.com> wrote:
After trying the suggestions, it now throws the following error:
Started preparing configuration > Binding configuration. Failed: Error filling in template `metron_agent.json.erb' for `ha_proxy_z1/0' (line 5: Can't find property `["metron_agent.deployment"]') (00:00:00)
Error 100: Error filling in template `metron_agent.json.erb' for `ha_proxy_z1/0' (line 5: Can't find property `["metron_agent.deployment"]')
Could anyone help with this?
On 22 October 2015 at 18:08, Amit Gupta <agupta(a)pivotal.io> wrote:
Try running "bosh cck" and recreating VMs from last known apply spec. You should also make sure that the IPs you're allocating to your jobs are accessible from the BOSH director VM.
On Thu, Oct 22, 2015 at 5:27 AM, Parthiban Annadurai < senjiparthi(a)gmail.com> wrote:
Yes, sure, Amit. I have attached both files to this mail. Could you please take a look? Thanks.
On 21 October 2015 at 19:49, Amit Gupta <agupta(a)pivotal.io> wrote:
Can you share the output of "bosh vms" and "bosh task 51 --debug"? It's preferable if you copy the terminal outputs, paste them to Gists or Pastebins, and share the links.
On Tue, Oct 20, 2015 at 6:18 AM, James Bayer <jbayer(a)pivotal.io> wrote:
Sometimes a message like that is due to networking issues. Do the BOSH director and the VM it is creating have an available network path to reach each other? Sometimes SSH'ing in to the VM that is identified can yield more debug clues.
On Tue, Oct 20, 2015 at 5:09 AM, Parthiban Annadurai < senjiparthi(a)gmail.com> wrote:
Thanks, Bharath and Amit, for the helpful solutions. I have gotten past that error. Now, bosh deploy gets stuck as shown in the attached image. Could anyone please help?
Regards
Parthiban A
On 20 October 2015 at 11:57, Amit Gupta <agupta(a)pivotal.io> wrote:
Bharath, I think you mean to increase the *disk* size on the compilation VMs, not the memory size.
Parthiban, the error message is happening during compilation, saying "No space left on device". This means your compilation VMs are running out of space on disk, so you need to increase the allocated disk for your compilation VMs. In the "compilation" section of your deployment manifest, you can specify "cloud_properties"; this is where you specify disk size. These "cloud_properties" look the same as the cloud_properties specified for a resource pool. Depending on your IaaS, the structure of the cloud_properties section differs. See here: https://bosh.io/docs/deployment-manifest.html#resource-pools-cloud-properties
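As an illustration only, on AWS the compilation block might look something like the following sketch; the instance type and disk size are assumptions, and the exact cloud_properties fields differ per IaaS:

```
compilation:
  workers: 4
  network: cf1
  reuse_compilation_vms: true
  cloud_properties:
    instance_type: m3.large
    ephemeral_disk:
      size: 30000   # MB; sized generously for large packages such as buildpack_php
      type: gp2
```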
On Mon, Oct 19, 2015 at 11:13 PM, Bharath Posa < bharathp(a)vedams.com> wrote:
Hi Parthiban,
It seems you are running out of space in the VM in which you are compiling. Try to increase the memory size of your compilation VM.
regards Bharath
On Mon, Oct 19, 2015 at 7:39 PM, Parthiban Annadurai < senjiparthi(a)gmail.com> wrote:
Hello All, thanks for the helpful suggestions. Actually, we are now facing the following issue while running bosh deploy:
Done compiling packages > nats/d3a1f853f4980682ed8b48e4706b7280e2b7ce0e (00:01:07) Failed compiling packages > buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157: Action Failed get_task: Task aba21e6a-2031-4a69-5b72-f238ecd07051 result: Compiling package buildpack_php: Compressing compiled package: Shelling out to tar: Running command: 'tar czf /var/vcap/data/tmp/bosh-platform-disk-TarballCompressor-CompressFilesInDir762165297 -C /var/vcap/data/packages/buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157.1- .', stdout: '', stderr: ' gzip: stdout: No space left on device ': signal: broken pipe (00:02:41) Failed compiling packages (00:02:41)
Error 450001: Action Failed get_task: Task aba21e6a-2031-4a69-5b72-f238ecd07051 result: Compiling package buildpack_php: Compressing compiled package: Shelling out to tar: Running command: 'tar czf /var/vcap/data/tmp/bosh-platform-disk-TarballCompressor-CompressFilesInDir762165297 -C /var/vcap/data/packages/buildpack_php/9c72be716ab8629d7e6feed43012d1d671720157.1- .', stdout: '', stderr: ' gzip: stdout: No space left on device ': signal: broken pipe
Could Anyone on this issue?
Regards
Parthiban A
On 19 October 2015 at 14:30, Bharath Posa < bharathp(a)vedams.com> wrote:
Hi Parthiban,
Can you do a checksum of the tar file?
It should come out like this: *sha1: b6f596eaff4c7af21cc18a52ef97e19debb00403*
For example:
*sha1sum {file}*
regards Bharath
On Mon, Oct 19, 2015 at 1:12 PM, Eric Poelke < epoelke(a)gmail.com> wrote:
You actually do not need to download it. If you just run
`bosh upload release https://bosh.io/d/github.com/cloudfoundry/cf-release?v=202`
the director will pull in the release directly from bosh.io.
-- Thank you,
James Bayer
Re: CF-RELEASE v202 UPLOAD ERROR
Regenerate your manifest.
Re: CF-RELEASE v202 UPLOAD ERROR
Parthiban Annadurai <senjiparthi@...>
Okay, Amit. Yes, I changed my CF version from v202 to v210. Could you share the metron_agent.deployment property of the manifest that is required in v210? Thanks.
Re: CF-RELEASE v202 UPLOAD ERROR
Parthiban,
Your log.txt shows that you're using cf-release version 210, but your subject message says you're trying v202. Perhaps you've checked out v202 of cf-release and used the spiff tooling to generate the manifests from that version. v202 doesn't include the metron_agent.deployment property in its manifest, which is required in v210.
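For reference, a minimal sketch of how that property appears in a v210-era manifest (the value is illustrative; generated manifests typically set it to the deployment name):

```
properties:
  metron_agent:
    deployment: my-cf-deployment   # illustrative value
```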
Re: CF-RELEASE v202 UPLOAD ERROR
Parthiban Annadurai <senjiparthi@...>
I have created the manifest file using the SPIFF tool. Any issues with that?
[abacus] Abacus v0.0.2-rc.2 available
Re: PHP extension 'gettext' doesn't work?
OK, so really sorry this took me so long to investigate, but I think I've found the issue. Ubuntu has these "language packs", and in order for gettext to work, the system has to have the relevant language pack installed. See https://help.ubuntu.com/community/Locale. You can see what language packs are installed on your system by running `locale -a`.

I was testing this on an Ubuntu docker image, and the only way I could make it work was to install the language pack. As soon as I did that and restarted Apache HTTPD, I started to get my translations.

Running `locale -a` on the `cflinuxfs2` docker image shows that only these language packs are installed:

```
$ locale -a
C
C.UTF-8
POSIX
en_US.utf8
```

When I set up a test app and run it on CF, I get the same results. Only `en_US.utf8` works.

Unfortunately, I'm not sure how you could go about installing more language packs into the stack for CF. It seems that you have to install them via `apt-get`, and that simply won't work, since there's no root access in the container. If anyone has any ideas about how to install more language packs, let me know.

My only suggestion would be to use the intl extension instead of gettext. I believe it offers similar functionality, although it's not something I've done myself.

Hope that helps!

Dan
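For anyone wanting to reproduce that check locally, one way is to run the same command against the stack's Docker image (a sketch; assumes Docker is installed and that cloudfoundry/cflinuxfs2 is the published stack image):

```
docker run --rm cloudfoundry/cflinuxfs2 locale -a
```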
how does hm9000 actually determine application health?
Can anyone tell me or point me to some documentation about how hm9k actually determines application health?
Re: Open sourcing our S3 service broker
Yeah, I agree, but I'm in the same boat: I don't really have much free time. It would be awesome to have a service broker marketplace of some sort. And I didn't even know your S3 service broker existed, although that probably would not have stopped me from writing mine ;). It was actually the first one I did as a "how do I make a service broker" project. The RDS one we open-sourced was actually built after our S3 one, but once we put the RDS one out there, I figured we may as well put this one out there as well. We have some others around AWS services that we will get out there at some point. But I really like the idea of a marketplace, with some kind of review system as well; I think this would really help the ecosystem as a whole.
Re: CF-RELEASE v202 UPLOAD ERROR
How did you create your manifest in the first place?
Re: CF-RELEASE v202 UPLOAD ERROR
Parthiban Annadurai <senjiparthi@...>
After trying the suggestions, it now throws the following error:
Started preparing configuration > Binding configuration. Failed: Error filling in template `metron_agent.json.erb' for `ha_proxy_z1/0' (line 5: Can't find property `["metron_agent.deployment"]') (00:00:00)
Error 100: Error filling in template `metron_agent.json.erb' for `ha_proxy_z1/0' (line 5: Can't find property `["metron_agent.deployment"]')
Could anyone help with this?
Re: Error uploading application when pushing application
Jim,
When you're sending requests to `api.system-domain`, you're talking to the Cloud Controller. I'd suggest you start by taking a look at the Cloud Controller logs. You can grab them with `bosh logs` or by SSH'ing to the VM and cd'ing to /var/vcap/sys/log. Hopefully that'll show you an error or stack trace.
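A sketch of both approaches; the job name api_z1 and index 0 are assumptions, so check `bosh vms` for the actual Cloud Controller job in your deployment:

```
# Fetch the job's logs through the director
bosh logs api_z1 0

# Or inspect them on the VM itself
bosh ssh api_z1 0
cd /var/vcap/sys/log/cloud_controller_ng
grep -i error cloud_controller_ng.log | tail -n 20
```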
Dan
Re: How to detect this case: CF-AppMemoryQuotaExceeded
The default org quota you're seeing is defined here [1]. I believe you can configure it by specifying the name of the quota you would like to have as the default quota in your manifest. For example:

```
properties:
  cc:
    default_quota_definition: turtle
    quota_definitions:
      turtle:
        memory_limit: 10240
        total_services: -1
        total_routes: 1000
        non_basic_services_allowed: true
```

[1] https://github.com/cloudfoundry/cloud_controller_ng/blob/master/config/cloud_controller.yml#L102-L107

On Thu, Oct 22, 2015 at 3:48 AM, Juan Antonio Breña Moral <bren(a)juanantonio.info> wrote:

Hi,
Using this method, I receive the memory used by the organization:
{ memory_usage_in_mb: 576 }
If i use this method:
http://apidocs.cloudfoundry.org/222/organizations/get_organization_summary.html
I receive the same information:
{ guid: '2fcae642-b4b9-4393-89dc-509ece372f7d', name: 'DevBox', status: 'active', spaces: [ { guid: 'e558b66a-1b9c-4c66-a779-5cf46e3b060c', name: 'dev', service_count: 4, app_count: 2, mem_dev_total: 576, mem_prod_total: 0 } ] }
I think that the limit is defined in a quota definition for a Space or an Organization. Using a local instance, I was doing some tests with the methods:
http://apidocs.cloudfoundry.org/222/organization_quota_definitions/delete_a_particular_organization_quota_definition.html
but an organization doesn't require a quota, so I suppose that a default quota exists; is that correct? In my case, the only quota is:
http://apidocs.cloudfoundry.org/222/organization_quota_definitions/list_all_organization_quota_definitions.html
[ { metadata: { guid: '59ce5f9d-8914-4783-a3dc-8f5f89cf023a', url: '/v2/quota_definitions/59ce5f9d-8914-4783-a3dc-8f5f89cf023a', created_at: '2015-07-15T12:32:30Z', updated_at: null }, entity: { name: 'default', non_basic_services_allowed: true, total_services: 100, total_routes: 1000, memory_limit: 10240, trial_db_allowed: false, instance_memory_limit: -1 } } ] √ The platform returns Quota Definitions from Organizations (359ms)
In Pivotal for example, I suppose that free accounts use the default quota:
{ metadata: { guid: 'b72b1acb-ff4f-468d-99c0-05cd91012b62', url: '/v2/quota_definitions/b72b1acb-ff4f-468d-99c0-05cd91012b62', created_at: '2013-11-19T18:53:48Z', updated_at: '2013-11-19T19:34:57Z' }, entity: { name: 'trial', non_basic_services_allowed: false, total_services: 10, total_routes: 1000, total_private_domains: -1, memory_limit: 2048, trial_db_allowed: false, instance_memory_limit: -1, app_instance_limit: -1 } },
But the method returns the following quotas:
[ { metadata: { guid: '8c4b4554-b43b-4673-ac93-3fc232896f0b', url: '/v2/quota_definitions/8c4b4554-b43b-4673-ac93-3fc232896f0b', created_at: '2013-11-19T18:53:48Z', updated_at: '2013-11-19T19:34:57Z' }, entity: { name: 'free', non_basic_services_allowed: false, total_services: 0, total_routes: 1000, total_private_domains: -1, memory_limit: 0, trial_db_allowed: false, instance_memory_limit: -1, app_instance_limit: -1 } }, { metadata: { guid: '7dbdcbb7-edb6-4246-a217-2031a75388f7', url: '/v2/quota_definitions/7dbdcbb7-edb6-4246-a217-2031a75388f7', created_at: '2013-11-19T18:53:48Z', updated_at: '2013-11-19T19:34:57Z' }, entity: { name: 'paid', non_basic_services_allowed: true, total_services: -1, total_routes: 1000, total_private_domains: -1, memory_limit: 10240, trial_db_allowed: false, instance_memory_limit: -1, app_instance_limit: -1 } }, { metadata: { guid: '2228e712-7b0c-4b65-899c-0fc599063e35', url: '/v2/quota_definitions/2228e712-7b0c-4b65-899c-0fc599063e35', created_at: '2013-11-19T18:53:48Z', updated_at: '2014-05-07T18:33:19Z' }, entity: { name: 'runaway', non_basic_services_allowed: true, total_services: -1, total_routes: 1000, total_private_domains: -1, memory_limit: 204800, trial_db_allowed: false, instance_memory_limit: -1, app_instance_limit: -1 } }, { metadata: { guid: 'b72b1acb-ff4f-468d-99c0-05cd91012b62', url: '/v2/quota_definitions/b72b1acb-ff4f-468d-99c0-05cd91012b62', created_at: '2013-11-19T18:53:48Z', updated_at: '2013-11-19T19:34:57Z' }, entity: { name: 'trial', non_basic_services_allowed: false, total_services: 10, total_routes: 1000, total_private_domains: -1, memory_limit: 2048, trial_db_allowed: false, instance_memory_limit: -1, app_instance_limit: -1 } }, { metadata: { guid: '39d630ba-66d6-4f6d-ba4e-8d45a05e99c4', url: '/v2/quota_definitions/39d630ba-66d6-4f6d-ba4e-8d45a05e99c4', created_at: '2014-01-23T20:03:27Z', updated_at: null }, entity: { name: '25GB', non_basic_services_allowed: true, total_services: -1, total_routes: 1000, total_private_domains: -1, memory_limit: 25600, trial_db_allowed: false, instance_memory_limit: -1, app_instance_limit: -1 } }, { metadata: { guid: '81226624-9e5a-4616-9b9c-6ab14aac2a03', url: '/v2/quota_definitions/81226624-9e5a-4616-9b9c-6ab14aac2a03', created_at: '2014-03-11T00:13:21Z', updated_at: '2014-03-19T17:36:32Z' }, entity: { name: '25GB:30free', non_basic_services_allowed: false, total_services: 30, total_routes: 1000, total_private_domains: -1, memory_limit: 25600, trial_db_allowed: false, instance_memory_limit: -1, app_instance_limit: -1 } }, { metadata: { guid: '0e7e2da4-0c74-4039-bdda-5cb575bf3c85', url: '/v2/quota_definitions/0e7e2da4-0c74-4039-bdda-5cb575bf3c85', created_at: '2014-05-08T03:56:31Z', updated_at: null }, entity: { name: '50GB', non_basic_services_allowed: true, total_services: -1, total_routes: 1000, total_private_domains: -1, memory_limit: 51200, trial_db_allowed: false, instance_memory_limit: -1, app_instance_limit: -1 } }, { metadata: { guid: 'e9473dc8-7c84-401c-88b2-ad61fc13e33d', url: '/v2/quota_definitions/e9473dc8-7c84-401c-88b2-ad61fc13e33d', created_at: '2014-05-08T03:57:42Z', updated_at: null }, entity: { name: '100GB', non_basic_services_allowed: true, total_services: -1, total_routes: 1000, total_private_domains: -1, memory_limit: 102400, trial_db_allowed: false, instance_memory_limit: -1, app_instance_limit: -1 } }, { metadata: { guid: '21577e73-0f16-48fc-9bb5-2b30a77731ae', url: '/v2/quota_definitions/21577e73-0f16-48fc-9bb5-2b30a77731ae', created_at: '2014-05-08T04:00:28Z', updated_at: null 
}, entity: { name: '75GB', non_basic_services_allowed: true, total_services: -1, total_routes: 1000, total_private_domains: -1, memory_limit: 76800, trial_db_allowed: false, instance_memory_limit: -1, app_instance_limit: -1 } }, { metadata: { guid: '6413dedd-5c1e-4b18-ac69-e87bbaf0bfdd', url: '/v2/quota_definitions/6413dedd-5c1e-4b18-ac69-e87bbaf0bfdd', created_at: '2014-05-13T18:18:18Z', updated_at: null }, entity: { name: '100GB:50free', non_basic_services_allowed: false, total_services: 50, total_routes: 1000, total_private_domains: -1, memory_limit: 102400, trial_db_allowed: false, instance_memory_limit: -1, app_instance_limit: -1 } }, { metadata: { guid: '9d078b97-0dab-4563-aea5-852b1fb50129', url: '/v2/quota_definitions/9d078b97-0dab-4563-aea5-852b1fb50129', created_at: '2014-09-11T02:32:49Z', updated_at: null }, entity: { name: '10GB:30free', non_basic_services_allowed: false, total_services: 30, total_routes: 1000, total_private_domains: -1, memory_limit: 10240, trial_db_allowed: false, instance_memory_limit: -1, app_instance_limit: -1 } }, { metadata: { guid: '851c99c6-7bb3-400f-80a0-a06962e0c5d3', url: '/v2/quota_definitions/851c99c6-7bb3-400f-80a0-a06962e0c5d3', created_at: '2014-10-31T17:10:53Z', updated_at: '2014-11-04T23:53:50Z' }, entity: { name: '25GB:100free', non_basic_services_allowed: false, total_services: 100, total_routes: 1000, total_private_domains: -1, memory_limit: 25600, trial_db_allowed: false, instance_memory_limit: -1, app_instance_limit: -1 } }, { metadata: { guid: '5ad22d2c-1519-4e17-b555-f702fb38417e', url: '/v2/quota_definitions/5ad22d2c-1519-4e17-b555-f702fb38417e', created_at: '2015-02-02T22:18:44Z', updated_at: '2015-04-22T00:36:14Z' }, entity: { name: 'PCF-H', non_basic_services_allowed: true, total_services: 1000, total_routes: 1000, total_private_domains: -1, memory_limit: 204800, trial_db_allowed: false, instance_memory_limit: -1, app_instance_limit: -1 } }, { metadata: { guid: 'cf04086c-ccf9-442c-b89a-3f3fbcd365e3', url: '/v2/quota_definitions/cf04086c-ccf9-442c-b89a-3f3fbcd365e3', created_at: '2015-05-04T19:20:47Z', updated_at: '2015-05-04T19:26:14Z' }, entity: { name: 'oreilly', non_basic_services_allowed: true, total_services: 10000, total_routes: 1000, total_private_domains: -1, memory_limit: 307200, trial_db_allowed: false, instance_memory_limit: -1, app_instance_limit: -1 } } ] √ The platform returns Quota Definitions from Organizations (720ms)
I suppose that the best practice is to assign an organization a specific quota. How do I set a quota as the default? How do I configure it?
Juan Antonio
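Following up on the reply above: besides setting the manifest default, an existing organization can be assigned a named quota at runtime with the CLI (a sketch; the org and quota names are placeholders):

```
cf quotas                    # list the quota definitions the platform knows about
cf set-quota my-org turtle   # assign the named quota to an organization
```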
Error uploading application when pushing application
Jim Lin <jimlintw922@...>
CF Version: 215
Description: My push command is `cf push myapp -p myapp.war -m 512m -t 120` and I got the error message "Error uploading application". The detailed trace log is as follows:
============== Start of Log ============== REQUEST: [2015-10-23T16:12:49+08:00] GET /v2/jobs/a4866929-aff5-41bb-8891-0540ba45e97c HTTP/1.1 Host: api.140.92.27.254.xip.io Accept: application/json Authorization: [PRIVATE DATA HIDDEN] Content-Type: application/json User-Agent: go-cli 6.12.0-8c65bbd / linux
RESPONSE: [2015-10-23T16:12:49+08:00] HTTP/1.1 200 OK Content-Length: 491 Content-Type: application/json;charset=utf-8 Date: Fri, 23 Oct 2015 08:14:56 GMT Server: nginx X-Cf-Requestid: 7cea1ef8-d14a-4260-4b3c-dcc387684911 X-Content-Type-Options: nosniff X-Vcap-Request-Id: 244f6491-caae-43b4-69c8-9e80f4a61c83::38d83968-cd06-4ede-8531-1356d08cf38d
{ "metadata": { "guid": "a4866929-aff5-41bb-8891-0540ba45e97c", "created_at": "2015-10-23T08:14:51Z", "url": "/v2/jobs/a4866929-aff5-41bb-8891-0540ba45e97c" }, "entity": { "guid": "a4866929-aff5-41bb-8891-0540ba45e97c", "status": "failed", "error": "Use of entity>error is deprecated in favor of entity>error_details.", "error_details": { "error_code": "UnknownError", "description": "An unknown error occurred.", "code": 10001 } } } FAILED Error uploading application. An unknown error occurred. FAILED Error uploading application. An unknown error occurred.
============== End of Log ==============
How do I diagnose this to find the root cause?
Thanks all.
Sincerely, Jim
Re: REST API endpoint for accessing application logs
Loggregator doesn't store any logs. The most it does is maintain a buffer, as mentioned above, which defaults to 100 lines of logs. If you wish to store logs, you can forward them to third-party syslog drains and other consumers.
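For the forwarding case, the usual pattern is a user-provided service with a syslog drain URL (a sketch; the host and port are placeholders for your log service):

```
cf cups my-log-drain -l syslog://logs.example.com:5000
cf bind-service my-app my-log-drain
cf restart my-app   # restart (or restage) so the drain binding takes effect
```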
Phew! We spent the last few weeks reworking our cluster internals and build process to bring you the best Lattice yet. Functionally, this is not a big release, but the changes give us a noticeably more stable cluster (fewer random errors around startup) and open up a lot of possibilities for future functionality.

There are a number of breaking changes, most notably the lack of support on DigitalOcean, Google Compute Engine, and Openstack. These platforms don't have the same ability to publish public base images, like Vagrant boxes on Atlas (Vagrant Cloud) or AMIs on AWS. We're currently prioritizing whether/how soon we can bring back support for those platforms, and what it might look like (packer null builder <https://www.packer.io/docs/builders/null.html>, or maybe bake-your-own images, since those platforms support private base images).

If you use Lattice (and especially if you use one of the "temporarily discontinued" platforms), please take the time to fill out our survey at http://goo.gl/forms/z33xBoLaeQ. We'd love your feedback on what Lattice does for you and what platform(s) you're using (or would like to).

Quick rundown:

Cluster
- Retooling of Lattice build and deployment to use packer-bosh <https://github.com/cppforlife/packer-bosh>
- Diego 0.1434.0, Garden-Linux 0.307.0, CF v218, Routing 0.99.0
- Default vagrant up <http://lattice.cf/docs/vagrant/> target is now local.lattice.cf
- Simpler setup for terraform apply <http://lattice.cf/docs/terraform/>

CLI
- Provided by the cluster; ltc sync then updates itself from the cluster
- Supports setting user context by the USER directive for docker images
- Define HTTP routes to fully-qualified domains or context paths

We updated a lot of our documentation to go with the cluster changes, but there are still some broken links and outdated contents. We're working towards open-sourcing the website contents themselves, but for now, if you want to fix something on lattice.cf, we encourage you to open a GitHub issue <https://github.com/cloudfoundry-incubator/lattice-release/issues/new>.

Full release notes are included below. As always:
- If you think you've found a bug, please file a GitHub issue.
- If you have a contribution, we'd be happy to give you feedback via a Pull Request.
- You can track our prioritized queue of work at: http://bit.ly/lattice-tracker

--
David Wadden
Product Manager
Pivotal Software, Inc.
dwadden(a)pivotal.io

---------- Forwarded message ---------
From: davidwadden <notifications(a)github.com>
Date: Thu, Oct 22, 2015 at 3:33 PM
Subject: [lattice-release] v0.6.0
To: cloudfoundry-incubator/lattice-release <lattice-release(a)noreply.github.com>

*Help us help you, we'd love to hear your thoughts and suggestions on our survey <http://goo.gl/forms/z33xBoLaeQ>!
Your answers will help us understand how you use Lattice and inform upcoming feature work.*

Breaking Changes
- *v0.6.0* does not work on DigitalOcean, Google Compute Engine, or OpenStack
  - Please continue to use the *v0.5.0* <https://github.com/cloudfoundry-incubator/lattice-release/releases/tag/v0.5.0> bundle to deploy to these platforms
  - Lack of support for public user-created images (like AMIs) requires a different deployment strategy
  - Additional discussions about this feature: DigitalOcean <https://digitalocean.uservoice.com/forums/136585-digitalocean/suggestions/3249642-share-an-image-w-another-account>, Google Compute Engine <https://cloud.google.com/compute/docs/images#public_images>
- vagrant up expects ltc target local.lattice.cf by default [#102582770 <https://www.pivotaltracker.com/story/show/102582770>]
- Removed the Terraform module <https://terraform.io/docs/modules/create.html> definition to simplify provisioning
  - Configured using terraform.tfvars instead of lattice.<platform>.tf [#104919576 <https://www.pivotaltracker.com/story/show/104919576>]
  - terraform get -update no longer necessary
- ltc launch-droplet no longer accepts --working-dir [#104935318 <https://www.pivotaltracker.com/story/show/104935318>]
- Define multiple routes by passing the --http-route or --tcp-route flag multiple times [#105631892 <https://www.pivotaltracker.com/story/show/105631892>]
- Retire ltc update-routes [#104177142 <https://www.pivotaltracker.com/story/show/104177142>]
- Working with the Lattice development environment <https://github.com/cloudfoundry-incubator/lattice-release#development> has changed (significantly) [#105305792 <https://www.pivotaltracker.com/story/show/105305792>]
- Lattice has been split into separate cluster and CLI repositories
  - Moved cloudfoundry-incubator/lattice <https://github.com/cloudfoundry-incubator/lattice/tree/legacy> to cloudfoundry-incubator/lattice-release <https://github.com/cloudfoundry-incubator/lattice-release>
  - Forked cloudfoundry-incubator/lattice/ltc <https://github.com/cloudfoundry-incubator/lattice/tree/legacy/ltc> to cloudfoundry-incubator/ltc <https://github.com/cloudfoundry-incubator/ltc>

Complete retooling of Lattice build and cluster deployment [##2124074 <https://www.pivotaltracker.com/epic/show/2124074>]

The Lattice build process has been completely retooled to create images that are fully configured with all Lattice <https://github.com/cloudfoundry-incubator/lattice-release> + Diego <https://github.com/cloudfoundry-incubator/diego-release> microservices at build time. We use packer-bosh <https://github.com/cppforlife/packer-bosh> to bake the Diego, Loggregator, and Routing BOSH releases into the Lattice base image. This ensures the configurations never get out of sync with the mainline CF versions, which greatly improves cluster stability. Users do not and will not need to understand or use BOSH to deploy Lattice.

New Features

*Δ* indicates a *breaking change*.
Cluster
- Diego upgraded from *0.1424.1* <https://github.com/cloudfoundry-incubator/diego-release/releases/tag/0.1424.1> to *0.1434.0* <https://github.com/cloudfoundry-incubator/diego-release/releases/tag/0.1434.0>
- Garden-Linux upgraded from *0.295.0* <https://github.com/cloudfoundry-incubator/garden-linux-release/releases/tag/v0.295.0> to *0.307.0* <https://github.com/cloudfoundry-incubator/garden-linux-release/releases/tag/v0.307.0>
  - Fixes #187 <https://github.com/cloudfoundry-incubator/lattice-release/issues/187>: Cells disk is full cloudfoundry-incubator/garden-linux-release#7 <https://github.com/cloudfoundry-incubator/garden-linux-release/issues/7> [#102180368 <https://www.pivotaltracker.com/story/show/102180368>]
- CF upgraded from *v213-93-g8a4f752* <https://github.com/cloudfoundry/cf-release/tree/v213-93-g8a4f752> to *v218* <https://github.com/cloudfoundry/cf-release/releases/tag/v218> [#100518218 <https://www.pivotaltracker.com/story/show/100518218>]
- Works on Vagrant
  - VirtualBox [#104128040 <https://www.pivotaltracker.com/story/show/104128040>]
  - VMWare Fusion [#104921036 <https://www.pivotaltracker.com/story/show/104921036>]
  - AWS (all regions) [#104920976 <https://www.pivotaltracker.com/story/show/104920976>] [#105827024 <https://www.pivotaltracker.com/story/show/105827024>]
  - Replace shared folders with file provisioners for Vagrant VMs [#105732128 <https://www.pivotaltracker.com/story/show/105732128>]
  - vagrant up --provider=aws works in all AWS regions [#105376966 <https://www.pivotaltracker.com/story/show/105376966>]
- Works on Terraform (AWS only) (*Δ*)
  - AWS (all regions) [#104919576 <https://www.pivotaltracker.com/story/show/104919576>] [#105827024 <https://www.pivotaltracker.com/story/show/105827024>]
  - Removed the Terraform module <https://terraform.io/docs/modules/create.html> to simplify provisioning
  - Configured using terraform.tfvars instead of lattice.<platform>.tf (*Δ*)
  - terraform get -update no longer necessary

CLI
- ltc can now be downloaded from the Lattice cluster [#102877664 <https://www.pivotaltracker.com/story/show/102877664>]
- ltc sync updates itself from the cluster [#102877664 <https://www.pivotaltracker.com/story/show/102877664>] [#105668046 <https://www.pivotaltracker.com/story/show/105668046>] [#102482290 <https://www.pivotaltracker.com/story/show/102482290>]
- Vendor ltc dependencies with submodules instead of Godeps [#101770536 <https://www.pivotaltracker.com/story/show/101770536>]
- ltc launch-droplet no longer accepts --working-dir [#104935318 <https://www.pivotaltracker.com/story/show/104935318>] (*Δ*)
- ltc build-droplet and ltc launch-droplet no longer use privileged containers [#104921458 <https://www.pivotaltracker.com/story/show/104921458>]
- ltc create --privileged starts a docker image with a privileged container [#105355654 <https://www.pivotaltracker.com/story/show/105355654>]
- ltc supports improved user namespacing [#105324808 <https://www.pivotaltracker.com/story/show/105324808>] [#105328688 <https://www.pivotaltracker.com/story/show/105328688>]
  - ltc create --user specifies the user context of a docker app [#104917574 <https://www.pivotaltracker.com/story/show/104917574>]
  - Next, uses the USER directive from docker metadata [#104917678 <https://www.pivotaltracker.com/story/show/104917678>]
  - Lastly, defaults to "root" [#104918540 <https://www.pivotaltracker.com/story/show/104918540>]
- Routing enhancements for ltc
  - The changes below apply to ltc create, ltc launch-droplet, and ltc update
  - Define multiple routes by passing the --http-route or --tcp-route flag multiple times [#105631892 <https://www.pivotaltracker.com/story/show/105631892>] (*Δ*)
  - HTTP/TCP routes determine the default container port for single-port apps [#105635660 <https://www.pivotaltracker.com/story/show/105635660>]
  - #104 <https://github.com/cloudfoundry-incubator/lattice-release/issues/104>, #137 <https://github.com/cloudfoundry-incubator/lattice-release/issues/137>: Custom domains in routes [#93628052 <https://www.pivotaltracker.com/story/show/93628052>] [#96562554 <https://www.pivotaltracker.com/story/show/96562554>]
  - #217 <https://github.com/cloudfoundry-incubator/lattice-release/issues/217>: Use of context path routes with Lattice [#105301140 <https://www.pivotaltracker.com/story/show/105301140>]
  - Retire ltc update-routes [#104177142 <https://www.pivotaltracker.com/story/show/104177142>] (*Δ*)

Bug Fixes
- Modify docker image examples so app(s) start properly [#105069548 <https://www.pivotaltracker.com/story/show/105069548>] [#105881880 <https://www.pivotaltracker.com/story/show/105881880>]
- Postgres docker image requires ltc create --privileged to start [#105071050 <https://www.pivotaltracker.com/story/show/105071050>]

Interestings
- Configure local.lattice.cf to replace 192.168.11.11.xip.io [#102582770 <https://www.pivotaltracker.com/story/show/102582770>] (*Δ*)
- Default timeout on ltc test increased to 5m [#105622190 <https://www.pivotaltracker.com/story/show/105622190>]
  - Longer timeout adjusts for AWS EBS volumes no longer being pre-warmed
- vagrant up works on a Windows host with the AWS provider [#98709384 <https://www.pivotaltracker.com/story/show/98709384>]

CI / Packaging
- Create pipeline that does docker build / docker push from cloudfoundry-incubator/lattice-ci
- Create pipeline that deploys from cloudfoundry-incubator/lattice-release [#104919732 <https://www.pivotaltracker.com/story/show/104919732>] [#105306942 <https://www.pivotaltracker.com/story/show/105306942>]
- CI builds and publishes vagrant boxes to Atlas <https://atlas.hashicorp.com/>
  - VirtualBox, VMWare Fusion [#105496810 <https://www.pivotaltracker.com/story/show/105496810>]
  - AWS [#105496796 <https://www.pivotaltracker.com/story/show/105496796>]
- Lattice has been split into separate cluster and CLI repositories
  - cloudfoundry-incubator/lattice <https://github.com/cloudfoundry-incubator/lattice/tree/legacy> moved to cloudfoundry-incubator/lattice-release <https://github.com/cloudfoundry-incubator/lattice-release>
  - cloudfoundry-incubator/lattice/ltc <https://github.com/cloudfoundry-incubator/lattice/tree/legacy/ltc> forked to cloudfoundry-incubator/ltc <https://github.com/cloudfoundry-incubator/ltc>
- Consolidate architecture-specific bundles into a single bundle [#102485658 <https://www.pivotaltracker.com/story/show/102485658>]
- Bundle no longer includes ltc; this is now served by the cluster [#102877664 <https://www.pivotaltracker.com/story/show/102877664>]

Documentation
- Update documentation for cluster changes [#105488088 <https://www.pivotaltracker.com/story/show/105488088>]
  - Vagrant Platforms <http://lattice.cf/docs/vagrant/> [#105491060 <https://www.pivotaltracker.com/story/show/105491060>]
  - Terraform Platforms <http://lattice.cf/docs/terraform/> [#95925124 <https://www.pivotaltracker.com/story/show/95925124>]
- Document how to vagrant up using the AWS provider [#105491060 <https://www.pivotaltracker.com/story/show/105491060>]
- Replace '192.168.11.11.xip.io' with 'local.lattice.cf' as the default system domain [#102582848 <https://www.pivotaltracker.com/story/show/102582848>] (*Δ*)
- #220 <https://github.com/cloudfoundry-incubator/lattice-release/issues/220>, #221 <https://github.com/cloudfoundry-incubator/lattice-release/pull/221>: Upgrade Vagrant to support VirtualBox 5.x [#106054292 <https://www.pivotaltracker.com/story/show/106054292>] [#106052660 <https://www.pivotaltracker.com/story/show/106052660>]
- Document setting up the v0.6.0+ development environment on VirtualBox [#105305792 <https://www.pivotaltracker.com/story/show/105305792>] (*Δ*)
- Update ltc syntax for user context and routing functionality [#105069548 <https://www.pivotaltracker.com/story/show/105069548>] [#105635874 <https://www.pivotaltracker.com/story/show/105635874>]

Known Issues
- TCP routes are not deleted when a route is removed or app(s) are stopped #208 <https://github.com/cloudfoundry-incubator/lattice-release/issues/208> [##1940024 <https://www.pivotaltracker.com/epic/show/1940024>]
- Two apps with the same TCP route defined will round-robin between the two separate apps [#105929084 <https://www.pivotaltracker.com/story/show/105929084>]

View it on GitHub <https://github.com/cloudfoundry-incubator/lattice-release/releases/tag/v0.6.0>.
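For anyone trying the new workflow, a minimal getting-started sketch (an illustration only; it assumes the default local Vagrant cluster described in the notes above):

    vagrant up                    # brings up a cluster answering at local.lattice.cf by default
    ltc target local.lattice.cf   # point the CLI at the cluster
    ltc sync                      # ltc then updates itself from the targeted cluster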
|
|
The CF CLI team cut 6.13.0. Release notes and binaries are available at: https://github.com/cloudfoundry/cli#downloads

Note that we have simplified the download matrix, and filenames are being updated to include the release version. Let us know what you think! Highlights of this release include:

Diego GA
In alignment with the effort to get to a GA version of Diego [0] in CF-Release, this version of the CLI includes new commands specific to the Diego component of the runtime. These commands have been pulled into the core CLI from the 2 existing plugins [1] [2]. Among the features, the highlights are:
· A user can now ssh to an app container
· `cf push` includes a new flag to specify a docker image

[0] https://github.com/cloudfoundry-incubator/diego-design-notes/blob/master/migrating-to-diego.md#installing-the-diego-enabler-cli-plugin
[1] https://github.com/cloudfoundry-incubator/diego-ssh
[2] https://github.com/cloudfoundry-incubator/diego-cli-plugin

Other Features:
· Plugin install now prompts interactively and warns the user of the risk
· `cf scale` can now scale an app to zero instances

Bug Fixes:
· Fixed an issue where a password containing a double-quote or backtick exposed part of the password in cleartext in cf_trace
· login with the --sso flag was providing a link with an http url; fixed so that it provides an https url

Improved User Experience/Error Messages:
· An attempt to delete a shared domain with `cf delete-domain` will now fail early
· Improved error message when a `cf curl` is not properly formed
· Improved message when no users are found in `cf org-users` and `cf space-users`
· Improved message when a push of an app times out due to a wrong port specification

New Plugins:
· Firehose Nozzle Plugin http://github.com/pivotal-cf-experimental/nozzle-plugin
· Cloud Deployment Plugin http://github.com/xchapter7x/deploycloud

Also notable: Updated the CLI to Go 1.5.1, and added a --build flag to list this version.

Greg Oehmen & Dies Köper
Cloud Foundry CLI Product Managers
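As a quick illustration of the Diego-specific additions (a sketch only; the app name and docker image below are placeholders):

    cf ssh MY_APP                                        # open an ssh session into a running app container
    cf push MY_APP -o registry.example.com/me/my-image   # push from a docker image instead of a buildpack
    cf scale MY_APP -i 0                                 # scale an app down to zero instances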
|
|
Re: Cloud Foundry DEA to Diego switch - when?
I'd encourage anyone wanting to switch to Diego to track the following release marker in our project tracker: https://www.pivotaltracker.com/story/show/76376202. When this marker is delivered, it means the core teams have confidence that Diego can replace the DEAs. Note that while the tracker currently shows this release landing this week, there are several unpointed placeholder stories above the line that will expand; once those stories are broken down and pointed, a more realistic estimate will be possible. After Diego is deemed able to replace the DEAs, there will still be some time before the DEAs are end-of-life'd, but I would not recommend waiting that long.
On Wed, Oct 21, 2015 at 11:07 PM, Meng, Xiangyi <xiangyi.meng(a)emc.com> wrote: Hi, Amit
Our team is also planning the timeline for replacing the DEAs with Diego. Could you please give me an approximate estimate of when the final iteration will come? Will it be in 2016 or 2017?
Thanks,
Maggie
*From:* Amit Gupta [mailto:agupta(a)pivotal.io] *Sent:* October 22, 2015 2:59 *To:* Discussions about Cloud Foundry projects and the system overall. *Subject:* [cf-dev] Re: Cloud Foundry DEA to Diego switch - when?
Hi Rishi,
Thanks for your question. Let's first clarify the distinction between what you deploy -- BOSH releases (versioned packages of source code and binaries) -- and how you deploy things -- a BOSH deployment (a manifest specifying which releases to use, what code/binaries from those releases to place on which nodes in your deployment cluster, property/credential configuration, networking and compute resources, etc.).
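To make the distinction concrete, here is an abbreviated BOSH-style deployment manifest (a sketch only; the names and versions are illustrative, and fields such as director_uuid, resource_pools, and properties are omitted):

    name: my-cf                  # the deployment
    releases:                    # what you deploy: versioned release packages
    - {name: cf, version: 218}
    - {name: diego, version: 0.1434.0}
    jobs:                        # how you deploy: which release jobs run on which nodes
    - name: cell
      templates:
      - {name: rep, release: diego}
      instances: 3
      networks: [{name: default}]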
diego-release may not change, although it may be split into smaller releases, e.g. a cc-bridge part consisting of the components that talk to the CC, and a diego runtime part consisting of the components responsible for scheduling, running, and health-monitoring containerized workloads.
cf-release will undergo heavy changes. We are currently breaking it apart entirely into separate releases: consul, etcd, logging-and-metrics, identity, routing, API, nats, postgres, and the existing runtime backend (DEA, Warden, HM9k).
In addition to breaking up cf-release, we are working on cf-deployment[1], which will give you the same ability to deploy the Cloud Foundry PaaS as you know it today, but composed of multiple releases rather than the monolithic cf-release. We will ensure that cf-deployment has the versioning and tooling to make it easy to deploy everything at versions that are known to work together.
The first major iteration of cf-deployment will deploy all the existing components of cf-release, but sourced from the separate releases. You will still be able to deploy diego separately (configured to talk to the CC) as you do today.
The second major iteration will be to leverage new BOSH features[2], such as links, AZs, cloud config, and global networking to simplify the manifest generation for cf-deployment. Again, you will still be able to deploy diego separately alongside your cf deployment.
The third iteration is to fold the diego-release deployment strategies into cf-deployment itself, so you'll have a single manifest deploying DEAs and Diego side-by-side.
The final iteration will be to remove the DEAs from cf-deployment and stop supporting the release that contains them.
As to your question of defaults, there are several definitions of "default". You can set Diego to be the default backend today[3]. You have to opt in to this, but then anyone using the platform you deployed will have their apps run on Diego by default. Pivotal Web Services, for example, now defaults to Diego as the backend. At some point, Diego will be the true default backend, and you will have to opt-out of it (either at the CC configuration level, or at the individual app level). Finally, at a later point in time, DEAs will no longer be supported and Diego will be the only backend option.
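For example, opting in at the platform level is a manifest property (hedged: the property name comes from the cf-release spec linked in [3] below; its exact placement can vary between release versions):

    properties:
      cc:
        default_to_diego_backend: true   # new apps run on Diego unless explicitly opted out

Individual apps can also opt in or out today via the Diego-Enabler CLI plugin, e.g. cf enable-diego MY_APP or cf disable-diego MY_APP.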
We are actively working on a timeline for all these things. You can see the Diego team's public tracker has a release marker[4] for when Diego will be capable of replacing the DEAs. After reaching that release marker, there will be some time given for the rest of the community to switch over before declaring end-of-life for the DEAs.
[1] https://github.com/cloudfoundry/cf-deployment
[2] https://github.com/cloudfoundry/bosh-notes/
[3] https://github.com/cloudfoundry/cf-release/blob/v222/jobs/cloud_controller_ng/spec#L396-L398
[4] https://www.pivotaltracker.com/story/show/76376202
Thanks,
Amit, OSS Release Integration PM
On Wed, Oct 21, 2015 at 10:31 AM, R M <rishi.investigate(a)gmail.com> wrote:
I am trying to understand when Diego will become the default runtime of Cloud Foundry. The latest cf-release is still using the DEAs, and if my understanding is correct, at some stage a new cf-release version will come out with Diego and perhaps a change to v3. Do we have any idea of when/if this will happen? Is it safe to assume that diego-release on GitHub will slowly transition into cf-release?
Thanks.
|
|