Re: PR for allowing gem repo configuration in Ruby buildpack
Hey Jack, Thanks for putting the pull request in. We'll review it shortly and get back to you on the thread.
On Tue, Feb 2, 2016 at 2:35 PM, Jack Cai <greensight(a)gmail.com> wrote: Yes that should work too, but it's still nice to be able to install the gems during staging.
Jack
On Tue, Feb 2, 2016 at 2:33 PM, Evan Farrar <evanfarrar(a)gmail.com> wrote:
Could using a custom gem mirror be achieved by caching the dependencies before push? Operating in an "airgapped" or "disconnected" environment and operating in a "whitelist firewall" environment often require similar approaches, so following the advice for disconnected environments might help:
https://github.com/cloudfoundry/buildpack-packager/blob/master/doc/disconnected_environments.md
On Tue, Feb 2, 2016 at 11:22 AM, Jack Cai <greensight(a)gmail.com> wrote:
Can someone merge this PR: [1]? It provides a useful feature that allows operators to set a global gem repo mirror (via the environment variable group) when the default repo is slow to reach. Thanks!!
Jack
[1] https://github.com/cloudfoundry/ruby-buildpack/pull/47
Re: - CC configuration in deployment manifest
Hi Kinjal,
We generally recommend using a specific, identifiable directory name that is not the bucket root. The same goes for app_package_directory_key and resource_directory_key.
This might look like:
  buildpack_directory_key: buildpacks
  droplet_directory_key: droplets
or:
  buildpack_directory_key: dev-cc-buildpacks
  droplet_directory_key: dev-cc-droplets
Hope that helps.
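For readers editing the stub directly, here is a minimal sketch of where these keys might sit under the cc properties. The nesting shown is an assumption based on cf-release stubs of this era, and the dev-cc-packages/dev-cc-resources values are invented placeholders; check your own cf-stub template for the authoritative layout.

properties:
  cc:
    buildpacks:
      buildpack_directory_key: dev-cc-buildpacks
    droplets:
      droplet_directory_key: dev-cc-droplets
    packages:
      app_package_directory_key: dev-cc-packages   # placeholder value
    resource_pool:
      resource_directory_key: dev-cc-resources     # placeholder value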
-Dieu CF CAPI PM
On Tue, Feb 2, 2016 at 7:29 AM, Kinjal Doshi <kindoshi(a)gmail.com> wrote: Hi,
I have a question regarding the configuration of cloud controller.
It is described at http://docs.cloudfoundry.org/deploying/aws/cf-stub.html#editing that DROPLET_DIRECTORY_KEY should be replaced with the directory/bucket used to store droplets, and BUILDPACK_DIRECTORY_KEY should be replaced with the directory/bucket used to store buildpacks.
Please help me understand whether these two parameters need specific values, or whether they can be any directory names that are created later.
Thanks, Kinjal
Re: PR for allowing gem repo configuration in Ruby buildpack
Yes that should work too, but it's still nice to be able to install the gems during staging.
Jack
On Tue, Feb 2, 2016 at 2:33 PM, Evan Farrar <evanfarrar(a)gmail.com> wrote: Could using a custom gem mirror be achieved by caching the dependencies before push? Operating in an "airgapped" or "disconnected" environment and operating in a "whitelist firewall" environment often require similar approaches, so following the advice for disconnected environments might help:
https://github.com/cloudfoundry/buildpack-packager/blob/master/doc/disconnected_environments.md
On Tue, Feb 2, 2016 at 11:22 AM, Jack Cai <greensight(a)gmail.com> wrote:
Can someone merge this PR: [1]? It provides a useful feature that allows operators to set a global gem repo mirror (via the environment variable group) when the default repo is slow to reach. Thanks!!
Jack
[1] https://github.com/cloudfoundry/ruby-buildpack/pull/47
Re: PR for allowing gem repo configuration in Ruby buildpack
Evan Farrar <evanfarrar@...>
On Tue, Feb 2, 2016 at 11:22 AM, Jack Cai <greensight(a)gmail.com> wrote: Can someone merge this PR: [1]? It provides a useful feature that allows operators to set a global gem repo mirror (via the environment variable group) when the default repo is slow to reach. Thanks!!
Jack
[1] https://github.com/cloudfoundry/ruby-buildpack/pull/47
PR for allowing gem repo configuration in Ruby buildpack
Re: Need help for diego deployment
Sorry for the typo; I meant 6868.
Thanks, Kinjal
On Tue, Feb 2, 2016 at 11:04 PM, Kinjal Doshi <kindoshi(a)gmail.com> wrote: Hi Amit,
This does not seem to be a port issue on 6968. I tried the same deployment after modifying the security groups (both bosh and cf) to allow all protocols on all ports. Even with this change, the deployment fails while compiling packages.
It would be great if you could provide some pointers to get this corrected. One thing I noticed is that ha_proxy is set to null in the generated deployment manifest.
Thanks, Kinjal
On Tue, Feb 2, 2016 at 12:35 AM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:
Hi Amit,
I checked the ports on all the security groups and found that 6868 is enabled on the inbound for 0.0.0.0/0 in all the groups.
I am sending you the bosh logs to your personal email address.
Would be great if you could please take a look.
Thanks, Kinjal
On Sat, Jan 30, 2016 at 4:00 AM, Amit Gupta <agupta(a)pivotal.io> wrote:
Hey Kinjal,
Happy to help!
Looks like your director is failing to connect to your compilation VMs. In your manifest you have a network called "cf1" with an associated subnet ID and security groups. I believe it's specifically trying to reach those VMs on port 6868. Can you look at the security group rules, including the security groups applied to the micro bosh VM, and see why there might be problems communicating?
Best, Amit
On Fri, Jan 29, 2016 at 12:49 PM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:
Hi Amit,
Really appreciate all the help I receive on this forum. Hats off to you all.
Here is the deployment log output: https://gist.github.com/kinjaldoshi/0925fdf6022b079ca2b5
Thanks, Kinjal
On Sat, Jan 30, 2016 at 2:03 AM, Amit Gupta <agupta(a)pivotal.io> wrote:
Since you started with the "minimal-aws" flow which happens to document using micro bosh, you should be fine to continue with micro bosh, instead of the newer bosh-init workflow. You may run into some discrepancies in the downstream documentation depending on whether it assumed a bosh-init workflow vs a micro bosh workflow, but we can guide you through those should you hit any problems.
On Fri, Jan 29, 2016 at 12:24 PM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:
Hi Amit,
Thanks a lot for the quick response.
I am currently sanitizing the output and will send it soon. In the meantime, I wanted to confirm whether I have created microbosh using the correct process. I have followed the instructions at https://bosh.io/docs/deploy-microbosh-to-aws.html. However, I see that there are other instructions for creating microbosh as well: https://docs.cloudfoundry.org/deploying/aws/setup_aws.html and http://bosh.io/docs/init-aws.html
I am guessing I have used the wrong procedure, is that correct?
Thanks in advance, Kinjal
On Sat, Jan 30, 2016 at 1:27 AM, Amit Gupta <agupta(a)pivotal.io> wrote:
Hi Kinjal,
The task logs have sensitive credentials in them. "bosh tasks 255 --debug" will give that output, and will probably also include the full manifest in the output. You may wish to sanitize the output before sharing it or send me the output privately (agupta(a)pivotal.io) if you're concerned about leaking some info.
Best, Amit
On Fri, Jan 29, 2016 at 11:44 AM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:
Hi
I have resolved the errors in generating the deployment manifest. On executing bosh deploy, the below error is encountered while compiling packages:
Started compiling packages
Started compiling packages > rtr/2d7de4f6fc25938c21c5be87174f95583feb14b5
Started compiling packages > syslog_drain_binder/3c9c0b02c11c8dba10d059fe07e6d2ee641ec053
Started compiling packages > routing-api/b4a3e7034c4a925aa42d45419b46ad6b128d92b1
Started compiling packages > collector/158398837665181c70bd786b46e6f4d772523017
Failed compiling packages > routing-api/b4a3e7034c4a925aa42d45419b46ad6b128d92b1: Timed out pinging to dc15da09-8086-4231-a5b4-15efafa27eaf after 600 seconds (00:11:03)
Failed compiling packages > syslog_drain_binder/3c9c0b02c11c8dba10d059fe07e6d2ee641ec053: Timed out pinging to d150aff4-095c-4d48-8c6d-f182fc3738c7 after 600 seconds (00:11:03)
Failed compiling packages > collector/158398837665181c70bd786b46e6f4d772523017: Timed out pinging to 824b2de9-bb39-4b24-8491-4e26f79adb50 after 600 seconds (00:11:03)
Failed compiling packages > rtr/2d7de4f6fc25938c21c5be87174f95583feb14b5: Timed out pinging to 4d636c66-690a-43e7-8481-71258732d066 after 600 seconds (00:11:35)
Error 450002: Timed out pinging to dc15da09-8086-4231-a5b4-15efafa27eaf after 600 seconds
Task 255 error
Would be great if some pointers can be provided to proceed further. Please let me know if the logs for this bosh task are required.
Thanks in advance, Kinjal
On Fri, Jan 29, 2016 at 10:45 PM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:
Hi Amit,
Please ignore the unresolved nodes error in the above email. I have been able to correct it; I'm running into some more problems and am checking them right now.
Please do let me know about my question on the dbs, though.
Thanks in advance, Kinjal
On Fri, Jan 29, 2016 at 1:29 PM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:
Hi Amit,
Thanks a lot for your response on this.
I was trying to use the manifest generation scripts to redeploy cf but I ran into errors during spiff merge as below:
ubuntu(a)ip-172-31-45-52:~/cf-deployment/cf-release$ scripts/generate_deployment_manifest aws ../cf-stub.yml > cf-deployment.yml
2016/01/29 07:49:05 error generating manifest: unresolved nodes:
  (( static_ips(1) )) in /home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml jobs.[0].networks.[0].static_ips
  (( static_ips(5, 6, 15, 16, 17, 18, 19, 20) )) in /home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml jobs.[1].networks.[0].static_ips
  (( static_ips(27, 28, 29) )) in /home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml jobs.[5].networks.[0].static_ips
  (( static_ips(10, 25) )) in /home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml jobs.[6].networks.[0].static_ips
The public gist pointing to the cf-stub created for this attempt is at: https://gist.github.com/kinjaldoshi/b0dc004876d2a4615c65
I am not very sure, but I think this has something to do with the way I configured the subnets. Could you please guide me on the corrections required here? I know how (( static_ips(27, 28, 29) )) works, but I am not sure why it fails to resolve to the required values.
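As a rough illustration of what spiff is indexing into (the addresses below are invented placeholders, not values from Kinjal's stub): static_ips(27, 28, 29) picks the entries at those offsets within the static pool declared for the network in the stub, so the static range has to be wide enough to cover the highest offset used and the network name has to match the template.

networks:
- name: cf1
  subnets:
  - range: 10.0.16.0/24
    gateway: 10.0.16.1
    dns:
    - 10.0.0.2
    reserved:
    - 10.0.16.2 - 10.0.16.9
    static:
    - 10.0.16.10 - 10.0.16.60   # must span every offset referenced by static_ips(...)
    cloud_properties:
      subnet: PLACEHOLDER-subnet-id
      security_groups:
      - PLACEHOLDER-security-group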
Another question I have is on the editing instructions at: http://docs.cloudfoundry.org/deploying/aws/cf-stub.html#editing
For the ccdb and uaadb, as per the comments, is it required for me to create a service and host these DBs as mentioned in the 'Editing Instructions' column? In that case, where can I find the DDL to create the databases and tables?
Thanks a lot in advance, Kinjal
On Fri, Jan 29, 2016 at 10:31 AM, Amit Gupta <agupta(a)pivotal.io> wrote:
Hi Kinjal,
The minimal-aws manifest would be quite difficult to augment to get it to work with Diego. You would need to add static IPs to your private network, add a resource pool or increase the size of an existing one, add the consul job, colocate the consul agent with some of the CF jobs, and add a few configuration properties that aren't in the minimal one (e.g. loggregator.tls.ca). It's probably simpler to use the manifest generation scripts to redeploy cf (before deploying diego).
Use:
* http://docs.cloudfoundry.org/deploying/common/create_a_manifest.html
* http://docs.cloudfoundry.org/deploying/common/deploy.html
Let us know if you run into difficulties. These documents ask you to define stubs, which require you to input data from your AWS IaaS setup; I'm not sure they play entirely nicely with the AWS setup described in the minimal-aws doc.
Best, Amit
On Wed, Jan 27, 2016 at 3:17 AM, Kinjal Doshi < kindoshi(a)gmail.com> wrote:
Hi Eric,
Thanks a lot for the detailed response to my query.
I used the minimal-aws.yml configuration ( https://github.com/cloudfoundry/cf-release/tree/v226/example_manifests ) to create my deployment manifest which does not have the consul VMs set up. I am guessing that the first step would be to change this.
In this case should I use the script generators to generate the CF deployment manifest and re-deploy cloud foundry, or are there any other techniques/shorter path for doing this?
Thanks in advance, Kinjal
On Mon, Jan 25, 2016 at 6:57 AM, Eric Malm <emalm(a)pivotal.io> wrote:
Hi, Kinjal,
The stub I included in-line in my previous email may not have come through so well for all mail clients, so I've also included it in a public gist at https://gist.github.com/ematpl/149ac1bac691caae0722.
Thanks, Eric
On Fri, Jan 22, 2016 at 6:32 PM, Eric Malm <emalm(a)pivotal.io> wrote:
Hi, Kinjal,
Thanks for asking: this is an area in which the Diego team is looking forward to improving documentation and tooling in the near term. For the time being, here are some more manual instructions:
Assuming you have AWS infrastructure already provisioned for your CF deployment (VPC, subnets, NAT box, ELBs, etc.), you should need only to add one or more additional subnets for the VMs in the Diego deployment, and optionally an ELB for the SSH proxy routing tier (you can also use the HAproxy in the CF deployment to do the same load-balancing, but you'll need to give it an Elastic IP). If you're brave, and can coordinate the reserved sections in the CF and Diego deployment manifests' networking configs correctly, you could even share the same subnet(s) between the two deployments.
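To make the point about coordinating reserved sections concrete, here is a minimal sketch. It assumes both deployments share the 10.10.5.0/24 subnet that appears as a placeholder later in this email; the exact split is invented. Each manifest reserves the slice of the subnet that the other deployment will hand out, so BOSH never assigns the same address twice.

# In the CF manifest's definition of the shared subnet (sketch):
reserved:
- 10.10.5.2 - 10.10.5.9       # infrastructure addresses
- 10.10.5.128 - 10.10.5.254   # left free for the Diego deployment

# In the Diego manifest's definition of the same subnet (sketch):
reserved:
- 10.10.5.2 - 10.10.5.127     # everything the CF deployment may use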
Once you have those subnets provisioned, you'll need to translate their properties into the iaas-settings.yml stub that you supply to the generate-deployment-manifest script in diego-release. Since you're deploying CF v226, we recommend you use Diego final version v0.1442.0 and the associated manifest-generation script in that version of the release. The other stubs should be independent of that iaas-settings one, and should be pretty much the same as the ones for the BOSH-Lite deployment. You'll likely want to provide different secrets and credentials in the property-overrides stub, though, and perhaps different instance counts depending on the availability needs of your deployment. I've included at the end of this email a representative iaas-settings.yml file from one of the Diego team's environments, with any specific identifiers for AWS entities replaced by PLACEHOLDER values.
As a side note, if you don't already have the consul VMs deployed in your CF deployment, you'll need to enable them so that the Diego components can use consul to communicate. We recommend you operate an odd number of consul VMs: 1 if you don't need high availability, and 3 or 5 if you do (as in a production environment). You can enable them by changing the instance count on the consul_z1 and consul_z2 jobs in the CF manifest.
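A minimal sketch of the relevant job entries in the CF manifest; only the instances counts matter here, and all other job attributes are omitted:

jobs:
- name: consul_z1
  instances: 2   # e.g. 2 here and 1 in consul_z2 for a 3-node HA cluster
- name: consul_z2
  instances: 1   # use 1 and 0 instead if you don't need high availability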
After you've customized those stubs and adjusted your CF manifest if necessary, you can generate the Diego manifest by running something like the following from your diego-release directory:
$ ./scripts/generate-deployment-manifest \
    PATH/TO/MY/CUSTOMIZED-PROPERTY-OVERRIDES.YML \
    PATH/TO/MY/CUSTOMIZED-INSTANCE-COUNT-OVERRIDES.YML \
    manifest-generation/bosh-lite-stubs/persistent-disk-overrides.yml \
    PATH/TO/MY/CUSTOMIZED-IAAS-SETTINGS.YML \
    manifest-generation/bosh-lite-stubs/additional-jobs.yml \
    manifest-generation/bosh-lite-stubs/release-versions.yml \
    PATH/TO/MY/MANIFEST/DIRECTORY \
    > PATH/TO/MY/MANIFEST/DIRECTORY/diego.yml
'PATH/TO/MY/MANIFEST/DIRECTORY' should contain your CF manifest in a file named 'cf.yml'. Also, please note that if you move to CF v227 or later, which recommends Diego v0.1445.0 or later, the manifest-generation script has changed to take its stub arguments via flags instead of as these positional arguments, and some of the stubs have changed slightly.
We also realize this is currently an obscure and potentially error-prone process, and the Diego team does have a couple stories queued up to do soon to provide more information about how to set up Diego on AWS:
- We plan in https://www.pivotaltracker.com/story/show/100909610 to parametrize, document, and publish the tools and additional templates we use to provision the AWS environments we use for CI and for our developers' experiments and investigations, all the way from an empty account to a VPC with BOSH, CF, and Diego.
- We plan in https://www.pivotaltracker.com/story/show/100909610 to provide more manual instructions to set up a Diego environment compatible with the 'minimal-aws' CF deployment manifest and infrastructure settings, including provisioning any additional infrastructure such as subnets and translating their information into the stubs for the diego-release manifest-generation script.
We'll also be eager to adopt and to integrate with the tooling the CF Infrastructure and CF Release Integration teams will produce at some point to automate environment bootstrapping and CF manifest generation as much as possible.
Please let me and the rest of the team know here if you need further assistance or clarification.
Thanks again, Eric, CF Runtime Diego PM
*****
Example iaas-settings.yml file, with PLACEHOLDER entries for your environment's info:
iaas_settings:
  compilation_cloud_properties:
    availability_zone: us-east-1a
    instance_type: c3.large
  resource_pool_cloud_properties:
  - cloud_properties:
      availability_zone: us-east-1a
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z1
  - cloud_properties:
      availability_zone: us-east-1b
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z2
  - cloud_properties:
      availability_zone: us-east-1c
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: brain_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: brain_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: brain_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: cc_bridge_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: cc_bridge_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: cc_bridge_z3
  - cloud_properties:
      availability_zone: us-east-1a
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z1
  - cloud_properties:
      availability_zone: us-east-1b
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z2
  - cloud_properties:
      availability_zone: us-east-1c
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.large
    name: colocated_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.large
    name: colocated_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.large
    name: colocated_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.large
    name: database_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.large
    name: database_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.large
    name: database_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: route_emitter_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: route_emitter_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: route_emitter_z3
  stemcell:
    name: bosh-aws-xen-hvm-ubuntu-trusty-go_agent
    version: latest
  subnet_configs:
  - name: diego1
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-A
      dns:
      - 10.10.0.2
      gateway: 10.10.5.1
      range: 10.10.5.0/24
      reserved:
      - 10.10.5.2 - 10.10.5.9
      static:
      - 10.10.5.10 - 10.10.5.63
  - name: diego2
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-B
      dns:
      - 10.10.0.2
      gateway: 10.10.6.1
      range: 10.10.6.0/24
      reserved:
      - 10.10.6.2 - 10.10.6.9
      static:
      - 10.10.6.10 - 10.10.6.63
  - name: diego3
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-C
      dns:
      - 10.10.0.2
      gateway: 10.10.7.1
      range: 10.10.7.0/24
      reserved:
      - 10.10.7.2 - 10.10.7.9
      static:
      - 10.10.7.10 - 10.10.7.63
On Fri, Jan 22, 2016 at 4:28 AM, Kinjal Doshi < kindoshi(a)gmail.com> wrote:
Hi,
After deploying CF version 226 on AWS using microbosh, I am trying to understand how to deploy Diego now to work with this version of CF but have not been able to figure out much yet. I was able to find steps for deploying Diego on BOSH-Lite at https://github.com/cloudfoundry-incubator/diego-release#deploying-diego-to-bosh-lite but not for BOSH.
Would appreciate some pointers in this direction.
Thanks in advance, Kinjal
Re: How does an application know it's running in DIEGO?
Thanks for all the replies. I think I've got what I need.
Jack
On Mon, Feb 1, 2016 at 8:05 PM, Matt Cholick <cholick(a)gmail.com> wrote: Your mileage may vary, but if you look at the full process's environment there are differences. In our case, we'll see things like the ssh port in the Diego environment but not the DEA, as well as INSTANCE_INDEX. On the flip side, the deprecated VCAP_APP_PORT is present when we deploy apps to DEAs but not Diego.
-Matt
On Mon, Feb 1, 2016 at 9:05 AM, Dieu Cao <dcao(a)pivotal.io> wrote:
Hi Jack,
There aren't any explicit environment variables that we expose to indicate whether an app is running on DEAs or on Diego. My best suggestion here would be to set an environment variable explicitly and trigger your agent's behavior based on that.
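For example, one concrete way to follow Dieu's suggestion is to set a marker variable in the app manifest used for the Diego deployment of the app. The variable name below is invented; any name your agent checks for at startup will do.

---
applications:
- name: my-app
  env:
    APP_BACKEND: diego   # hypothetical marker variable the agent can check at startup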
-Dieu
On Mon, Feb 1, 2016 at 8:48 AM, Kris Hicks <khicks(a)pivotal.io> wrote:
You can use the CF CLI with the Diego-Enabler plugin to query whether an application is running on Diego or not: https://github.com/cloudfoundry-incubator/Diego-Enabler
e.g. `cf has-diego-enabled App_Name`
On Mon, Feb 1, 2016 at 7:25 AM, Jack Cai <greensight(a)gmail.com> wrote:
Hi,
Is there a way for the application to know whether it's running in DEA or DIEGO? Looking at [1], there is no environment variable that can tell this.
Jack
[1] https://docs.cloudfoundry.org/devguide/deploy-apps/environment-variable.html
Re: Need help for diego deployment
Hi Amit,
This does not seem to be a port issue on 6968. I tried the same deployment after modifying the security groups (both bosh and cf) to allow all protocols on all ports. Even with this change, the deployment fails while compiling packages.
It would be great if you could provide some pointers to get this corrected. One thing I noticed is that ha_proxy is set to null in the generated deployment manifest.
Thanks, Kinjal
On Tue, Feb 2, 2016 at 12:35 AM, Kinjal Doshi <kindoshi(a)gmail.com> wrote: Hi Amit,
I checked the ports on all the security groups and found that 6868 is enabled on the inbound for 0.0.0.0/0 in all the groups.
I am sending you the bosh logs to your personal email address.
Would be great if you could please take a look.
Thanks, Kinjal
On Sat, Jan 30, 2016 at 4:00 AM, Amit Gupta <agupta(a)pivotal.io> wrote:
Hey Kinjal,
Happy to help!
Looks like your director is failing to connect to your compilation VMs. In your manifest you have a network called "cf1" with an associated subnet ID and security groups. I believe it's specifically trying to reach those VMs on port 6868. Can you look at the security group rules, including the security groups applied to the micro bosh VM, and see why there might be problems communicating?
Best, Amit
On Fri, Jan 29, 2016 at 12:49 PM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:
Hi Amit,
Really appreciate all the help I receive on this forum. Hats off to you all.
Here is the deployment log output: https://gist.github.com/kinjaldoshi/0925fdf6022b079ca2b5
Thanks, Kinjal
On Sat, Jan 30, 2016 at 2:03 AM, Amit Gupta <agupta(a)pivotal.io> wrote:
Since you started with the "minimal-aws" flow which happens to document using micro bosh, you should be fine to continue with micro bosh, instead of the newer bosh-init workflow. You may run into some discrepancies in the downstream documentation depending on whether it assumed a bosh-init workflow vs a micro bosh workflow, but we can guide you through those should you hit any problems.
On Fri, Jan 29, 2016 at 12:24 PM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:
Hi Amit,
Thanks a lot for the quick response.
I am currently sanitizing the output and will send it soon. In the meantime, I wanted to confirm whether I have created microbosh using the correct process. I have followed the instructions at https://bosh.io/docs/deploy-microbosh-to-aws.html. However, I see that there are other instructions for creating microbosh as well: https://docs.cloudfoundry.org/deploying/aws/setup_aws.html and http://bosh.io/docs/init-aws.html
I am guessing I have used the wrong procedure, is that correct?
Thanks in advance, Kinjal
On Sat, Jan 30, 2016 at 1:27 AM, Amit Gupta <agupta(a)pivotal.io> wrote:
Hi Kinjal,
The task logs have sensitive credentials in them. "bosh tasks 255 --debug" will give that output, and will probably also include the full manifest in the output. You may wish to sanitize the output before sharing it or send me the output privately (agupta(a)pivotal.io) if you're concerned about leaking some info.
Best, Amit
On Fri, Jan 29, 2016 at 11:44 AM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:
Hi
I have resolved the errors in generating the deployment manifest. On executing bosh deploy, the below error is encountered while compiling packages:
Started compiling packages
Started compiling packages > rtr/2d7de4f6fc25938c21c5be87174f95583feb14b5
Started compiling packages > syslog_drain_binder/3c9c0b02c11c8dba10d059fe07e6d2ee641ec053
Started compiling packages > routing-api/b4a3e7034c4a925aa42d45419b46ad6b128d92b1
Started compiling packages > collector/158398837665181c70bd786b46e6f4d772523017
Failed compiling packages > routing-api/b4a3e7034c4a925aa42d45419b46ad6b128d92b1: Timed out pinging to dc15da09-8086-4231-a5b4-15efafa27eaf after 600 seconds (00:11:03)
Failed compiling packages > syslog_drain_binder/3c9c0b02c11c8dba10d059fe07e6d2ee641ec053: Timed out pinging to d150aff4-095c-4d48-8c6d-f182fc3738c7 after 600 seconds (00:11:03)
Failed compiling packages > collector/158398837665181c70bd786b46e6f4d772523017: Timed out pinging to 824b2de9-bb39-4b24-8491-4e26f79adb50 after 600 seconds (00:11:03)
Failed compiling packages > rtr/2d7de4f6fc25938c21c5be87174f95583feb14b5: Timed out pinging to 4d636c66-690a-43e7-8481-71258732d066 after 600 seconds (00:11:35)
Error 450002: Timed out pinging to dc15da09-8086-4231-a5b4-15efafa27eaf after 600 seconds
Task 255 error
Would be great if some pointers can be provided to proceed further. Please let me know if the logs for this bosh task are required.
Thanks in advance, Kinjal
On Fri, Jan 29, 2016 at 10:45 PM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:
Hi Amit,
Please ignore the unresolved nodes error in the above email. I have been able to correct it; I'm running into some more problems and am checking them right now.
Please do let me know about my question on the dbs, though.
Thanks in advance, Kinjal
On Fri, Jan 29, 2016 at 1:29 PM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:
Hi Amit,
Thanks a lot for your response on this.
I was trying to use the manifest generation scripts to redeploy cf but I ran into errors during spiff merge as below:
ubuntu(a)ip-172-31-45-52:~/cf-deployment/cf-release$ scripts/generate_deployment_manifest aws ../cf-stub.yml > cf-deployment.yml
2016/01/29 07:49:05 error generating manifest: unresolved nodes:
  (( static_ips(1) )) in /home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml jobs.[0].networks.[0].static_ips
  (( static_ips(5, 6, 15, 16, 17, 18, 19, 20) )) in /home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml jobs.[1].networks.[0].static_ips
  (( static_ips(27, 28, 29) )) in /home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml jobs.[5].networks.[0].static_ips
  (( static_ips(10, 25) )) in /home/ubuntu/cf-deployment/cf-release/templates/cf-infrastructure-aws.yml jobs.[6].networks.[0].static_ips
The public gist pointing to the cf-stub created for this attempt is at: https://gist.github.com/kinjaldoshi/b0dc004876d2a4615c65
I am not very sure, but I think this has something to do with the way I configured the subnets. Could you please guide me on the corrections required here? I know how (( static_ips(27, 28, 29) )) works, but I am not sure why it fails to resolve to the required values.
Another question I have is on the editing instructions at: http://docs.cloudfoundry.org/deploying/aws/cf-stub.html#editing
For the ccdb and uaadb, as per the comments, is it required for me to create a service and host these DBs as mentioned in the 'Editing Instructions' column? In that case, where can I find the DDL to create the databases and tables?
Thanks a lot in advance, Kinjal
On Fri, Jan 29, 2016 at 10:31 AM, Amit Gupta <agupta(a)pivotal.io> wrote:
Hi Kinjal,
The minimal-aws manifest would be quite difficult to augment to get it to work with Diego. You would need to add static IPs to your private network, add a resource pool or increase the size of an existing one, add the consul job, colocate the consul agent with some of the CF jobs, and add a few configuration properties that aren't in the minimal one (e.g. loggregator.tls.ca). It's probably simpler to use the manifest generation scripts to redeploy cf (before deploying diego).
Use:
* http://docs.cloudfoundry.org/deploying/common/create_a_manifest.html
* http://docs.cloudfoundry.org/deploying/common/deploy.html
Let us know if you run into difficulties. These documents ask you to define stubs, which require you to input data from your AWS IaaS setup; I'm not sure they play entirely nicely with the AWS setup described in the minimal-aws doc.
Best, Amit
On Wed, Jan 27, 2016 at 3:17 AM, Kinjal Doshi <kindoshi(a)gmail.com> wrote: Hi Eric,
Thanks a lot for the detailed response to my query.
I used the minimal-aws.yml configuration ( https://github.com/cloudfoundry/cf-release/tree/v226/example_manifests ) to create my deployment manifest which does not have the consul VMs set up. I am guessing that the first step would be to change this.
In this case should I use the script generators to generate the CF deployment manifest and re-deploy cloud foundry, or are there any other techniques/shorter path for doing this?
Thanks in advance, Kinjal
On Mon, Jan 25, 2016 at 6:57 AM, Eric Malm <emalm(a)pivotal.io> wrote:
Hi, Kinjal,
The stub I included in-line in my previous email may not have come through so well for all mail clients, so I've also included it in a public gist at https://gist.github.com/ematpl/149ac1bac691caae0722.
Thanks, Eric
On Fri, Jan 22, 2016 at 6:32 PM, Eric Malm <emalm(a)pivotal.io> wrote:
Hi, Kinjal,
Thanks for asking: this is an area in which the Diego team is looking forward to improving documentation and tooling in the near term. For the time being, here are some more manual instructions:
Assuming you have AWS infrastructure already provisioned for your CF deployment (VPC, subnets, NAT box, ELBs, etc.), you should need only to add one or more additional subnets for the VMs in the Diego deployment, and optionally an ELB for the SSH proxy routing tier (you can also use the HAproxy in the CF deployment to do the same load-balancing, but you'll need to give it an Elastic IP). If you're brave, and can coordinate the reserved sections in the CF and Diego deployment manifests' networking configs correctly, you could even share the same subnet(s) between the two deployments.
Once you have those subnets provisioned, you'll need to translate their properties into the iaas-settings.yml stub that you supply to the generate-deployment-manifest script in diego-release. Since you're deploying CF v226, we recommend you use Diego final version v0.1442.0 and the associated manifest-generation script in that version of the release. The other stubs should be independent of that iaas-settings one, and should be pretty much the same as the ones for the BOSH-Lite deployment. You'll likely want to provide different secrets and credentials in the property-overrides stub, though, and perhaps different instance counts depending on the availability needs of your deployment. I've included at the end of this email a representative iaas-settings.yml file from one of the Diego team's environments, with any specific identifiers for AWS entities replaced by PLACEHOLDER values.
As a side note, if you don't already have the consul VMs deployed in your CF deployment, you'll need to enable them so that the Diego components can use consul to communicate. We recommend you operate an odd number of consul VMs: 1 if you don't need high availability, and 3 or 5 if you do (as in a production environment). You can enable them by changing the instance count on the consul_z1 and consul_z2 jobs in the CF manifest.
After you've customized those stubs and adjusted your CF manifest if necessary, you can generate the Diego manifest by running something like the following from your diego-release directory:
$ ./scripts/generate-deployment-manifest \
    PATH/TO/MY/CUSTOMIZED-PROPERTY-OVERRIDES.YML \
    PATH/TO/MY/CUSTOMIZED-INSTANCE-COUNT-OVERRIDES.YML \
    manifest-generation/bosh-lite-stubs/persistent-disk-overrides.yml \
    PATH/TO/MY/CUSTOMIZED-IAAS-SETTINGS.YML \
    manifest-generation/bosh-lite-stubs/additional-jobs.yml \
    manifest-generation/bosh-lite-stubs/release-versions.yml \
    PATH/TO/MY/MANIFEST/DIRECTORY \
    > PATH/TO/MY/MANIFEST/DIRECTORY/diego.yml
'PATH/TO/MY/MANIFEST/DIRECTORY' should contain your CF manifest in a file named 'cf.yml'. Also, please note that if you move to CF v227 or later, which recommends Diego v0.1445.0 or later, the manifest-generation script has changed to take its stub arguments via flags instead of as these positional arguments, and some of the stubs have changed slightly.
We also realize this is currently an obscure and potentially error-prone process, and the Diego team does have a couple stories queued up to do soon to provide more information about how to set up Diego on AWS:
- We plan in https://www.pivotaltracker.com/story/show/100909610 to parametrize, document, and publish the tools and additional templates we use to provision the AWS environments we use for CI and for our developers' experiments and investigations, all the way from an empty account to a VPC with BOSH, CF, and Diego.
- We plan in https://www.pivotaltracker.com/story/show/100909610 to provide more manual instructions to set up a Diego environment compatible with the 'minimal-aws' CF deployment manifest and infrastructure settings, including provisioning any additional infrastructure such as subnets and translating their information into the stubs for the diego-release manifest-generation script.
We'll also be eager to adopt and to integrate with the tooling the CF Infrastructure and CF Release Integration teams will produce at some point to automate environment bootstrapping and CF manifest generation as much as possible.
Please let me and the rest of the team know here if you need further assistance or clarification.
Thanks again, Eric, CF Runtime Diego PM
*****
Example iaas-settings.yml file, with PLACEHOLDER entries for your environment's info:
iaas_settings:
  compilation_cloud_properties:
    availability_zone: us-east-1a
    instance_type: c3.large
  resource_pool_cloud_properties:
  - cloud_properties:
      availability_zone: us-east-1a
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z1
  - cloud_properties:
      availability_zone: us-east-1b
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z2
  - cloud_properties:
      availability_zone: us-east-1c
      elbs:
      - PLACEHOLDER-SSHProxyELB-ID
      instance_type: m3.medium
    name: access_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: brain_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: brain_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: brain_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: cc_bridge_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: cc_bridge_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: cc_bridge_z3
  - cloud_properties:
      availability_zone: us-east-1a
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z1
  - cloud_properties:
      availability_zone: us-east-1b
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z2
  - cloud_properties:
      availability_zone: us-east-1c
      ephemeral_disk:
        iops: 1200
        size: 50000
        type: io1
      instance_type: m3.large
    name: cell_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.large
    name: colocated_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.large
    name: colocated_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.large
    name: colocated_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.large
    name: database_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.large
    name: database_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.large
    name: database_z3
  - cloud_properties:
      availability_zone: us-east-1a
      instance_type: m3.medium
    name: route_emitter_z1
  - cloud_properties:
      availability_zone: us-east-1b
      instance_type: m3.medium
    name: route_emitter_z2
  - cloud_properties:
      availability_zone: us-east-1c
      instance_type: m3.medium
    name: route_emitter_z3
  stemcell:
    name: bosh-aws-xen-hvm-ubuntu-trusty-go_agent
    version: latest
  subnet_configs:
  - name: diego1
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-A
      dns:
      - 10.10.0.2
      gateway: 10.10.5.1
      range: 10.10.5.0/24
      reserved:
      - 10.10.5.2 - 10.10.5.9
      static:
      - 10.10.5.10 - 10.10.5.63
  - name: diego2
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-B
      dns:
      - 10.10.0.2
      gateway: 10.10.6.1
      range: 10.10.6.0/24
      reserved:
      - 10.10.6.2 - 10.10.6.9
      static:
      - 10.10.6.10 - 10.10.6.63
  - name: diego3
    subnets:
    - cloud_properties:
        security_groups:
        - PLACEHOLDER-InternalSecurityGroup-ID
        subnet: PLACEHOLDER-subnet-id-C
      dns:
      - 10.10.0.2
      gateway: 10.10.7.1
      range: 10.10.7.0/24
      reserved:
      - 10.10.7.2 - 10.10.7.9
      static:
      - 10.10.7.10 - 10.10.7.63
On Fri, Jan 22, 2016 at 4:28 AM, Kinjal Doshi < kindoshi(a)gmail.com> wrote:
Hi,
After deploying CF version 226 on AWS using microbosh, I am trying to understand how to deploy Diego now to work with this version of CF but have not been able to figure out much yet. I was able to find steps for deploying Diego on BOSH-Lite at https://github.com/cloudfoundry-incubator/diego-release#deploying-diego-to-bosh-lite but not for BOSH.
Would appreciate some pointers in this direction.
Thanks in advance, Kinjal
Re: Issue with crashing Windows apps on Diego
Just to clarify as well why I think this is so important: a majority of apps on our internal platforms require authentication and will return a 401 on the root page, making them unusable on Diego for Windows without completely disabling the healthchecks. These same apps work just fine on Iron Foundry because it was only checking the port. I'd love to move forward with Garden Windows support when we land Diego, but for now I don't see how we can.

Aaron
Re: Issue with crashing Windows apps on Diego
I agree with your root argument that the port check doesn't really address application health and that it's easy to push a non-working app and have the healthcheck still pass. My argument is that this is exactly how the healthchecks work for Linux-based apps, and it seems clear that is the intent of the "port" healthcheck. Any buildpack- or Docker-based app that I push on cflinuxfs2 will pass as soon as the web server starts accepting connections, even if the actual app isn't working (yet, or at all).

I don't disagree that improvement can be made here, but I do strongly believe that 1) the platform should be consistent across Linux and Windows apps, and what is described as a "port" check should just be checking the port, and 2) any HTTP check should be configurable (either opt-in or opt-out) in cases where the root of an app isn't expected to return a 200, of which there are many valid cases.

Your proposed work-around is, in my opinion, even worse, in that I have to disable any container checking at all if an app falls outside of what you consider typical. I think we agree that the best solution for most common apps is to use an HTTP check, but for that to be functional I think the platform would need to define a new "http" healthcheck type and allow the user to configure a timeout and expected status code (with defaults of 1 second and 200).

Aaron
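To make that request concrete, here is a purely hypothetical sketch of the kind of opt-in configuration Aaron is describing. Neither the "http" healthcheck type nor these attribute names existed in Cloud Foundry at the time of this thread; they are invented for illustration only.

applications:
- name: my-windows-app
  health-check-type: http             # hypothetical type; only port-style checks existed then
  health-check-endpoint: /healthz     # hypothetical attribute and path
  health-check-timeout-seconds: 1     # proposed default from the thread
  health-check-expected-status: 200   # proposed default from the thread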
Re: PHP Buildpack: Sunsetting HHVM and Supporting PHP7
I updated the RFC pointed at in Danny's email, but figured I'd reply here as well, in the spirit of overcommunication.

I'll state my opinion as the Buildpacks PMC Lead: I think we should sunset HHVM support and focus on PHP7. Happy to discuss, but since nobody else has commented (either on this email thread or on the RFC GitHub issue), I'm assuming that this is the right path forward.

If you feel we should continue experimental support for HHVM, I'll repeat Danny's request to comment on the RFC by the end of this week: https://github.com/cloudfoundry/php-buildpack/issues/127

If, at the end of this week, we haven't heard compelling reasons to continue experimental support for HHVM, I'm going to ask Danny to proceed with his plans.

-m

On Mon, Feb 1, 2016 at 2:21 PM, Danny Rosen <drosen(a)pivotal.io> wrote: Hi there from the Cloud Foundry Buildpacks Team,
Much like Heroku <https://devcenter.heroku.com/articles/php-support#php-runtimes>, the Cloud Foundry PHP buildpack has had experimental support for HHVM <http://hhvm.com/>.
We'd like to understand if the team responsible for the PHP buildpack should continue to support HHVM and if so, what the benefits of HHVM are over PHP7.
Our preference is to move forward with PHP7 <https://github.com/cloudfoundry/php-buildpack/issues/125> and sunset support for HHVM. If you have a compelling reason for this buildpack to support HHVM please comment on the issue. <https://github.com/cloudfoundry/php-buildpack/issues/127>
Thanks for your time
--- Danny Rosen CF Buildpacks Product Manager
CVE-2016-0732 Privilege Escalation
Chip Childers <cchilders@...>
CVE-2016-0732 Privilege Escalation

Severity: Critical
Vendor: Cloud Foundry Foundation
Versions Affected:
- Cloud Foundry v208 through v229
- UAA v2.0.0 - v2.7.3 and v3.0.0
- UAA-Release v2 through v4

Description:
A vulnerability has been identified with the identity zones feature of UAA, allowing elevation of privileges. Users with the appropriate permissions in one zone can perform unauthorized operations on a different zone. Only instances of UAA configured with multiple identity zones are vulnerable.

Mitigation:
OSS users are strongly encouraged to follow one of the mitigations below:
- Upgrade to Cloud Foundry v230 [1] or later
- For standalone UAA users:
  - Users of UAA v3.0.0 should upgrade to UAA v3.0.1 [3] or later
  - Users of standalone UAA v2.x.x should upgrade to UAA v2.7.4 [2] or v3.0.1 [3]
  - Users of UAA-Release (the UAA BOSH release) should upgrade to UAA-Release v5 [4]

Credit: Discovered by the GE Digital Security Team

References:
[1] https://github.com/cloudfoundry/cf-release/releases/tag/v230
[2] https://github.com/cloudfoundry/uaa/releases/tag/2.7.4
[3] https://github.com/cloudfoundry/uaa/releases/tag/3.0.1
[4] https://github.com/cloudfoundry/uaa-release/releases/tag/v5

History:
2016-Feb-2: Initial vulnerability report published
- CC configuration in deployment manifest
Hi,

I have a question regarding the configuration of the cloud controller.

It is described at http://docs.cloudfoundry.org/deploying/aws/cf-stub.html#editing that DROPLET_DIRECTORY_KEY should be replaced with the directory/bucket used to store droplets, and BUILDPACK_DIRECTORY_KEY should be replaced with the directory/bucket used to store buildpacks.

Please help me understand whether these two parameters need specific values, or whether they can be any directory names that are created later.

Thanks, Kinjal
Re: Issue with crashing Windows apps on Diego
On Tue, Feb 2, 2016 at 9:31 AM, Matthew Horan <mhoran(a)pivotal.io> wrote: On Mon, Feb 1, 2016 at 7:06 PM, aaron_huber <aaron.m.huber(a)intel.com> wrote:
This has been nagging at me all weekend and I think I finally figured out why. So far all healthchecks in Cloud Foundry have been either on the PID (process didn't crash) or the port (accepting TCP connections). This is the first time I've seen one that is actually doing an HTTP check that must pass for the "container" (such as it is on Windows) to be considered healthy. Looking at the Linux healthcheck code, it looks like there is a "uri" healthcheck:
https://github.com/cloudfoundry-incubator/healthcheck/blob/master/cmd/healthcheck/main.go#L49-L53
But as far as I can tell it's unused because only port is ever called:
https://github.com/cloudfoundry-incubator/nsync/blob/master/recipebuilder/recipe_builder.go#L97-L98
Hey Aaron -
You're right; it looks like the port check is only ever used. I don't have the history as to why we (the CF .NET team) implemented an HTTP check instead of a port check, but that's how it is.
In talking with a former team member, I came across the story [1] where we made this change. The WebAppServer will listen on the port immediately upon starting, even if the app has not successfully loaded. This was undesirable for the common case -- but obviously causes issues for slow apps, or apps which require authentication. As mentioned in the story, the developers pointed out that this behavior should be configurable -- but this was never implemented.

Hopefully we can see some progress on the proposed healthcheck changes, which would better address your issue. In the meantime, I'm not sure of the best course of action. It's quite easy to push an unlaunchable app to Windows, and there will be little to no debug information available to help the developer figure out why their app is inaccessible. The current implementation has its drawbacks, but can be worked around by "disabling" the health check.

[1] https://www.pivotaltracker.com/story/show/96080778
Re: Issue with crashing Windows apps on Diego
On Mon, Feb 1, 2016 at 7:06 PM, aaron_huber <aaron.m.huber(a)intel.com> wrote: This has been nagging at me all weekend and I think I finally figured out why. So far all healthchecks in Cloud Foundry have been either on the PID (process didn't crash) or the port (accepting TCP connections). This is the first time I've seen one that is actually doing an HTTP check that must pass for the "container" (such as it is on Windows) to be considered healthy. Looking at the Linux healthcheck code, it looks like there is a "uri" healthcheck:
https://github.com/cloudfoundry-incubator/healthcheck/blob/master/cmd/healthcheck/main.go#L49-L53
But as far as I can tell it's unused because only port is ever called:
https://github.com/cloudfoundry-incubator/nsync/blob/master/recipebuilder/recipe_builder.go#L97-L98
Hey Aaron -

You're right; it looks like the port check is only ever used. I don't have the history as to why we (the CF .NET team) implemented an HTTP check instead of a port check, but that's how it is.

In addition, all the documentation and even the help text on the CLI describe this as a "port" healthcheck. It's bad enough that doing the HTTP healthcheck means it's now inconsistent between Linux and Windows on Diego, but the following are serious concerns for me:
1) Especially on .NET it can take a while for apps to start up and it's likely we could get into a loop of starting and then killing containers because we don't give them enough time to start up.
2) Even if all is working well, we've now hard coded that any app landed on garden-windows now has to have a faster than 1 second HTTP response time or it just can't land. What if my developer has an app that is expected to be slow due to back-end dependencies or processing logic?
There is a proposal [1] in place to address your concerns. As far as I know, work towards implementing this proposal is stalled, but I've looped in Eric for more details. [1] https://github.com/cloudfoundry-incubator/diego-dev-notes/issues/31
Re: External user-provided-service?
Whoops, never mind, I really did misread the docs. It was as easy as: `cf cups mongo -p '{"address": "http://localhost:3526", "username": "foo", "password": "bar"}'`
External user-provided-service?
Let's say I'm using a DBaaS provider or have set up a database in a non-cloudfoundry cluster; how do I add it as a user-provided service? Reading the documentation, it isn't clear [to me at least]: https://docs.cloudfoundry.org/devguide/services/user-provided.html

There are plenty of advantages to facilitating this use-case, and Cloud Foundry should still be able to do health checks and apply throughput quotas. Disadvantages are obvious: it will have to fall back to alerting (rather than self-healing) when something goes wrong, it would arguably mean more management (user + maintenance level), and security credential sharing would have to be managed somehow.

Anyway, I'm fairly certain someone has written a feature for this already, so how do I do this? Thanks for all suggestions.
about flow control for app
Is there any way to limit requests for a specific app? For example:
- Limit concurrent requests
- Limit requests per second/minute/hour
- Limit bandwidth
I think gorouter is the best location to do flow control for an app, but it seems gorouter does not support this yet.
Re: AUFS bug in Linux kernel
Simon Johansson <simon@...>
Re: /v2/service_plan_visibilities returns empty when logged in as org manager role in CF
I have enabled the plans for all orgs, and I logged into CF as a space developer in the same org where the service broker is registered. In the marketplace I am able to view both plans, but when I call update service it reports OK; when I look at the service instance, however, the plan is never updated and the call never hits the service broker code. In the same org, if I log in as a Cloud Foundry admin, the plan does get updated when I make the same update service call, and it reaches the service broker code.
Thanks, Navyatha