Adding security groups in resource_pools instead of networks
Marco Voelz
Dear Boshers,
We currently encounter a problem which has been discussed briefly on the list before [1]: adding security groups in resource pools should be possible. On a side note: we are dealing with OpenStack, so references below might be OpenStack-specific.

Here is a use case: having machines in the same network, but with different incoming/outgoing rules. Example: only the runners/DEAs of a CF deployment should be able to access some service VMs. Currently, this means that we have to have the same network configuration twice in our manifests, the only difference being the set of security groups.

I'd like to propose a change to allow specifying them at the resource_pool level, and to discuss some implementation specifics and impacts before writing code, so we are all on the same page. If we are introducing anything new, my assumption is that the current behavior of specifying security groups in networks should not break or change. If you have a manifest specifying security groups, you probably expect it to keep working when a new feature is added.

Analysis of current state
* Global default security groups can be specified when setting up your director.
* Network security groups override those when specified for a deployment.
* Security groups are not a first-class concept. They are transported through the entities known to BOSH within the cloud_properties of a network. Therefore, only methods dealing with the network entities obtained from the manifest or director DB actually know about them, and only implicitly.

Concept proposal
* Introduce the ability to specify security groups for resource pools.
* Keep current behavior:
** If there are global default security groups only, use them.
** If there are network security groups, use them instead of anything else. Don't care about global groups or the new resource pool groups.
** If there are resource pool groups AND no network security groups, use them. Don't care about global groups.
** Probably remove security groups on networks at some point in time, with a heads-up to everyone currently using it. I have no idea if this is feasible.

Implementation proposal
* create_vm and configure_networks of the CPI seem to be the relevant calls setting up the security groups: the former for creating a VM, the latter for updating an existing one. Any changes done here would be CPI-specific!
* Adapting create_vm could be done straightforwardly: it is already using the network_configurator to merge the security groups [2], and has access to the resource_pool for the VM as well. We could simply add logic here to take security groups within a resource pool into account.
* Adapting configure_networks is more tricky: it gets a network spec and compares the security groups in there with the ones currently present on a VM [3]. It has no idea about the resource pool of that VM. The CPI is called by the director's network_updater [4], which gets initialized for a specific instance and is called by the instance_updater [5].
* The instance entity combines all there is to know in terms of configuration for a specific VM (e.g. network settings [6]), so this could be the point to include the new feature.

So, what could be changed now? (A sketch follows at the end of this message.)
* Introduce a new method on bosh-director/lib/bosh/director/deployment_plan/instance.rb called security_groups, providing information about security groups. If there are security groups on networks, return them; otherwise return security groups defined on resource pools, if there are any. Just like the desired behavior we assumed above.
* Adapt create_vm and configure_networks to accept security_groups as an additional argument. Instead of having the CPIs extract security groups from the network's cloud_properties, take them from the argument and keep the current logic of the methods.

What are your thoughts on this? I would love to have the change isolated from the actual CPI coding, so we don't need to adapt all of them at the same time. However, this seems like a case where an API change might be in order, so I'm not sure how to do it. Given any form of agreement on how to proceed, we could provide a PR as a further means for discussion. However, this change will impact the API, so I wanted to get your feedback on this before actually implementing anything.

Warm regards
Marco

[1] https://groups.google.com/a/cloudfoundry.org/forum/#!topic/bosh-users/LJ2Kym6QCak
[2] https://github.com/cloudfoundry-incubator/bosh-openstack-cpi-release/blob/master/src/bosh_openstack_cpi/lib/cloud/openstack/cloud.rb#L226
[3] https://github.com/cloudfoundry-incubator/bosh-openstack-cpi-release/blob/master/src/bosh_openstack_cpi/lib/cloud/openstack/cloud.rb#L397
[4] https://github.com/cloudfoundry/bosh/blob/master/bosh-director/lib/bosh/director/instance_updater/network_updater.rb#L28
[5] https://github.com/cloudfoundry/bosh/blob/master/bosh-director/lib/bosh/director/instance_updater.rb#L283-L286
[6] https://github.com/cloudfoundry/bosh/blob/master/bosh-director/lib/bosh/director/deployment_plan/instance.rb#L186-L222
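To make the proposed precedence concrete, here is a minimal sketch of what the new instance-level method could look like. This is illustrative only: the way the resource pool is reached and the cloud_properties keys are read here is hypothetical, and a real patch would have to match the director's actual internals.

    # Sketch for bosh-director/lib/bosh/director/deployment_plan/instance.rb.
    # network_settings is the existing method referenced in [6]; @job.resource_pool
    # stands in for however the instance reaches its resource pool.
    def security_groups
      # Network-level groups win, preserving today's behavior.
      network_groups = network_settings.values.flat_map do |settings|
        (settings['cloud_properties'] || {})['security_groups'] || []
      end.uniq
      return network_groups unless network_groups.empty?

      # Otherwise fall back to the proposed resource_pool-level groups.
      # If these are also empty, the CPI keeps applying the global default
      # groups configured on the director, as it does today.
      ((@job.resource_pool.cloud_properties || {})['security_groups'] || []).uniq
    end

On the CPI side, the adapted calls would then take these groups as an explicit argument instead of digging them out of the network spec, e.g. create_vm(agent_id, stemcell_id, resource_pool, networks, disk_locality, env, security_groups) and configure_networks(vm_id, networks, security_groups) — only a sketch of the shape of the API change under discussion, not a settled signature.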
Re: Started updating job api_z1 > api_z1/0. Failed: `api_z1/0' is not running after update
Parthiban Annadurai <senjiparthi@...>
Great, Ramesh!
On 8 September 2015 at 10:48, Ramesh Sambandan <rsamban(a)gmail.com> wrote:
Relation between Network property in Resource pool property and Network property in Actual Job block
Ronak Banka
Hello All,
I have a resource pool, let's say small_z1:

  - name: small_z1
    network: cf1
    stemcell: stemcell-xyz
    cloud_properties:
      instance_type: m1.small
      availability_zone: zone1

and a job, router, having two networks assigned to it:

  - name: router
    instances: 1
    networks:
    - name: router_internal
      default: [dns, gateway]
      static_ips:
      - xy.xy.xy.xy
    - name: router_external
      static_ips:
      - yz.yz.yz.yz
      gateway: yy.yy.yy.yy
    networks:
      apps: router_internal
      management: router_internal
    resource_pool: small_z1

With these properties there are no issues anywhere. What is the network property in the resource pool responsible for, if the created job networks are not linked to the one in the pool?

Regards,
Ronak

--
View this message in context: http://cf-bosh.70367.x6.nabble.com/Relation-between-Network-property-in-Resource-pool-property-and-Network-property-in-Actual-Job-block-tp649.html
Sent from the CF BOSH mailing list archive at Nabble.com.
Re: Started updating job api_z1 > api_z1/0. Failed: `api_z1/0' is not running after update
Ramesh Sambandan
I finally figured it out. It is my bad: I missed the properties.uaa.jwt.signing_key/verification_key in my manifest. It is something I thought I had taken care of, but I must have overwritten it in subsequent edits.
The clue came from looking at the log files of router_api on the api_z1 VM (the /var/vcap/sys/log/router* folder). And I successfully deployed Cloud Foundry on vSphere. :):)
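For anyone hitting the same failure: the manifest section that was missing looks roughly like this (a sketch only; the PEM bodies below are placeholders, not real keys):

  properties:
    uaa:
      jwt:
        signing_key: |
          -----BEGIN RSA PRIVATE KEY-----
          ...PEM-encoded RSA private key...
          -----END RSA PRIVATE KEY-----
        verification_key: |
          -----BEGIN PUBLIC KEY-----
          ...PEM-encoded RSA public key...
          -----END PUBLIC KEY-----

The important part is that both values are full PEM blocks (YAML block literals), not bare base64 strings.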
Re: Started updating job api_z1 > api_z1/0. Failed: `api_z1/0' is not running after update
Parthiban Annadurai <senjiparthi@...>
Try after "bosh cck" and again re-deploy it.. Most of the Times it will
toggle quoted message
Show quoted text
work.. On 8 September 2015 at 08:15, ronak banka <ronakbanka.cse(a)gmail.com> wrote:
You can create a gist of manifest from github and link it here. |
Re: Started updating job api_z1 > api_z1/0. Failed: `api_z1/0' is not running after update
Ronak Banka
You can create a gist of the manifest on GitHub and link it here.
On Sep 8, 2015 10:56, "Ramesh Sambandan" <rsamban(a)gmail.com> wrote:
I am using cf release 215.
Re: Started updating job api_z1 > api_z1/0. Failed: `api_z1/0' is not running after update
Ramesh Sambandan
I am using cf release 215.
I am trying to attach my manifest, but I cannot figure out how to.
Started updating job api_z1 > api_z1/0. Failed: `api_z1/0' is not running after update
Ramesh Sambandan
I am trying to deploy Cloud Foundry on vSphere and am getting the following error:
Started updating job api_z1 > api_z1/0. Failed: `api_z1/0' is not running after update (00:10:31)

Following are the entries in the log files on the api_z1/0 VM that I believe point to the cause:

vcap(a)6bd94f72-b20d-47a5-851d-d86e66b47df7:/var/vcap/sys/log$ tail -f nfs_mounter_ctl.err.log cloud_controller_ng_ctl.err.log

==> nfs_mounter_ctl.err.log <==
[2015-09-07 18:28:05+0000] mount.nfs: trying 192.168.1.102 prog 100003 vers 3 prot TCP port 2049
[2015-09-07 18:28:05+0000] mount.nfs: trying 192.168.1.102 prog 100005 vers 3 prot UDP port 38896
[2015-09-07 18:53:58+0000] stop: Unknown instance:
[2015-09-07 18:53:58+0000] mount.nfs: mount(2): No such file or directory
[2015-09-07 18:53:59+0000] mount.nfs: trying 192.168.1.102 prog 100003 vers 3 prot TCP port 2049
[2015-09-07 18:53:59+0000] mount.nfs: trying 192.168.1.102 prog 100005 vers 3 prot UDP port 38896
[2015-09-07 19:22:09+0000] stop: Unknown instance:
[2015-09-07 19:22:10+0000] mount.nfs: mount(2): No such file or directory
[2015-09-07 19:22:10+0000] mount.nfs: trying 192.168.1.102 prog 100003 vers 3 prot TCP port 2049
[2015-09-07 19:22:10+0000] mount.nfs: trying 192.168.1.102 prog 100005 vers 3 prot UDP port 38896

==> cloud_controller_ng_ctl.err.log <==
[2015-09-07 18:28:05+0000] chown: changing ownership of ‘/var/vcap/nfs/shared’: Operation not permitted
[2015-09-07 18:28:05+0000] chown: changing ownership of ‘/var/vcap/nfs/shared’: Operation not permitted
[2015-09-07 18:53:58+0000] ------------ STARTING cloud_controller_ng_ctl at Mon Sep 7 18:53:58 UTC 2015 --------------
[2015-09-07 18:53:59+0000] chown: changing ownership of ‘/var/vcap/nfs/shared’: Operation not permitted
[2015-09-07 18:53:59+0000] chown: changing ownership of ‘/var/vcap/nfs/shared’: Operation not permitted
[2015-09-07 18:53:59+0000] chown: changing ownership of ‘/var/vcap/nfs/shared’: Operation not permitted
[2015-09-07 19:22:09+0000] ------------ STARTING cloud_controller_ng_ctl at Mon Sep 7 19:22:09 UTC 2015 --------------
[2015-09-07 19:22:10+0000] chown: changing ownership of ‘/var/vcap/nfs/shared’: Operation not permitted
[2015-09-07 19:22:10+0000] chown: changing ownership of ‘/var/vcap/nfs/shared’: Operation not permitted
[2015-09-07 19:22:10+0000] chown: changing ownership of ‘/var/vcap/nfs/shared’: Operation not permitted

Can somebody help?
Error: Public uaa token must be PEM encoded
JOSE FELIX HERNANDEZ BARRIO
Hi,
I'm trying to deploy Cloud Foundry on OpenStack. I'm getting stuck at bosh deploy. The result from bosh vms:

+------------------------------------+---------+---------------+---------------+
| Job/index                          | State   | Resource Pool | IPs           |
+------------------------------------+---------+---------------+---------------+
| api_worker_z1/0                    | running | small_z1      | 10.2.0.105    |
| api_z1/0                           | failing | large_z1      | 10.2.0.103    |
| clock_global/0                     | running | medium_z1     | 10.2.0.104    |
| doppler_z1/0                       | running | medium_z1     | 10.2.0.108    |
| etcd_z1/0                          | running | medium_z1     | 10.2.0.58     |
| ha_proxy_z1/0                      | running | router_z1     | 10.2.0.50     |
|                                    |         |               | 192.168.1.203 |
| hm9000_z1/0                        | running | medium_z1     | 10.2.0.106    |
| loggregator_trafficcontroller_z1/0 | running | small_z1      | 10.2.0.109    |
| nats_z1/0                          | running | medium_z1     | 10.2.0.52     |
| nfs_z1/0                           | running | medium_z1     | 10.2.0.53     |
| postgres_z1/0                      | running | medium_z1     | 10.2.0.54     |
| router_z1/0                        | running | router_z1     | 10.2.0.55     |
| runner_z1/0                        | running | runner_z1     | 10.2.0.107    |
| stats_z1/0                         | running | small_z1      | 10.2.0.101    |
| uaa_z1/0                           | running | medium_z1     | 10.2.0.102    |
+------------------------------------+---------+---------------+---------------+

So I checked the monit summary on api_z1:

Process 'cloud_controller_ng'                        running
Process 'cloud_controller_worker_local_1'            running
Process 'cloud_controller_worker_local_2'            running
Process 'nginx_cc'                                   running
Process 'cloud_controller_migration'                 running
Process 'routing-api'                                not monitored
Process 'metron_agent'                               running
Process 'statsd-injector'                            running
Process 'consul_agent'                               running
File 'nfs_mounter'                                   accessible
System 'system_10391701-0eec-4e27-916b-5b9d95f86cbc' running

Then I checked the log file /var/vcap/sys/log/routing-api/routing-api.log, and it has the message:

{"timestamp":"1441587481.040446758","source":"routing-api","message":"routing-api.failed to check public token","log_level":2,"data":{"error":"Public uaa token must be PEM encoded"}}

What am I doing wrong in my cf-deployment.yml?

My cf-deployment.yml: https://gist.github.com/josefhernandez/0f022c38f25539f9db7b

Best regards
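For reference, "Public uaa token must be PEM encoded" generally means the UAA JWT verification key that routing-api reads from the manifest is missing or is not a PEM-formatted public key. A keypair in the expected format can be generated like this (a sketch; the file names are arbitrary):

  # Generate a 2048-bit RSA signing key (private) for uaa.jwt.signing_key
  openssl genrsa -out jwt_signing_key.pem 2048
  # Derive the matching public key for uaa.jwt.verification_key
  openssl rsa -in jwt_signing_key.pem -pubout -out jwt_verification_key.pem

The verification key must then go into the manifest as a YAML block literal starting with -----BEGIN PUBLIC KEY-----, not as a bare base64 string.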
Re: Deploy CF Diego Release on OpenStack
王天青 <wang.tianqing.cn at gmail.com...>
Thanks
On Sun, Sep 6, 2015 at 5:14 PM, Johannes Hiemer <jvhiemer(a)gmail.com> wrote:
Hi Wang,
--
Best Regards~!
Grissom
Re: Deploy CF Diego Release on OpenStack
Johannes Hiemer
Hi Wang,
I wrote a blog post about this, which you can read here: http://www.evoila.de/cloud-foundry/adding-diego-release-to-cloud-foundry-release-v212/?lang=en

On Sun, Sep 6, 2015 at 7:02 AM, <wang.tianqing.cn(a)gmail.com> wrote:
Hi all,
--
Kind regards,
Johannes Hiemer
Deploy CF Diego Release on OpenStack
王天青 <wang.tianqing.cn at gmail.com...>
Hi all,
When deploying Cloud Foundry on OpenStack using BOSH, there is a good reference doc: http://docs.cloudfoundry. For the Diego release, are there any similar docs? More specifically, is there any sample deployment manifest file, like the cf-stub.yml used in:

$ ./generate_deployment_manifest openstack cf-stub.yml > cf-deployment.yml

Thanks.

Best Regards!
Re: health metrics via bosh
Klevenz, Stephan <stephan.klevenz@...>
Thanks. I will have a look into this.
--
Stephan

From: Dmitriy Kalinin
Reply-to: "Discussions about the Cloud Foundry BOSH project."
Date: Tuesday, 1 September 2015 19:25
To: "Discussions about the Cloud Foundry BOSH project."
Subject: [cf-bosh] Re: health metrics via bosh

It would be great to extend HM to support Riemann. Please see HM's README for how to extend it: https://github.com/cloudfoundry/bosh/tree/master/bosh-monitor You can find all the available plugins here: https://github.com/cloudfoundry/bosh/tree/master/bosh-monitor/lib/bosh/monitor/plugins

On Tue, Sep 1, 2015 at 2:20 AM, Klevenz, Stephan <stephan.klevenz(a)sap.com> wrote:
Hi,

Some health metrics disappeared from our dashboard, as mentioned in this issue [1]. The proposal is to get these health metrics from BOSH [2] instead. Do you have an example of how the BOSH health monitor works and how metrics can be forwarded to a Riemann consumer? Our current CF deployment is 211.

Thanks in advance.

Regards,
Stephan

[1] https://github.com/cloudfoundry/loggregator/issues/72
[2] https://bosh.io/docs/bosh-components.html#health-monitor
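For readers looking for a starting point: HM delivery plugins subclass a common base and implement run and process(event), as the bundled plugins in the directory linked above do. A Riemann plugin might look roughly like this (a sketch, not a tested implementation; it assumes the riemann-client gem, and the 'host'/'port' option names are made up for this example):

    require 'riemann/client'

    module Bosh::Monitor
      module Plugins
        class Riemann < Base
          def run
            # Options would come from the HM config file.
            @client = ::Riemann::Client.new(
              host: options['host'] || '127.0.0.1',
              port: options['port'] || 5555
            )
            logger.info('Riemann delivery agent is running...')
          end

          # Called by HM for every alert/heartbeat event it processes.
          def process(event)
            @client << {
              service:     'bosh-hm',
              state:       'ok',
              description: event.to_s
            }
          rescue StandardError => e
            logger.error("Failed to send event to Riemann: #{e}")
          end
        end
      end
    end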
Re: Live Migrating OpenStack VMs between physical nodes
ramonskie
Most of the time with live migration you should only experience a few seconds of downtime.
-----Original Message-----
From: Dmitriy Kalinin <dkalinin(a)pivotal.io>
Reply-to: Discussions about the Cloud Foundry BOSH project. <cf-bosh(a)lists.cloudfoundry.org>
To: Discussions about the Cloud Foundry BOSH project. <cf-bosh(a)lists.cloudfoundry.org>
Subject: [cf-bosh] Re: Re: Re: Live Migrating OpenStack VMs between physical nodes
Date: Wed, 2 Sep 2015 10:11:00 -0700

I am not too familiar with OpenStack's *live* migration. Do VMs lose connectivity when moved around by OpenStack?

On Wed, Sep 2, 2015 at 1:02 AM, Makkelie, R (ITCDCC) - KLM <Ramon.Makkelie(a)klm.com> wrote:
when you want to migrate or evacuate hosts you will notice that bosh will try to respawn the vms/jobs he created because of the health monitor so what we normally do is disable the bosh health monitor until the nodes are done
greetz
Ramonskie

-----Original Message-----
From: Dmitriy Kalinin <dkalinin(a)pivotal.io>
Reply-to: Discussions about the Cloud Foundry BOSH project. <cf-bosh(a)lists.cloudfoundry.org>
To: Discussions about the Cloud Foundry BOSH project. <cf-bosh(a)lists.cloudfoundry.org>
Subject: [cf-bosh] Re: Live Migrating OpenStack VMs between physical nodes
Date: Tue, 1 Sep 2015 10:48:06 -0700

OpenStack CPI does not care / is not aware of which node VMs are running on. As long as the VM remains powered on and accessible you should not have a problem. In practice you may run into certain OpenStack limitations which may prevent moving VMs around. I vaguely remember the presence of a config drive (the CPI is configured to bootstrap VMs via config drive) may limit VM relocation.

On Tue, Sep 1, 2015 at 10:12 AM, Josh Ghiloni <jghiloni(a)ecsteam.com> wrote:
Hi All,
I've had a question from my client's IT department that I thought I would run by you before actually just trying it. We're running open source Cloud Foundry on OpenStack, and the admin was wondering if the CPI could handle a particular VM (or a set of VMs) being transferred from one physical node to another for something like maintenance on the original node. My first thought is that it shouldn't affect anything in the CPI, since OpenStack is handling the VM provisioning and locating, but I was curious if anyone had practical experience with this.
Thanks!

Josh Ghiloni
Senior Consultant
303.932.2202 o | 303.590.5427 m | 303.565.2794 f
jghiloni(a)ecsteam.com
ECS Team
Technology Solutions Delivered
ECSTeam.com
Re: Bosh target password.
Guruprakash Srinivasamurthy <guruprakashsrinivasamurthy@...>
Thanks, Dmitriy. That helps.

On Wed, Sep 2, 2015 at 11:14 AM, Dmitriy Kalinin <dkalinin(a)pivotal.io> wrote:
If you are able to upgrade (with bosh-init or micro deploy) to a newer
Re: Bosh target password.
Dmitriy Kalinin
If you are able to upgrade (with bosh-init or micro deploy) to a newer version of BOSH (v177+ / 1.2999.0), you can redeploy your Director with pre-configured users, as explained in https://bosh.io/docs/director-users.html#preconfigured. When pre-configured users are enabled, database users are ignored.

We are actually planning to remove DB users in favor of pre-configured users or UAA-provided users once I finish writing documentation about how BOSH works with UAA.

On Wed, Sep 2, 2015 at 10:53 AM, Guruprakash Srinivasamurthy <guruprakashsrinivasamurthy(a)gmail.com> wrote:
Hi,
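For concreteness, the pre-configured users setup described above looks roughly like this in the Director deployment manifest (a sketch based on the linked doc; the user names and passwords are placeholders):

  properties:
    director:
      user_management:
        provider: local
        local:
          users:
          - {name: admin, password: REPLACE_ME}
          - {name: hm, password: REPLACE_ME}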
Bosh target password.
Guruprakash Srinivasamurthy <guruprakashsrinivasamurthy@...>
Hi,
We lost track of the password for one of the MicroBOSH Directors and we are unable to log in to the target. Is there a way we can reset the password that we use to target the BOSH Director?

Thanks,
Guru.
Re: Live Migrating OpenStack VMs between physical nodes
Dmitriy Kalinin
I am not too familiar with OpenStack's *live* migration. Do VMs lose connectivity when moved around by OpenStack?

On Wed, Sep 2, 2015 at 1:02 AM, Makkelie, R (ITCDCC) - KLM <Ramon.Makkelie(a)klm.com> wrote:
when you want to migrate or evacuate hosts
Re: Live Migrating OpenStack VMs between physical nodes
ramonskie
When you want to migrate or evacuate hosts, you will notice that BOSH will try to respawn the VMs/jobs it created, because of the health monitor. So what we normally do is disable the BOSH health monitor until the nodes are done.

greetz
Ramonskie

-----Original Message-----
From: Dmitriy Kalinin <dkalinin(a)pivotal.io>
Reply-to: Discussions about the Cloud Foundry BOSH project. <cf-bosh(a)lists.cloudfoundry.org>
To: Discussions about the Cloud Foundry BOSH project. <cf-bosh(a)lists.cloudfoundry.org>
Subject: [cf-bosh] Re: Live Migrating OpenStack VMs between physical nodes
Date: Tue, 1 Sep 2015 10:48:06 -0700

OpenStack CPI does not care / is not aware of which node VMs are running on. As long as the VM remains powered on and accessible you should not have a problem. In practice you may run into certain OpenStack limitations which may prevent moving VMs around. I vaguely remember the presence of a config drive (the CPI is configured to bootstrap VMs via config drive) may limit VM relocation.

On Tue, Sep 1, 2015 at 10:12 AM, Josh Ghiloni <jghiloni(a)ecsteam.com> wrote:
Hi All,
I've had a question from my client's IT department that I thought I would run by you before actually just trying it. We're running open source Cloud Foundry on OpenStack, and the admin was wondering if the CPI could handle a particular VM (or a set of VMs) being transferred from one physical node to another for something like maintenance on the original node. My first thought is that it shouldn't affect anything in the CPI, since OpenStack is handling the VM provisioning and locating, but I was curious if anyone had practical experience with this.
Thanks!

Josh Ghiloni
Senior Consultant
303.932.2202 o | 303.590.5427 m | 303.565.2794 f
jghiloni(a)ecsteam.com
ECS Team
Technology Solutions Delivered
ECSTeam.com
Re: Live Migrating OpenStack VMs between physical nodes
Dmitriy Kalinin
The OpenStack CPI does not care about / is not aware of which node VMs are running on. As long as the VM remains powered on and accessible, you should not have a problem. In practice you may run into certain OpenStack limitations which may prevent moving VMs around. I vaguely remember that the presence of a config drive (when the CPI is configured to bootstrap VMs via config drive) may limit VM relocation.

On Tue, Sep 1, 2015 at 10:12 AM, Josh Ghiloni <jghiloni(a)ecsteam.com> wrote:
Hi All,
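As an aside on the config drive remark: the OpenStack CPI's use of a config drive is controlled from the CPI's manifest properties, roughly like this (a sketch; whether and how this interacts with live migration on your cloud is worth verifying with your OpenStack operators):

  properties:
    openstack:
      config_drive: disk   # or 'cdrom'; omit to not use a config drive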