
Re: Extending Org to support multi-level Orgs (i.e. OU)

James Bayer
 

using separate regions or geographies implies separate CF installations.
there is no way today to share a single quota across multiple CF
installations.

this "no sharing across region boundaries" approach is actually aligned
with how aws manages resource limits, which are almost completely region
specific [1] and do not have uber-resource limits shared across the
regions, with only a few exceptions.

within a single cf installation, you can set sub-quotas at a space level,
which can limit the amount of resources any one space can use within an org.
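as a rough cf cli sketch (quota values and names here are purely illustrative):

    cf create-space-quota team-a-quota -m 10G -r 100 -s 20
    cf set-space-quota team-a-space team-a-quota

the space quota then caps what team-a-space can consume out of its org's quota.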

further down the road, there are discussions around a concept called
'isolation groups' that dieu is going to share a design doc on soon. in
addition to having targeted sets of local capacity for particular tenants,
isolation groups have some potential to address the use case of a remote
set of cf components with capacity that is managed from a centralized cloud
controller. there are many issues to work out, however, such as how you
handle network segments and loss of a connection to the control plane or to
some of the centralized information and artifacts you may need. we may end
up having to federate some of that 'intended state' information as well as
propagate artifacts like app source code, droplets, docker images,
buildpacks, etc. so that there are local copies of assets that originate
from a central source.

[1] http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html

On Fri, Sep 11, 2015 at 11:49 AM, Zongwei Sun <Zongwei.Sun(a)huawei.com>
wrote:

The following is what they have asked for:
1. Total quotas for their whole ORG;
2. Each Branch has its own quotas;
3. Super users for the ORG can access each Branch.

What we have offered them:
1. Created a separate ORG for each branch;
2. Each separate ORG (for a branch site) has its own quotas

What are still missing:
1. Uniform ORG for their corporation
2. No way to set quotas for deeper organization units
3. No uniform way to distribute or manage resources across their organization
hierarchy.

Thanks,
-Zongwei


--
Thank you,

James Bayer


Re: Extending Org to support multi-level Orgs (i.e. OU)

Zongwei Sun
 

The following is what they have asked for:
1. Total quotas for their whole ORG;
2. Each Branch has its own quotas;
3. Super users for the ORG can access each Branch.

What we have offered them:
1. Created a separate ORG for each branch;
2. Each separate ORG (for a branch site) has its own quotas

What are still missing:
1. Uniform ORG for their corporation
2. No way to set quotas for deeper organization units
3. No uniform way to distribute or manage resources across their organization hierarchy.
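For reference, the per-branch setup above maps to plain cf CLI operations, roughly like this (names and limits are illustrative; this is the existing per-branch workaround, not the requested hierarchy):

    cf create-org branch-a
    cf create-quota branch-a-quota -m 20G -r 200 -s 50
    cf set-quota branch-a branch-a-quota
    cf set-org-role superuser@corp.example.com branch-a OrgManager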

Thanks,
-Zongwei


Re: Extending Org to support multi-level Orgs (i.e. OU)

Benjamin Black
 

what specifically did you try and what specifically didn't they like? the
more information you provide around your request, the more productive the
discussion will be.

On Fri, Sep 11, 2015 at 11:27 AM, Zongwei Sun <Zongwei.Sun(a)huawei.com>
wrote:

Unfortunately, we have tried that but they didn't like it very much.


Re: Extending Org to support multi-level Orgs (i.e. OU)

Zongwei Sun
 

Unfortunately, we have tried that but they didn't like it very much.


Re: New bosh-lite default url

Zachary Auerbach <zauerbach@...>
 

Whoops,
I have egg on my face. We've now communicated better with the CAPI team and
are using their changes to the relevant repos with bosh-lite.com.

Sorry for the confusion
Zak + Dan

On Fri, Sep 11, 2015 at 11:22 AM, Zachary Auerbach <zauerbach(a)pivotal.io>
wrote:

Hi Folks,

Due to recent instability with xip.io we've created a dedicated DNS entry
at "bosh-lite.cf-app.com" that points to 10.244.0.34.

We've changed the default domain in our cf-warden templates, so all
new bosh-lite deployments should continue to work as before with the new
domain. Upgrading bosh-lite deployments with this change should also
continue to work; however, be aware that existing bosh-lite
deployments will still have 10.244.0.34.xip.io set as their default
domain. Deploying these changes will ADD the new bosh-lite.cf-app.com
domain, but not replace xip.io as the default domain. This will break
things like the cats-persistent-app.

We will be updating the relevant READMEs, but please reply if you have any
questions or thoughts about this change.

Thanks,
Zak + Dan, CF OSS Release Integration


--
-Zak
CF Voltron
"Defender of the Universe"


New bosh-lite default url

Zachary Auerbach <zauerbach@...>
 

Hi Folks,

Due to recent instability with xip.io we've created a dedicated DNS entry
at "bosh-lite.cf-app.com" that points to 10.244.0.34.

We've changed the default domain in our cf-warden templates, so all new
bosh-lite deployments should continue to work as before with the new
domain. Upgrading bosh-lite deployments with this change should also
continue to work; however, be aware that existing bosh-lite
deployments will still have 10.244.0.34.xip.io set as their default domain.
Deploying these changes will ADD the new bosh-lite.cf-app.com domain, but
not replace xip.io as the default domain. This will break things like the
cats-persistent-app.
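If you want to point an existing CLI session at the new domain, something like the following should work (a sketch; it assumes the usual api.<system-domain> convention and the bosh-lite self-signed certificate):

    cf api https://api.bosh-lite.cf-app.com --skip-ssl-validation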

We will be updating the relevant READMEs, but please reply if you have any
questions or thoughts about this change.

Thanks,
Zak + Dan, CF OSS Release Integration


Re: Benchmark for UAA performance

Filip Hanik
 

Siva brings up a good point; you haven't really told us anything about your
environment :)

On Fri, Sep 11, 2015 at 12:16 PM, Zongwei Sun <Zongwei.Sun(a)huawei.com>
wrote:

My understanding is that it's calling POST on /auth/token 100 times in
parallel. I'll find out more about the test scenarios when I'm able to
reach them. Thanks!


Re: Benchmark for UAA performance

Zongwei Sun
 

My understanding is that it's calling POST on /auth/token 100 times in parallel. I'll find out more about the test scenarios when I'm able to reach them. Thanks!
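For reference, a crude way to generate that kind of load from a shell looks roughly like this (a sketch; the domain and credentials are placeholders, the stock UAA token endpoint is /oauth/token, and the default 'cf' client takes an empty secret):

    seq 100 | xargs -P 100 -I{} curl -s -o /dev/null -w '%{time_total}\n' \
      -u cf: -d 'grant_type=password&username=USER&password=PASSWORD' \
      https://uaa.YOUR-DOMAIN/oauth/token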


Re: Is spiff dead?

Wayne E. Seguin
 

That said, you should definitely be keeping an eye on Spruce!!! ::

spruce is a domain-specific YAML merging tool, for generating BOSH
<http://bosh.io/> manifests.

It was written with the goal of being the most intuitive solution for
merging BOSH templates. As such, it pulls in a few semantics that may seem
familiar to those used to merging with the other merging tool
<https://github.com/cloudfoundry-incubator/spiff>, but there are a few key
differences.
https://github.com/geofffranks/spruce


<https://github.com/geofffranks/spruce#installation>
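A minimal invocation looks something like this (file names are illustrative; later files win on conflicts):

    spruce merge base-manifest.yml site-overrides.yml > cf-deployment.yml

spruce also provides operators such as (( grab ... )) for pulling values from elsewhere in the merged document.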

On Fri, Aug 21, 2015 at 12:44 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Spiff is not currently replaced by another tool, but it is not the ideal
tool for the job (too many features to shoot yourself in the foot with, not
enough BOSH-aware features, and some generally awkward hoops it makes you
jump through). We have improving manifest generation on our roadmap, so
we're not investing more effort into spiff in ways that would slow our
progress towards where we eventually want to end up. For now, manifest
generation remains the same, and we will aim to introduce improvements in a
smooth manner.

A great majority of the improvements in manifest generation will come from
BOSH itself. See the bosh-notes repo for a list of current and upcoming
features (https://github.com/cloudfoundry/bosh-notes), specifically the
list under the "Deployment configuration" heading in the README. Those
features open up some exciting possibilities for how simple manifests might
become.

Best,
Amit, Release Integration (MEGA) team PM

On Thu, Aug 20, 2015 at 11:29 PM, Kei YAMAZAKI <
daydream.yamazaki(a)gmail.com> wrote:

Hi all,

I am considering using spiff to manage a lot of manifests.
But I found a statement that spiff is no longer under active development.

https://github.com/cloudfoundry-incubator/spiff/commit/e7af3990e58b7390826b606d6de76ea576d9ad4f

Manifest management is very complex, and cf-release has the same problem
(diego-release as well).
I think spiff cannot resolve this problem completely, and I understand the
MEGA team is working on it.

So I have questions:
- Is spiff being replaced by another tool?
- How does the MEGA team manage manifest files?

Thanks,


Re: Extending Org to support multi-level Orgs (i.e. OU)

Benjamin Black
 

zongwei,

if they could define multi-org quotas rather than requiring a hierarchical
org structure, would that meet their needs?


b


On Fri, Sep 11, 2015 at 11:02 AM, Zongwei Sun <Zongwei.Sun(a)huawei.com>
wrote:

Hi Benjamin,

For example, one of our customers today has to create two Orgs for their
two branches in different geographic locations, in order to manage resource
quotas, etc. separately. Also, they said that an Org hierarchy is closer
to the real structure of any organization.

I appreciate your help much.

-Zongwei


Re: Benchmark for UAA performance

Siva Balan <mailsiva@...>
 

100ms to 200ms sounds about right for getting an OAuth token. We did a
benchmark with UAA and found that, out of the box, auto-reconfiguration
sets the default JDBC connection pool size to 4. You will exhaust the
connection pool if you are trying to test with 100 users, and your requests
may then be waiting for a JDBC connection to become available in the pool,
which may be why you are seeing 5+ second response times with 100 users.
Check this github issue for more details:
https://github.com/cloudfoundry/uaa/issues/165
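If UAA is being run standalone from a uaa.yml, the pool is normally sized in the database block; a sketch along these lines (key names are from memory of the uaa.yml settings, so verify them against your UAA version; the values are only examples):

    database:
      url: jdbc:postgresql://127.0.0.1:5432/uaa
      username: uaa
      password: CHANGEME
      maxactive: 100
      maxidle: 10

When UAA is pushed as a Java app on Cloud Foundry, the auto-reconfiguration described in the issue above overrides the pool settings, so the fix is different in that case.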

-Siva

On Fri, Sep 11, 2015 at 10:54 AM, Filip Hanik <fhanik(a)pivotal.io> wrote:

What request are you doing to get a token?

On Fri, Sep 11, 2015 at 11:52 AM, Zongwei Sun <Zongwei.Sun(a)huawei.com>
wrote:

We're having a situation where it took about 5 seconds to get auth tokens
for 100 users (but only 100 ms for a single user). Does this sound right?
Has anybody else done a benchmark of UAA performance who can share their
experience with me?

Thanks,
Zongwei


Re: Extending Org to support multi-level Orgs (i.e. OU)

Zongwei Sun
 

Hi Benjamin,

For example, one of our customers today has to create two Orgs for their two branches in different geographic locations, in order to manage resource quotas, etc. separately. Also, they said that an Org hierarchy is closer to the real structure of any organization.

I appreciate your help much.

-Zongwei


Re: Warden: staging error when pushing app

kyle havlovitz <kylehav@...>
 

I'm not using bosh, just trying to run it locally. I can post the dea and
warden configs though:

dea.yml:

base_dir: /tmp/dea_ng

domain: local.example.com

logging:
  file: /opt/cloudfoundry/logs/dea_ng.log
  level: debug

loggregator:
  router: "127.0.0.1:3456"
  shared_secret: "secret"

resources:
  memory_mb: 8000
  memory_overcommit_factor: 1
  disk_mb: 40960
  disk_overcommit_factor: 1

nats_servers:
  - nats://127.0.0.1:4222

pid_filename: /tmp/dea_ng.pid
warden_socket: /tmp/warden.sock
evacuation_delay_secs: 10
default_health_check_timeout: 120
index: 0

intervals:
  heartbeat: 10
  advertise: 5
  router_register_in_seconds: 20

staging:
  enabled: true
  memory_limit_mb: 4096
  disk_limit_mb: 6144
  disk_inode_limit: 200000
  cpu_limit_shares: 512
  max_staging_duration: 900

instance:
  disk_inode_limit: 200000
  memory_to_cpu_share_ratio: 8
  max_cpu_share_limit: 256
  min_cpu_share_limit: 1

dea_ruby: /usr/bin/ruby

# For Go-based directory server
directory_server:
  protocol: 'http'
  v1_port: 4385
  v2_port: 5678
  file_api_port: 1234
  streaming_timeout: 10
  logging:
    file: /opt/cloudfoundry/logs/dea_dirserver.log
    level: debug

stacks:
  - name: cflinuxfs2
    package_path: /var/warden/rootfs

placement_properties:
  zone: "zone"

warden test_vm.yml:

server:
  container_klass: Warden::Container::Linux

  # Wait this long before destroying a container, after the last client
  # referencing it disconnected. The timer is cancelled when during this
  # period, another client references the container.
  #
  # Clients can be forced to specify this setting by setting the
  # server-wide variable to an invalid value:
  # container_grace_time: invalid
  #
  # The grace time can be disabled by setting it to nil:
  # container_grace_time: ~
  #
  container_grace_time: 300

  unix_domain_permissions: 0777
  unix_domain_path: /tmp/warden.sock

  # Specifies the path to the base chroot used as the read-only root
  # filesystem
  container_rootfs_path: /var/warden/rootfs

  # Specifies the path to the parent directory under which all containers
  # will live.
  container_depot_path: /var/warden/containers

  # See getrlimit(2) for details. Integer values are passed verbatim.
  container_rlimits:
    core: 0

  quota:
    disk_quota_enabled: false

  allow_nested_warden: false

health_check_server:
  port: 2345

logging:
  file: /opt/cloudfoundry/logs/warden.log
  level: debug2

network:
  # Use this /30 network as offset for the network pool.
  pool_start_address: 10.254.0.0

  # Pool this many /30 networks.
  pool_size: 256

  # Interface MTU size
  # (for OpenStack use 1454 to avoid problems with rubygems with GRE tunneling)
  mtu: 1500

user:
  pool_start_uid: 11000
  pool_size: 256


This is all using the latest CF v217.

On Fri, Sep 11, 2015 at 1:31 PM, CF Runtime <cfruntime(a)gmail.com> wrote:

Hey Kyle,

Can we take a look at your deployment manifest (with all the secrets
redacted)?

Zak + Dan, CF OSS Integration team

On Fri, Sep 11, 2015 at 8:55 AM, kyle havlovitz <kylehav(a)gmail.com> wrote:

I'm getting an error pushing any app during the staging step. cf logs
returns only this:

2015-09-11T15:24:24.33+0000 [API] OUT Created app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88
2015-09-11T15:24:24.41+0000 [API] OUT Updated app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88
({"route"=>"5737c5f5-b017-43da-9013-2b6fe7db03f7"})
2015-09-11T15:24:29.54+0000 [DEA/0] OUT Got staging request for app
with id 47efa472-e2d5-400b-9135-b1a1dbe3ba88
2015-09-11T15:24:30.71+0000 [API] OUT Updated app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88 ({"state"=>"STARTED"})
2015-09-11T15:24:30.76+0000 [STG/0] OUT -----> Downloaded app
package (4.0K)
2015-09-11T15:25:06.00+0000 [API] ERR encountered error: Staging
error: failed to stage application:
2015-09-11T15:25:06.00+0000 [API] ERR Script exited with status 1

In the warden logs, there are a few suspect messages:

{
  "timestamp": 1441985105.8883495,
  "message": "Exited with status 1 (35.120s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"/var/warden/containers/18vf956il5v/bin/iomux-link\", \"-w\", \"/var/warden/containers/18vf956il5v/jobs/8/cursors\", \"/var/warden/containers/18vf956il5v/jobs/8\"]",
  "log_level": "warn",
  "source": "Warden::Container::Linux",
  "data": {
    "handle": "18vf956il5v",
    "stdout": "",
    "stderr": ""
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}

{
  "timestamp": 1441985105.94083,
  "message": "Exited with status 23 (0.023s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"rsync\", \"-e\", \"/var/warden/containers/18vf956il5v/bin/wsh --socket /var/warden/containers/18vf956il5v/run/wshd.sock --rsh\", \"-r\", \"-p\", \"--links\", \"vcap@container:/tmp/staged/staging_info.yml\", \"/tmp/dea_ng/staging/d20150911-17093-1amg6y8\"]",
  "log_level": "warn",
  "source": "Warden::Container::Linux",
  "data": {
    "handle": "18vf956il5v",
    "stdout": "",
    "stderr": "rsync: link_stat \"/tmp/staged/staging_info.yml\" failed: No such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1655) [Receiver=3.1.0]\nrsync: [Receiver] write error: Broken pipe (32)\n"
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}

{
  "timestamp": 1441985106.0086887,
  "message": "Killing oom-notifier process",
  "log_level": "debug",
  "source": "Warden::Container::Features::MemLimit::OomNotifier",
  "data": {},
  "thread_id": 69890836968240,
  "fiber_id": 69890848620580,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/features/mem_limit.rb",
  "lineno": 51,
  "method": "kill"
}

{
  "timestamp": 1441985106.0095143,
  "message": "Exited with status 0 (35.427s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"/opt/cloudfoundry/warden/warden/src/oom/oom\", \"/tmp/warden/cgroup/memory/instance-18vf956il5v\"]",
  "log_level": "warn",
  "source": "Warden::Container::Features::MemLimit::OomNotifier",
  "data": {
    "stdout": "",
    "stderr": ""
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}




Obviously something is misconfigured, but I'm not sure what. I don't know
why the out-of-memory notifier messages are appearing, since the memory
used by the test app I've pushed is tiny (a 64M app with the staticfile
buildpack), the dea config has resources.memory_mb set to 8 gigs, and
staging.memory_limit_mb is set to 1 gig. Is there some config I'm missing
that's causing this to fail?
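One way to confirm whether staging is really hitting the memory limit is to look at the container's memory cgroup on the DEA/warden host (a sketch; the instance handle changes for every staging attempt, and 18vf956il5v is the one from the logs above):

    cat /tmp/warden/cgroup/memory/instance-18vf956il5v/memory.limit_in_bytes
    cat /tmp/warden/cgroup/memory/instance-18vf956il5v/memory.max_usage_in_bytes
    cat /tmp/warden/cgroup/memory/instance-18vf956il5v/memory.failcnt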


Re: Benchmark for UAA performance

Filip Hanik
 

What request are you doing to get a token?

On Fri, Sep 11, 2015 at 11:52 AM, Zongwei Sun <Zongwei.Sun(a)huawei.com>
wrote:

We're having a situation where it took about 5 seconds to get auth tokens
for 100 users (but only 100 ms for a single user). Does this sound right?
Has anybody else done a benchmark of UAA performance who can share their
experience with me?

Thanks,
Zongwei


Benchmark for UAA performance

Zongwei Sun
 

We're having a situation where it took about 5 seconds to get auth tokens for 100 users (but only 100 ms for a single user). Does this sound right? Has anybody else done a benchmark of UAA performance who can share their experience with me?

Thanks,
Zongwei


Re: Extending Org to support multi-level Orgs (i.e. OU)

Benjamin Black
 

zongwei,

could you give more detail on the problem you are hoping to solve?
multi-level orgs is a solution, but i want to make sure there is clarity on
the problem so we can discuss what other solutions might be possible.


b


On Fri, Sep 11, 2015 at 10:47 AM, Zongwei Sun <Zongwei.Sun(a)huawei.com>
wrote:

We've been asked to support multiple levels of Orgs. I realize this is
huge, probably bigger than I thought. If you can give me some hints about
the full implications of this (components that need to be changed, the
risks, etc.), it would be highly appreciated.

Thanks!
Zongwei


Extending Org to support multi-level Orgs (i.e. OU)

Zongwei Sun
 

We've been asked to support multiple levels of Orgs. I realize this is huge, probably bigger than I thought. If you can give me some hints about the full implications of this (components that need to be changed, the risks, etc.), it would be highly appreciated.

Thanks!
Zongwei


Re: UAA: Level count of Spaces under an Org

Zongwei Sun
 

Hi Filip,

Will do. You actually have already helped me halfway through this. My initial motivation was to find out the full implications of this.

Thanks!
-Zongwei


Re: Warden: staging error when pushing app

CF Runtime
 

Hey Kyle,

Can we take a look at your deployment manifest (with all the secrets
redacted)?

Zak + Dan, CF OSS Integration team

On Fri, Sep 11, 2015 at 8:55 AM, kyle havlovitz <kylehav(a)gmail.com> wrote:

I'm getting an error pushing any app during the staging step. cf logs
returns only this:

2015-09-11T15:24:24.33+0000 [API] OUT Created app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88
2015-09-11T15:24:24.41+0000 [API] OUT Updated app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88
({"route"=>"5737c5f5-b017-43da-9013-2b6fe7db03f7"})
2015-09-11T15:24:29.54+0000 [DEA/0] OUT Got staging request for app
with id 47efa472-e2d5-400b-9135-b1a1dbe3ba88
2015-09-11T15:24:30.71+0000 [API] OUT Updated app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88 ({"state"=>"STARTED"})
2015-09-11T15:24:30.76+0000 [STG/0] OUT -----> Downloaded app
package (4.0K)
2015-09-11T15:25:06.00+0000 [API] ERR encountered error: Staging
error: failed to stage application:
2015-09-11T15:25:06.00+0000 [API] ERR Script exited with status 1

In the warden logs, there are a few suspect messages:

{
  "timestamp": 1441985105.8883495,
  "message": "Exited with status 1 (35.120s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"/var/warden/containers/18vf956il5v/bin/iomux-link\", \"-w\", \"/var/warden/containers/18vf956il5v/jobs/8/cursors\", \"/var/warden/containers/18vf956il5v/jobs/8\"]",
  "log_level": "warn",
  "source": "Warden::Container::Linux",
  "data": {
    "handle": "18vf956il5v",
    "stdout": "",
    "stderr": ""
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}

{
  "timestamp": 1441985105.94083,
  "message": "Exited with status 23 (0.023s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"rsync\", \"-e\", \"/var/warden/containers/18vf956il5v/bin/wsh --socket /var/warden/containers/18vf956il5v/run/wshd.sock --rsh\", \"-r\", \"-p\", \"--links\", \"vcap@container:/tmp/staged/staging_info.yml\", \"/tmp/dea_ng/staging/d20150911-17093-1amg6y8\"]",
  "log_level": "warn",
  "source": "Warden::Container::Linux",
  "data": {
    "handle": "18vf956il5v",
    "stdout": "",
    "stderr": "rsync: link_stat \"/tmp/staged/staging_info.yml\" failed: No such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1655) [Receiver=3.1.0]\nrsync: [Receiver] write error: Broken pipe (32)\n"
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}

{
  "timestamp": 1441985106.0086887,
  "message": "Killing oom-notifier process",
  "log_level": "debug",
  "source": "Warden::Container::Features::MemLimit::OomNotifier",
  "data": {},
  "thread_id": 69890836968240,
  "fiber_id": 69890848620580,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/features/mem_limit.rb",
  "lineno": 51,
  "method": "kill"
}

{
  "timestamp": 1441985106.0095143,
  "message": "Exited with status 0 (35.427s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"/opt/cloudfoundry/warden/warden/src/oom/oom\", \"/tmp/warden/cgroup/memory/instance-18vf956il5v\"]",
  "log_level": "warn",
  "source": "Warden::Container::Features::MemLimit::OomNotifier",
  "data": {
    "stdout": "",
    "stderr": ""
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}




Obviously something is misconfigured, but I'm not sure what. I don't know
why the out-of-memory notifier messages are appearing, since the memory
used by the test app I've pushed is tiny (a 64M app with the staticfile
buildpack), the dea config has resources.memory_mb set to 8 gigs, and
staging.memory_limit_mb is set to 1 gig. Is there some config I'm missing
that's causing this to fail?


Re: UAA: Level count of Spaces under an Org

Filip Hanik
 

hi Zongwei, I suggest you start a new thread that doesn't have [UAA] in the
subject. This is a cloud controller question, and the experts on that
component will not read this post because it says [UAA].

Filip

On Fri, Sep 11, 2015 at 11:18 AM, Zongwei Sun <Zongwei.Sun(a)huawei.com>
wrote:

Hi Filip,
Currently, only one level of Org is supported, so you just cannot
create a child Org under an Org. People are asking if we can extend it and
support multiple levels of Orgs. I am not sure of the full implications of
doing this. Any help would be appreciated.

Thanks,
Zongwei