
Re: Is spiff dead?

Wayne E. Seguin
 

That said, you should definitely be keeping an eye on Spruce!!!

spruce is a domain-specific YAML merging tool, for generating BOSH
<http://bosh.io/> manifests.

It was written with the goal of being the most intuitive solution for
merging BOSH templates. As such, it pulls in a few semantics that may seem
familiar to those used to merging with the other merging tool
<https://github.com/cloudfoundry-incubator/spiff>, but there are a few key
differences.
https://github.com/geofffranks/spruce


<https://github.com/geofffranks/spruce#installation>

On Fri, Aug 21, 2015 at 12:44 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Spiff is not currently replaced by another tool, but it is not the ideal
tool for the job (too many features that let you shoot yourself in the foot,
not enough built-in knowledge of BOSH, and some generally awkward hoops it
makes you jump through). We have it on our roadmap to improve manifest
generation, so we're not investing further effort into spiff, since that
would slow our progress towards where we eventually want to end up. For now,
manifest generation remains the same, and we will aim to introduce
improvements in a smooth manner.

A great majority of the improvements in manifest generation will come from
BOSH itself. See bosh-notes for a list of current and upcoming features
(https://github.com/cloudfoundry/bosh-notes), specifically the list under
the "Deployment configuration" heading in the README. Those features open up
some exciting possibilities for how simple manifests might become.

Best,
Amit, Release Integration (MEGA) team PM

On Thu, Aug 20, 2015 at 11:29 PM, Kei YAMAZAKI <
daydream.yamazaki(a)gmail.com> wrote:

Hi all,

I am considering using spiff to manage a lot of manifests.
But I found a statement that spiff is not under active development.

https://github.com/cloudfoundry-incubator/spiff/commit/e7af3990e58b7390826b606d6de76ea576d9ad4f

Manifest management is very complex, and cf-release has the same problem
(diego-release too).
I think spiff cannot resolve this problem completely, and I understand the
MEGA team is working on it.

So I have two questions:
- Has spiff been replaced by another tool?
- How does the MEGA team manage manifest files?

Thanks,


Re: Extending Org to support multi-level Orgs (i.e. OU)

Benjamin Black
 

zongwei,

if they could define multi-org quotas rather than requiring a hierarchical
org structure, would that meet their needs?


b


On Fri, Sep 11, 2015 at 11:02 AM, Zongwei Sun <Zongwei.Sun(a)huawei.com>
wrote:

Hi Benjamin,

For example, one of our customers today has to create two Orgs for their
two branches in different geographic locations, in order to manage resource
quotas, etc. separately. Also, they said that an Org hierarchy is closer to
the real structure in any organization.

I appreciate your help very much.

-Zongwei


Re: Benchmark for UAA performance

Siva Balan <mailsiva@...>
 

100ms to 200ms sounds about right for getting an OAuth token. We did a
benchmark with UAA and found that, out of the box, auto-reconfiguration sets
the default JDBC connection pool size to 4. You will exhaust the connection
pool if you are testing with 100 users, and requests will then wait for a
JDBC connection to become available, which is likely why you are seeing 5+
second response times with 100 users.
See this GitHub issue for more details:
https://github.com/cloudfoundry/uaa/issues/165

-Siva

On Fri, Sep 11, 2015 at 10:54 AM, Filip Hanik <fhanik(a)pivotal.io> wrote:

What request are you making to get a token?

On Fri, Sep 11, 2015 at 11:52 AM, Zongwei Sun <Zongwei.Sun(a)huawei.com>
wrote:

We're having a situation where it took about 5 seconds to get auth tokens
for 100 users (but only 100 ms for a single user). Does this sound right?
Has anybody else benchmarked UAA performance and can share their experience
with me?

Thanks,
Zongwei


Re: Extending Org to support multi-level Orgs (i.e. OU)

Zongwei Sun
 

Hi Benjamin,

For example, one of our customers today has to create two Orgs for their two branches in different geographic locations, in order to manage resource quotas, etc. separately. Also, they said that an Org hierarchy is closer to the real structure in any organization.

I appreciate your help very much.

-Zongwei


Re: Warden: staging error when pushing app

kyle havlovitz <kylehav@...>
 

I'm not using BOSH, just trying to run it locally. I can post the DEA and
Warden configs, though:

dea.yml:

base_dir: /tmp/dea_ng

domain: local.example.com

logging:
  file: /opt/cloudfoundry/logs/dea_ng.log
  level: debug

loggregator:
  router: "127.0.0.1:3456"
  shared_secret: "secret"

resources:
  memory_mb: 8000
  memory_overcommit_factor: 1
  disk_mb: 40960
  disk_overcommit_factor: 1

nats_servers:
  - nats://127.0.0.1:4222

pid_filename: /tmp/dea_ng.pid

warden_socket: /tmp/warden.sock

evacuation_delay_secs: 10

default_health_check_timeout: 120

index: 0

intervals:
  heartbeat: 10
  advertise: 5
  router_register_in_seconds: 20

staging:
  enabled: true
  memory_limit_mb: 4096
  disk_limit_mb: 6144
  disk_inode_limit: 200000
  cpu_limit_shares: 512
  max_staging_duration: 900

instance:
  disk_inode_limit: 200000
  memory_to_cpu_share_ratio: 8
  max_cpu_share_limit: 256
  min_cpu_share_limit: 1

dea_ruby: /usr/bin/ruby

# For Go-based directory server
directory_server:
  protocol: 'http'
  v1_port: 4385
  v2_port: 5678
  file_api_port: 1234
  streaming_timeout: 10
  logging:
    file: /opt/cloudfoundry/logs/dea_dirserver.log
    level: debug

stacks:
  - name: cflinuxfs2
    package_path: /var/warden/rootfs

placement_properties:
  zone: "zone"

warden test_vm.yml:

server:
  container_klass: Warden::Container::Linux

  # Wait this long before destroying a container, after the last client
  # referencing it disconnected. The timer is cancelled when during this
  # period, another client references the container.
  #
  # Clients can be forced to specify this setting by setting the
  # server-wide variable to an invalid value:
  # container_grace_time: invalid
  #
  # The grace time can be disabled by setting it to nil:
  # container_grace_time: ~
  #
  container_grace_time: 300

  unix_domain_permissions: 0777
  unix_domain_path: /tmp/warden.sock

  # Specifies the path to the base chroot used as the read-only root
  # filesystem
  container_rootfs_path: /var/warden/rootfs

  # Specifies the path to the parent directory under which all containers
  # will live.
  container_depot_path: /var/warden/containers

  # See getrlimit(2) for details. Integer values are passed verbatim.
  container_rlimits:
    core: 0

  quota:
    disk_quota_enabled: false

  allow_nested_warden: false

health_check_server:
  port: 2345

logging:
  file: /opt/cloudfoundry/logs/warden.log
  level: debug2

network:
  # Use this /30 network as offset for the network pool.
  pool_start_address: 10.254.0.0

  # Pool this many /30 networks.
  pool_size: 256

  # Interface MTU size
  # (for OpenStack use 1454 to avoid problems with rubygems with GRE tunneling)
  mtu: 1500

user:
  pool_start_uid: 11000
  pool_size: 256


This is all using the latest CF (v217).

On Fri, Sep 11, 2015 at 1:31 PM, CF Runtime <cfruntime(a)gmail.com> wrote:

Hey Kyle,

Can we take a look at your deployment manifest (with all the secrets
redacted)?

Zak + Dan, CF OSS Integration team

On Fri, Sep 11, 2015 at 8:55 AM, kyle havlovitz <kylehav(a)gmail.com> wrote:

I'm getting an error pushing any app during the staging step. cf logs
returns only this:

2015-09-11T15:24:24.33+0000 [API] OUT Created app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88
2015-09-11T15:24:24.41+0000 [API] OUT Updated app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88
({"route"=>"5737c5f5-b017-43da-9013-2b6fe7db03f7"})
2015-09-11T15:24:29.54+0000 [DEA/0] OUT Got staging request for app
with id 47efa472-e2d5-400b-9135-b1a1dbe3ba88
2015-09-11T15:24:30.71+0000 [API] OUT Updated app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88 ({"state"=>"STARTED"})
2015-09-11T15:24:30.76+0000 [STG/0] OUT -----> Downloaded app
package (4.0K)
2015-09-11T15:25:06.00+0000 [API] ERR encountered error: Staging
error: failed to stage application:
2015-09-11T15:25:06.00+0000 [API] ERR Script exited with status 1

In the warden logs, there are a few suspect messages:

{
  "timestamp": 1441985105.8883495,
  "message": "Exited with status 1 (35.120s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"/var/warden/containers/18vf956il5v/bin/iomux-link\", \"-w\", \"/var/warden/containers/18vf956il5v/jobs/8/cursors\", \"/var/warden/containers/18vf956il5v/jobs/8\"]",
  "log_level": "warn",
  "source": "Warden::Container::Linux",
  "data": {
    "handle": "18vf956il5v",
    "stdout": "",
    "stderr": ""
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}

{
  "timestamp": 1441985105.94083,
  "message": "Exited with status 23 (0.023s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"rsync\", \"-e\", \"/var/warden/containers/18vf956il5v/bin/wsh --socket /var/warden/containers/18vf956il5v/run/wshd.sock --rsh\", \"-r\", \"-p\", \"--links\", \"vcap(a)container:/tmp/staged/staging_info.yml\", \"/tmp/dea_ng/staging/d20150911-17093-1amg6y8\"]",
  "log_level": "warn",
  "source": "Warden::Container::Linux",
  "data": {
    "handle": "18vf956il5v",
    "stdout": "",
    "stderr": "rsync: link_stat \"/tmp/staged/staging_info.yml\" failed: No such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1655) [Receiver=3.1.0]\nrsync: [Receiver] write error: Broken pipe (32)\n"
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}

{
  "timestamp": 1441985106.0086887,
  "message": "Killing oom-notifier process",
  "log_level": "debug",
  "source": "Warden::Container::Features::MemLimit::OomNotifier",
  "data": {},
  "thread_id": 69890836968240,
  "fiber_id": 69890848620580,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/features/mem_limit.rb",
  "lineno": 51,
  "method": "kill"
}

{
  "timestamp": 1441985106.0095143,
  "message": "Exited with status 0 (35.427s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"/opt/cloudfoundry/warden/warden/src/oom/oom\", \"/tmp/warden/cgroup/memory/instance-18vf956il5v\"]",
  "log_level": "warn",
  "source": "Warden::Container::Features::MemLimit::OomNotifier",
  "data": {
    "stdout": "",
    "stderr": ""
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}




Obviously something is misconfigured, but I'm not sure what. I don't know
why the out-of-memory messages are appearing, since the memory used by the
test app I've pushed is tiny (a 64M app with the staticfile buildpack), and
the DEA config has resource.memory_mb set to 8 gigs and
staging.memory_limit_mb set to 1 gig. Is there some config I'm lacking
that's causing this to fail?


Re: Benchmark for UAA performance

Filip Hanik
 

What request are you making to get a token?

On Fri, Sep 11, 2015 at 11:52 AM, Zongwei Sun <Zongwei.Sun(a)huawei.com>
wrote:

We're having a situation where it took about 5 seconds to get auth tokens
for 100 users (but only 100 ms for a single user). Does this sound right?
Has anybody else benchmarked UAA performance and can share their experience
with me?

Thanks,
Zongwei


Benchmark for UAA performance

Zongwei Sun
 

We're having a situation where it took about 5 seconds to get auth tokens for 100 users (but only 100 ms for a single user). Does this sound right? Has anybody else benchmarked UAA performance and can share their experience with me?

Thanks,
Zongwei


Re: Extending Org to support multi-level Orgs (i.e. OU)

Benjamin Black
 

zongwei,

could you give more detail on the problem you are hoping to solve?
multi-level orgs is a solution, but i want to make sure there is clarity on
the problem so we can discuss what other solutions might be possible.


b


On Fri, Sep 11, 2015 at 10:47 AM, Zongwei Sun <Zongwei.Sun(a)huawei.com>
wrote:

We've been asked to support multi-level Orgs. I realized this is huge,
probably bigger than I thought. If you can give me some hints about the full
implications of this (components that need to be changed, the risks, etc.),
it would be highly appreciated.

Thanks!
Zongwei


Extending Org to support multi-level Orgs (i.e. OU)

Zongwei Sun
 

We've been asked to support multi-level Orgs. I realized this is huge, probably bigger than I thought. If you can give me some hints about the full implications of this (components that need to be changed, the risks, etc.), it would be highly appreciated.

Thanks!
Zongwei


Re: UAA: Level count of Spaces under an Org

Zongwei Sun
 

Hi Filip,

Will do. You have actually already helped me halfway through this. My initial motivation was to find out the full implications of this.

Thanks!
-Zongwei


Re: Warden: staging error when pushing app

CF Runtime
 

Hey Kyle,

Can we take a look at your deployment manifest (with all the secrets
redacted)?

Zak + Dan, CF OSS Integration team

On Fri, Sep 11, 2015 at 8:55 AM, kyle havlovitz <kylehav(a)gmail.com> wrote:

I'm getting an error pushing any app during the staging step. cf logs
returns only this:

2015-09-11T15:24:24.33+0000 [API] OUT Created app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88
2015-09-11T15:24:24.41+0000 [API] OUT Updated app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88
({"route"=>"5737c5f5-b017-43da-9013-2b6fe7db03f7"})
2015-09-11T15:24:29.54+0000 [DEA/0] OUT Got staging request for app
with id 47efa472-e2d5-400b-9135-b1a1dbe3ba88
2015-09-11T15:24:30.71+0000 [API] OUT Updated app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88 ({"state"=>"STARTED"})
2015-09-11T15:24:30.76+0000 [STG/0] OUT -----> Downloaded app
package (4.0K)
2015-09-11T15:25:06.00+0000 [API] ERR encountered error: Staging
error: failed to stage application:
2015-09-11T15:25:06.00+0000 [API] ERR Script exited with status 1

In the warden logs, there are a few suspect messages:

{
  "timestamp": 1441985105.8883495,
  "message": "Exited with status 1 (35.120s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"/var/warden/containers/18vf956il5v/bin/iomux-link\", \"-w\", \"/var/warden/containers/18vf956il5v/jobs/8/cursors\", \"/var/warden/containers/18vf956il5v/jobs/8\"]",
  "log_level": "warn",
  "source": "Warden::Container::Linux",
  "data": {
    "handle": "18vf956il5v",
    "stdout": "",
    "stderr": ""
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}

{
  "timestamp": 1441985105.94083,
  "message": "Exited with status 23 (0.023s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"rsync\", \"-e\", \"/var/warden/containers/18vf956il5v/bin/wsh --socket /var/warden/containers/18vf956il5v/run/wshd.sock --rsh\", \"-r\", \"-p\", \"--links\", \"vcap(a)container:/tmp/staged/staging_info.yml\", \"/tmp/dea_ng/staging/d20150911-17093-1amg6y8\"]",
  "log_level": "warn",
  "source": "Warden::Container::Linux",
  "data": {
    "handle": "18vf956il5v",
    "stdout": "",
    "stderr": "rsync: link_stat \"/tmp/staged/staging_info.yml\" failed: No such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1655) [Receiver=3.1.0]\nrsync: [Receiver] write error: Broken pipe (32)\n"
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}

{
  "timestamp": 1441985106.0086887,
  "message": "Killing oom-notifier process",
  "log_level": "debug",
  "source": "Warden::Container::Features::MemLimit::OomNotifier",
  "data": {},
  "thread_id": 69890836968240,
  "fiber_id": 69890848620580,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/features/mem_limit.rb",
  "lineno": 51,
  "method": "kill"
}

{
  "timestamp": 1441985106.0095143,
  "message": "Exited with status 0 (35.427s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"/opt/cloudfoundry/warden/warden/src/oom/oom\", \"/tmp/warden/cgroup/memory/instance-18vf956il5v\"]",
  "log_level": "warn",
  "source": "Warden::Container::Features::MemLimit::OomNotifier",
  "data": {
    "stdout": "",
    "stderr": ""
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}




Obviously something is misconfigured, but I'm not sure what. I don't know
why the out-of-memory messages are appearing, since the memory used by the
test app I've pushed is tiny (a 64M app with the staticfile buildpack), and
the DEA config has resource.memory_mb set to 8 gigs and
staging.memory_limit_mb set to 1 gig. Is there some config I'm lacking
that's causing this to fail?


Re: UAA: Level count of Spaces under an Org

Filip Hanik
 

hi Zongwei, I suggest you start a new thread that doesn't have [UAA] in the
subject. This is a Cloud Controller question, and the experts on that
component will not read this post, because it says [UAA].
Filip

On Fri, Sep 11, 2015 at 11:18 AM, Zongwei Sun <Zongwei.Sun(a)huawei.com>
wrote:

Hi Filip,
Currently, only one level of Org is supported, so you just cannot create a
child Org under an Org. People are asking if we can extend it and support
multiple levels of Orgs. I am not sure about the full implications of doing
this. Any help would be appreciated.

Thanks,
Zongwei



Re: UAA: Level count of Spaces under an Org

Zongwei Sun
 

Hi Filip,
Currently, only one level of Org is supported, so you just cannot create a child Org under an Org. People are asking if we can extend it and support multiple levels of Orgs. I am not sure about the full implications of doing this. Any help would be appreciated.

Thanks,
Zongwei


Warden: staging error when pushing app

kyle havlovitz <kylehav@...>
 

I'm getting an error pushing any app during the staging step. cf logs
returns only this:

2015-09-11T15:24:24.33+0000 [API] OUT Created app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88
2015-09-11T15:24:24.41+0000 [API] OUT Updated app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88
({"route"=>"5737c5f5-b017-43da-9013-2b6fe7db03f7"})
2015-09-11T15:24:29.54+0000 [DEA/0] OUT Got staging request for app
with id 47efa472-e2d5-400b-9135-b1a1dbe3ba88
2015-09-11T15:24:30.71+0000 [API] OUT Updated app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88 ({"state"=>"STARTED"})
2015-09-11T15:24:30.76+0000 [STG/0] OUT -----> Downloaded app package
(4.0K)
2015-09-11T15:25:06.00+0000 [API] ERR encountered error: Staging
error: failed to stage application:
2015-09-11T15:25:06.00+0000 [API] ERR Script exited with status 1

In the warden logs, there are a few suspect messages:

{
  "timestamp": 1441985105.8883495,
  "message": "Exited with status 1 (35.120s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"/var/warden/containers/18vf956il5v/bin/iomux-link\", \"-w\", \"/var/warden/containers/18vf956il5v/jobs/8/cursors\", \"/var/warden/containers/18vf956il5v/jobs/8\"]",
  "log_level": "warn",
  "source": "Warden::Container::Linux",
  "data": {
    "handle": "18vf956il5v",
    "stdout": "",
    "stderr": ""
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}

{
  "timestamp": 1441985105.94083,
  "message": "Exited with status 23 (0.023s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"rsync\", \"-e\", \"/var/warden/containers/18vf956il5v/bin/wsh --socket /var/warden/containers/18vf956il5v/run/wshd.sock --rsh\", \"-r\", \"-p\", \"--links\", \"vcap(a)container:/tmp/staged/staging_info.yml\", \"/tmp/dea_ng/staging/d20150911-17093-1amg6y8\"]",
  "log_level": "warn",
  "source": "Warden::Container::Linux",
  "data": {
    "handle": "18vf956il5v",
    "stdout": "",
    "stderr": "rsync: link_stat \"/tmp/staged/staging_info.yml\" failed: No such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1655) [Receiver=3.1.0]\nrsync: [Receiver] write error: Broken pipe (32)\n"
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}

{
  "timestamp": 1441985106.0086887,
  "message": "Killing oom-notifier process",
  "log_level": "debug",
  "source": "Warden::Container::Features::MemLimit::OomNotifier",
  "data": {},
  "thread_id": 69890836968240,
  "fiber_id": 69890848620580,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/features/mem_limit.rb",
  "lineno": 51,
  "method": "kill"
}

{
  "timestamp": 1441985106.0095143,
  "message": "Exited with status 0 (35.427s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"/opt/cloudfoundry/warden/warden/src/oom/oom\", \"/tmp/warden/cgroup/memory/instance-18vf956il5v\"]",
  "log_level": "warn",
  "source": "Warden::Container::Features::MemLimit::OomNotifier",
  "data": {
    "stdout": "",
    "stderr": ""
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}




Obviously something is misconfigured, but I'm not sure what. I don't know
why the out-of-memory messages are appearing, since the memory used by the
test app I've pushed is tiny (a 64M app with the staticfile buildpack), and
the DEA config has resource.memory_mb set to 8 gigs and
staging.memory_limit_mb set to 1 gig. Is there some config I'm lacking
that's causing this to fail?


Re: Consumer from doppler

Rohit Kumar
 

The noaa library also lets you consume data from the loggregator firehose.
The firehose sample app
<https://github.com/cloudfoundry/noaa/blob/master/firehose_sample/main.go>
shows how you can do this.
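
In case a concrete starting point helps, here is a minimal sketch along the
lines of that sample. The Doppler address, token, and subscription ID below
are placeholders, and the exact NewConsumer/Firehose signatures depend on the
noaa version you vendor, so treat this as an approximation and follow the
linked sample for the precise API:

package main

import (
	"crypto/tls"
	"fmt"
	"os"

	"github.com/cloudfoundry/noaa"
	"github.com/cloudfoundry/sonde-go/events"
)

func main() {
	// Placeholders: point these at your own environment.
	dopplerAddress := "wss://doppler.10.244.0.34.xip.io:443"
	authToken := os.Getenv("CF_ACCESS_TOKEN") // e.g. the output of `cf oauth-token`
	subscriptionID := "firehose-sample"

	// InsecureSkipVerify is only appropriate for test environments
	// that use self-signed certificates.
	consumer := noaa.NewConsumer(dopplerAddress, &tls.Config{InsecureSkipVerify: true}, nil)

	msgChan := make(chan *events.Envelope)
	errorChan := make(chan error)

	// Firehose streams every event Doppler emits (logs, metrics, HTTP events)
	// for the whole platform, not just the events of a single app GUID.
	go consumer.Firehose(subscriptionID, authToken, msgChan, errorChan)

	go func() {
		for err := range errorChan {
			fmt.Fprintf(os.Stderr, "%v\n", err)
		}
	}()

	for msg := range msgChan {
		fmt.Printf("%v\n", msg)
	}
}

Note that connections sharing the same subscription ID have the firehose
traffic load-balanced across them, so use a distinct ID per logical consumer.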

Rohit

On Fri, Sep 11, 2015 at 2:07 AM, yancey0623 <yancey0623(a)163.com> wrote:

Dear all!

How can I consume all data from doppler?

For example, in the noaa consumer example, the GUID param is required; it
seems like noaa can only read from a single app.


Re: cloud_controller_ng process only uses 100% cpu

CF Runtime
 

MRI Ruby is not able to execute threads in parallel. There is a "Global
Interpreter Lock" that prevents Ruby code in multiple threads from executing
at the same time. Threads can still perform IO operations concurrently, but
the process will never be able to use more than ~100% CPU.

Joseph
OSS Release Integration Team

On Fri, Sep 11, 2015 at 1:44 AM, Lyu yun <lvyun(a)huawei.com> wrote:

I'm using CF v195, Ruby v2.1.4.

The CC VM has 4 cores. I found that the ruby process (in fact the
cloud_controller_ng process) can reach about 104% CPU usage on average
across the 4 cores, but cannot go any higher.

Can Ruby 2.1.4 run threads in parallel on multiple cores?


[Bosh-lite] Can not recreate vm/job

Yitao Jiang
 

All,

I just tried to recreate my router VM, but it failed with the following exception.

root(a)bosh-lite:~# bosh -n -d /vagrant/manifests/cf-manifest.yml recreate
router_z1

Processing deployment manifest
------------------------------

Processing deployment manifest
------------------------------
You are about to recreate router_z1/0

Processing deployment manifest
------------------------------

Performing `recreate router_z1/0'...

Director task 128
Started preparing deployment
Started preparing deployment > Binding deployment. Done (00:00:00)
Started preparing deployment > Binding releases. Done (00:00:00)
Started preparing deployment > Binding existing deployment. Done
(00:00:01)
Started preparing deployment > Binding resource pools. Done (00:00:00)
Started preparing deployment > Binding stemcells. Done (00:00:00)
Started preparing deployment > Binding templates. Done (00:00:00)
Started preparing deployment > Binding properties. Done (00:00:00)
Started preparing deployment > Binding unallocated VMs. Done (00:00:00)
Started preparing deployment > Binding instance networks. Done (00:00:00)
Done preparing deployment (00:00:01)

Started preparing package compilation > Finding packages to compile. Done
(00:00:00)

Started preparing dns > Binding DNS. Done (00:00:00)

Started preparing configuration > Binding configuration. Done (00:00:02)

Started updating job api_z1 > api_z1/0. Failed: Attaching disk
'32a54912-9641-4c01-577c-99b09bb2d39c' to VM
'a5532a05-88e5-45aa-5022-ad4c6f81c4cc': Mounting persistent bind mounts
dir: Mounting disk specific persistent bind mount: Running command: 'mount
/var/vcap/store/cpi/disks/32a54912-9641-4c01-577c-99b09bb2d39c
/var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c
-o loop', stdout: '', stderr: 'mount: according to mtab
/var/vcap/store/cpi/disks/32a54912-9641-4c01-577c-99b09bb2d39c is already
mounted on
/var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c
as loop
': exit status 32 (00:10:40)

Error 100: Attaching disk '32a54912-9641-4c01-577c-99b09bb2d39c' to VM
'a5532a05-88e5-45aa-5022-ad4c6f81c4cc': Mounting persistent bind mounts
dir: Mounting disk specific persistent bind mount: Running command: 'mount
/var/vcap/store/cpi/disks/32a54912-9641-4c01-577c-99b09bb2d39c
/var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c
-o loop', stdout: '', stderr: 'mount: according to mtab
/var/vcap/store/cpi/disks/32a54912-9641-4c01-577c-99b09bb2d39c is already
mounted on
/var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c
as loop
': exit status 32




--

Regards,

Yitao
jiangyt.github.io


cloud_controller_ng process only uses 100% cpu

Lyu yun
 

I'm using CF v195, Ruby v2.1.4.

The CC VM has 4 cores. I found that the ruby process (in fact the cloud_controller_ng process) can reach about 104% CPU usage on average across the 4 cores, but cannot go any higher.

Can Ruby 2.1.4 run threads in parallel on multiple cores?


Consumer from doppler

Yancey
 

Dear all!

How can I consume all data from doppler?

For example, in the noaa consumer example, the GUID param is required; it seems like noaa can only read from a single app.


Re: Starting Spring Boot App after deploying it to CF

Naga Rakesh
 

Did you make your jar/war executable? If not, that would help.

Just add the following to your pom, below the dependencies:

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>

The spring-boot-maven-plugin will make the jar/war executable.


Thanks,
Venkata

On Thu, Sep 10, 2015 at 12:33 PM, Qing Gong <qinggong(a)gmail.com> wrote:

I built a Spring Boot app, and when I use java -jar SpringBootApp.jar to run
it, the code works as expected. The System.out message printed as expected.

public static void main(String[] args)
{
    SpringApplication.run(Application.class, args);
    System.out.println("Spring Boot Test Message");
}

However, when deployed in CF using cf push myApp -p SpringBootApp.jar, the
main() was not executed. I have tried using META-INF/MANIFEST.MF to include
the Main-Class, using config/java-main.yml, and using manifest.yml to include
java_main_class, but none worked. The app just would not start. Do I need to
do anything else to trigger the app to start its main method?

Thanks!