
Re: UAA: Level count of Spaces under an Org

Zongwei Sun
 

Hi Filip,
Currently, only one level of Org is supported, so you cannot create a child Org under an Org. People are asking if we can extend this and support multiple levels of Orgs. I am not sure of the full implications of doing this. Any help would be appreciated.

Thanks,
Zongwei


Warden: staging error when pushing app

kyle havlovitz <kylehav@...>
 

I'm getting an error pushing any app during the staging step. cf logs
returns only this:

2015-09-11T15:24:24.33+0000 [API] OUT Created app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88
2015-09-11T15:24:24.41+0000 [API] OUT Updated app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88
({"route"=>"5737c5f5-b017-43da-9013-2b6fe7db03f7"})
2015-09-11T15:24:29.54+0000 [DEA/0] OUT Got staging request for app
with id 47efa472-e2d5-400b-9135-b1a1dbe3ba88
2015-09-11T15:24:30.71+0000 [API] OUT Updated app with guid
47efa472-e2d5-400b-9135-b1a1dbe3ba88 ({"state"=>"STARTED"})
2015-09-11T15:24:30.76+0000 [STG/0] OUT -----> Downloaded app package
(4.0K)
2015-09-11T15:25:06.00+0000 [API] ERR encountered error: Staging
error: failed to stage application:
2015-09-11T15:25:06.00+0000 [API] ERR Script exited with status 1

In the warden logs, there are a few suspect messages:

{
  "timestamp": 1441985105.8883495,
  "message": "Exited with status 1 (35.120s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"/var/warden/containers/18vf956il5v/bin/iomux-link\", \"-w\", \"/var/warden/containers/18vf956il5v/jobs/8/cursors\", \"/var/warden/containers/18vf956il5v/jobs/8\"]",
  "log_level": "warn",
  "source": "Warden::Container::Linux",
  "data": {
    "handle": "18vf956il5v",
    "stdout": "",
    "stderr": ""
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}

{
  "timestamp": 1441985105.94083,
  "message": "Exited with status 23 (0.023s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"rsync\", \"-e\", \"/var/warden/containers/18vf956il5v/bin/wsh --socket /var/warden/containers/18vf956il5v/run/wshd.sock --rsh\", \"-r\", \"-p\", \"--links\", \"vcap@container:/tmp/staged/staging_info.yml\", \"/tmp/dea_ng/staging/d20150911-17093-1amg6y8\"]",
  "log_level": "warn",
  "source": "Warden::Container::Linux",
  "data": {
    "handle": "18vf956il5v",
    "stdout": "",
    "stderr": "rsync: link_stat \"/tmp/staged/staging_info.yml\" failed: No such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1655) [Receiver=3.1.0]\nrsync: [Receiver] write error: Broken pipe (32)\n"
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}

{
  "timestamp": 1441985106.0086887,
  "message": "Killing oom-notifier process",
  "log_level": "debug",
  "source": "Warden::Container::Features::MemLimit::OomNotifier",
  "data": {},
  "thread_id": 69890836968240,
  "fiber_id": 69890848620580,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/features/mem_limit.rb",
  "lineno": 51,
  "method": "kill"
}

{
  "timestamp": 1441985106.0095143,
  "message": "Exited with status 0 (35.427s): [[\"/opt/cloudfoundry/warden/warden/src/closefds/closefds\", \"/opt/cloudfoundry/warden/warden/src/closefds/closefds\"], \"/opt/cloudfoundry/warden/warden/src/oom/oom\", \"/tmp/warden/cgroup/memory/instance-18vf956il5v\"]",
  "log_level": "warn",
  "source": "Warden::Container::Features::MemLimit::OomNotifier",
  "data": {
    "stdout": "",
    "stderr": ""
  },
  "thread_id": 69890836968240,
  "fiber_id": 69890849112480,
  "process_id": 17063,
  "file": "/opt/cloudfoundry/warden/warden/lib/warden/container/spawn.rb",
  "lineno": 135,
  "method": "set_deferred_success"
}




Obviously something is misconfigured, but I'm not sure what. I don't know
why the out-of-memory notifier is kicking in, since the memory used by the
test app I've pushed is tiny (a 64M app with the staticfile buildpack), the
dea config has resource.memory_mb set to 8 GB, and staging.memory_limit_mb is
set to 1 GB. Is there some config I'm missing that's causing this to fail?
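
For reference, those two settings sit in the DEA's rendered config roughly
like this (the key names are the ones quoted above, but the nesting is my
approximation, so treat it as a sketch rather than the literal file):

# approximate layout of the relevant dea config keys (nesting assumed)
resources:
  memory_mb: 8192        # resource.memory_mb
staging:
  memory_limit_mb: 1024  # staging.memory_limit_mb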


Re: Consumer from doppler

Rohit Kumar
 

The noaa library also lets you consume data from the loggregator firehose.
The firehose sample app
<https://github.com/cloudfoundry/noaa/blob/master/firehose_sample/main.go>
shows how you can do this.
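
As a rough sketch of the pattern the sample uses (the package path and
signatures below follow the current noaa consumer API and may differ between
noaa versions, so treat the linked sample as authoritative):

package main

import (
    "crypto/tls"
    "fmt"

    "github.com/cloudfoundry/noaa/consumer"
)

func main() {
    // Placeholder values: substitute your traffic controller URL and an
    // OAuth token that has the doppler.firehose scope.
    trafficControllerURL := "wss://doppler.10.244.0.34.xip.io:443"
    authToken := "bearer ..."

    c := consumer.New(trafficControllerURL, &tls.Config{InsecureSkipVerify: true}, nil)

    // Firehose (unlike Stream, which needs an app GUID) delivers every
    // envelope the firehose emits, plus a separate channel of errors.
    msgChan, errChan := c.Firehose("my-subscription-id", authToken)

    go func() {
        for err := range errChan {
            fmt.Println("firehose error:", err)
        }
    }()

    for envelope := range msgChan {
        fmt.Println(envelope)
    }
}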

Rohit

On Fri, Sep 11, 2015 at 2:07 AM, yancey0623 <yancey0623(a)163.com> wrote:

Dear all,

How can I consume all data from doppler?

For example, in the noaa consumer example, the GUID parameter is required; it
seems that noaa can only read from a single app.


Re: cloud_controller_ng process only uses 100% cpu

CF Runtime
 

MRI Ruby is not able to execute threads in parallel. There is a "Global
Interpreter Lock" that prevents Ruby code in multiple threads from
executing at the same time. Threads can still perform IO operations
concurrently, but the process will never be able to use more than ~100% CPU.

Joseph
OSS Release Integration Team

On Fri, Sep 11, 2015 at 1:44 AM, Lyu yun <lvyun(a)huawei.com> wrote:

I'm using CF v195 and Ruby v2.1.4.

The CC VM has 4 cores. I found that the ruby process (in fact the
cloud_controller_ng process) can reach about 104% CPU usage averaged across
the 4 cores, but cannot go any higher.

Can Ruby 2.1.4 run threads in parallel across multiple cores?


[Bosh-lite] Can not recreate vm/job

Yitao Jiang
 

All,

I just tried to recreate my router VM, but it failed with the following exception.

root(a)bosh-lite:~# bosh -n -d /vagrant/manifests/cf-manifest.yml recreate
router_z1

Processing deployment manifest
------------------------------

Processing deployment manifest
------------------------------
You are about to recreate router_z1/0

Processing deployment manifest
------------------------------

Performing `recreate router_z1/0'...

Director task 128
Started preparing deployment
Started preparing deployment > Binding deployment. Done (00:00:00)
Started preparing deployment > Binding releases. Done (00:00:00)
Started preparing deployment > Binding existing deployment. Done
(00:00:01)
Started preparing deployment > Binding resource pools. Done (00:00:00)
Started preparing deployment > Binding stemcells. Done (00:00:00)
Started preparing deployment > Binding templates. Done (00:00:00)
Started preparing deployment > Binding properties. Done (00:00:00)
Started preparing deployment > Binding unallocated VMs. Done (00:00:00)
Started preparing deployment > Binding instance networks. Done (00:00:00)
Done preparing deployment (00:00:01)

Started preparing package compilation > Finding packages to compile. Done
(00:00:00)

Started preparing dns > Binding DNS. Done (00:00:00)

Started preparing configuration > Binding configuration. Done (00:00:02)

Started updating job api_z1 > api_z1/0. Failed: Attaching disk
'32a54912-9641-4c01-577c-99b09bb2d39c' to VM
'a5532a05-88e5-45aa-5022-ad4c6f81c4cc': Mounting persistent bind mounts
dir: Mounting disk specific persistent bind mount: Running command: 'mount
/var/vcap/store/cpi/disks/32a54912-9641-4c01-577c-99b09bb2d39c
/var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c
-o loop', stdout: '', stderr: 'mount: according to mtab
/var/vcap/store/cpi/disks/32a54912-9641-4c01-577c-99b09bb2d39c is already
mounted on
/var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c
as loop
': exit status 32 (00:10:40)

Error 100: Attaching disk '32a54912-9641-4c01-577c-99b09bb2d39c' to VM
'a5532a05-88e5-45aa-5022-ad4c6f81c4cc': Mounting persistent bind mounts
dir: Mounting disk specific persistent bind mount: Running command: 'mount
/var/vcap/store/cpi/disks/32a54912-9641-4c01-577c-99b09bb2d39c
/var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c
-o loop', stdout: '', stderr: 'mount: according to mtab
/var/vcap/store/cpi/disks/32a54912-9641-4c01-577c-99b09bb2d39c is already
mounted on
/var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c
as loop
': exit status 32




--

Regards,

Yitao
jiangyt.github.io


cloud_controller_ng process only uses 100% cpu

Lyu yun
 

I'm using CF v195 and Ruby v2.1.4.

The CC VM has 4 cores. I found that the ruby process (in fact the cloud_controller_ng process) can reach about 104% CPU usage averaged across the 4 cores, but cannot go any higher.

Can Ruby 2.1.4 run threads in parallel across multiple cores?


Consumer from doppler

Yancey
 

Dear all,

How can I consume all data from doppler?

For example, in the noaa consumer example, the GUID parameter is required; it seems that noaa can only read from a single app.


Re: Starting Spring Boot App after deploying it to CF

Naga Rakesh
 

Did you make your jar/war executable? If not, doing so should help.

Just add the following to your pom, below the dependencies:

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>

The spring-boot-maven-plugin will make the jar/war executable.


Thanks,
Venkata

On Thu, Sep 10, 2015 at 12:33 PM, Qing Gong <qinggong(a)gmail.com> wrote:

I built a Spring Boot app; when I run it with java -jar SpringBootApp.jar,
the code works as expected and the System.out message is printed as expected.

public static void main(String[] args)
{
    SpringApplication.run(Application.class, args);
    System.out.println("Spring Boot Test Message");
}

However, when the app is deployed to CF using cf push myApp -p SpringBootApp.jar,
main() is not executed. I have tried using META-INF/MANIFEST.MF to include the
Main-Class, using config/java-main.yml, and using manifest.yml to include
java_main_class; none of them worked. The app just would not start. Do I need
to do anything else to trigger the app to start its main method?

Thanks!


Re: Starting Spring Boot App after deploying it to CF

James Bayer
 

i tried this simple getting-started guide [1] and it worked easily for me
[2].

[1] http://spring.io/guides/gs/spring-boot/
[2] https://gist.github.com/jbayer/ecacb25822dddd44ba13

On Thu, Sep 10, 2015 at 12:33 PM, Qing Gong <qinggong(a)gmail.com> wrote:

I built a Spring Boot app; when I run it with java -jar SpringBootApp.jar,
the code works as expected and the System.out message is printed as expected.

public static void main(String[] args)
{
    SpringApplication.run(Application.class, args);
    System.out.println("Spring Boot Test Message");
}

However, when the app is deployed to CF using cf push myApp -p SpringBootApp.jar,
main() is not executed. I have tried using META-INF/MANIFEST.MF to include the
Main-Class, using config/java-main.yml, and using manifest.yml to include
java_main_class; none of them worked. The app just would not start. Do I need
to do anything else to trigger the app to start its main method?

Thanks!
--
Thank you,

James Bayer


Re: Unsubscribe

Karan Makim
 

-----Original Message-----
From: "Nithya Rajagopalan" <nith.r79(a)gmail.com>
Sent: 11-09-2015 07:36
To: "cf-dev(a)lists.cloudfoundry.org" <cf-dev(a)lists.cloudfoundry.org>; "cf-eng(a)lists.cloudfoundry.org" <cf-eng(a)lists.cloudfoundry.org>
Subject: [cf-dev] Unsubscribe


Re: UAA: Level count of Spaces under an Org

Filip Hanik
 

I think you mean Cloud Controller and not UAA.

UAA supports nested group hierarchies, but it doesn't manage spaces and
orgs.

On Thursday, September 10, 2015, Zongwei Sun <Zongwei.Sun(a)huawei.com> wrote:

Currently, there is only 1 level of Spaces under an Org with UAA. I heard
people talking about adding more levels of Spaces to it. I'd like to have
some discussion about whether this really makes sense. Thanks.


Re: Generic data points for dropsonde

Jim Park
 

One of the use cases that would benefit from this would be metrics sending.
Given that the current statsd protocol lacks the ability to supply
metadata, such as job and index ids, some apps have taken to inserting what
would otherwise be tagged data into the metric namespace. As an example:
[image: Screenshot 2015-09-10 17.25.19.png]

Endpoints like Datadog and OpenTSDB want key names that are not unique per
instance. Graphite has wildcard semantics to accommodate this. But Datadog
and OpenTSDB do not, and would need this implemented elsewhere in the
delivery chain. StatsD doesn't provide a way to side-channel this
information, and we don't want to implement custom parsing on consumers
when we overload the metric key.
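
Purely as an illustration (the names below are invented, not taken from any
existing nozzle), this is the difference between a key with the metadata
flattened into it and the same measurement carried as tags:

// Illustration only: names are made up for this example.
package example

// Flattened: job and index are baked into the key and must be parsed back out.
const flattenedKey = "router__0.latency"

// Tagged: the consumer can forward job and index as metadata without guessing.
type taggedMetric struct {
    Name  string
    Value float64
    Tags  map[string]string
}

var routerLatency = taggedMetric{
    Name:  "latency",
    Value: 12.5,
    Tags:  map[string]string{"job": "router", "index": "0"},
}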

I believe that this protocol will be a move towards providing a better
means by which folks can supply metrics to the system without having to
make convention decisions that have to be scraped out and transformed on
the consumer side, as was done above. A generic schema does not exist
currently, and this appears to be a promising way of delivering that
functionality.

It would be much easier to use a generic schema to output to DataDog,
OpenTSDB, Graphite, and others, than it would be to guess a schema from a
flattened result (for example, "router__0" is understandably job and index,
but what does the "vizzini_1_abcd" part represent? How would I parse this
if I didn't have a human trace it back to source?).

Thanks,


Jim

On Tue, Sep 8, 2015 at 7:29 AM Johannes Tuchscherer <jtuchscherer(a)pivotal.io>
wrote:

Ben,

I guess I am working under the assumption that the current upstream schema
is not going to see a terrible amount of change. The StatsD protocol has
been very stable for over four years, so I don't understand why we would
add more and more metric types. (I already struggle with the decision to
have container metrics as their own data type. I am not quite sure why that
was done vs just expressing them as ValueMetrics).

I am also not following your argument with the multiple implementations of
a redis export? Why would you have multiple implementations of a redis info
export? Also, why does the downstream consumer have to know about the
schema? Neither the datadog nozzle nor the graphite nozzle cares about any
type of schema right now.

But to answer your question, I think as a downstream developer I am not as
interested in whether you are sending me a uint32 or uint64, but the
meaning (e.g. counter vs value) is much more important to me. So, if you
were to do nested metrics, I would rather see nested counters or values in
there, plus maybe one type that we are missing, which is a generic event
with just a string.

Generally, I would try to avoid falling into the trap of creating an overly
generic system at the cost of making consumers unnecessarily complicated.
Maybe it would help if you outlined a few use cases that might benefit from
a system like this and how specifically you would implement a downstream
consumer (e.g. is there a common place where I can fetch the schema for the
generic data point?).

On Sat, Sep 5, 2015 at 6:57 AM, James Bayer <jbayer(a)pivotal.io> wrote:

after understanding ben's proposal of what i would call an extensible
generic point versus the status quo of metrics that are actually hard-coded
in software by both the metric producer and the metric consumer, i
immediately gravitated toward the approach by ben.

cloud foundry has really benefited from extensibility in these examples:

* diego lifecycles
* app buildpacks
* app docker images
* app as windows build artifact
* service brokers
* cf cli plugins
* collector plugins
* firehose nozzles
* diego route emitters
* garden backends
* bosh cli plugins
* bosh releases
* external bosh CPIs
* bosh health monitor plugins

let me know if there are other points of extension i'm missing.

in most cases, the initial implementations required cloud foundry system
components to change software to support additional extensibility, and some
of the examples above still require that, and it's a source of frustration
as someone with an idea to explore needs to persuade the team maintaining cf
to process a pull request or complete work on an area. i see ben's
proposal as making an extremely valuable additional point of extension for
creating application and system metrics that benefits the entire cloud
foundry ecosystem.

i am sympathetic to the question raised by dwayne around how large the
messages will be. it would seem that we could consider an upper bound on
the number of attributes supported by looking at the types of metrics that
would be expressed. the redis info point is already 84 attributes for
example.

all of the following seem related to scaling considerations off the top
of my head:
* how large an individual metric may be
* at what rate the platform should support producers sending metrics
* what platform quality of service to provide (lossiness or not, back
pressure, rate limiting, etc)
* what type of clients to the metrics are supported and any limitations
related to that.
* whether there is tenant variability in some of the dimensions above.
for example a system metric might have a higher SLA than an app metric

should we consider putting a boundary on the "how large an individual
metric may be" by limiting the initial implementation to a number of
attributes (that we could change later or make configurable?).

i'm personally really excited about this new set of extensibility being
proposed and the creative things people will do with it. having loggregator
as a built-in system component versus a bolt-on is already such a great
capability compared with other platforms and i see investments to make it
more extensible and apply to more scenarios as making cloud foundry more
valuable and more fun to use.

On Fri, Sep 4, 2015 at 10:52 AM, Benjamin Black <bblack(a)pivotal.io>
wrote:

johannes,

the problem of upstream schema changes causing downstream change or
breakage is the current situation: every addition of a metric type implies
a change to the dropsonde-protocol requiring everything downstream to be
updated.

the schema concerns are similar. currently there is no schema whatsoever
beyond the very fine grained "this is a name and this is a value". this
means every implementation of redis info export, for example, can, and
almost certainly will, be different. this results in every downstream
consumer having to know every possible variant or to only support specific
variants, both exactly the problem you are looking to avoid.

i share the core concern regarding ensuring points are "sufficiently"
self describing. however, there is no clear line delineating what is
sufficient. the current proposal pushes almost everything out to schema. we
could imagine a change to the attributes that includes what an attribute is
(gauge, counter, etc), what the units are for the attribute, and so on.

it is critical that we balance the complexity of the points against
complexity of the consumers as there is no free lunch here. which specific
functionality would you want to see in the generic points to achieve the
balance you prefer?


b



On Wed, Sep 2, 2015 at 2:07 PM, Johannes Tuchscherer <
jtuchscherer(a)pivotal.io> wrote:

The current way of sending metrics as either Values or Counters through
the pipeline makes the development of a downstream consumer (=nozzle)
pretty easy. If you look at the datadog nozzle[0], it just takes all
ValueMetrics and Counters and sends them off to datadog. The nozzle does
not have to know anything about these metrics (e.g. their origin, name, or
layout).
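
For illustration, the consumption pattern looks roughly like the sketch below
(type and accessor names follow the sonde-go events package; the actual
forwarding to datadog is stubbed out with prints):

package nozzle

import (
    "fmt"

    "github.com/cloudfoundry/sonde-go/events"
)

// handleEnvelope forwards ValueMetrics and CounterEvents without knowing
// anything about their origin, name, or layout.
func handleEnvelope(env *events.Envelope) {
    switch env.GetEventType() {
    case events.Envelope_ValueMetric:
        vm := env.GetValueMetric()
        fmt.Printf("gauge %s.%s = %f %s\n", env.GetOrigin(), vm.GetName(), vm.GetValue(), vm.GetUnit())
    case events.Envelope_CounterEvent:
        ce := env.GetCounterEvent()
        fmt.Printf("counter %s.%s += %d\n", env.GetOrigin(), ce.GetName(), ce.GetDelta())
    default:
        // logs, HTTP events, container metrics, etc. are ignored in this sketch
    }
}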

Adding a new way to send metrics as a nested object would make the
downstream implementation certainly more complicated. In that case, the
nozzle developer has to know what metrics are included inside the generic
point (basically the schema of the metric) and treat each point
accordingly. For example, if I were to write a nozzle that emits metrics to
Graphite with a StatsD client (like it is done here[1]), I need to know if
my int64 value is a Gauge or a Counter. Also, my consumer is under constant
risk of breaking when the upstream schema changes.

We are already facing this problem with the container metrics. But at
least the container metrics are in a defined format that is well documented
and not likely to change.

I agree with you, though, that the dropsonde protocol could use a
mechanism for easier extension. Having a GenericPoint and/or GenericEvent
seems like a good idea that I whole-heartedly support. I would just like to
stay away from nested metrics. I think the cost of adding more logic into
the downstream consumer (and making it harder to maintain) is not worth the
benefit of a more concise metric transport.


[0] https://github.com/cloudfoundry-incubator/datadog-firehose-nozzle
[1] https://github.com/CloudCredo/graphite-nozzle

On Tue, Sep 1, 2015 at 5:52 PM, Benjamin Black <bblack(a)pivotal.io>
wrote:

great questions, dwayne.

1) the partition key is intended to be used in a similar manner to
partitioners in distributed systems like cassandra or kafka. the specific
behavior i would like to make part of the contract is two-fold: that all
data with the same key is routed to the same partition and that all data in
a partition is FIFO (meaning no ordering guarantees beyond arrival time).
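
as a minimal sketch of just that contract (nothing below is from an actual
implementation), equal keys always hash to the same partition, and each
partition is consumed in arrival order:

package example

import "hash/fnv"

// partitionFor models only the two guarantees above: data with the same key
// always lands in the same partition, and within a partition consumers see
// data in arrival order (each partition is just a FIFO queue).
func partitionFor(key string, numPartitions uint32) uint32 {
    h := fnv.New32a()
    h.Write([]byte(key))
    return h.Sum32() % numPartitions
}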

this could help with the multi-line log problem by ensuring a single
consumer will receive all lines for a given log entry in order, allowing
simple reassembly. however, the lines might be interleaved with other lines
with the same key or even other keys that happen to map to the same
partition, so the consumer does require a bit of intelligence. this was
actually one of the driving scenarios for adding the key.

2) i expect typical points to be in the hundreds of bytes to a few KB.
if we find ourselves regularly needing much larger points, especially near
that 64KB limit, i'd look to the JSON representation as the hierarchical
structure is more efficiently managed there.


b




On Tue, Sep 1, 2015 at 4:42 PM, <dschultz(a)pivotal.io> wrote:

Hi Ben,

I was wondering if you could give a concrete use case for the
partition key functionality.

In particular I am interested in how we solve multi line log entries.
I think it would be better to solve it by keeping all the data (the
multiple lines) together throughout the logging/metrics pipeline, but could
see how something like a partition key might help keep the data together as
well.

Second question: how large do you see these point messages getting
(average and max)? There are still several stages of the logging/metrics
pipeline that use UDP with a standard 64K size limit.

Thanks,
Dwayne

On Aug 28, 2015, at 4:54 PM, Benjamin Black <bblack(a)pivotal.io>
wrote:

All,

The existing dropsonde protocol uses a different message type for
each event type. HttpStart, HttpStop, ContainerMetrics, and so on are all
distinct types in the protocol definition. This requires protocol changes
to introduce any new event type, making such changes very expensive. We've
been working for the past few weeks on an addition to the dropsonde
protocol to support easier future extension to new types of events and to
make it easier for users to define their own events.

The document linked below [1] describes a generic data point message
capable of carrying multi-dimensional, multi-metric points as sets of
name/value pairs. This new message is expected to be added as an additional
entry in the existing dropsonde protocol metric type enum. Things are now
at a point where we'd like to get feedback from the community before moving
forward with implementation.
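
The linked document is the authoritative definition; purely to give a rough
picture of the shape described above, you can think of a multi-dimensional,
multi-metric point as something like:

// Rough illustration only; the actual proposal is defined in the linked
// document, not here. A single point carries shared dimensions plus a set
// of named measurements (e.g. the ~84 attributes of a redis info dump).
type genericPoint struct {
    Timestamp  int64
    Dimensions map[string]string  // e.g. {"job": "redis", "index": "3"}
    Metrics    map[string]float64 // e.g. {"used_memory": 1.2e7, "connected_clients": 42}
}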

Please contribute your thoughts on the document in whichever way you
are most comfortable: comments on the document, email here, or email
directly to me. If you comment on the document, please make sure you are
logged in so we can keep track of who is asking for what. Your views are
not just appreciated, but critical to the continued health and success of
the Cloud Foundry community. Thank you!


b

[1]
https://docs.google.com/document/d/1SzvT1BjrBPqUw6zfSYYFfaW9vX_dTZZjn5sl2nxB6Bc/edit?usp=sharing





--
Thank you,

James Bayer


CF Release Scripts Moved

Natalie Bennett
 

CF Release top-level scripts have been moved. The new location is under the
`scripts/` folder.

Thanks,
CF OSS Release Integration Team


Migrating from 10.244.0.34.xip.io to bosh-lite.com

Dan Wendorf
 

On the CAPI team, we were experiencing pain around the flakiness of xip.io
causing spurious test failures. We have registered bosh-lite.com, which is
just an A record pointing to 10.244.0.34, along with its subdomains (e.g.
foo.bar.bosh-lite.com also resolves to 10.244.0.34).

We would like to switch cf-release, cf-acceptance-tests, and smoke-tests to
use bosh-lite.com exclusively, but wanted to check with the community to
see if there are any compelling reasons not to switch. In the case that
there's no reason not to switch, we wanted to give a little bit of a
heads-up before making the change.

In our testing of this, CATs run just fine. On our bosh-lite CF
environment, we had to `cf delete-shared-domain 10.244.0.34.xip.io`, `cf
target -o CATS-persistent-org && cf delete CATS-persistent-app`, and update
the file in our CONFIG environment variable to make tests pass, but
everything else "just worked".

You can review these changes on the bosh_lite_com branches of the following
repositories:

cf-acceptance-tests
<https://github.com/cloudfoundry/cf-acceptance-tests/tree/bosh_lite_com>
cf-smoke-tests
<https://github.com/cloudfoundry/cf-smoke-tests/tree/bosh_lite_com>
cf-release <https://github.com/cloudfoundry/cf-release/tree/bosh_lite_com>
sprout-capi
<https://github.com/cloudfoundry-incubator/sprout-capi/tree/bosh_lite_com>
(for those interested in configuring their bosh-lite via Sprout)

Thoughts?


FYI: Survey: Cloud Foundry Service Broker Compliance

Michael Maximilien
 

Hi, all,

I've been working on some side projects with IBM Research to improve
various aspects of CF. Some are pretty early research work and some are
ready to graduate and presented to you.

One of these relates to compliance of CF Service Brokers. We want to share
this work and make it open. We are planning a series of meetings next week
to socialize it, open it up, and propose incubation. If you are interested,
ping me directly.

--------
In the meantime, Mohamed and Heiko, my colleagues from IBM Research, and
I have put together a short (literally two-minute) survey to gauge the value
of having Cloud Foundry (CF) service broker compliance.

https://www.surveymonkey.com/r/N37SD85

We'd be grateful if you could find some time to take this short survey
before we start socializing the solution we have been working on.

--------
Feel free to forward the survey link to others who may not be on this
mailing list and who you think should also take the survey.

After we gather results, we will share a summary with everyone by next
Thursday.

All the best,

Mohamed, Heiko, and Max

------
dr.max
ibm cloud labs
silicon valley, ca
maximilien.org


CF-Abacus: IPMs

Michael Maximilien
 

Hi, all,

For anyone interested in CF-Abacus, we are having our IPMs on Fridays at
10a PDT at Pivotal HQ.

I don't have a room reserved for tomorrow, but please ping me if you are
interested in joining and I will add you to the invite list.

For folks at Pivotal, the team will have our scrum right after standup
near the ping pong area tomorrow, so feel free to swing by.

Best,

------
dr.max
ibm cloud labs
silicon valley, ca
maximilien.org


UAA: Level count of Spaces under an Org

Zongwei Sun
 

Currently, there is only 1 level of Spaces under an Org with UAA. I heard people talking about adding more levels of Spaces to it. I'd like to have some discussion about whether this really makes sense. Thanks.


Re: tcp-routing in Lattice

Atul Kshirsagar
 

Great! Give us your feedback after you have played around with tcp routing.


Re: tcp-routing in Lattice

Jack Cai
 

After sshing into the vagrant VM and digging into the processes/ports, I
found out that in my previous attempt I was trying to map an additional port
that was already occupied by garden (7777). Because of this conflict,
haproxy gave up mapping all the ports. Once I changed 7777 to 17777, the
issue went away.

So the lesson learned is to examine the ports that are already in use in the
vagrant VM and avoid using them.

Jack

On Thu, Sep 10, 2015 at 2:18 PM, Jack Cai <greensight(a)gmail.com> wrote:

Thanks Atul and Marco for your advice.

Below is the command I used to push the docker image:

ltc create hello <docker-image> --ports 8888,8788 --http-routes hello:8888 --tcp-routes 8788:8788 --memory-mb=0 --timeout=10m --monitor-port=8888

After the push completed, it reported below:

...
hello is now running.
App is reachable at:
192.168.11.11.xip.io:8788
http://hello.192.168.11.11.xip.io

I also tried to update the routes:

ltc update hello --http-routes hello:8888 --tcp-routes 8788:8788

If I do "ltc status hello", I see the below routes:

Instances      1/1
Start Timeout  0
DiskMB         0
MemoryMB       0
CPUWeight      100
Ports          8788,8888
Routes         192.168.11.11.xip.io:8788 => 8788
               hello.192.168.11.11.xip.io => 8888

But when I visited http://192.168.11.11.xip.io:8788/, I got "Unable to
connect", while I could visit http://hello.192.168.11.11.xip.io/
successfully.

Below is the log I saw when doing "vagrant up" to bring up Lattice:

...
==> default: stdin: is not a tty
==> default: mkdir: created directory '/var/lattice'
==> default: mkdir: created directory '/var/lattice/setup'
==> default: Running provisioner: shell...
    default: Running: inline script
==> default: stdin: is not a tty
==> default: * Stopping web server lighttpd
==> default: ...done.
==> default: Installing cflinuxfs2 rootfs...
==> default: done
==> default: * Starting web server lighttpd
==> default: ...done.
==> default: Installing Lattice (v0.4.0) (Diego 0.1398.0) - Brain
==> default: Finished Installing Lattice Brain (v0.4.0) (Diego 0.1398.0)!
==> default: Installing Lattice (v0.4.0) (Diego 0.1398.0) - Lattice Cell
==> default: Finished Installing Lattice Cell (v0.4.0) (Diego 0.1398.0)!
==> default: bootstrap start/running
==> default: Lattice is now installed and running.
==> default: You may target it using: ltc target 192.168.11.11.xip.io

There is an error "stdin: is not a tty", and I don't see haproxy mentioned
in the log. Maybe haproxy is not started at all?

Jack



On Wed, Sep 9, 2015 at 8:13 PM, Marco Nicosia <mnicosia(a)pivotal.io> wrote:

Hi Jack,

In addition to Atul's suggestions, could you please give us the exact
command lines which you used to launch the two apps?

The CLI arguments are tricky, we may be able to see something about the
way you've tried to configure the routes by looking at how you've launched
the apps.

--
Marco Nicosia
Product Manager
Pivotal Software, Inc.
mnicosia(a)pivotal.io
c: 650-796-2948


On Wed, Sep 9, 2015 at 2:32 PM, Jack Cai <greensight(a)gmail.com> wrote:

I'm playing around with the tcp-routing feature in the latest Lattice
release. I started two node.js applications in the pushed image (listening
on two ports), one mapped to an http route and the other to a tcp route. I
can connect to the http route successfully in the browser, but when I try
to connect to the tcp port in the browser, I got connection refused. It
looks like the mapped public tcp port on 192.168.11.11 is not open at all.
Any advice on how to diagnose this? Thanks in advance!

Jack


Starting Spring Boot App after deploying it to CF

Qing Gong
 

I built a Spring Boot app; when I run it with java -jar SpringBootApp.jar, the code works as expected and the System.out message is printed as expected.

public static void main(String[] args)
{
    SpringApplication.run(Application.class, args);
    System.out.println("Spring Boot Test Message");
}

However, when the app is deployed to CF using cf push myApp -p SpringBootApp.jar, main() is not executed. I have tried using META-INF/MANIFEST.MF to include the Main-Class, using config/java-main.yml, and using manifest.yml to include java_main_class; none of them worked. The app just would not start. Do I need to do anything else to trigger the app to start its main method?

Thanks!