[Bosh-lite] Cannot recreate VM/job

Yitao Jiang
 

All,

I just tried to recreate my router VM, but it failed with the following exception.

root(a)bosh-lite:~# bosh -n -d /vagrant/manifests/cf-manifest.yml recreate
router_z1

Processing deployment manifest
------------------------------

Processing deployment manifest
------------------------------
You are about to recreate router_z1/0

Processing deployment manifest
------------------------------

Performing `recreate router_z1/0'...

Director task 128
Started preparing deployment
Started preparing deployment > Binding deployment. Done (00:00:00)
Started preparing deployment > Binding releases. Done (00:00:00)
Started preparing deployment > Binding existing deployment. Done
(00:00:01)
Started preparing deployment > Binding resource pools. Done (00:00:00)
Started preparing deployment > Binding stemcells. Done (00:00:00)
Started preparing deployment > Binding templates. Done (00:00:00)
Started preparing deployment > Binding properties. Done (00:00:00)
Started preparing deployment > Binding unallocated VMs. Done (00:00:00)
Started preparing deployment > Binding instance networks. Done (00:00:00)
Done preparing deployment (00:00:01)

Started preparing package compilation > Finding packages to compile. Done
(00:00:00)

Started preparing dns > Binding DNS. Done (00:00:00)

Started preparing configuration > Binding configuration. Done (00:00:02)

Started updating job api_z1 > api_z1/0. Failed: Attaching disk
'32a54912-9641-4c01-577c-99b09bb2d39c' to VM
'a5532a05-88e5-45aa-5022-ad4c6f81c4cc': Mounting persistent bind mounts
dir: Mounting disk specific persistent bind mount: Running command: 'mount
/var/vcap/store/cpi/disks/32a54912-9641-4c01-577c-99b09bb2d39c
/var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c
-o loop', stdout: '', stderr: 'mount: according to mtab
/var/vcap/store/cpi/disks/32a54912-9641-4c01-577c-99b09bb2d39c is already
mounted on
/var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c
as loop
': exit status 32 (00:10:40)

Error 100: Attaching disk '32a54912-9641-4c01-577c-99b09bb2d39c' to VM
'a5532a05-88e5-45aa-5022-ad4c6f81c4cc': Mounting persistent bind mounts
dir: Mounting disk specific persistent bind mount: Running command: 'mount
/var/vcap/store/cpi/disks/32a54912-9641-4c01-577c-99b09bb2d39c
/var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c
-o loop', stdout: '', stderr: 'mount: according to mtab
/var/vcap/store/cpi/disks/32a54912-9641-4c01-577c-99b09bb2d39c is already
mounted on
/var/vcap/store/cpi/persistent_bind_mounts_dir/a5532a05-88e5-45aa-5022-ad4c6f81c4cc/32a54912-9641-4c01-577c-99b09bb2d39c
as loop
': exit status 32




--

Regards,

Yitao
jiangyt.github.io


cloud_controller_ng process only uses 100% cpu

Lyu yun
 

I'm using CF v195, Ruby v2.1.4;

The CC VM has 4 cores. I found that the Ruby process (in fact the cloud_controller_ng process) reaches about 104% CPU usage on average across the 4 cores, but cannot go any higher.

Does Ruby 2.1.4 support running threads in parallel across multiple cores?


Consumer from doppler

Yancey
 

Dear all!

How can I consume all data from Doppler?

For example, in the noaa consumer example, the app GUID parameter is required; it seems that noaa can only read from a single app.


Re: Starting Spring Boot App after deploying it to CF

Naga Rakesh
 

Did you make your jar/war executable? If not, doing so would help.

Just add the following to your pom, below the dependencies:

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>

The spring-boot-maven-plugin will make the jar/war executable.


Thanks,
Venkata

On Thu, Sep 10, 2015 at 12:33 PM, Qing Gong <qinggong(a)gmail.com> wrote:

I built a Spring Boot App and using java -jar SpringBootApp.jar to run it,
the code works as expected. The System.out printed as expected.

public static void main(String[] args)
{
SpringApplication.run(Application.class, args);
System.out.println("Spring Boot Test Message");
}

However, when deployed in CF using cf push myApp -p SpringBootApp.jar, the
main() was not executed. I have tried using META-INF/MANIFEST.MF to include
the Main-Class, or using config/java-main.yml, or manifest.yml to include
java_main_class, none worked. The app just would not start. Do I need to do
anything else to trigger the app to start its main method?

Thanks!


Re: Starting Spring Boot App after deploying it to CF

James Bayer
 

i tried this simple getting-started guide [1] and it worked easily for me
[2].

[1] http://spring.io/guides/gs/spring-boot/
[2] https://gist.github.com/jbayer/ecacb25822dddd44ba13

On Thu, Sep 10, 2015 at 12:33 PM, Qing Gong <qinggong(a)gmail.com> wrote:

I built a Spring Boot App and using java -jar SpringBootApp.jar to run it,
the code works as expected. The System.out printed as expected.

public static void main(String[] args)
{
SpringApplication.run(Application.class, args);
System.out.println("Spring Boot Test Message");
}

However, when deployed in CF using cf push myApp -p SpringBootApp.jar, the
main() was not executed. I have tried using META-INF/MANIFEST.MF to include
the Main-Class, or using config/java-main.yml, or manifest.yml to include
java_main_class, none worked. The app just would not start. Do I need to do
anything else to trigger the app to start its main method?

Thanks!
--
Thank you,

James Bayer


Re: Unsubscribe

Karan Makim
 

-----Original Message-----
From: "Nithya Rajagopalan" <nith.r79(a)gmail.com>
Sent: ‎11-‎09-‎2015 07:36
To: "cf-dev(a)lists.cloudfoundry.org" <cf-dev(a)lists.cloudfoundry.org>; "cf-eng(a)lists.cloudfoundry.org" <cf-eng(a)lists.cloudfoundry.org>
Subject: [cf-dev] Unsubscribe


Re: UAA: Level count of Spaces under an Org

Filip Hanik
 

I think you mean Cloud Controller and not UAA.

UAA supports nested group hierarchies, but it doesn't manage spaces and
orgs.

On Thursday, September 10, 2015, Zongwei Sun <Zongwei.Sun(a)huawei.com> wrote:

Currently, there is only 1 level of Spaces under an Org with UAA. I heard
people talking about adding more levels of Spaces to it. I'd like to have
some discussions if this really makes sense. Thanks.


Re: Generic data points for dropsonde

Jim Park
 

One of the use cases that would benefit from this is metrics sending.
Given that the current statsd protocol lacks the ability to supply
metadata, such as job and index ids, some apps have taken to inserting what
would otherwise be tagged data into the metric namespace. As an
example: [image:
Screenshot 2015-09-10 17.25.19.png]

Endpoints like Datadog and OpenTSDB want key names that are not unique per
instance. Graphite has wildcard semantics to accommodate this. But Datadog
and OpenTSDB do not, and would need this implemented elsewhere in the
delivery chain. StatsD doesn't provide a way to side-channel this
information, and we don't want to implement custom parsing on consumers
when we overload the metric key.

I believe that this protocol will be a move towards providing a better
means by which folks can supply metrics to the system without having to
make convention decisions that have to be scraped out and transformed on
the consumer side, as in the example above. A generic schema does not exist
currently, and this appears to be a promising way of delivering that
functionality.

It would be much easier to use a generic schema to output to DataDog,
OpenTSDB, Graphite, and others, than it would be to guess a schema from a
flattened result (for example, "router__0" is understandably job and index,
but what does the "vizzini_1_abcd" part represent? How would I parse this
if I didn't have a human trace it back to source?).
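
To make the contrast concrete, here is a rough Go sketch (purely illustrative; the TaggedMetric type and the key layout are hypothetical, not part of any existing dropsonde or StatsD API) of a flattened key versus the same data carried as tags:

package main

import (
	"fmt"
	"strings"
)

// TaggedMetric is a hypothetical shape for a metric whose metadata travels
// as key/value pairs instead of being baked into the metric name.
type TaggedMetric struct {
	Name  string
	Value float64
	Tags  map[string]string
}

func main() {
	// Flattened form: job, index, and app are overloaded into the key, so a
	// consumer has to guess which segment means what.
	flat := "router__0.vizzini_1_abcd.requests"
	fmt.Println(strings.Split(flat, ".")) // [router__0 vizzini_1_abcd requests]

	// Tagged form: the same information is self-describing.
	m := TaggedMetric{
		Name:  "requests",
		Value: 42,
		Tags:  map[string]string{"job": "router", "index": "0", "app": "vizzini_1_abcd"},
	}
	fmt.Printf("%s=%v tags=%v\n", m.Name, m.Value, m.Tags)
}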

Thanks,


Jim

On Tue, Sep 8, 2015 at 7:29 AM Johannes Tuchscherer <jtuchscherer(a)pivotal.io>
wrote:

Ben,

I guess I am working under the assumption that the current upstream schema
is not going to see a terrible amount of change. The StatsD protocol has
been very stable for over four years, so I don't understand why we would
add more and more metric types. (I already struggle with the decision to
have container metrics as their own data type. I am not quite sure why that
was done vs just expressing them as ValueMetrics).

I am also not following your argument about the multiple implementations of
a redis export. Why would you have multiple implementations of a redis info
export? Also, why does the downstream consumer have to know about the
schema? Neither the datadog nozzle nor the graphite nozzle cares about any
type of schema right now.

But to answer your question, I think as a downstream developer I am not as
interested in whether you are sending me a uint32 or uint64, but the
meaning (e.g. counter vs value) is much more important to me. So, if you
were to do nested metrics, I would rather see nested
counters or values in there, plus maybe one type that we are missing, which
is a generic event with just a string.

Generally, I would try to avoid falling into the trap of creating an overly
generic system at the cost of making consumers unnecessarily complicated.
Maybe it would help if you outlined a few use cases that might benefit from
a system like this and how specifically you would implement a downstream
consumer (e.g. is there a common place where I can fetch the schema for the
generic data point?).

On Sat, Sep 5, 2015 at 6:57 AM, James Bayer <jbayer(a)pivotal.io> wrote:

after understanding ben's proposal of what i would call an extensible
generic point versus the status quo of metrics that are actually hard-coded
in software on by the metric producer and the metric consumer, i
immediately gravitated toward the approach by ben.

cloud foundry has really benefited from extensibility in these examples:

* diego lifecycles
* app buildpacks
* app docker images
* app as windows build artifact
* service brokers
* cf cli plugins
* collector plugins
* firehose nozzles
* diego route emitters
* garden backends
* bosh cli plugins
* bosh releases
* external bosh CPIs
* bosh health monitor plugins

let me know if there are other points of extension i'm missing.

in most cases, the initial implementations required cloud foundry system
components to change software to support additional extensibility, and some
of the examples above still require that, and it's a source of frustration
when someone with an idea to explore needs to persuade the maintaining cf
team to process a pull request or complete work on an area. i see ben's
proposal as making an extremely valuable additional point of extension for
creating application and system metrics that benefits the entire cloud
foundry ecosystem.

i am sympathetic to the question raised by dwayne around how large the
messages will be. it would seem that we could consider an upper bound on
the number of attributes supported by looking at the types of metrics that
would be expressed. the redis info point is already 84 attributes for
example.

all of the following seem related to scaling considerations off the top
of my head:
* how large an individual metric may be
* at what rate the platform should support producers sending metrics
* what platform quality of service to provide (lossiness or not, back
pressure, rate limiting, etc)
* what types of clients of the metrics are supported and any limitations
related to that.
* whether there is tenant variability in some of the dimensions above.
for example a system metric might have a higher SLA than an app metric

should we consider putting a boundary on the "how large an individual
metric may be" by limiting the initial implementation to a number of
attributes (a limit that we could change later or make configurable)?

i'm personally really excited about this new set of extensibility being
proposed and the creative things people will do with it. having loggregator
as a built-in system component versus a bolt-on is already such a great
capability compared with other platforms and i see investments to make it
more extensible and apply to more scenarios as making cloud foundry more
valuable and more fun to use.

On Fri, Sep 4, 2015 at 10:52 AM, Benjamin Black <bblack(a)pivotal.io>
wrote:

johannes,

the problem of upstream schema changes causing downstream change or
breakage is the current situation: every addition of a metric type implies
a change to the dropsonde-protocol requiring everything downstream to be
updated.

the schema concerns are similar. currently there is no schema whatsoever
beyond the very fine grained "this is a name and this is a value". this
means every implementation of redis info export, for example, can, and
almost certainly will, be different. this results in every downstream
consumer having to know every possible variant or to only support specific
variants, both exactly the problem you are looking to avoid.

i share the core concern regarding ensuring points are "sufficiently"
self describing. however, there is no clear line delineating what is
sufficient. the current proposal pushes almost everything out to schema. we
could imagine a change to the attributes that includes what an attribute is
(gauge, counter, etc), what the units are for the attribute, and so on.

it is critical that we balance the complexity of the points against
complexity of the consumers as there is no free lunch here. which specific
functionality would you want to see in the generic points to achieve the
balance you prefer?


b



On Wed, Sep 2, 2015 at 2:07 PM, Johannes Tuchscherer <
jtuchscherer(a)pivotal.io> wrote:

The current way of sending metrics as either Values or Counters through
the pipeline makes the development of a downstream consumer (=nozzle)
pretty easy. If you look at the datadog nozzle[0], it just takes all
ValueMetrics and Counters and sends them off to datadog. The nozzle does
not have to know anything about these metrics (e.g. their origin, name, or
layout).
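
For readers who haven't looked at a nozzle, a minimal sketch of that pattern, assuming the sonde-go events package and a hypothetical Emitter interface standing in for a Datadog or StatsD client, looks roughly like this:

package nozzle

import (
	"log"

	"github.com/cloudfoundry/sonde-go/events"
)

// Emitter is a hypothetical stand-in for whatever downstream client the
// nozzle forwards to (Datadog, Graphite via StatsD, etc.).
type Emitter interface {
	Gauge(name string, value float64)
	Count(name string, delta float64)
}

// Forward treats every ValueMetric and CounterEvent uniformly, without
// knowing anything about a metric's origin, name, or layout.
func Forward(envelopes <-chan *events.Envelope, out Emitter) {
	for e := range envelopes {
		switch e.GetEventType() {
		case events.Envelope_ValueMetric:
			vm := e.GetValueMetric()
			out.Gauge(vm.GetName(), vm.GetValue())
		case events.Envelope_CounterEvent:
			ce := e.GetCounterEvent()
			out.Count(ce.GetName(), float64(ce.GetDelta()))
		default:
			log.Printf("ignoring event type %v", e.GetEventType())
		}
	}
}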

Adding a new way to send metrics as a nested object would make the
downstream implementation certainly more complicated. In that case, the
nozzle developer has to know what metrics are included inside the generic
point (basically the schema of the metric) and treat each point
accordingly. For example, if I were to write a nozzle that emits metrics to
Graphite with a StatsD client (like it is done here[1]), I need to know if
my int64 value is a Gauge or a Counter. Also, my consumer is under constant
risk of breaking when the upstream schema changes.

We are already facing this problem with the container metrics. But at
least the container metrics are in a defined format that is well documented
and not likely to change.

I agree with you, though, that the dropsonde protocol could use a
mechanism for easier extension. Having a GenericPoint and/or GenericEvent
seems like a good idea that I whole-heartedly support. I would just like to
stay away from nested metrics. I think the cost of adding more logic into
the downstream consumer (and making it harder to maintain) is not worth the
benefit of a more concise metric transport.


[0] https://github.com/cloudfoundry-incubator/datadog-firehose-nozzle
[1] https://github.com/CloudCredo/graphite-nozzle

On Tue, Sep 1, 2015 at 5:52 PM, Benjamin Black <bblack(a)pivotal.io>
wrote:

great questions, dwayne.

1) the partition key is intended to be used in a similar manner to
partitioners in distributed systems like cassandra or kafka. the specific
behavior i would like to make part of the contract is two-fold: that all
data with the same key is routed to the same partition and that all data in
a partition is FIFO (meaning no ordering guarantees beyond arrival time).

this could help with the multi-line log problem by ensuring a single
consumer will receive all lines for a given log entry in order, allowing
simple reassembly. however, the lines might be interleaved with other lines
with the same key or even other keys that happen to map to the same
partition, so the consumer does require a bit of intelligence. this was
actually one of the driving scenarios for adding the key.
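
as a purely illustrative sketch of that contract (the hash choice and function name here are mine, not part of the proposal), a partitioner in Go could look like this:

package main

import (
	"fmt"
	"hash/fnv"
)

// partitionFor maps a partition key to a partition. The contract is only
// that equal keys always land in the same partition, and that each
// partition preserves arrival (FIFO) order; FNV-1a is an arbitrary,
// illustrative choice of hash.
func partitionFor(key string, numPartitions uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte(key))
	return h.Sum32() % numPartitions
}

func main() {
	// All lines of one multi-line log entry could share a key, so a single
	// consumer of that partition sees them together, in order.
	keys := []string{"app-guid/0/entry-123", "app-guid/0/entry-123", "app-guid/1/entry-456"}
	for _, k := range keys {
		fmt.Println(k, "->", partitionFor(k, 8))
	}
}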

2) i expect typical points to be in the hundreds of bytes to a few KB.
if we find ourselves regularly needing much larger points, especially near
that 64KB limit, i'd look to the JSON representation as the hierarchical
structure is more efficiently managed there.


b




On Tue, Sep 1, 2015 at 4:42 PM, <dschultz(a)pivotal.io> wrote:

Hi Ben,

I was wondering if you could give a concrete use case for the
partition key functionality.

In particular I am interested in how we solve multi line log entries.
I think it would be better to solve it by keeping all the data (the
multiple lines) together throughout the logging/metrics pipeline, but could
see how something like a partition key might help keep the data together as
well.

Second question: how large do you see these point messages getting
(average and max)? There are still several stages of the logging/metrics
pipeline that use UDP with a standard 64K size limit.

Thanks,
Dwayne

On Aug 28, 2015, at 4:54 PM, Benjamin Black <bblack(a)pivotal.io>
wrote:

All,

The existing dropsonde protocol uses a different message type for
each event type. HttpStart, HttpStop, ContainerMetrics, and so on are all
distinct types in the protocol definition. This requires protocol changes
to introduce any new event type, making such changes very expensive. We've
been working for the past few weeks on an addition to the dropsonde
protocol to support easier future extension to new types of events and to
make it easier for users to define their own events.

The document linked below [1] describes a generic data point message
capable of carrying multi-dimensional, multi-metric points as sets of
name/value pairs. This new message is expected to be added as an additional
entry in the existing dropsonde protocol metric type enum. Things are now
at a point where we'd like to get feedback from the community before moving
forward with implementation.
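
For anyone skimming the thread, a very rough Go rendering of what such a point might carry is below. This is my own illustrative shape, not the schema in the proposal; the linked document defines the actual fields.

package main

import "fmt"

// GenericPoint is a hypothetical, simplified shape for the proposed generic
// data point: free-form attributes (dimensions) plus multiple named metric
// values in a single point. The real field names and types are in the
// proposal document linked below.
type GenericPoint struct {
	Timestamp    int64
	PartitionKey string
	Attributes   map[string]string
	Metrics      map[string]float64
}

func main() {
	p := GenericPoint{
		Timestamp:    1441843200,
		PartitionKey: "redis/0",
		Attributes:   map[string]string{"origin": "redis", "job": "redis", "index": "0"},
		Metrics:      map[string]float64{"connected_clients": 17, "used_memory_bytes": 1.2e6},
	}
	fmt.Printf("%+v\n", p)
}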

Please contribute your thoughts on the document in whichever way you
are most comfortable: comments on the document, email here, or email
directly to me. If you comment on the document, please make sure you are
logged in so we can keep track of who is asking for what. Your views are
not just appreciated, but critical to the continued health and success of
the Cloud Foundry community. Thank you!


b

[1]
https://docs.google.com/document/d/1SzvT1BjrBPqUw6zfSYYFfaW9vX_dTZZjn5sl2nxB6Bc/edit?usp=sharing





--
Thank you,

James Bayer


CF Release Scripts Moved

Natalie Bennett
 

CF Release top-level scripts have been moved. The new location is under the
`scripts/` folder.

Thanks,
CF OSS Release Integration Team


Migrating from 10.244.0.34.xip.io to bosh-lite.com

Dan Wendorf
 

On the CAPI team, we were experiencing pain around the flakiness of xip.io
causing spurious test failures. We have registered bosh-lite.com, which is
just an A record to 10.244.0.34, as are its subdomains (e.g.
foo.bar.bosh-lite.com also resolves to 10.244.0.34).

We would like to switch cf-release, cf-acceptance-tests, and smoke-tests to
use bosh-lite.com exclusively, but wanted to check with the community to
see if there are any compelling reasons not to switch. In the case that
there's no reason not to switch, we wanted to give a little bit of a
heads-up before making the change.

In our testing of this, CATs run just fine. On our bosh-lite CF
environment, we had to `cf delete-shared-domain 10.244.0.34.xip.io`, `cf
target -o CATS-persistent-org && cf delete CATS-persistent-app`, and update
the file referenced by our CONFIG environment variable to make tests pass, but
everything else "just worked".

You can review these changes on the bosh_lite_com branches of the following
repositories:

cf-acceptance-tests
<https://github.com/cloudfoundry/cf-acceptance-tests/tree/bosh_lite_com>
cf-smoke-tests
<https://github.com/cloudfoundry/cf-smoke-tests/tree/bosh_lite_com>
cf-release <https://github.com/cloudfoundry/cf-release/tree/bosh_lite_com>
sprout-capi
<https://github.com/cloudfoundry-incubator/sprout-capi/tree/bosh_lite_com>
(for those interested in configuring their bosh-lite via Sprout)

Thoughts?


FYI: Survey: Cloud Foundry Service Broker Compliance

Michael Maximilien
 

Hi, all,

I've been working on some side projects with IBM Research to improve
various aspects of CF. Some are pretty early research work and some are
ready to graduate and be presented to you.

One of these relates to compliance of CF Service Brokers. We want to share
this work and make it open. We are planning a series of meetings next week
to socialize it, open it, and propose incubation. If you're interested, please ping me
directly.

--------
In the meantime, Mohamed and Heiko, my colleagues from IBM Research, and
I have put together a short (literally two-minute) survey to gauge the
value of having Cloud Foundry (CF) service broker compliance.

https://www.surveymonkey.com/r/N37SD85

We'd be grateful if you could find some time to take this short survey
before we start socializing the solution we have been working on.

--------
Feel free to forward the survey link to others who may not be on this
mailing list and who you think should also take the survey.

After we gather results, we will share a summary with everyone by next
Thursday.

All the best,

Mohamed, Heiko, and Max

------
dr.max
ibm cloud labs
silicon valley, ca
maximilien.org


CF-Abacus: IPMs

Michael Maximilien
 

Hi, all,

For anyone interested in CF-Abacus, we are having our IPMs on Fridays at
10am PDT at Pivotal HQ.

I don't have a room reserved for tomorrow, but please ping me if you are
interested in joining and I will add you to the invite list.

For folks at Pivotal, the team will have our scrum right after standup
near the ping pong area tomorrow, so feel free to swing by.

Best,

------
dr.max
ibm cloud labs
silicon valley, ca
maximilien.org


UAA: Level count of Spaces under an Org

Zongwei Sun
 

Currently, there is only 1 level of Spaces under an Org with UAA. I heard people talking about adding more levels of Spaces to it. I'd like to have some discussion about whether this really makes sense. Thanks.


Re: tcp-routing in Lattice

Atul Kshirsagar
 

Great! Give us your feedback after you have played around with tcp routing.


Re: tcp-routing in Lattice

Jack Cai
 

After sshing into the vagrant VM and digging into the processes/ports, I found
out that in my previous attempt I was trying to map one additional port
that was already occupied by garden (7777). Because of this conflict,
haproxy gave up mapping all the ports. Once I changed 7777 to 17777, the
issue went away.

So the lesson learned is to examine the ports that are already in use in the
vagrant VM, and avoid using them.

Jack

On Thu, Sep 10, 2015 at 2:18 PM, Jack Cai <greensight(a)gmail.com> wrote:

Thanks Atul and Marco for your advice.

Below is the command I used to push the docker image:

    ltc create hello <docker-image> --ports 8888,8788 --http-routes hello:8888 --tcp-routes 8788:8788 --memory-mb=0 --timeout=10m --monitor-port=8888

After the push completed, it reported below:

...
hello is now running.
App is reachable at:
192.168.11.11.xip.io:8788
http://hello.192.168.11.11.xip.io

I also tried to update the routes:

    ltc update hello --http-routes hello:8888 --tcp-routes 8788:8788

If I do "ltc status hello", I see the below routes:

Instances       1/1
Start Timeout   0
DiskMB          0
MemoryMB        0
CPUWeight       100
Ports           8788,8888
Routes          192.168.11.11.xip.io:8788 => 8788
                hello.192.168.11.11.xip.io => 8888

But when I visited http://192.168.11.11.xip.io:8788/, I got "Unable to
connect", while I could visit http://hello.192.168.11.11.xip.io/
successfully.

Below is the log I saw when doing "vagrant up" to bring up Lattice:

...
==> default: stdin: is not a tty
==> default: mkdir: created directory '/var/lattice'
==> default: mkdir: created directory '/var/lattice/setup'
==> default: Running provisioner: shell...
    default: Running: inline script
==> default: stdin: is not a tty
==> default: * Stopping web server lighttpd
==> default: ...done.
==> default: Installing cflinuxfs2 rootfs...
==> default: done
==> default: * Starting web server lighttpd
==> default: ...done.
==> default: Installing Lattice (v0.4.0) (Diego 0.1398.0) - Brain
==> default: Finished Installing Lattice Brain (v0.4.0) (Diego 0.1398.0)!
==> default: Installing Lattice (v0.4.0) (Diego 0.1398.0) - Lattice Cell
==> default: Finished Installing Lattice Cell (v0.4.0) (Diego 0.1398.0)!
==> default: bootstrap start/running
==> default: Lattice is now installed and running.
==> default: You may target it using: ltc target 192.168.11.11.xip.io

There is an error "stdin: is not a tty", and I don't see haproxy mentioned
in the log. Maybe haproxy is not started at all?

Jack



On Wed, Sep 9, 2015 at 8:13 PM, Marco Nicosia <mnicosia(a)pivotal.io> wrote:

Hi Jack,

In addition to Atul's suggestions, could you please give us the exact
command lines which you used to launch the two apps?

The CLI arguments are tricky, we may be able to see something about the
way you've tried to configure the routes by looking at how you've launched
the apps.

--
Marco Nicosia
Product Manager
Pivotal Software, Inc.
mnicosia(a)pivotal.io
c: 650-796-2948


On Wed, Sep 9, 2015 at 2:32 PM, Jack Cai <greensight(a)gmail.com> wrote:

I'm playing around with the tcp-routing feature in the latest Lattice
release. I started two node.js applications in the pushed image (listening
on two ports), one mapped to an http route and the other to a tcp route. I
can connect to the http route successfully in the browser, but when I try
to connect to the tcp port in the browser, I got connection refused. It
looks like the mapped public tcp port on 192.168.11.11 is not open at all.
Any advice on how to diagnose this? Thanks in advance!

Jack


Starting Spring Boot App after deploying it to CF

Qing Gong
 

I built a Spring Boot app, and when I run it using java -jar SpringBootApp.jar, the code works as expected. The System.out message is printed as expected.

public static void main(String[] args)
{
    SpringApplication.run(Application.class, args);
    System.out.println("Spring Boot Test Message");
}

However, when deployed in CF using cf push myApp -p SpringBootApp.jar, the main() was not executed. I have tried using META-INF/MANIFEST.MF to include the Main-Class, using config/java-main.yml, and using manifest.yml to include java_main_class, but none worked. The app just would not start. Do I need to do anything else to trigger the app to start its main method?

Thanks!


Re: tcp-routing in Lattice

Jack Cai
 

Thanks Atul and Marco for your advice.

Below is the command I used to push the docker image:

    ltc create hello <docker-image> --ports 8888,8788 --http-routes hello:8888 --tcp-routes 8788:8788 --memory-mb=0 --timeout=10m --monitor-port=8888

After the push completed, it reported below:

...
hello is now running.
App is reachable at:
192.168.11.11.xip.io:8788
http://hello.192.168.11.11.xip.io

I also tried to update the routes:

    ltc update hello --http-routes hello:8888 --tcp-routes 8788:8788

If I do "ltc status hello", I see the below routes:

Instances       1/1
Start Timeout   0
DiskMB          0
MemoryMB        0
CPUWeight       100
Ports           8788,8888
Routes          192.168.11.11.xip.io:8788 => 8788
                hello.192.168.11.11.xip.io => 8888

But when I visited http://192.168.11.11.xip.io:8788/, I got "Unable to
connect", while I could visit http://hello.192.168.11.11.xip.io/
successfully.

Below is the log I saw when doing "vagrant up" to bring up Lattice:

...
==> default: stdin: is not a tty
==> default: mkdir: created directory '/var/lattice'
==> default: mkdir: created directory '/var/lattice/setup'
==> default: Running provisioner: shell...
    default: Running: inline script
==> default: stdin: is not a tty
==> default: * Stopping web server lighttpd
==> default: ...done.
==> default: Installing cflinuxfs2 rootfs...
==> default: done
==> default: * Starting web server lighttpd
==> default: ...done.
==> default: Installing Lattice (v0.4.0) (Diego 0.1398.0) - Brain
==> default: Finished Installing Lattice Brain (v0.4.0) (Diego 0.1398.0)!
==> default: Installing Lattice (v0.4.0) (Diego 0.1398.0) - Lattice Cell
==> default: Finished Installing Lattice Cell (v0.4.0) (Diego 0.1398.0)!
==> default: bootstrap start/running
==> default: Lattice is now installed and running.
==> default: You may target it using: ltc target 192.168.11.11.xip.io

There is an error "stdin: is not a tty", and I don't see haproxy mentioned
in the log. Maybe haproxy is not started at all?

Jack

On Wed, Sep 9, 2015 at 8:13 PM, Marco Nicosia <mnicosia(a)pivotal.io> wrote:

Hi Jack,

In addition to Atul's suggestions, could you please give us the exact
command lines which you used to launch the two apps?

The CLI arguments are tricky, we may be able to see something about the
way you've tried to configure the routes by looking at how you've launched
the apps.

--
Marco Nicosia
Product Manager
Pivotal Software, Inc.
mnicosia(a)pivotal.io
c: 650-796-2948


On Wed, Sep 9, 2015 at 2:32 PM, Jack Cai <greensight(a)gmail.com> wrote:

I'm playing around with the tcp-routing feature in the latest Lattice
release. I started two node.js applications in the pushed image (listening
on two ports), one mapped to an http route and the other to a tcp route. I
can connect to the http route successfully in the browser, but when I try
to connect to the tcp port in the browser, I got connection refused. It
looks like the mapped public tcp port on 192.168.11.11 is not open at all.
Any advice on how to diagnose this? Thanks in advance!

Jack


Proposal: UAA SAML Integration & Mapping CF Roles to external groups

Sree Tummidi
 

Hi all,

The UAA team has come up with a proposal for handling claims (User Attributes
& Group Memberships) from SAML Identity Providers. These claims can be
further mapped to CF roles in order to derive CF role memberships from
external group memberships.

The Proposal is split into two parts.


- Part 1 deals with the general UAA & SAML integration for handling SAML
claims. This involves exposing them in the OpenID Connect ID Token and
allowing mapping of claims to OAuth scopes for coarse-grained authorization.
The proposal can be found here
<https://docs.google.com/a/pivotal.io/document/d/107sv7YqxdoDWi2vX5Z8WHm1JaqwHZOL_wa-esn2U5cE/edit?usp=sharing>.
- Part 2 deals with leveraging the claims received in the ID Token to
derive CF role memberships. The proposal can be found here
<https://docs.google.com/a/pivotal.io/document/d/1UBtwEma5pkivNHD1QfTXOpPZAWCBE8Az9OVoT7oO0G4/edit?usp=sharing>.



We are looking forward to your valuable feedback and suggestions on these
topics.
Happy reviewing!


Thanks,
Sree Tummidi
Sr. Product Manager
Identity - Pivotal Cloud Foundry


Re: Cloud Foundry NodeJS 4 support and release schedule

Mike Dalessio
 

Quick update on Node 4: we're blocked on openssl
compatibility.

One of the requirements we place on binaries we ship with CF buildpacks is
that libraries should be dynamically linked from the rootfs whenever
possible, particularly for libraries that are likely to be affected by
CVEs, so that we can patch everything with a rootfs update.

Node has defaulted, for quite a while, to statically linking OpenSSL,
despite a history of not-infrequent CVEs affecting that library. The Node
build scripts do allow overriding this, and choosing to dynamically link
instead. We've used this option successfully for building all of the
CF-supported 0.x node versions against the openssl 1.0.1 versions that are
shipped with Ubuntu 14.04 LTS (and therefore the cflinuxfs2 rootfs).

However, in Node 4, the code only supports openssl 1.0.2. That is, it fails
to compile against openssl 1.0.1 headers.

(Possibly worth mentioning for additional context, even RHEL7 appears to
still ship openssl 1.0.1.)

We opened a Github issue on the node project, which has been closed without
a suggested fix for our situation:

https://github.com/nodejs/node/issues/2783

We've also reached out to friends of the CFF at Joyent, and IBM has notably
reached out to their own Node committers on staff. I'll keep this thread
updated as the conversation progresses.

I'm not comfortable introducing a new binary to the CF ecosystem that's not
"patch-able" via a rootfs update. I'm open to suggestions around what else
we could be doing to move towards shipping Node 4, but for right now we're
blocked.

-m

On Wed, Sep 9, 2015 at 3:52 PM, Shawn Nielsen <sknielse(a)gmail.com> wrote:

Thanks for the quick feedback on this, we appreciate your responsiveness.
We'll continue to follow the issue in the pivotal tracker.

On Wed, Sep 9, 2015 at 12:36 PM, Mike Dalessio <mdalessio(a)pivotal.io>
wrote:

Hi Shawn,

Great question, thanks for asking it.

The Buildpacks team has a Tracker story in its backlog to work on Node 4:

https://www.pivotaltracker.com/story/show/102941608

Generally our turnaround time on vanilla version updates is less than a
day; however, Node 4 isn't just a regular version update, as it comes from
the io.js lineage, which we haven't ever officially supported, so we're
going to proceed carefully.

The story I linked to has some specific acceptance criteria:

* Does the binary build with our current tooling? If not, we'll have to
update our tooling.
* Does the binary dynamically link openssl? (This was a specific use case
we've had to work around in the past.) If not, we'll have to make sure it
does, so that rootfs updates will be sufficient to address openssl CVEs.
* Does the binary avoid statically linking any other rootfs libraries? If
not, see above.
* Does the binary pass BRATs? If not, we'll have to fix BRATs.

Only when all of the above look good will we ship; and since we haven't
worked with io.js and family before, I don't want to make any promises
about delivery. If things go well, it could ship as early as tomorrow,
though that's probably overly optimistic.

Additionally I'll likely delay committing it into cf-release until we
have positive feedback from the community.

I'm happy to keep this thread updated with our progress; or you can
follow along at the Tracker story.

Cheers,
-mike


On Wed, Sep 9, 2015 at 11:33 AM, Shawn Nielsen <sknielse(a)gmail.com>
wrote:

NodeJS version 4 was released yesterday to the community.

Generally speaking, what is the typical release schedule for buildpack
binaries after new runtime releases are announced?

More specifically, I'd be curious if you have information on release
schedule of the NodeJS 4 buildpack binaries.






CF Summit EU / China - Dates and contributor registration codes

Chip Childers <cchilders@...>
 

Hi all!

Two things about our upcoming CF Summit EU and CF Summit China:

Key dates for both... (The calls for talk proposals for both are closing
soon):

*CF Summit EU - November 2nd & 3rd - Berlin*
CFP Closed: September 11, 2015 - http://berlin2015.cfsummit.com/program/cfp
CFP Notifications: September 29, 2015
Schedule Announced: October 1, 2015
Event Dates: November 2-3, 2015

*CF Summit China - December 2nd & 3rd - Shanghai*
CFP Closed: September 25, 2015 -
http://shanghai2015.cfsummit.com/program/cfp
CFP Notifications: October 13, 2015
Schedule Announced: October 15, 2015
Event Dates: December 2 & 3, 2015

For these events, we are providing a 25% discount code for registration to
contributors of CF projects. This is "trust-based", in that I'm sharing on
public lists. If you have contributed to the development of a CF project in
any way (docs, code, feature comments), feel free to use these codes.

Berlin: CFEU15CON

Shanghai: CFAS15CON


Chip Childers | VP Technology | Cloud Foundry Foundation
