Re: Introducing CF-Swagger

Michael Maximilien
 

Since I know various folks are looking at better API docs, I went ahead
and did some quick investigation into what other kinds of doc formats could
be generated from Swagger.

Found a bunch, but experimented with Swagger2Markup and was able to
generate the following from the Service Broker Swagger definition here:
https://github.com/maximilien/cf-swagger/blob/master/descriptions/cloudfoundry/service_broker/service_broker.json

1. AsciiDoc:
https://github.com/maximilien/cf-swagger/tree/master/markup/cloudfoundry/service_broker/assciidoc
2. GitHub Markdown:
https://github.com/maximilien/cf-swagger/tree/master/markup/cloudfoundry/service_broker/markdown

These are generated from the JSON above without any customization or
changes.

Best,

------
dr.max
ibm cloud labs
silicon valley, ca
maximilien.org



Michael Maximilien/Almaden/IBM
09/18/2015 04:51 PM

To
cf-dev(a)lists.cloudfoundry.org
cc
Mohamed Mohamed/Almaden/IBM(a)ibmus, Christopher B Ferris/Waltham/IBM(a)ibmus,
Alex Tarpinian/Austin/IBM(a)ibmus, Heiko Ludwig/Watson/IBM(a)ibmus
Subject
Introducing CF-Swagger






Hi, all,

This email serves two purposes: 1) introduce CF-Swagger, and 2) share the
results of the CF service broker compliance survey I sent out a couple of
weeks ago.

------
My IBM Research colleague, Mohamed (on cc:), and I have been working on
creating Swagger descriptions for some CF APIs.

Our main goal was to explore what useful tools or utilities we could build
with these Swagger descriptions once created.

The initial result of this exploratory research is CF-Swagger, which is
covered in the following:

See presentation here: https://goo.gl/Y16plT
Video demo here: http://goo.gl/C8Nz5p
Temp repo here: https://github.com/maximilien/cf-swagger

The gist of our work and results is:

1. We created a full Swagger description of the CF service broker
2. Using this description you can use the Swagger editor to create neat
API docs that are browsable and even callable
3. Using the description you can create client and server stubs for
service brokers in a variety of languages, e.g., JS, Java, Ruby, etc.
4. We've extended go-swagger to generate workable client and server stubs
for service brokers in Golang. We plan to submit all changes to go-swagger
back to that project
5. We've extended go-swagger to generate prototypes of working Ginkgo
tests for service brokers
6. We've extended go-swagger to generate a CF service broker Ginkgo Test
Compliance Kit (TCK) that anyone could use to validate their broker's
compliance with any Swagger-described version of the spec (a rough sketch
of such a test appears after this list)
7. We've created a custom Ginkgo reporter that, when run with the TCK, gives
you a summary of your compliance, e.g., 100% compliant with v2.5 but 90%
compliant with v2.6 due to failing tests X, Y, Z... (in Ginkgo fashion)
8. The survey results (all included in the presentation) indicate that
over 50% of respondents believe TCK tests for service brokers would be
valuable to them. Many (over 50%) are using custom proprietary tests, and
this project may be a way to get everyone to converge on a common set of
tests we could all use and improve...
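
To make item 6 above more concrete, here is a rough sketch of what a single
TCK-style Ginkgo test could look like. It is not taken from the cf-swagger
repo; the broker URL, credentials, and header value are placeholders, and the
real TCK derives its endpoints and assertions from the Swagger description
rather than hard-coding them:

package tck_test

import (
	"net/http"
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

// Hypothetical broker location; the real TCK would take this as
// configuration and build endpoints from the Swagger description.
const brokerURL = "http://localhost:9090"

func TestBrokerCompliance(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Service Broker TCK")
}

var _ = Describe("GET /v2/catalog", func() {
	It("returns 200 for a correctly versioned, authenticated request", func() {
		req, err := http.NewRequest("GET", brokerURL+"/v2/catalog", nil)
		Expect(err).NotTo(HaveOccurred())
		req.Header.Set("X-Broker-Api-Version", "2.5")   // spec version under test
		req.SetBasicAuth("broker-user", "broker-pass") // placeholder credentials

		resp, err := http.DefaultClient.Do(req)
		Expect(err).NotTo(HaveOccurred())
		defer resp.Body.Close()
		Expect(resp.StatusCode).To(Equal(http.StatusOK))
	})
})

A suite of such specs, grouped per spec version, is what the custom reporter
in item 7 then summarizes into a compliance percentage.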

------
We plan to propose this work as a CF incubation project at the next CAB and
PMC calls, especially the TCK part for service brokers. The overall
approach and project could be useful for other parts of the CF APIs, but we
will start with CF Service Brokers.

The actual Swagger descriptions should ideally come from the teams who own
the APIs, so for service brokers, the CAPI team. We are engaging them, as
they have also been looking at improving API docs and descriptions. There
may be potential for synergy, and at a minimum we want to make sure that
what we generate ends up being useful to their pipelines.

Finally, while the repo is temporary and will change, I welcome you to
take a look at the presentation, video, and code, and let us know your
thoughts and feedback.

Thanks for your time and interest.

Mohamed and Max
IBM


Re: Extending Org to support multi-level Orgs (i.e. OU)

Deepak Vij
 

Hi Sree Tummidi, this is regarding our brief discussion during the Runtime PMC meeting today. As we discussed during the meeting, I would like to discuss this topic with you further. As I don’t have your direct contact email/phone, please let me know a good time to talk. My direct email is deepak.vij(a)huawei.com<mailto:deepak.vij(a)huawei.com> (Cell: 408-806-6182). Thanks.

Regards,
Deepak Vij

From: Deepak Vij (A)
Sent: Monday, September 14, 2015 11:17 AM
To: 'cf-dev(a)lists.cloudfoundry.org'
Subject: Re: Extending Org to support multi-level Orgs (i.e. OU)

Hi James, let me shed more light on this and provide my perspective. Essentially, what Zongwei is asking for is something that is typically available at the enterprise application platform level. To give a very concrete example, application developers can use a general-purpose platform-level hierarchy mechanism to implement functionality such as resource access control and data visibility, in order to enable the desired application-specific behavior. Let me illustrate the role-hierarchy based access control requirement typically desired in any enterprise-grade application environment.

Hierarchies based Access Control
One can improve the overall efficiency of an enterprise by utilizing the concept of role hierarchies. A hierarchy defines roles that have unique attributes and may be “senior” to other roles. That is, one role may be implicitly associated with permissions that are associated with another “junior” role. In essence, if used appropriately, hierarchies are a natural way of organizing roles to reflect authority, responsibility, and competency.

In an “Aggregation Hierarchy” approach, the hierarchy is composed of roles where a higher role holds a superset of the privileges/permissions of the lower roles. An organization chart is an example of such a hierarchy (CEO->VPs->Mgrs->Employees). In such a role hierarchy, the “Manager” role may be implicitly associated with permissions that are associated with a “junior” role (the “Employee” role in our example). Essentially, the entire set of data visibility and access control permissions for the “Manager” role is {permissions assigned to “Manager” Role} Union {permissions assigned to “Employee” Role}.
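
To make the aggregation idea concrete, here is a minimal sketch (in Go, with made-up role and permission names, not tied to any particular platform) of resolving a role's effective permissions as the union of its own permissions and those of its junior roles:

package main

import "fmt"

// Role models the "aggregation hierarchy": a senior role implicitly
// inherits the permissions of the roles it is senior to.
type Role struct {
	Name        string
	Permissions []string
	Juniors     []*Role // roles this role is senior to
}

// EffectivePermissions returns the union of the role's own permissions
// and those of all transitively junior roles.
func EffectivePermissions(r *Role) map[string]bool {
	perms := map[string]bool{}
	var walk func(*Role)
	walk = func(role *Role) {
		for _, p := range role.Permissions {
			perms[p] = true
		}
		for _, j := range role.Juniors {
			walk(j)
		}
	}
	walk(r)
	return perms
}

func main() {
	employee := &Role{Name: "Employee", Permissions: []string{"view-own-records"}}
	manager := &Role{Name: "Manager", Permissions: []string{"approve-expenses"}, Juniors: []*Role{employee}}
	// Prints the union of Manager and Employee permissions.
	fmt.Println(EffectivePermissions(manager))
}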

To complicate this further, instead of the “Aggregation Hierarchy” approach, one could also support a “Generalization Hierarchy” approach - an inverted tree structure. In a “Generalization Hierarchy”, a higher role in the hierarchy is more general than a lower role. For example, the root node of the hierarchy is the most general role and has the fewest privileges/permissions, while a leaf node has the most.

To summarize all of the above, most enterprise application platforms provide support for a generic tree-like hierarchical structure. I remember the PeopleSoft platform provided such a generic tree structure as part of the platform. Once available, one can enable functionality such as “Role Hierarchy based Access Control & Data Visibility”, “Organization Hierarchy based Quota Allocations”, support for “Hierarchical Financial Controls” desired in the financial services industry, and so on.

Hope this all makes sense. Thanks.

Regards,
Deepak Vij

From: James Bayer [mailto:jbayer(a)pivotal.io]
Sent: Friday, September 11, 2015 8:33 PM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Re: Extending Org to support multi-level Orgs (i.e. OU)

zongwei, i'm unclear why a single org with multiple spaces, each potentially with their own quota, which is what the cf system does today, does not satisfy the customer's request when using a single cf installation.

for example:

org: foo-org
quota: big-quota

space: foo-east
quota: small-quota

space: foo-west
quota: medium-quota

space: foo-north
quota: [none, defaults to org quota]

On Fri, Sep 11, 2015 at 3:06 PM, Zongwei Sun <Zongwei.Sun(a)huawei.com<mailto:Zongwei.Sun(a)huawei.com>> wrote:
One idea is that Org will refer to itself and form a TREE structure. Only the most-descendant (leaf) Org instances in the hierarchy will have Space instances; parent Orgs won't have Spaces.

-Zongwei



--
Thank you,

James Bayer


Re: PHP and HHVM support questions

Mike Dalessio
 

Hello cf-dev,

Following up on this, we've only had a handful of responses to the PHP
survey, and all responses indicate that HHVM is not being used.

*If you're a PHP developer, or represent PHP developers, I urge you to take
one minute to respond to the survey right now.*


https://docs.google.com/forms/d/1WBupympWFRMQnoGZAgQLKmUZugreVldj3xDhyn9kpWM


-m

On Tue, Sep 8, 2015 at 5:36 PM, Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Hello cf-dev,

*TL;DR*: The Cloud Foundry Buildpacks team is discussing whether, and
how, to continue HHVM support in the php-buildpack.

*Actions*: If you're a PHP developer, please fill out a short
four-question survey to help us determine what level of support the
community needs for HHVM.

*Please click through to the anonymous survey here:*


https://docs.google.com/forms/d/1WBupympWFRMQnoGZAgQLKmUZugreVldj3xDhyn9kpWM/viewform?usp=send_form

-----

*Context*

The PHP buildpack, in v4.0.0 and later, supports PHP 5.4, 5.5, and 5.6 as
well as HHVM 3.5 and 3.6.

HHVM currently presents a challenge in that it depends on many packages
that are not present in the rootfs. The tooling we're using now downloads a
handful of .deb packages as part of the HHVM compilation process and
packages them in the buildpack with the compiled binary.

This, of course, opens HHVM users up to potentially needing to update a
buildpack to address security vulnerabilities and bugs that could normally
be easily addressed with a rootfs update. And maybe that's OK, but it's a
notable deviation from how we generally manage the binaries we vendor into
the CF buildpacks.

One possible solution is to add all the packages necessary to run HHVM to
the rootfs, which would include libboost as well as the other libraries
enumerated here:

https://www.pivotaltracker.com/story/show/99169476

In order to really understand the tradeoffs, it's necessary to understand
whether, and how, HHVM is being used by the CF community.


This is related to a broader conversation around customization and
modification of rootfses, but for now I'd like to focus on the specific
question of whether HHVM support is valuable enough to continue.

Thanks for reading, and *once again, the survey link is here*:


https://docs.google.com/forms/d/1WBupympWFRMQnoGZAgQLKmUZugreVldj3xDhyn9kpWM/viewform?usp=send_form

Cheers,

-mike


Re: DEA/Warden staging error

Kyle Havlovitz (kyhavlov)
 

I didn't; I'm still having this problem. Even adding this lenient security group didn't let me get any traffic out of the VM:

[{"name":"allow_all","rules":[{"protocol":"all","destination":"0.0.0.0/0"},{"protocol":"tcp","destination":"0.0.0.0/0","ports":"1-65535"},{"protocol":"udp","destination":"0.0.0.0/0","ports":"1-65535"}]}]

The only way I was able to get traffic out was by manually removing the reject/drop iptables rules that warden set up, and even with that the container still lost all connectivity after 30 seconds.

From: CF Runtime <cfruntime(a)gmail.com<mailto:cfruntime(a)gmail.com>>
Reply-To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>>
Date: Tuesday, September 22, 2015 at 12:50 PM
To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org<mailto:cf-dev(a)lists.cloudfoundry.org>>
Subject: [cf-dev] Re: Re: Re: Re: Re: Re: Re: Re: DEA/Warden staging error

Hey Kyle,

Did you make any progress?

Zak & Mikhail
CF Release Integration Team

On Thu, Sep 17, 2015 at 10:28 AM, CF Runtime <cfruntime(a)gmail.com<mailto:cfruntime(a)gmail.com>> wrote:
It certainly could be. By default the containers reject all egress traffic. CC security groups configure iptables rules that allow traffic out.

One of the default security groups in the BOSH templates allows access on port 53. If you have no security groups, the containers will not be able to make any outgoing requests.

Joseph & Natalie
CF Release Integration Team

On Thu, Sep 17, 2015 at 8:44 AM, Kyle Havlovitz (kyhavlov) <kyhavlov(a)cisco.com<mailto:kyhavlov(a)cisco.com>> wrote:
On running git clone inside the container via the warden shell, I get:
"Cloning into 'staticfile-buildpack'...
fatal: unable to access 'https://github.com/cloudfoundry/staticfile-buildpack/': Could not resolve host: github.com<http://github.com>".
So the container can't get to anything outside of it (I also tried pinging some external IPs to make sure it wasn't a DNS thing). Would this be caused by cloud controller security group settings?


Re: Unable to ping outside VM from the warden container

CF Runtime
 

This definitely sounds like a security groups issue.

Do you have any progress on this?

Zak & Mikhail,
CF Release Integration

On Fri, Sep 18, 2015 at 8:55 PM, John Wong <gokoproject(a)gmail.com> wrote:

To Yitao's point:
http://docs.pivotal.io/pivotalcf/adminguide/app-sec-groups.html give it a
look. It took me a night to figure out.

John

On Fri, Sep 18, 2015 at 10:07 PM, Yitao Jiang <jiangyt.cn(a)gmail.com>
wrote:

What are the security groups of the two envs? Are they the same for both
Openstack and CF?

On Fri, Sep 18, 2015 at 11:20 PM, Jayarajan Ramapurath Kozhummal (jayark)
<jayark(a)cisco.com> wrote:

Hi,

I have deployed Cloud Foundry using Bosh on two different Openstack
environments.
In one environment, a microservice installed inside the warden container
is unable to connect to the Cassandra node deployed through bosh.
In fact, we are unable to ping any outside VM from inside the warden
container, whereas all VMs are pingable from the runner VM outside the
warden container.
Everything is working fine in the other environment.

Following is the exception stack trace while trying to connect to the
Cassandra node:

s. Not enough entrophy?

*2015-09-17T21:23:58.32+0000 [App/0]* ERR
java.lang.reflect.InvocationTargetException

*2015-09-17T21:23:58.32+0000 [App/0]* ERR at
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

*2015-09-17T21:23:58.32+0000 [App/0]* ERR at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

*2015-09-17T21:23:58.32+0000 [App/0]* ERR at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

*2015-09-17T21:23:58.32+0000 [App/0]* ERR at
java.lang.reflect.Method.invoke(Method.java:497)

*2015-09-17T21:23:58.32+0000 [App/0]* ERR at
org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:53)

*2015-09-17T21:23:58.32+0000 [App/0]* ERR at
java.lang.Thread.run(Thread.java:745)

*2015-09-17T21:23:58.32+0000 [App/0]* ERR Caused by:
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s)
tried for query failed (tried: /10.20.0.199:9042
(com.datastax.driver.core.TransportException: [/10.20.0.199:9042]
Cannot connect))

*2015-09-17T21:23:58.32+0000 [App/0]* ERR at
com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:223)

*2015-09-17T21:23:58.32+0000 [App/0]* ERR at
com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)

*2015-09-17T21:23:58.32+0000 [App/0]* ERR at
com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1272)

*2015-09-17T21:23:58.32+0000 [App/0]* ERR at
com.datastax.driver.core.Cluster.init(Cluster.java:158)

*2015-09-17T21:23:58.32+0000 [App/0]* ERR at
com.datastax.driver.core.Cluster.connect(Cluster.java:248)

*2015-09-17T21:23:58.32+0000 [App/0]* ERR at
com.datastax.driver.core.Cluster.connect(Cluster.java:281)

*2015-09-17T21:23:58.32+0000 [App/0]* ERR at
com.cisco.skyfall.platform.cassandra.CassandraConnector.connect(CassandraConnector.java:52)

*2015-09-17T21:23:58.32+0000 [App/0]* ERR at
com.cisco.skyfall.platform.cassandra.CassandraConnector.getSession(CassandraConnector.java:60)

*2015-09-17T21:23:58.32+0000 [App/0]* ERR at
com.cisco.skyfall.platform.resourcestring.ResourceString.<init>(ResourceString.java:49)

*2015-09-17T21:23:58.32+0000 [App/0]* ERR at
com.cisco.skyfall.IdmMicroService.main(IdmMicroService.java:37)

*2015-09-17T21:23:58.32+0000 [App/0]* ERR ... 6 more

The non-working environment is Canonical Openstack (Icehouse) installed on a
POD, which is a private network. All VMs, including Cassandra, are in the
same network.
The working environment is in Cisco Cloud Services, with the same Openstack
version. Here also all VMs are inside the private network, with external
connectivity.
Could you please shed some light on what could be blocking external
connectivity inside the warden container in the first environment?
Please let me know if you need more specific details.

Your help is really appreciated, as we have been blocked on this for a while.

Thanks
Jayaraj


--

Regards,

Yitao
jiangyt.github.io


Re: DEA/Warden staging error

CF Runtime
 

Hey Kyle,

Did you make any progress?

Zak & Mikhail
CF Release Integration Team

On Thu, Sep 17, 2015 at 10:28 AM, CF Runtime <cfruntime(a)gmail.com> wrote:

It certainly could be. By default the containers reject all egress traffic.
CC security groups configure iptables rules that allow traffic out.

One of the default security groups in the BOSH templates allows access on
port 53. If you have no security groups, the containers will not be able to
make any outgoing requests.

Joseph & Natalie
CF Release Integration Team

On Thu, Sep 17, 2015 at 8:44 AM, Kyle Havlovitz (kyhavlov) <
kyhavlov(a)cisco.com> wrote:

On running git clone inside the container via the warden shell, I get:
"Cloning into 'staticfile-buildpack'...
fatal: unable to access '
https://github.com/cloudfoundry/staticfile-buildpack/': Could not
resolve host: github.com".
So the container can't get to anything outside of it (I also tried
pinging some external IPs to make sure it wasn't a DNS thing). Would this
be caused by cloud controller security group settings?


Enquiry from web

Business Opportunity <business@...>
 

Myself and 30 other businesses are urgently looking for the best utilities broker in EH9 to refer our clients to.

I found your website on Google and wondered if you might be interested in coming to our meeting on Thursday the 22nd of October, or the 5th or 19th of November, where we will be discussing referral opportunities.

We meet at the Marriott Hotel, 111 Glasgow Rd, Edinburgh EH12 8NF.

Simply reply to this email with YES in the subject heading and your contact name and telephone number in the body text, so that we can call you to discuss any dietary requirements you may have (there is a terrific breakfast buffet) and to provide you with further details about the meeting.
Best Wishes
Stephen,
Anderson Craig Office Supplies


Re: Throttling App Logging

Daniel Jones
 

It's a similar situation, in that app teams are being re-educated to store
important data in persistent data stores, rather than logging.

AFAIK there is an element of best effort with regard to the logging
requirements, in that reasonable efforts to persist the logs must be
demonstrable - so whilst it's acknowledged that logs aren't guaranteed, the
PaaS team has done all they can to make sure the logs stand as high a chance
as possible of reaching their destination. Just like having logs backed
up to tape doesn't guarantee their invincibility (the tapes might get
destroyed), but reasonable efforts have been made to keep them.

On Tue, Sep 22, 2015 at 10:17 AM, Aleksey Zalesov <
aleksey.zalesov(a)altoros.com> wrote:

By the way, how do you comply with the requirement to persist all the logs?
The CF logging system is lossy by nature, so it can drop messages.

We have a similar requirement - *some* log messages must be reliably
delivered and stored. After some consideration we decided to use a RabbitMQ
service for delivering these kinds of messages from apps to log storage. All
other logs go through the metron-doppler chain as usual.

Aleksey Zalesov | CloudFoundry Engineer | Altoros
Tel: (617) 841-2121 ext. 5707 | Toll free: 855-ALTOROS
Fax: (866) 201-3646 | Skype: aleksey_zalesov
www.altoros.com | blog.altoros.com | twitter.com/altoros

On 22 Sep 2015, at 11:01, Daniel Jones <daniel.jones(a)engineerbetter.com>
wrote:

Thanks all.

Sadly the client has a regulatory requirement for *some* apps that all
logs must be persisted for a number of years, so we can't drop messages
indiscriminately using the loggregator buffer. They're a PCF customer, so
I'll raise a feature request through the support process.

Cheers!

On Mon, Sep 21, 2015 at 5:24 PM, Rohit Kumar <rokumar(a)pivotal.io> wrote:

It isn't possible to throttle logging output on a per application basis.
It is possible to configure the message_drain_buffer_size [1] to be lower
than the default value of 100, which will reduce the number of logs which
loggregator will buffer. If the producer is filling up logs too quickly,
loggregator will drop the messages present in its buffer. This
configuration will affect ALL the applications running on your Cloud
Foundry environment. You could play with that property and see if it helps.

Rohit

[1]:
https://github.com/cloudfoundry/loggregator/blob/develop/bosh/jobs/doppler/spec#L60-L62

On Mon, Sep 21, 2015 at 2:57 AM, Daniel Jones <
daniel.jones(a)engineerbetter.com> wrote:


Is it possible with the current logging infrastructure in CF to limit
the logging throughput of particular apps or spaces?

Current client is running CF multi-tenant, and has some particularly
noisy customers. It'd be nice to be able to put a hard limit on how much
they can pass through to downstream commercial log indexers.

Any suggestions most gratefully received!

Regards,

Daniel Jones
EngineerBetter.com

--
Regards,

Daniel Jones
EngineerBetter.com


--
Regards,

Daniel Jones
EngineerBetter.com


Re: Throttling App Logging

Aleksey Zalesov
 

By the way, how do you comply with the requirement to persist all the logs? The CF logging system is lossy by nature, so it can drop messages.

We have a similar requirement - some log messages must be reliably delivered and stored. After some consideration we decided to use a RabbitMQ service for delivering these kinds of messages from apps to log storage. All other logs go through the metron-doppler chain as usual.

Aleksey Zalesov | CloudFoundry Engineer | Altoros
Tel: (617) 841-2121 ext. 5707 | Toll free: 855-ALTOROS
Fax: (866) 201-3646 | Skype: aleksey_zalesov
www.altoros.com <http://www.altoros.com/> | blog.altoros.com <http://blog.altoros.com/> | twitter.com/altoros <http://twitter.com/altoros>

On 22 Sep 2015, at 11:01, Daniel Jones <daniel.jones(a)engineerbetter.com> wrote:

Thanks all.

Sadly the client has a regulatory requirement for some apps that all logs must be persisted for a number of years, so we can't drop messages indiscriminately using the loggregator buffer. They're a PCF customer, so I'll raise a feature request through the support process.

Cheers!

On Mon, Sep 21, 2015 at 5:24 PM, Rohit Kumar <rokumar(a)pivotal.io <mailto:rokumar(a)pivotal.io>> wrote:
It isn't possible to throttle logging output on a per application basis. It is possible to configure the message_drain_buffer_size [1] to be lower than the default value of 100, which will reduce the number of logs which loggregator will buffer. If the producer is filling up logs too quickly, loggregator will drop the messages present in its buffer. This configuration will affect ALL the applications running on your Cloud Foundry environment. You could play with that property and see if it helps.

Rohit

[1]: https://github.com/cloudfoundry/loggregator/blob/develop/bosh/jobs/doppler/spec#L60-L62 <https://github.com/cloudfoundry/loggregator/blob/develop/bosh/jobs/doppler/spec#L60-L62>

On Mon, Sep 21, 2015 at 2:57 AM, Daniel Jones <daniel.jones(a)engineerbetter.com <mailto:daniel.jones(a)engineerbetter.com>> wrote:

Is it possible with the current logging infrastructure in CF to limit the logging throughput of particular apps or spaces?

Current client is running CF multi-tenant, and has some particularly noisy customers. It'd be nice to be able to put a hard limit on how much they can pass through to downstream commercial log indexers.

Any suggestions most gratefully received!

Regards,

Daniel Jones
EngineerBetter.com




--
Regards,

Daniel Jones
EngineerBetter.com


Re: Throttling App Logging

Daniel Jones
 

Thanks all.

Sadly the client has a regulatory requirement for *some* apps that all logs
must be persisted for a number of years, so we can't drop messages
indiscriminately using the loggregator buffer. They're a PCF customer, so
I'll raise a feature request through the support process.

Cheers!

On Mon, Sep 21, 2015 at 5:24 PM, Rohit Kumar <rokumar(a)pivotal.io> wrote:

It isn't possible to throttle logging output on a per application basis.
It is possible to configure the message_drain_buffer_size [1] to be lower
than the default value of 100, which will reduce the number of logs which
loggregator will buffer. If the producer is filling up logs too quickly,
loggregator will drop the messages present in its buffer. This
configuration will affect ALL the applications running on your Cloud
Foundry environment. You could play with that property and see if it helps.

Rohit

[1]:
https://github.com/cloudfoundry/loggregator/blob/develop/bosh/jobs/doppler/spec#L60-L62

On Mon, Sep 21, 2015 at 2:57 AM, Daniel Jones <
daniel.jones(a)engineerbetter.com> wrote:


Is it possible with the current logging infrastructure in CF to limit the
logging throughput of particular apps or spaces?

Current client is running CF multi-tenant, and has some particularly
noisy customers. It'd be nice to be able to put a hard limit on how much
they can pass through to downstream commercial log indexers.

Any suggestions most gratefully received!

Regards,

Daniel Jones
EngineerBetter.com
--
Regards,

Daniel Jones
EngineerBetter.com


Re: Adding new events table index requires truncation

Dieu Cao <dcao@...>
 

Yes, we'd only be truncating the table backing /v2/events. This will not
affect the tables backing /v2/app_usage_events or /v2/service_usage_events
and thus should not affect billing.
I'll clarify in the story description.

-Dieu

On Mon, Sep 21, 2015 at 8:00 PM, Matt Cholick <cholick(a)gmail.com> wrote:

From the discussion on the story, it looks like this won't affect any
billing? I want to be sure as we base our billing off event data, and
missing an event could mean we'd continue to bill for applications that
were shut down (or never bill for an app). We're billing off of
/v2/app_usage_events and using the state. Is the distinction that you're
truncating the table behind /v2/events but *not* /v2/app_usage_events? It's
unclear from the story what is being truncated vs preserved, from the api
perspective.

-Matt

On Mon, Sep 21, 2015 at 10:44 AM, Jeffrey Pak <jeffrey.pak(a)emc.com> wrote:

Hi all,

The CAPI team is looking to merge in a PR to cloud_controller_ng,
https://github.com/cloudfoundry/cloud_controller_ng/pull/418, which will
update an index on the events table to include "id" as well as "timestamp".
See https://www.pivotaltracker.com/story/show/101985370 for more
information and discussion.

Older deployments with many events would experience a very slow deploy if
this migration runs as-is. To prevent this from causing failed deploys or
unintended downtime, we'd like to truncate the events table as part of the
migration.

If we do this, it'll be made clear in the release notes and will most
likely be included in v219.

Any questions or concerns?

Thanks,

Raina and Jeff
CF CAPI Team


Re: Packaging CF app as bosh-release

Paul Bakare
 

Yes Amit. Thanks

I'm trying the 2 approaches since they both have their pros and cons.

is your compute environment a multi-tenant one that will be running
multiple different workloads?
Yes. Devs can push their own spark-based apps and non-spark apps. The
spark-based apps would rely on the existing Spark cluster.

it's also likely to be a more efficient use of resources, since a BOSH VM
can only run one of these spark-job-processors,
I think a Spark cluster (using YARN) of BOSH VMs should be able to run
multiple spark jobs concurrently.

With the app deployment approach, I did set up a UPS for the Spark cluster,
and I've been able to submit Spark jobs to the cluster programmatically
through the Spark API. I'll stay with app deployment for now, until I get a
stronger use case for a bosh release.

On Tue, Sep 22, 2015 at 12:21 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hey Kayode,

Were you able to make any progress with the deployments you were trying to
do?

Best,
Amit

On Wed, Sep 16, 2015 at 12:48 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

My very limited understanding is that NFS writes to the actual
filesystem, and achieves persistence by having centralized NFS servers
where it writes to a real mounted device, whereas the clients write to an
ephemeral nfs-mount.

My very limited understanding of HDFS is that it's all userland FS, does
not write to the actual filesystem, and relies on replication to other
nodes in the HDFS cluster. Being a userland FS, you don't have to worry
about the data being wiped when a container is shut down, if you were to
run it as an app.

I think one main issue is going to be ensuring that you never lose too
many instances (whether they are containers or VMs), since you might then
lose all replicas of a given data shard. Whether you go with apps or BOSH
VMs doesn't make a big difference here.

Deploying as an app may be a better way to go; it's simpler right now to
configure and deploy an app than to configure and deploy a full BOSH
release. It's also likely to be a more efficient use of resources, since a
BOSH VM can only run one of these spark-job-processors, but a CF
container-runner can run lots of other things. That actually brings up a
different question: is your compute environment a multi-tenant one that
will be running multiple different workloads? E.g. could someone also use
the CF to push their own apps? Or is the whole thing just for your spark
jobs, in which case you might only be running one container per VM anyways?

Assuming you can make use of the VMs for other workloads, I think this
would be an ideal use case for Diego. You probably don't need all the
extra logic around apps, like staging and routing, you just need Diego to
efficiently schedule containers for you.

On Wed, Sep 16, 2015 at 1:13 PM, Kayode Odeyemi <dreyemi(a)gmail.com>
wrote:

Thanks Dmitriy,

Just for clarity, are you saying multiple instances of a VM cannot share
a single shared filesystem?

On Wed, Sep 16, 2015 at 6:59 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io>
wrote:

BOSH allocates a persistent disk per instance. It never shares
persistent disks between multiple instances at the same time.

If you need a shared file system, you will have to use some kind of a
release for it. It's not any different from what people do with nfs
server/client.

On Wed, Sep 16, 2015 at 7:09 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

The shared file system aspect is an interesting wrinkle to the
problem. Unless you use some network layer for how you write to the shared
file system, e.g. SSHFS, I think apps will not work because they get
isolated to run in a container, they're given a chroot "jail" for their
file system, and it gets blown away whenever the app is stopped or
restarted (which will commonly happen, e.g. during a rolling deploy of the
container-runner VMs).

Do you have something that currently works? How do your VMs currently
access this shared FS? I'm not sure BOSH has the abstractions for choosing
a shared, already-existing "persistent disk" to be attached to multiple
VMs. I also don't know what happens when you scale your VMs down, because
BOSH would generally destroy the associated persistent disk, but you don't
want to destroy the shared data.

Dmitriy, any idea how BOSH can work with a shared filesystem (e.g.
HDFS)?

Amit

On Wed, Sep 16, 2015 at 6:54 AM, Kayode Odeyemi <dreyemi(a)gmail.com>
wrote:


On Wed, Sep 16, 2015 at 3:44 PM, Amit Gupta <agupta(a)pivotal.io>
wrote:

Are the spark jobs tasks that you expect to end, or apps that you
expect to run forever?
They are tasks that run forever. The jobs are subscribers to RabbitMQ
queues that process
messages in batches.


Do your jobs need to write to the file system, or do they access a
shared/distributed file system somehow?
The jobs write to shared filesystem.


Do you need things like a static IP allocated to your jobs?
No.


Are your spark jobs serving any web traffic?
No.




Re: Adding new events table index requires truncation

Matt Cholick
 

From the discussion on the story, it looks like this won't affect any
billing? I want to be sure as we base our billing off event data, and
missing an event could mean we'd continue to bill for applications that
were shut down (or never bill for an app). We're billing off of
/v2/app_usage_events and using the state. Is the distinction that you're
truncating the table behind /v2/events but *not* /v2/app_usage_events? It's
unclear from the story what is being truncated vs preserved, from the api
perspective.

-Matt

On Mon, Sep 21, 2015 at 10:44 AM, Jeffrey Pak <jeffrey.pak(a)emc.com> wrote:

Hi all,

The CAPI team is looking to merge in a PR to cloud_controller_ng,
https://github.com/cloudfoundry/cloud_controller_ng/pull/418, which will
update an index on the events table to include "id" as well as "timestamp".
See https://www.pivotaltracker.com/story/show/101985370 for more
information and discussion.

Older deployments with many events would experience a very slow deploy if
this migration runs as-is. To prevent this from causing failed deploys or
unintended downtime, we'd like to truncate the events table as part of the
migration.

If we do this, it'll be made clear in the release notes and will most
likely be included in v219.

Any questions or concerns?

Thanks,

Raina and Jeff
CF CAPI Team


Re: cf push without a manifest file on linux does not work but works on windows

Rasheed Abdul-Aziz
 

I'm sorry, I forgot to add:

Can you execute `cf curl /v2/info` and tell us what you get?

Thanks again

On Mon, Sep 21, 2015 at 9:15 PM, Rasheed Abdul-Aziz <rabdulaziz(a)pivotal.io>
wrote:

Hi Varsha

Could you please repost this to our issue tracker:
https://github.com/cloudfoundry/cli/issues

And when you do so, could you rerun the command with CF_TRACE=true.
Scan it for anything that you feel needs to remain private and hide it
with ***'s, and paste the output into the issue.

I'm pretty sure we'll be able to help!

Kind Regards,
Rasheed.



On Sun, Sep 20, 2015 at 11:58 PM, Varsha Nagraj <n.varsha(a)gmail.com>
wrote:

I am trying to push a nodejs application without a manifest file as
follows (using the cloud foundry push command): cf push appname -c "node app.js"
-d "mydomain.net" -i 1 -n hostname -m 64M -p "path to directory"
--no-manifest.

This works on Windows. However, if I run the same on Linux, it throws an
"incorrect usage" error. Is there any difference with respect to "double
quotes", or what might be the issue?


Re: Loggregator Community Survey #2 - TCP for Metron<-->Doppler

taichi nakashima
 

Hi

In our case, we have a requirement that all application logs need to be
stored for 6 months without loss, or if logs are lost, we need to know where
they got lost (I'm not sure this is a general requirement). So a TCP option
is something we really want and really welcome.

'option to choose TCP or UDP at bosh deploy time' is also nice!

--
Taichi Nakashima




2015年9月20日(日) 1:16 Matthew Sykes <matthew.sykes(a)gmail.com>:

Having attempted to debug issues where application logs get dropped, I
would welcome an option to have TCP used. In the current system, you never
know if logs are dropped because the datagram never reached the target or
because it got hung up in some component along the way.

I would think that the concerns about back pressure from TCP could be
dealt with in the clients using mechanisms similar to what you already have
with the syslog sinks.

On Fri, Sep 18, 2015 at 5:22 PM, Erik Jasiak <mjasiak(a)pivotal.io> wrote:

Greetings again CF community!

As part of the new dropsonde point proposal[1], the team would have to
make some tough sizing choices related to UDP packets and the
likelihood of larger packets getting dropped.

Alternatively, we could finally add tcp support for Metron; this has
several pros (among them: much lower chance of lossiness) and cons (among
them: puts a component at risk of failure from backpressure if the system
is sized wrong).
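
To illustrate the tradeoff in plain terms, here is a minimal sketch (not
Metron's actual code; addresses and payload are placeholders) showing why a
UDP write can silently succeed even when nothing receives the datagram, while
a TCP dial/write surfaces errors and exerts backpressure through the
connection:

package main

import (
	"fmt"
	"net"
)

func main() {
	payload := []byte("dropsonde-envelope") // placeholder message

	// UDP: fire-and-forget. The write succeeds locally even if the
	// datagram is dropped on the wire or no one is listening.
	udpConn, err := net.Dial("udp", "127.0.0.1:3457") // hypothetical doppler address
	if err == nil {
		_, werr := udpConn.Write(payload)
		fmt.Println("udp write error:", werr) // usually nil; loss is invisible
		udpConn.Close()
	}

	// TCP: the dial or write fails (or blocks) when the receiver is down
	// or slow, which is the backpressure risk mentioned above.
	tcpConn, err := net.Dial("tcp", "127.0.0.1:3458") // hypothetical doppler address
	if err != nil {
		fmt.Println("tcp dial error:", err)
		return
	}
	defer tcpConn.Close()
	if _, err := tcpConn.Write(payload); err != nil {
		fmt.Println("tcp write error:", err)
	}
}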

* Would operators be interested in TCP support, even if it means a higher
risk of component failure if the loggregator system and message-producing
component weren't tuned correctly?

* Would you prefer there be an option to choose TCP or UDP at bosh deploy
time? We'd be open to this option, but are more likely to be biased in
supporting only one choice over time.

Thanks again,
Erik Jasiak
PM - Loggregator

[1]
https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/message/765VT4CLXL3F2KCGR4PUO2LHPV73USTB/


--
Matthew Sykes
matthew.sykes(a)gmail.com


Re: Proposal: Decomposing cf-release and Extracting Deployment Strategies

Amit Kumar Gupta
 

This forces us to spread all clusterable nodes across 2 deploys and
certain jobs, like CC, use the job_name+index to uniquely identify a node

I believe they're planning on switching to guids for bosh job identifiers.
I saw in another thread you and Dmitriy discussed this. Any other reasons
for having unique job names we should know about?

How would you feel about the interface allowing for specifying
additional releases, jobs, and templates to be colocated on existing jobs,
along with property configuration for these things?

I don't quite follow what you are proposing here. Can you clarify?
What I mean is the tools we build for generating manifests will support
specifying inputs (probably in the form of a YAML file) that declares what
additional releases you want to add to the deployment, what additional jobs
you may want to add, what additional job templates you may want to colocate
with an existing job, and property configuration for those additional jobs
or colocated job templates. A common example is wanting to colocate some
monitoring agent on all the jobs, and providing some credential
configuration so it can pump metrics into some third party service. This
would be for things not already covered by the LAMB architecture.

Something like that would work for me as long as we were still able to
take advantage of the scripts/tooling in cf-deployment to manage the config
and templates we manage in lds-deployment.

Yes, that'd be the plan.

Cheers,
Amit


On Mon, Sep 21, 2015 at 2:41 PM, Mike Youngstrom <youngm(a)gmail.com> wrote:

Thanks for the response. See comments below:


Sensitive property management as part of manifest generation
(encrypted or acquired from an outside source)

How do you currently get these encrypted or external values into your
manifests? At manifest generation time, would you be able to generate a
stub on the fly from this source, and pass it into the manifest generation
script?
Yes, that would work fine. Just thought I'd call it out as something our
current solution does that we'd have to augment in cf-deployment.


If for some reason we are forced to fork a stock release we'd like to
be able to use that forked release we are building instead of the publicly
available one for manifest generation and release uploads, etc.

Yes, using the stock release will be the default option, but we will
support several other ways of specifying a release, including providing a
URL to a remote tarball, a path to a local release directory, a path to a
local tarball, and maybe a git URL and SHA.
Great!


The job names in each deployment must be unique across the
installation.

Why do the job names need to be unique across deployments?
This is because a single bosh cannot connect to multiple datacenters which
for us represent different availability zones. This forces us to spread
all clusterable nodes across 2 deploys and certain jobs, like CC, use the
job_name+index to uniquely identify a node [0]. Therefore if we have 2 CCs
deployed across 2 AZs, we must have one job named cloud_controller_az1 and
the other named cloud_controller_az2. Does that make sense? I recognize
this is mostly the fault of a limitation in Bosh, but until bosh supports
connecting to multiple vsphere datacenters with a single director we will
need to account for it in our templating.

[0]
https://github.com/cloudfoundry/cloud_controller_ng/blob/5257a8af6990e71cd1e34ae8978dfe4773b32826/bosh-templates/cloud_controller_worker_ctl.erb#L48

Occasionally we may wish to use some config from a stock release not
currently exposed in a cf-deployment template. I'd like to be sure there
is a way we can add that config, in a not hacky way, without waiting for a
PR to be accepted and subsequent release.

This would be ideal. Currently, a lot of complexity in manifest
generation is around, if you specify a certain value X, then you need to
make sure you specify values Y, Z, etc. in a compatible way. E.g. if you
have 3 etcd instances, then the value for the etcd.machines property needs
to have those 3 IPs. If you specify domain as "mydomain.com", then you
need to specify in other places that the UAA URL is "
https://uaa.mydomain.com". The hope is most of this complexity goes
away with BOSH Links (
https://github.com/cloudfoundry/bosh-notes/blob/master/links.md). My
hope is that, as the complexity goes away, we will have to maintain less
logic and will be able to comfortably expose more, if not all, of the
properties.
Great

We have our own internal bosh releases and config that we'll need to
merge in with the things cf-deployment is doing.

How would you feel about the interface allowing for specifying additional
releases, jobs, and templates to be colocated on existing jobs, along with
property configuration for these things?
I don't quite follow what you are proposing here. Can you clarify?


we'd like to augment this with our own release jobs and config that we
know to work with cf-deployment 250's and perhaps tag it as v250.lds

Would a workflow like this work for you: maintain an lds-deployment repo,
which includes cf-deployment as a submodule, and you can version
lds-deployment and update your submodule pointer to cf-deployment as you
see fit? lds-deployment will probably just need the cf-deployment
submodule, and a config file describing the "blessed" versions of the
non-stock releases you wish to add on. I know this is lacking details, but
does something along those lines sound like a reasonable workflow?
Something like that would work for me as long as we were still able to
take advantage of the scripts/tooling in cf-deployment to manage the config
and templates we manage in lds-deployment.

Thanks,
Mike




On Wed, Sep 16, 2015 at 3:06 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Another situation we have that you may want to keep in mind while
developing cf-deployment:

* We are using vsphere and currently we have a cf installation with 2 AZ
using 2 separate vsphere "Datacenters" (more details:
https://github.com/cloudfoundry/bosh-notes/issues/7). This means we
have a CF installation that is actually made up of 2 deployments. So, we
need to generate a manifest for az1 and another for az2. The job names in
each deployment must be unique across the installation (e.g.
cloud_controller_az1 and cloud_controller_az2) would be the cc job names in
each deployment.

Mike

On Wed, Sep 16, 2015 at 3:38 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

Here are some of the examples:

* Sensitive property management as part of manifest generation
(encrypted or acquired from an outside source)

* We have our own internal bosh releases and config that we'll need to
merge in with the things cf-deployment is doing. For example, if
cf-deployment tags v250 as including Diego 3333 and etcd 34 with given
templates perhaps we'd like to augment this with our own release jobs and
config that we know to work with cf-deployment 250's and perhaps tag it as
v250.lds and that becomes what we use to generate our manifests and upload
releases.

* Occasionally we may wish to use some config from a stock release not
currently exposed in a cf-deployment template. I'd like to be sure there
is a way we can add that config, in a not hacky way, without waiting for a
PR to be accepted and subsequent release.

* If for some reason we are forced to fork a stock release we'd like to
be able to use that forked release we are building instead of the publicly
available one for manifest generation and release uploads, etc.

Does that help?

Mike



On Tue, Sep 15, 2015 at 9:50 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Thanks for the feedback Mike!

Can you tell us more specifically what sort of extensions you need?
It would be great if cf-deployment provided an interface that could serve
the needs of essentially all operators of CF.

Thanks,
Amit

On Tue, Sep 15, 2015 at 4:02 PM, Mike Youngstrom <youngm(a)gmail.com>
wrote:

This is great stuff! My organization currently maintains our own
custom ways to generate manifests, include secure properties, and manage
release versions.

We would love to base the next generation of our solution on
cf-deployment. Have you put any thought into how others might customize or
extend cf-deployment? Our needs are very similar to yours just sometimes a
little different.

Perhaps a private fork periodically merged with a known good release
combination (tag) might be appropriate? Or some way to include the same
tools into a wholly private repo?

Mike


On Tue, Sep 8, 2015 at 1:22 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi all,

The CF OSS Release Integration team (casually referred to as the
"MEGA team") is trying to solve a lot of tightly interrelated problems, and
make many of said problems less interrelated. It is difficult to address
just one issue without touching the others, so the following proposal
addresses several issues, but the most important ones are:

* decompose cf-release into many independently manageable,
independently testable, independently usable releases
* separate manifest generation strategies from the release source,
paving the way for Diego to be part of the standard deployment

This proposal will outline a picture of how manifest generation will
work in a unified manner in development, test, and integration
environments. It will also outline a picture of what each release’s test
pipelines will look like, how they will feed into a common integration
environment, and how feedback from the integration environment will feed
back into the test environments. Finally, it will propose a picture for
what the integration environment will look like, and how we get from the
current integration environment to where we want to be.

For further details, please feel free to view and comment here:


https://docs.google.com/document/d/1Viga_TzUB2nLxN_ILqksmUiILM1hGhq7MBXxgLaUOkY

Thanks,
Amit, CF OSS Release Integration team


Re: cf push without a manifest file on linux does not work but works on windows

Rasheed Abdul-Aziz
 

Hi Varsha

Could you please repost this to our issue tracker:
https://github.com/cloudfoundry/cli/issues

And when you do so, could you rerun the command with CF_TRACE=true.
Scan it for anything that you feel needs to remain private and hide it with
***'s, and paste the output into the issue.

I'm pretty sure we'll be able to help!

Kind Regards,
Rasheed.

On Sun, Sep 20, 2015 at 11:58 PM, Varsha Nagraj <n.varsha(a)gmail.com> wrote:

I am trying to push a nodejs application without a manifest file as
follows (using the cloud foundry push command): cf push appname -c "node app.js"
-d "mydomain.net" -i 1 -n hostname -m 64M -p "path to directory"
--no-manifest.

This works on Windows. However, if I run the same on Linux, it throws an
"incorrect usage" error. Is there any difference with respect to "double
quotes", or what might be the issue?


Re: User cannot do CF login when UAA is being updated

Yunata, Ricky <rickyy@...>
 

Hi Joseph, Amit & all,

Hi Joseph, have you received the attachment from Dies?
To everyone else, I just wanted to know whether it is normal CF behaviour that users are logged out when UAA is being updated, or whether it is because I have my manifest wrongly configured.
It would be helpful if anyone could give me an answer based on their experience. Thanks

Regards,
Ricky

From: CF Runtime [mailto:cfruntime(a)gmail.com]
Sent: Wednesday, 16 September 2015 7:08 PM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Re: Re: Re: Re: Re: User cannot do CF login when UAA is being updated

If you can't get the list to accept the attachment, you can give it to Dies and he should be able to get it to us.

Joseph
OSS Release Integration Team

On Tue, Sep 15, 2015 at 7:19 PM, Yunata, Ricky <rickyy(a)fast.au.fujitsu.com<mailto:rickyy(a)fast.au.fujitsu.com>> wrote:
Hi Joseph,

Yes that is the case. I have sent my test result, but it seems that my e-mail did not get through. How can I send an attachment on this mailing list?

Regards,
Ricky


From: CF Runtime [mailto:cfruntime(a)gmail.com<mailto:cfruntime(a)gmail.com>]
Sent: Tuesday, 15 September 2015 8:10 PM
To: Discussions about Cloud Foundry projects and the system overall.
Subject: [cf-dev] Re: Re: Re: Re: User cannot do CF login when UAA is being updated

Couple of updates here for clarity. No databases are stored on NFS in any default installation. NFS is only used to store blobstore data. If you are using the postgres job from cf-release, since it is single node there will be downtime during a stemcell deploy.

I talked with Dies from Fujitsu earlier and confirmed they are NOT using the postgres job but an external non-cf deployed postgres instance. So during a deploy, the UAA db should be up and available the entire time.

The issue they are seeing is that even though the database is up, and I'm guessing there is at least a single node of UAA up during the deploy, there are still login failures.

Joseph
OSS Release Integration Team

On Mon, Sep 14, 2015 at 6:39 PM, Filip Hanik <fhanik(a)pivotal.io<mailto:fhanik(a)pivotal.io>> wrote:
Amit, see previous comment.

The Postgresql database is stored on NFS, which is restarted during the nfs job update.
UAA, while being up, is non-functional while the NFS job is updated because it can't get to the DB.



On Mon, Sep 14, 2015 at 5:09 PM, Amit Gupta <agupta(a)pivotal.io<mailto:agupta(a)pivotal.io>> wrote:
Hi Ricky,

My understanding is that you still need help, and the issues Jiang and Alexander raised are different. To avoid confusion, let's keep this thread focused on your issue.

Can you confirm that you have two UAA VMs in separate bosh jobs, separate AZs, etc.? Can you confirm that when you roll the UAAs, only one goes down at a time? The simplest way to effect a roll is to change some trivial property in the manifest for your UAA jobs. If you're using v215, any of the properties referenced here will do:

https://github.com/cloudfoundry/cf-release/blob/v215/jobs/uaa/spec#L321-L335

You should confirm that only one UAA is down at a time, and comes back up before bosh moves on to updating the other UAA.

While this roll is happening, can you just do `CF_TRACE=true cf auth USERNAME PASSWORD` in a loop, and if you see one that fails, post the output, along with noting the state of the bosh deploy when the error happens.

Thanks,
Amit

On Mon, Sep 14, 2015 at 10:51 AM, Amit Gupta <agupta(a)pivotal.io<mailto:agupta(a)pivotal.io>> wrote:
Ricky, Jiang, Alexander, are the three of you working together? It's hard to tell since you've got Fujitsu, Gmail, and Altoros email addresses. Are you folks talking about the same issue with the same deployment, or three separate issues?

Ricky, if you still need assistance with your issue, please let us know.

On Mon, Sep 14, 2015 at 10:16 AM, Lomov Alexander <alexander.lomov(a)altoros.com<mailto:alexander.lomov(a)altoros.com>> wrote:
Yes, the problem is that the postgresql database is stored on NFS, which is restarted during the nfs job update. I’m sure that you’ll be able to run updates without outage with several customizations.

It is hard to tell without knowing your environment, but in the common case the steps will be the following:


1. Add additional instances to the nfs job and customize it to replicate (for instance, use these docs for release customization [1])
2. Make your NFS job update sequentially, without other job updates in parallel (like it is done for postgresql [2])
3. Check your options in the update section [3].

[1] https://help.ubuntu.com/community/HighlyAvailableNFS
[2] https://github.com/cloudfoundry/cf-release/blob/master/example_manifests/minimal-aws.yml#L115-L116
[3] https://github.com/cloudfoundry/cf-release/blob/master/example_manifests/minimal-aws.yml#L57-L62

On Sep 14, 2015, at 9:47 AM, Yitao Jiang <jiangyt.cn(a)gmail.com<mailto:jiangyt.cn(a)gmail.com>> wrote:

On upgrading the deployment, the uaa was not working due to the uaadb filesystem hanging up. In my environment, the nfs-wal-server's IP changed, which caused uaadb and ccdb to hang. Hard rebooting the uaadb and restarting the uaa service solved the issue.

Hopes can help you.

On Mon, Sep 14, 2015 at 2:13 PM, Yunata, Ricky <rickyy(a)fast.au.fujitsu.com<mailto:rickyy(a)fast.au.fujitsu.com>> wrote:
Hello,

I have a question regarding UAA in Cloud Foundry. I’m currently running Cloud Foundry on Openstack.
I have 2 availability zones and redundancy for the important VMs, including UAA.
Whenever I do an upgrade of either the stemcell or the CF release, users are not able to do CF login while CF is updating the UAA VM.
My question is: is this normal behaviour? If I have redundant UAA VMs, shouldn’t users still be able to log in to the apps even though UAA is being updated?
I’ve done this test a few times, with different CF versions and stemcells, and all of them give me the same result. The latest test that I’ve done was to upgrade the CF version from 212 to 215.
Has anyone experienced the same issue?

Regards,
Ricky
Disclaimer

The information in this e-mail is confidential and may contain content that is subject to copyright and/or is commercial-in-confidence and is intended only for the use of the above named addressee. If you are not the intended recipient, you are hereby notified that dissemination, copying or use of the information is strictly prohibited. If you have received this e-mail in error, please telephone Fujitsu Australia Software Technology Pty Ltd on + 61 2 9452 9000<tel:%2B%2061%202%209452%209000> or by reply e-mail to the sender and delete the document and all copies thereof.


Whereas Fujitsu Australia Software Technology Pty Ltd would not knowingly transmit a virus within an email communication, it is the receiver’s responsibility to scan all communication and any files attached for computer viruses and other defects. Fujitsu Australia Software Technology Pty Ltd does not accept liability for any loss or damage (whether direct, indirect, consequential or economic) however caused, and whether by negligence or otherwise, which may result directly or indirectly from this communication or any files attached.


If you do not wish to receive commercial and/or marketing email messages from Fujitsu Australia Software Technology Pty Ltd, please email unsubscribe(a)fast.au.fujitsu.com<mailto:unsubscribe(a)fast.au.fujitsu.com>




--

Regards,

Yitao
jiangyt.github.io<http://jiangyt.github.io/>







[ann] Subway - how to scale out any Cloud Foundry service

Dr Nic Williams
 

Quick links:

* https://github.com/cloudfoundry-community/cf-subway
*
https://blog.starkandwayne.com/2015/09/21/how-to-scale-out-any-cloud-foundry-service/

We've been using Ferdy's Docker BOSH release since he created it, and have
published new docker images, new wrapper BOSH releases and more. But it
still doesn't scale horizontally (yes, it has docker swarm support, but no,
it can't do persistent storage on volumes).

So we created Subway - a broker that allows you to run a fleet of
single-server service brokers such as Docker BOSH release, or
cf-redis-boshrelease.

I'll write up/create a video soon to walk-thru upgrading your existing
in-production single-server services to use Subway.

Have fun!

Nic


--
Dr Nic Williams
Stark & Wayne LLC - consultancy for Cloud Foundry users
http://drnicwilliams.com
http://starkandwayne.com
cell +1 (415) 860-2185
twitter @drnic


Re: Packaging CF app as bosh-release

Amit Kumar Gupta
 

Hey Kayode,

Were you able to make any progress with the deployments you were trying to
do?

Best,
Amit

On Wed, Sep 16, 2015 at 12:48 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

My very limited understanding is that NFS writes to the actual filesystem,
and achieves persistence by having centralized NFS servers where it writes
to a real mounted device, whereas the clients write to an ephemeral
nfs-mount.

My very limited understanding of HDFS is that it's all userland FS, does
not write to the actual filesystem, and relies on replication to other
nodes in the HDFS cluster. Being a userland FS, you don't have to worry
about the data being wiped when a container is shut down, if you were to
run it as an app.

I think one main issue is going to be ensuring that you never lose too
many instances (whether they are containers or VMs), since you might then
lose all replicas of a given data shard. Whether you go with apps or BOSH
VMs doesn't make a big difference here.

Deploying as an app may be a better way to go; it's simpler right now to
configure and deploy an app than to configure and deploy a full BOSH
release. It's also likely to be a more efficient use of resources, since a
BOSH VM can only run one of these spark-job-processors, but a CF
container-runner can run lots of other things. That actually brings up a
different question: is your compute environment a multi-tenant one that
will be running multiple different workloads? E.g. could someone also use
the CF to push their own apps? Or is the whole thing just for your spark
jobs, in which case you might only be running one container per VM anyways?

Assuming you can make use of the VMs for other workloads, I think this
would be an ideal use case for Diego. You probably don't need all the
extra logic around apps, like staging and routing, you just need Diego to
efficiently schedule containers for you.

On Wed, Sep 16, 2015 at 1:13 PM, Kayode Odeyemi <dreyemi(a)gmail.com> wrote:

Thanks Dmitriy,

Just for clarity, are you saying multiple instances of a VM cannot share
a single shared filesystem?

On Wed, Sep 16, 2015 at 6:59 PM, Dmitriy Kalinin <dkalinin(a)pivotal.io>
wrote:

BOSH allocates a persistent disk per instance. It never shares
persistent disks between multiple instances at the same time.

If you need a shared file system, you will have to use some kind of a
release for it. It's not any different from what people do with nfs
server/client.

On Wed, Sep 16, 2015 at 7:09 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

The shared file system aspect is an interesting wrinkle to the
problem. Unless you use some network layer for how you write to the shared
file system, e.g. SSHFS, I think apps will not work because they get
isolated to run in a container, they're given a chroot "jail" for their
file system, and it gets blown away whenever the app is stopped or
restarted (which will commonly happen, e.g. during a rolling deploy of the
container-runner VMs).

Do you have something that currently works? How do your VMs currently
access this shared FS? I'm not sure BOSH has the abstractions for choosing
a shared, already-existing "persistent disk" to be attached to multiple
VMs. I also don't know what happens when you scale your VMs down, because
BOSH would generally destroy the associated persistent disk, but you don't
want to destroy the shared data.

Dmitriy, any idea how BOSH can work with a shared filesystem (e.g.
HDFS)?

Amit

On Wed, Sep 16, 2015 at 6:54 AM, Kayode Odeyemi <dreyemi(a)gmail.com>
wrote:


On Wed, Sep 16, 2015 at 3:44 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Are the spark jobs tasks that you expect to end, or apps that you
expect to run forever?
They are tasks that run forever. The jobs are subscribers to RabbitMQ
queues that process
messages in batches.


Do your jobs need to write to the file system, or do they access a
shared/distributed file system somehow?
The jobs write to shared filesystem.


Do you need things like a static IP allocated to your jobs?
No.


Are your spark jobs serving any web traffic?
No.