
Cloud Foundry Asia Summit Unconference: Call for Participation

Wei-Min Lu
 

For those coming to the Cloud Foundry Asia Summit in Shanghai on Dec
2 and 3, there is an unconference before the Summit, as is Summit tradition.
We would like to invite you to the Unconference. There are also still a few
lightning talk spots available at the Unconference; please email me at
wmlu(a)anchora.me if you would like to talk.


------------------------------------------------

*CF Asia Summit Unconference*

*Wednesday, December 2, 2015*
1:30pm - 4:00pm
Salon 5, located on the 5th Floor at the Summit hotel:

SHANGHAI MARRIOTT HOTEL CITY CENTRE

555 XIZANG MIDDLE RD, HUANGPU

SHANGHAI, CHINA 200003

+86 21 2312 9888



*What is the Cloud Foundry Unconference?*

As a Cloud Foundry Summit tradition, the “unconference” takes place
the afternoon before the Summit, and its focus is on discussing
opportunities to expand the ecosystem of solutions in order to
meet the needs of the ever-growing number of Cloud Foundry users.

An unconference does not have pre-determined presentation topics. Instead,
we will choose the topics we want to discuss at the beginning of the event.

Unconferences are a great way to share and discuss lots of new ideas and we
hope you will join us!



*Who should attend?*

Cloud Foundry Users, Developers, Consultants, Vendors, Entrepreneurs,
3rd-party Solution Providers, Anyone interested in Cloud Foundry

The unconference is free. You don't need to register for the Summit to
attend the unconference. Also, we will give out ten free CF Asia Summit
passes at the unconference.


*Schedule: Unconference*
1:00 PM Registration & Networking
1:30 PM Lightning Talks:

· Yiqun Ding, *Zhejiang University*

· Yuebin <https://twitter.com/bjuler222> Shen, *Anchora/MoPaaS*

· Hongqiang Chen, *VMware*

· *......*

· Please contact Jinxiang Chen at jinxiang.chen(a)anchora.me if you
would like to give a lightning talk.


2:00 PM Unpanel Topic: Attendees propose topics which get answered by
experts in the audience
2:15 PM Unconference: Attendees propose topics for Breakout Discussions
2:30 PM Breakout Discussions
3:15 PM Wrap-up
3:20 PM - 4:00 PM Drinks Reception & Networking

4:00 PM The Summit

--------------------------------------
Wei-Min Lu
Anchora, Inc
http://www.mopaas.com
Tel: +1-825-925-7698
+1-408-658-8166
Skype: wmlu2006
WeChat: wmlu_wechat


uaa: beginner issue with default user in uaa.yml configuration

tony kerz
 

I'm having some basic trouble understanding the appropriate uaa.yml
configuration for setting up a default user with the ability to log in.
If anyone is kind enough to take me to school, here is a GitHub issue with the details:

https://github.com/sequenceiq/docker-uaa/issues/5#issuecomment-160185248
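In case it helps others who find this thread: in the sample uaa.yml configs of this era, default users were seeded under the scim block using pipe-delimited entries. A minimal sketch (the exact field order username|password|email|first name|last name|groups is an assumption; verify it against your UAA version):

```yaml
# Sketch of a uaa.yml fragment seeding a default login-capable user.
# Field order is assumed, not authoritative -- check your UAA's sample config.
scim:
  users:
    - admin|admin_secret|admin@example.com|Admin|User|uaa.admin,scim.read,scim.write
```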

cheers,
tony.


Cpu as a autoscaling metric [was CFScaler - CloudFoundry Auto Scaling]

john mcteague <john.mcteague@...>
 

Is CPU a valid autoscaling metric when running apps in a system that uses
CPU shares, like CF, given that the CPU reported to CF by the Linux
container is always in the context of all CPU on the host (DEA) machine?

If my underlying DEAs have 8 vCPUs and only one app is on that DEA, then it
could consume and report 800% CPU. Depending on how I have configured my
DEAs, it may actually be guaranteed a minimum of 25% CPU by the Linux
scheduler.

Given that, saying my application should scale when it hits 70% CPU is
entirely inaccurate, as it may never be allowed to get anywhere near that;
or, if it does exceed 70%, it could keep going until it consumes all DEA CPU.
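To make the distortion concrete, here is a small illustrative sketch (not CF code; the normalization is just one way to interpret share-based CPU figures):

```python
# Illustration of why a raw CPU percentage is a poor autoscaling signal
# under CPU shares: the same reported number means very different things
# depending on host size. Convention: 100% == one fully busy vCPU.

def host_relative_cpu(reported_pct: float, host_vcpus: int) -> float:
    """Normalize a container-reported CPU % to a fraction of the whole host."""
    return reported_pct / (100.0 * host_vcpus)

# One app alone on an 8-vCPU DEA can report up to 800%:
assert host_relative_cpu(800.0, 8) == 1.0
# ...while an app "at 70% CPU" is actually using under 9% of that host:
assert host_relative_cpu(70.0, 8) < 0.09
```

A fixed "scale at 70%" rule therefore encodes an assumption about host size and contention that the platform does not guarantee.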

Memory is likewise a bad metric for autoscaling in many cases, particularly
for Java apps, where the container's view doesn't take the JVM's heap
management into account.

The real value, IMO, is for apps to advertise factors specific to them (via
the firehose?), such as message queue depth, HTTP response times, etc., and
to scale based on those.
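As a sketch of that idea, a scaler consuming an app-advertised metric such as queue depth might compute the desired instance count like this (the function, metric, and thresholds are hypothetical, not a real CF API):

```python
import math

# Hypothetical scaling rule driven by an app-specific metric (queue depth).
# Each instance is sized to drain roughly target_per_instance items.
def desired_instances(queue_depth: int, target_per_instance: int,
                      min_instances: int = 1, max_instances: int = 20) -> int:
    wanted = math.ceil(queue_depth / target_per_instance)
    return max(min_instances, min(max_instances, wanted))

# 950 queued messages, each instance handles ~100 -> run 10 instances
assert desired_instances(950, 100) == 10
# An idle queue still keeps the configured minimum running
assert desired_instances(0, 100) == 1
```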

John.

On 27 Nov 2015 11:12, "Layne Peng" <layne.peng(a)emc.com> wrote:

Nice work! It is exactly what we need!


Re: Dev and Production environment inconsistent

Juan Antonio Breña Moral <bren at juanantonio.info...>
 

Hi Alex,

When I get the summary of an app:
http://apidocs.cloudfoundry.org/214/apps/get_app_summary.html

I see a deprecated element (for example, production: false). So is the best practice to build separate environments using spaces?

Example:

+ CERT
+ PRE
+ PRO

Is that the idea behind spaces?

Juan Antonio


Re: CFScaler - CloudFoundry Auto Scaling

Layne Peng
 

Nice work! It is exactly what we need!


Re: 答复: Re: Re: Cloud Foundry deploy on suse

Youzhi Zhu
 

Hi Shengjun,

Thank you for your solution; we can now deploy apps successfully on CF on a
SUSE-based OS. The only thing that is not perfect is that when the warden
parameter "disk_quota_enabled" is set to "true", the app cannot start
successfully, and the warden log reports the following:

*{"timestamp":1448341389.3761559,"message":"Exited with status 1 (0.004s):
[[\"/var/vcap/data/packages/warden/43/warden/src/closefds/closefds\",
\"/var/vcap/data/packages/warden/43/warden/src/closefds/closefds\"],
\"/var/vcap/data/packages/warden/43/warden/src/repquota/repquota\", \"/\",
\"20000\"]","log_level":"warn","source":"Warden::Container::Linux","data":{"stdout":"","stderr":"Failed
retrieving quota for uid=20000: Block device doesn't
exist.\n"},"thread_id":7286900,"fiber_id":12257000,"process_id":9202,"file":"/var/vcap/data/packages/warden/43/warden/lib/warden/container/spawn.rb","lineno":135,"method":"set_deferred_success"}*

If this parameter is set to "false", the app can start successfully. Have you
ever met this problem when deploying on SUSE before? Thanks!


2015-11-20 14:24 GMT+08:00 Tangshengjun (A) <tangshengjun(a)huawei.com>:

We encountered this and resolved it.



Use this command: mount --make-rprivate /

We followed this web page:
http://gaijin-nippon.blogspot.in/2012/10/lxc-pivotroot-fails-on-shared-mount.html



And there are many reports of the same problem that you can use as a reference:

https://github.com/docker/docker/issues/11382

https://github.com/docker/docker/issues/1751

https://github.com/lxc/lxc/issues/61

http://linux.die.net/man/8/mount




------------------------------

Tang Shengjun
Huawei Technologies Co., Ltd.


Phone: 13777864354
Fax:
Mobile: 13777864354
Email: tsjsdbd(a)huawei.com
Huawei Technologies Co., Ltd.
Huawei Base, 410 Jianghong Road, Binjiang District, Hangzhou 310052, P.R. China
http://www.huawei.com
------------------------------

This e-mail and its attachments contain confidential information from
HUAWEI, which is intended only for the person or entity whose address is
listed above. Any use of the information contained herein in any way
(including, but not limited to, total or partial disclosure, reproduction,
or dissemination) by persons other than the intended recipient(s) is
prohibited. If you receive this e-mail in error, please notify the sender
by phone or email immediately and delete it!

*From:* Youzhi Zhu [mailto:zhuyouzhi03(a)gmail.com]
*Sent:* November 5, 2015, 15:09
*To:* Discussions about Cloud Foundry projects and the system overall.
*Subject:* [cf-dev] Re: Re: Cloud Foundry deploy on suse



Hi Matthew,

I also guessed it was something wrong with the file system type, so I
checked the file system type when mounting rootfs_lucid64 to the container
depot path "mnt/": it's overlayfs on SUSE rather than aufs on Ubuntu 10.04.
But overlayfs is supported, because if you change to overlayfs on
Ubuntu 10.04 the app can also be started successfully.



After that, I found that when stacking the container file system, the
"mount" command is executed with the "-n" option, which means the mount
info is not written to /etc/mtab; but when it is executed on SUSE, the
mount does show up anyway. Another strange phenomenon is that the mount
command is called via "unshare -m", which means the mount namespace is not
shared with the calling process, yet I can in fact see the mounted files
from the calling process's namespace, and even adding the
"--make-rprivate" option to the mount command does not help. That confuses
me very much.





2015-11-04 23:37 GMT+08:00 Matthew Sykes <matthew.sykes(a)gmail.com>:

wshd is simply reporting [1] the pivot_root [2] failure. It looks like
you're getting an EINVAL from the call, which implies warden is running in
an unexpected environment.



If I were to guess, I'd say that the container depot does not live on an
expected file system type or location...



As far as I'm aware, no work has been done recently to make warden run
under anything but Ubuntu or CentOS, but it's possible someone has. If
nobody else has any hints, you'll likely have to look through the code and
work out what's going on.



[1]:
https://github.com/cloudfoundry/warden/blob/76010f2ba12e41d9e8755985ec874391fb3c962a/warden/src/wsh/wshd.c#L715

[2]: http://man7.org/linux/man-pages/man2/pivot_root.2.html



On Wed, Nov 4, 2015 at 7:27 AM, Youzhi Zhu <zhuyouzhi03(a)gmail.com> wrote:

Hi all,

We are trying to deploy Cloud Foundry on SUSE. Now every CF module can
start successfully, but when I push an app to CF an error occurs. I
checked the logs and found that when starting the container, the wshd
process throws the error "pivot_root: Invalid argument". Has anyone seen
this error before, or has anyone deployed CF successfully on any OS other
than Ubuntu? Thanks.



The CF version is cf-release 170.

The SUSE version is SUSE 12 with kernel 3.12.28-4-default.





--

Matthew Sykes
matthew.sykes(a)gmail.com



Re: Custom service installation

Amit Kumar Gupta
 

Hi Saswat,

This mailing list is for discussions about the open source Cloud Foundry
project. You should contact Pivotal support (
https://support.pivotal.io/hc/en-us) or contact your sales rep/field
engineer for your question.

(To answer your question though, no, if you just BOSH deploy it will not
show up in OpsManager UI).

Best,
Amit

On Thu, Nov 26, 2015 at 4:52 AM, saswat sahoo <
saswat.kumar.sahoo(a)accenture.com> wrote:

I have a query regarding installing a custom service that is not
available in the Pivotal Cloud Foundry service list. I have a PCF
installation running on AWS, and I need to install a service via the BOSH
director instead of the Ops Manager UI. If I install a service via BOSH,
will it be available as a tile on the "Installation Dashboard" of Ops
Manager after installation? Or do I need to manage that custom service via
the BOSH director only?


Re: CF-RELEASE v202 UPLOAD ERROR

Amit Kumar Gupta
 

Hi Parthiban,

I've asked the CF API and NFS experts to track this issue:

https://www.pivotaltracker.com/story/show/109039624

If you do not receive a solution in this thread from the community or the
API team, you could also try opening up a GitHub issue on the Cloud
Controller, since GH issues are often easier for the core development teams
to track: https://github.com/cloudfoundry/cloud_controller_ng/issues

Best,
Amit

On Tue, Nov 24, 2015 at 12:49 AM, Parthiban Annadurai <senjiparthi(a)gmail.com>
wrote:
Okay.. Let me try with it.. Thanks..

On 24 November 2015 at 14:02, ronak banka <ronakbanka.cse(a)gmail.com>
wrote:

Subnet ranges on which your other components are provisioned.

allow_from_entries:
- 192.168.33.0/24




On Tue, Nov 24, 2015 at 5:16 PM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Hello Ronak,
Actually, I previously gave values for
ALLOW_FROM_ENTRIES; after seeing some mailing list posts I changed it to
NULL. Could you tell me which IP I need to give there, or something else?
Thanks..

On 24 November 2015 at 13:23, ronak banka <ronakbanka.cse(a)gmail.com>
wrote:

Hi Parthiban,

In your manifest , there is a global property block

nfs_server:
address: 192.168.33.53
allow_from_entries:
- null
- null
share: null

The allow_from_entries here are provided for the cc job's individual property and not for the actual debian_nfs_server job; that is a possible reason cc is not able to write to NFS.
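Putting Ronak's point together, the corrected global block would look something like this sketch (the subnet and share path are illustrative; use your deployment's actual values):

```yaml
# Sketch of a corrected global nfs_server block (values are examples only)
nfs_server:
  address: 192.168.33.53
  allow_from_entries:
  - 192.168.33.0/24      # subnet(s) where the api/cc jobs are provisioned
  share: /var/vcap/store # illustrative; use your actual export path
```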


https://github.com/cloudfoundry/cf-release/blob/master/jobs/debian_nfs_server/spec#L20

Thanks
Ronak



On Tue, Nov 24, 2015 at 3:42 PM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Thanks, Amit, for your fast reply. FYI, I have shared my deployment
manifest too. I have been stuck on this issue for the past couple of weeks. Thanks..

On 24 November 2015 at 12:00, Amit Gupta <agupta(a)pivotal.io> wrote:

Hi Parthiban,

Sorry to hear your deployment is still getting stuck. As Warren
points out, based on your log output, it looks like an issue with NFS
configuration. I will ask the CAPI team, who are experts on cloud
controller and NFS server, to take a look at your question.

Best,
Amit

On Thu, Nov 19, 2015 at 8:11 PM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Thanks for your suggestions, Warren. I am attaching the manifest file
used for the deployment. I also suspect that the problem is with the
NFS server configuration.

On 19 November 2015 at 22:32, Warren Fernandes <
wfernandes(a)pivotal.io> wrote:

Hey Parthiban,

It seems that there may be a misconfiguration in your manifest.
Did you configure the nfs_server properties?


https://github.com/cloudfoundry/cf-release/blob/master/templates/cf-jobs.yml#L19-L22

The api_z1 job pulls in the above properties here:
https://github.com/cloudfoundry/cf-release/blob/master/templates/cf-jobs.yml#L368

Is it possible to share your manifest with us via a gist or
attachment? Please remove any sensitive information like passwords, certs
and keys.

Thanks.


Re: REGARDING_api_z1/0_CANARY_UPDATE

Amit Kumar Gupta
 

Hi Parthiban,

Looks like you're discussing the exact same issue here:
https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/thread/GZWT445VL4Y4WSH6K2Y2I4R5WA5AB665/.
Anybody trying to help out with this problem, please do so on that other
thread, so we don't duplicate effort.

Thanks,
Amit

On Thu, Nov 26, 2015 at 6:43 AM, Parthiban Annadurai <senjiparthi(a)gmail.com>
wrote:

Hi,
/var/vcap/sys/log shows the following,

root(a)5c446a3d-3070-4d24-9f2e-1cff18218c07:/var/vcap/sys/log# monit summary
The Monit daemon 5.2.4 uptime: 20m

Process 'cloud_controller_ng' initializing
Process 'cloud_controller_worker_local_1' not monitored
Process 'cloud_controller_worker_local_2' not monitored
Process 'nginx_cc' initializing
Process 'metron_agent' running
File 'nfs_mounter' Does not exist
System 'system_5c446a3d-3070-4d24-9f2e-1cff18218c07' running

I have also checked cloud_controller_ng_ctl.log; it has the following,

[2015-11-18 04:33:34+0000] ------------ STARTING cloud_controller_ng_ctl
at Wed Nov 18 04:32:53 UTC 2015 --------------
[2015-11-18 04:33:34+0000] Preparing local package directory
[2015-11-18 04:33:34+0000] Preparing local resource_pool directory
[2015-11-18 04:33:34+0000] Preparing local droplet directory
[2015-11-18 04:33:34+0000] Deprecated: Use -s or --insert-seed flag
[2015-11-18 04:33:34+0000] Killing
/var/vcap/sys/run/cloud_controller_ng/cloud_controller_ng.pid: 32522
[2015-11-18 04:33:34+0000] .Stopped


Then, nfs_mounter_ctl.log has the following,

[2015-11-18 04:27:20+0000] Found NFS mount, unmounting...
[2015-11-18 04:27:20+0000] NFS unmounted
[2015-11-18 04:27:20+0000] idmapd start/post-stop, process 25777
[2015-11-18 04:27:20+0000] NFS unmounted
[2015-11-18 04:27:20+0000] Mounting NFS...
[2015-11-18 04:27:20+0000] mount.nfs: timeout set for Wed Nov 18 04:29:20
2015
[2015-11-18 04:27:20+0000] mount.nfs: trying text-based options
'timeo=10,intr,lookupcache=positive,vers=4,addr=192.168.33.53,clientaddr=192.168.33.184'
[2015-11-18 04:27:20+0000] mount.nfs: trying text-based options
'timeo=10,intr,lookupcache=positive,addr=192.168.33.53'
[2015-11-18 04:27:20+0000] mount.nfs: prog 100003, trying vers=3, prot=6
[2015-11-18 04:27:20+0000] mount.nfs: prog 100005, trying vers=3, prot=17
[2015-11-18 04:27:20+0000] Failed to start: cannot write to NFS

I think the problem is with NFS. Could you please help with this issue?
Thanks..



On 25 November 2015 at 23:59, CF Runtime <cfruntime(a)gmail.com> wrote:

Hi, Parthiban!

Thanks for the manifest!
Can you please also attach the logs and error logs of the control scripts?
Logs can be found in the `/var/vcap/sys/log/` folder.

Thanks!

On Wed, Nov 25, 2015 at 4:43 AM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Hello Natalie & Mikhail,

FYI, I am trying to deploy Cloud Foundry
on vSphere. I have shared the manifest with this mail too. Thanks..


On 25 November 2015 at 04:41, CF Runtime <cfruntime(a)gmail.com> wrote:

Have you checked the control script logs in the `/var/vcap/sys/log/`
folder? If the jobs are failing to start that's a good place to start. If
you send them to us we can tell you more.

Also, what infrastructure are you deploying cloud foundry to, and can
you send us the manifest you're using to deploy it?

Natalie & Mikhail
OSS Integration & Runtime

On Thu, Nov 19, 2015 at 1:19 AM, Parthiban A <senjiparthi(a)gmail.com>
wrote:

Hello All,
Since I have been facing the following issue for a very long
time, I have opened it as a separate thread. The problem I am currently
facing is

Error 400007: `api_z1/0' is not running after update

I have SSHed into the api_z1/0 VM and did a monit summary. It shows
that

root(a)5c446a3d-3070-4d24-9f2e-1cff18218c07:/var/vcap/sys/log# monit
summary
The Monit daemon 5.2.4 uptime: 20m

Process 'cloud_controller_ng' initializing
Process 'cloud_controller_worker_local_1' not monitored
Process 'cloud_controller_worker_local_2' not monitored
Process 'nginx_cc' initializing
Process 'metron_agent' running
File 'nfs_mounter' Does not exist
System 'system_5c446a3d-3070-4d24-9f2e-1cff18218c07' running

Could anyone help on this issue? Thanks.


Re: REGARDING_api_z1/0_CANARY_UPDATE

Parthiban Annadurai <senjiparthi@...>
 

Hi,
/var/vcap/sys/log shows the following,

root(a)5c446a3d-3070-4d24-9f2e-1cff18218c07:/var/vcap/sys/log# monit summary
The Monit daemon 5.2.4 uptime: 20m

Process 'cloud_controller_ng' initializing
Process 'cloud_controller_worker_local_1' not monitored
Process 'cloud_controller_worker_local_2' not monitored
Process 'nginx_cc' initializing
Process 'metron_agent' running
File 'nfs_mounter' Does not exist
System 'system_5c446a3d-3070-4d24-9f2e-1cff18218c07' running

I have also checked cloud_controller_ng_ctl.log; it has the following,

[2015-11-18 04:33:34+0000] ------------ STARTING cloud_controller_ng_ctl at
Wed Nov 18 04:32:53 UTC 2015 --------------
[2015-11-18 04:33:34+0000] Preparing local package directory
[2015-11-18 04:33:34+0000] Preparing local resource_pool directory
[2015-11-18 04:33:34+0000] Preparing local droplet directory
[2015-11-18 04:33:34+0000] Deprecated: Use -s or --insert-seed flag
[2015-11-18 04:33:34+0000] Killing
/var/vcap/sys/run/cloud_controller_ng/cloud_controller_ng.pid: 32522
[2015-11-18 04:33:34+0000] .Stopped


Then, nfs_mounter_ctl.log has the following,

[2015-11-18 04:27:20+0000] Found NFS mount, unmounting...
[2015-11-18 04:27:20+0000] NFS unmounted
[2015-11-18 04:27:20+0000] idmapd start/post-stop, process 25777
[2015-11-18 04:27:20+0000] NFS unmounted
[2015-11-18 04:27:20+0000] Mounting NFS...
[2015-11-18 04:27:20+0000] mount.nfs: timeout set for Wed Nov 18 04:29:20
2015
[2015-11-18 04:27:20+0000] mount.nfs: trying text-based options
'timeo=10,intr,lookupcache=positive,vers=4,addr=192.168.33.53,clientaddr=192.168.33.184'
[2015-11-18 04:27:20+0000] mount.nfs: trying text-based options
'timeo=10,intr,lookupcache=positive,addr=192.168.33.53'
[2015-11-18 04:27:20+0000] mount.nfs: prog 100003, trying vers=3, prot=6
[2015-11-18 04:27:20+0000] mount.nfs: prog 100005, trying vers=3, prot=17
[2015-11-18 04:27:20+0000] Failed to start: cannot write to NFS

I think the problem is with NFS. Could you please help with this issue?
Thanks..

On 25 November 2015 at 23:59, CF Runtime <cfruntime(a)gmail.com> wrote:

Hi, Parthiban!

Thanks for the manifest!
Can you please also attach the logs and error logs of the control scripts?
Logs can be found in the `/var/vcap/sys/log/` folder.

Thanks!

On Wed, Nov 25, 2015 at 4:43 AM, Parthiban Annadurai <
senjiparthi(a)gmail.com> wrote:

Hello Natalie & Mikhail,

FYI, I am trying to deploy Cloud Foundry
on vSphere. I have shared the manifest with this mail too. Thanks..


On 25 November 2015 at 04:41, CF Runtime <cfruntime(a)gmail.com> wrote:

Have you checked the control script logs in the `/var/vcap/sys/log/`
folder? If the jobs are failing to start that's a good place to start. If
you send them to us we can tell you more.

Also, what infrastructure are you deploying cloud foundry to, and can
you send us the manifest you're using to deploy it?

Natalie & Mikhail
OSS Integration & Runtime

On Thu, Nov 19, 2015 at 1:19 AM, Parthiban A <senjiparthi(a)gmail.com>
wrote:

Hello All,
Since I have been facing the following issue for a very long
time, I have opened it as a separate thread. The problem I am currently
facing is

Error 400007: `api_z1/0' is not running after update

I have SSHed into the api_z1/0 VM and did a monit summary. It shows that

root(a)5c446a3d-3070-4d24-9f2e-1cff18218c07:/var/vcap/sys/log# monit
summary
The Monit daemon 5.2.4 uptime: 20m

Process 'cloud_controller_ng' initializing
Process 'cloud_controller_worker_local_1' not monitored
Process 'cloud_controller_worker_local_2' not monitored
Process 'nginx_cc' initializing
Process 'metron_agent' running
File 'nfs_mounter' Does not exist
System 'system_5c446a3d-3070-4d24-9f2e-1cff18218c07' running

Could anyone help on this issue? Thanks.


Custom service installation

saswat sahoo
 

I have a query regarding installing a custom service that is not available in the Pivotal Cloud Foundry service list. I have a PCF installation running on AWS, and I need to install a service via the BOSH director instead of the Ops Manager UI. If I install a service via BOSH, will it be available as a tile on the "Installation Dashboard" of Ops Manager after installation? Or do I need to manage that custom service via the BOSH director only?


Re: Dev and Production environment inconsistent

Aleksey Zalesov
 

Hi Lynn,

You can create additional environments in Cloud Foundry using spaces for this purpose. Then you need to alter your workflow so that after a PR is accepted, the app is pushed to a staging space, where automated tests are run to verify the PR doesn't break current functionality. After this verification you can push the app to the prod space.

For more information, please look at the Continuous Delivery practice [1].

Alex Zalesov @ Altoros

[1]: http://martinfowler.com/bliki/ContinuousDelivery.html


Re: new feature discuss: User can use CF to deploy APP in specific zone.

Ronak Banka
 

Hi Rexxar,

We are thinking of doing this using the stack property, for application
placement on a certain set of DEAs with that stack:

http://cf-dev.70369.x6.nabble.com/cf-dev-Using-stack-names-as-placement-tags-on-DEA-making-stack-name-check-optional-tc2783.html

So every time CC wants to deploy an application with that stack, it will
select a DEA out of that DEA stack pool and place the application on it.

Thanks
Ronak



--
View this message in context: http://cf-dev.70369.x6.nabble.com/cf-dev-new-feature-discuss-User-can-use-CF-to-deploy-APP-in-specific-zone-tp2868p2887.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: new feature discuss: User can use CF to deploy APP in specific zone.

Amit Kumar Gupta
 

Hi Rexxar,

What use case do you have in mind?

I think Gwenn is referring to the Elastic Clusters proposal:
https://lists.cloudfoundry.org/archives/list/cf-dev(a)lists.cloudfoundry.org/thread/DWDRYMY6E7ZD36GXXDU7NR6VQ6OFCFIT/#DWDRYMY6E7ZD36GXXDU7NR6VQ6OFCFIT

Does that sound like what you were thinking about?

Best,
Amit

On Tue, Nov 24, 2015 at 7:18 PM, Gwenn Etourneau <getourneau(a)pivotal.io>
wrote:

The CF team is already working on that; it's called "isolation groups"
(previously "placement pool").

Maybe Amit, James or Dieu can give you more information about that.

Thanks.

On Wed, Nov 25, 2015 at 12:12 PM, Liangbiao <rexxar.liang(a)huawei.com>
wrote:

Hi,

Currently, a DEA can be assigned to a “zone”, and the Cloud Controller can
schedule APP instances according to zone (
https://github.com/cloudfoundry/cloud_controller_ng/blob/965dbc4bdf65df89f382329aef39f86a916b3f05/lib/cloud_controller/dea/pool.rb#L47
)

So I think we can push this a bit further.

For example, the APP developer could specify which zone to deploy the APP in.
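A minimal sketch of the proposed behavior (the field names are illustrative, not the real DEA advertisement schema):

```python
# Hypothetical sketch: the developer names a zone, and the scheduler only
# considers DEAs advertising that zone, preferring the one with the most
# free memory (as the real pool does). "zone" / "available_memory" are
# illustrative field names, not the actual advertisement format.

def pick_dea(advertisements, requested_zone, app_memory):
    candidates = [
        ad for ad in advertisements
        if ad["zone"] == requested_zone and ad["available_memory"] >= app_memory
    ]
    return max(candidates, key=lambda ad: ad["available_memory"], default=None)
```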



Regards,

Rexxar


Re: MEGA team mitosis

John Feminella
 

Awesome! Thanks Amit.

--
*John Feminella*
Advisory Platform Architect
✉ · jfeminella(a)pivotal.io
t · @jxxf <https://twitter.com/jxxf>

On Thu, Nov 26, 2015 at 12:40 AM, Amit Gupta <agupta(a)pivotal.io> wrote:

Good question.

We have two new slack channels: #infrastructure and #release-integration.
The #mega channel's "purpose" tagline is updated to redirect people to the
other two, and I've also configured a daily reminder that Slackbot will
post to the #mega channel redirecting folks to the other two. I will also
monitor the #mega channel and 301 people to the new channels until it dies
down enough, and will then archive the channel.

Best,
Amit

On Wed, Nov 25, 2015 at 9:14 PM, John Feminella <jfeminella(a)pivotal.io>
wrote:

How will the #mega Slack channel on CF split and/or be renamed as a
result of the reorganization? (Or will it?)

--
*John Feminella*
Advisory Platform Architect
✉ · jfeminella(a)pivotal.io
t · @jxxf <https://twitter.com/jxxf>

On Wed, Nov 25, 2015 at 11:14 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

One more thing to add, we're also keeping an eye on the "cloudfoundry"
tag on StackOverflow, so feel free to ask questions there as well.

Best,
Amit

On Wed, Nov 25, 2015 at 8:04 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hey all,

The CF Release Integration team (also known as "MEGA") will be
splitting into two smaller teams. The current team's responsibilities
included:

- splitting out, maintaining, and adding features to an etcd-release
- splitting out, maintaining, and adding features to a consul-release
- splitting out and maintaining a NATS-release
- splitting out and maintaining a "pre-diego-runtime"-release (DEA,
Warden, HM9k)
- splitting out and maintaining a postgres-release
- owning the CATS and cf-smoke-tests
- managing the main integration pipelines that take all the bits and
create a usable, well-tested distribution of the CF components
- managing the cf-release repo, and the final release process
- building tooling to generate manifests that compose all the split-out
releases
- building tooling to automate bootstrapping environments for Concourse
and CF (no more snowflakes)
- canary apps used to monitor integration and production environments
- providing other common functionality, e.g. route registration

This is a lot of context for a small team of two pairs to juggle, and
it could arguably be the work of several teams, but we will start with two
one-pair teams for now.

*CF Release Integration* (no longer referred to as MEGA?)
- splitting out and maintaining a "pre-diego-runtime"-release (DEA,
Warden, HM9k)
- owning the CATS and cf-smoke-tests
- managing the main integration pipelines that take all the bits and
create a usable, well-tested distribution of the CF components
- managing the cf-release repo, and the final release process
- building tooling to generate manifests that compose all the split-out
releases
- canary apps used to monitor integration and production environments
- providing other common functionality, e.g. route registration

Product Manager: Amit Gupta, Pivotal
Lead Engineer: Rob Dimsdale, Pivotal
Engineer: Zachary Auerbach, Pivotal

*CF Infrastructure*
- splitting out, maintaining, and adding features to an etcd-release
- splitting out, maintaining, and adding features to a consul-release
- splitting out and maintaining a NATS-release
- splitting out and maintaining a postgres-release
- building tooling to generate manifests that compose all the split-out
releases

Product Manager: Amit Gupta, Pivotal
Lead Engineer: Ryan Moran, Pivotal
Engineer: Adrian Zankich, Pivotal

We look forward to collaborating with all of you. We're still on the
usual communication channels -- the cf-dev mailing list, and issues and PRs
on GitHub -- no changes there!

Cheers,
Amit


Re: MEGA team mitosis

Amit Kumar Gupta
 

Good question.

We have two new slack channels: #infrastructure and #release-integration.
The #mega channel's "purpose" tagline is updated to redirect people to the
other two, and I've also configured a daily reminder that Slackbot will
post to the #mega channel redirecting folks to the other two. I will also
monitor the #mega channel and 301 people to the new channels until it dies
down enough, and will then archive the channel.

Best,
Amit

On Wed, Nov 25, 2015 at 9:14 PM, John Feminella <jfeminella(a)pivotal.io>
wrote:

How will the #mega Slack channel on CF split and/or be renamed as a result
of the reorganization? (Or will it?)

--
*John Feminella*
Advisory Platform Architect
✉ · jfeminella(a)pivotal.io
t · @jxxf <https://twitter.com/jxxf>

On Wed, Nov 25, 2015 at 11:14 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

One more thing to add, we're also keeping an eye on the "cloudfoundry"
tag on StackOverflow, so feel free to ask questions there as well.

Best,
Amit

On Wed, Nov 25, 2015 at 8:04 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

Hey all,

The CF Release Integration team (also known as "MEGA") will be splitting
into two smaller teams. The current team's responsibilities included:

- splitting out, maintaining, and adding features to an etcd-release
- splitting out, maintaining, and adding features to a consul-release
- splitting out and maintaining a NATS-release
- splitting out and maintaining a "pre-diego-runtime"-release (DEA,
Warden, HM9k)
- splitting out and maintaining a postgres-release
- owning the CATS and cf-smoke-tests
- managing the main integration pipelines that take all the bits and
create a usable, well-tested distribution of the CF components
- managing the cf-release repo, and the final release process
- building tooling to generate manifests that compose all the split-out
releases
- building tooling to automate bootstrapping environments for Concourse
and CF (no more snowflakes)
- canary apps used to monitor integration and production environments
- providing other common functionality, e.g. route registration

This is a lot of context for a small team of two pairs to juggle, and it
could arguably be the work of several teams, but we will start with two
one-pair teams for now.

*CF Release Integration* (no longer referred to as MEGA?)
- splitting out and maintaining a "pre-diego-runtime"-release (DEA,
Warden, HM9k)
- owning the CATS and cf-smoke-tests
- managing the main integration pipelines that take all the bits and
create a usable, well-tested distribution of the CF components
- managing the cf-release repo, and the final release process
- building tooling to generate manifests that compose all the split-out
releases
- canary apps used to monitor integration and production environments
- providing other common functionality, e.g. route registration

Product Manager: Amit Gupta, Pivotal
Lead Engineer: Rob Dimsdale, Pivotal
Engineer: Zachary Auerbach, Pivotal

*CF Infrastructure*
- splitting out, maintaining, and adding features to an etcd-release
- splitting out, maintaining, and adding features to a consul-release
- splitting out and maintaining a NATS-release
- splitting out and maintaining a postgres-release
- building tooling to generate manifests that compose all the split-out
releases

Product Manager: Amit Gupta, Pivotal
Lead Engineer: Ryan Moran, Pivotal
Engineer: Adrian Zankich, Pivotal

We look forward to collaborating with all of you. We're still on the
usual communication channels -- the cf-dev mailing list, and issues and PRs
on GitHub -- no changes there!

Cheers,
Amit


Re: MEGA team mitosis

John Feminella
 

How will the #mega Slack channel on CF split and/or be renamed as a result
of the reorganization? (Or will it?)

--
*John Feminella*
Advisory Platform Architect
✉ · jfeminella(a)pivotal.io
t · @jxxf <https://twitter.com/jxxf>

On Wed, Nov 25, 2015 at 11:14 PM, Amit Gupta <agupta(a)pivotal.io> wrote:

One more thing to add, we're also keeping an eye on the "cloudfoundry" tag
on StackOverflow, so feel free to ask questions there as well.

Best,
Amit



Dev and Production environment inconsistent

Lynn Lin
 

All,
We set up local environments on developers' laptops. Developers write code, run tests locally, and then send a pull request; once the pull request is approved, the code is pushed to the production environment on Cloud Foundry. We observe that some bugs appear only in the production (CF) environment and are never found in developers' local environments or tests. There may be inconsistencies between the local environments (developers' desktops) and the production environment (CF). What are the best practices to resolve this?

Thanks in advance!


Re: MEGA team mitosis

Amit Kumar Gupta
 

One more thing to add, we're also keeping an eye on the "cloudfoundry" tag
on StackOverflow, so feel free to ask questions there as well.

Best,
Amit



MEGA team mitosis

Amit Kumar Gupta
 

Hey all,

The CF Release Integration team (also known as "MEGA") will be splitting
into two smaller teams. The current team's responsibilities included:

- splitting out, maintaining, and adding features to an etcd-release
- splitting out, maintaining, and adding features to a consul-release
- splitting out and maintaining a NATS-release
- splitting out and maintaining a "pre-diego-runtime"-release (DEA, Warden,
HM9k)
- splitting out and maintaining a postgres-release
- owning the CATS and cf-smoke-tests
- managing the main integration pipelines that take all the bits and create
a usable, well-tested distribution of the CF components
- managing the cf-release repo, and the final release process
- building tooling to generate manifests that compose all the split-out
releases
- building tooling to automate bootstrapping environments for Concourse and
CF (no more snowflakes)
- canary apps used to monitor integration and production environments
- providing other common functionality, e.g. route registration
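For a rough sense of what "tooling to generate manifests that compose all the split-out releases" involves, here is a minimal, hypothetical Python sketch. The file names, keys, and function are illustrative assumptions only, not the team's actual tool, which operates on BOSH YAML deployment manifests:

```python
# Hypothetical sketch: compose a deployment manifest from split-out
# release fragments. All names and keys here are illustrative; the
# real tooling works with full BOSH YAML manifests.

def compose_manifest(base, fragments):
    """Merge each fragment's releases and jobs into a copy of the base manifest."""
    manifest = {
        "name": base["name"],
        "releases": list(base.get("releases", [])),
        "jobs": list(base.get("jobs", [])),
    }
    for fragment in fragments:
        manifest["releases"].extend(fragment.get("releases", []))
        manifest["jobs"].extend(fragment.get("jobs", []))
    return manifest

# Illustrative fragments standing in for split-out releases.
base = {"name": "cf", "releases": [{"name": "cf", "version": "latest"}]}
etcd = {"releases": [{"name": "etcd", "version": "latest"}],
        "jobs": [{"name": "etcd", "instances": 3}]}
consul = {"releases": [{"name": "consul", "version": "latest"}],
          "jobs": [{"name": "consul", "instances": 3}]}

manifest = compose_manifest(base, [etcd, consul])
```

The point of the sketch is only the shape of the problem: each split-out release contributes its own releases and jobs, and the composition tooling stitches them into one deployable whole.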

This is a lot of context for a small team of two pairs to juggle, and it
could arguably be the work of several teams, but we will start with two
one-pair teams for now.

*CF Release Integration* (no longer referred to as MEGA?)
- splitting out and maintaining a "pre-diego-runtime"-release (DEA, Warden,
HM9k)
- owning the CATS and cf-smoke-tests
- managing the main integration pipelines that take all the bits and create
a usable, well-tested distribution of the CF components
- managing the cf-release repo, and the final release process
- building tooling to generate manifests that compose all the split-out
releases
- canary apps used to monitor integration and production environments
- providing other common functionality, e.g. route registration

Product Manager: Amit Gupta, Pivotal
Lead Engineer: Rob Dimsdale, Pivotal
Engineer: Zachary Auerbach, Pivotal

*CF Infrastructure*
- splitting out, maintaining, and adding features to an etcd-release
- splitting out, maintaining, and adding features to a consul-release
- splitting out and maintaining a NATS-release
- splitting out and maintaining a postgres-release
- building tooling to generate manifests that compose all the split-out
releases

Product Manager: Amit Gupta, Pivotal
Lead Engineer: Ryan Moran, Pivotal
Engineer: Adrian Zankich, Pivotal

We look forward to collaborating with all of you. We're still on the usual
communication channels -- the cf-dev mailing list, and issues and PRs on
GitHub -- no changes there!

Cheers,
Amit
