
Re: Recreating uaadb and ccdb databases

CF Runtime
 

No, there are no special scripts that run on job instance creation;
everything is embedded in the startup control scripts. So while recreate
would work too, restart should be fine.
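
For reference, a minimal sketch of the sequence discussed in this thread
(job names and bosh CLI syntax as quoted below; adjust for your deployment):

    # recreate the empty ccdb and uaadb databases
    bosh restart postgres_z1/0

    # restart the consumers so their startup scripts recreate the schemas
    bosh restart api_z1/0
    bosh restart uaa_z1/0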

Joseph
OSS Release Integration Team

On Sat, Aug 1, 2015 at 5:31 AM, James Bayer <jbayer(a)pivotal.io> wrote:

joseph, did you mean "bosh recreate" instead of "bosh restart"?

On Sat, Aug 1, 2015 at 5:28 AM, CF Runtime <cfruntime(a)gmail.com> wrote:

If you are using the default postgres job, you should just be able to
"bosh restart postgres_z1/0". This will create both the databases, but they
will not have the schemas.

The individual jobs should recreate the schemas, so you'll probably need
to "bosh restart api_z1/0" and "bosh restart uaa_z1/0".

Joseph
OSS Release Integration Team

On Sat, Aug 1, 2015 at 3:36 AM, rmi <rishi.investigate(a)gmail.com> wrote:

Hi Amit - since this is a fresh install I am just trying to recreate
ccdb and uaadb from scratch. What is the best way of deleting/redeploying
my environment? Note that I am only able to use the cf_nise installer for
this deployment. This is another reason I wanted to just recreate the dbs
if possible.

Thanks.



--
View this message in context:
http://cf-dev.70369.x6.nabble.com/cf-dev-Recreating-uaadb-and-ccdb-databases-tp1007p1011.html
Sent from the CF Dev mailing list archive at Nabble.com.
_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Thank you,

James Bayer

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: V3 Rest API

Dieu Cao <dcao@...>
 

Hi!

Yes, there are plans to change the cf command line to use the v3 cloud
controller rest api.
You can follow along on progress on stories related to v3 in the CAPI
backlog [1] looking at the process types epic [2].
The largest chunks of work left involve modeling service bindings in v3 and
making the transition for users from v2 apps to v3 apps as seamless as
possible.
Our plan is that we'll be able to migrate v2 apps to the equivalent set of
v3 objects and to modify the v2 endpoints such that they will update the
equivalent v3 objects. The goal here is that if you have a client still
using the v2 endpoints, things should continue to just work, and if you
have a user with some clients using v2 and some clients using v3, it
should also be seamless.
As we go through different phases of the migration, we plan to work with
the cli on updating commands to use the new v3 endpoints.
I'm hoping that we start this work at some point in September.

You can find documentation on the v3 api here [3] in the sections marked
experimental.
You can find basic instructions on how to create and push a v3 app here [4].
We are also working on a v3 style guide that we hope to share with the
community in the next week or so for feedback.
You can also find the talk from CF Summit describing why we embarked on
v3 [5].

-Dieu


[1] https://www.pivotaltracker.com/n/projects/966314
[2] https://www.pivotaltracker.com/epic/show/1334418
[3] http://apidocs.cloudfoundry.org/214/
[4]
https://github.com/cloudfoundry/cloud_controller_ng/blob/master/docs/create_v3_app.md
[5]
https://www.youtube.com/watch?v=Cz3rKCHicf4&index=33&list=PLhuMOCWn4P9g-UMN5nzDiw78zgf5rJ4gR
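
As a quick, hedged illustration of the experimental endpoints documented
in [3] (cf curl sends an authenticated request to the Cloud Controller;
the exact resources available depend on your release):

    # list v3 apps (experimental; see [3] and the walkthrough in [4])
    cf curl /v3/apps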

On Fri, Jul 31, 2015 at 11:39 AM, Ethan Vogel <evogel(a)us.ibm.com> wrote:

Can anyone answer these questions please:

Are there plans to change the cf command line to use the V3 Rest API? If
so, when?
Where can I find documentation on the V3 API?

Thanks,
Ethan

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: Php build pack offline version availability

James Bayer
 

did you try building it with the "cached" option as referenced in the
readme [1]:

BUNDLE_GEMFILE=cf.Gemfile bundle exec buildpack-packager cached


[1] https://github.com/cloudfoundry/php-buildpack#building
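
Putting the whole flow together, a hedged sketch of producing and uploading
an offline buildpack (the generated zip name varies by version, and
php_offline_buildpack is just an example name):

    git clone https://github.com/cloudfoundry/php-buildpack
    cd php-buildpack
    BUNDLE_GEMFILE=cf.Gemfile bundle install
    BUNDLE_GEMFILE=cf.Gemfile bundle exec buildpack-packager cached

    # upload the generated zip (name will differ) and give it position 1
    cf create-buildpack php_offline_buildpack php_buildpack-cached-v4.0.0.zip 1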

On Fri, Jul 31, 2015 at 5:25 PM, Amishi Shah (amishish) <amishish(a)cisco.com>
wrote:

Hi team,

We are a team at Cisco trying to use the PHP/HTTPD buildpack. We tried a
couple of ways:

1. Cloning the git repo, building the buildpack, uploading it, and
using it for the app.
2. Downloading the ready-made buildpack available on the GitHub
releases pages, uploading it, and using it for the app.

What we are looking for is an offline version; both of these buildpacks
connect to the internet while we are deploying the app.

The details are as below.

AMISHISH-M-91EG:webapp ashah$ cf push

Using manifest file
*/Users/ashah/workspace/Symphony/sample_httpd_app/webapp/manifest.yml*


Using stack *cflinuxfs2*...

*OK*

Creating app *sample_app* in org *admin* / space *skyfall* as *admin*...

*OK*


Using route *sample-app.203.35.248.123.xip.io
<http://sample-app.203.35.248.123.xip.io>*

Binding *sample-app.203.35.248.123.xip.io
<http://sample-app.203.35.248.123.xip.io>* to *sample_app*...

*OK*


Uploading *sample_app*...

Uploading app files from:
/Users/ashah/workspace/Symphony/sample_httpd_app/webapp

Uploading 185, 1 files

Done uploading

*OK*


Starting app *sample_app* in org *admin* / space *skyfall* as *admin*...

-----> Downloaded app package (4.0K)

-------> Buildpack version 4.0.0

Installing HTTPD

Downloaded [
https://pivotal-buildpacks.s3.amazonaws.com/php/binaries/trusty/httpd/2.4.12/httpd-2.4.12.tar.gz]
to [/tmp]

Installing PHP

PHP 5.5.27

Downloaded [
https://pivotal-buildpacks.s3.amazonaws.com/php/binaries/trusty/php/5.5.27/php-5.5.27.tar.gz]
to [/tmp]

Finished: [2015-07-31 21:28:46.871154]


-----> Uploading droplet (51M)


0 of 1 instances running, 1 down

0 of 1 instances running, 1 down

0 of 1 instances running, 1 down

0 of 1 instances running, 1 down

1 of 1 instances running


*App started*



*OK*


App *sample_app* was started using this command `$HOME/.bp/bin/start`


Showing health and status for app *sample_app* in org *admin* / space
*skyfall* as *admin*...

*OK*


*requested state:* started

*instances:* 1/1

*usage:* 64M x 1 instances

*urls:* sample-app.203.35.248.123.xip.io

*last uploaded:* Fri Jul 31 21:28:35 UTC 2015

*stack:* cflinuxfs2

*buildpack:* readymade_httpd_buildpack


*state* *since* *cpu* *memory*
*disk* *details*

*#0* running 2015-07-31 02:29:39 PM 0.0% 40.3M of 64M 136.6M of
1G


Do we have any offline PHP/HTTPD buildpack available? We want a buildpack
that is self-contained and works even without internet connectivity.


Your timely response would be much appreciated.


Thanks,

Amishi Shah

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Thank you,

James Bayer


Re: Recreating uaadb and ccdb databases

James Bayer
 

joseph, did you mean "bosh recreate" instead of "bosh restart"?

On Sat, Aug 1, 2015 at 5:28 AM, CF Runtime <cfruntime(a)gmail.com> wrote:

If you are using the default postgres job, you should just be able to
"bosh restart postgres_z1/0". This will create both the databases, but they
will not have the schemas.

The individual jobs should recreate the schemas, so you'll probably need
to "bosh restart api_z1/0" and "bosh restart uaa_z1/0".

Joseph
OSS Release Integration Team

On Sat, Aug 1, 2015 at 3:36 AM, rmi <rishi.investigate(a)gmail.com> wrote:

Hi Amit - since this is a fresh install I am just trying to recreate ccdb
and uaadb from scratch. What is the best way of deleting/redeploying my
environment? Note that I am only able to use the cf_nise installer for
this deployment. This is another reason I wanted to just recreate the dbs
if possible.

Thanks.



--
View this message in context:
http://cf-dev.70369.x6.nabble.com/cf-dev-Recreating-uaadb-and-ccdb-databases-tp1007p1011.html
Sent from the CF Dev mailing list archive at Nabble.com.
_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

--
Thank you,

James Bayer


Re: Recreating uaadb and ccdb databases

CF Runtime
 

If you are using the default postgres job, you should just be able to "bosh
restart postgres_z1/0". This will create both the databases, but they will
not have the schemas.

The individual jobs should recreate the schemas, so you'll probably need to
"bosh restart api_z1/0" and "bosh restart uaa_z1/0".

Joseph
OSS Release Integration Team

On Sat, Aug 1, 2015 at 3:36 AM, rmi <rishi.investigate(a)gmail.com> wrote:

Hi Amit - since this is a fresh install I am just trying to recreate ccdb
and uaadb from scratch. What is the best way of deleting/redeploying my
environment? Note that I am only able to use the cf_nise installer for
this deployment. This is another reason I wanted to just recreate the dbs
if possible.

Thanks.



--
View this message in context:
http://cf-dev.70369.x6.nabble.com/cf-dev-Recreating-uaadb-and-ccdb-databases-tp1007p1011.html
Sent from the CF Dev mailing list archive at Nabble.com.
_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: Recreating uaadb and ccdb databases

R M
 

Hi Amit - since this is a fresh install I am just trying to recreate ccdb and
uaadb from scratch. What is the best way of deleting/redeploying my
environment? Note that I am only able to use the cf_nise installer for
this deployment. This is another reason I wanted to just recreate the dbs if
possible.

Thanks.



--
View this message in context: http://cf-dev.70369.x6.nabble.com/cf-dev-Recreating-uaadb-and-ccdb-databases-tp1007p1011.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: Recreating uaadb and ccdb databases

Amit Kumar Gupta
 

Hi Rishi,

Are you trying to recover the data or just recreate the ccdb and uaadb
(starting from scratch with the data)? If you're willing to start from
scratch with respect to the data, it may be simplest to delete your
deployment and redeploy. If you don't want to start from scratch, then you
must have your data backed up somewhere on a persistent volume. Can you say
more about your deployment and how you configured the persistence aspect of
ccdb and uaadb?

Best,
Amit



-----
Amit, CF OSS Release Integration PM
Pivotal Software, Inc.
--
View this message in context: http://cf-dev.70369.x6.nabble.com/cf-dev-Recreating-uaadb-and-ccdb-databases-tp1007p1010.html
Sent from the CF Dev mailing list archive at Nabble.com.


Re: Garden is Moving!

James Bayer
 

thanks julz for summarizing all of this. i'm very excited that cloud
foundry will be able to use runc and contribute to the open container
initiative. by joining with the other members and working together, we'll
be able to use the same base runtime as docker, coreos and others. we'll
also preserve the flexibility to do the innovations and user experience we
want for CF users above the core container runtime. this seems like a big
win for everyone.

On Fri, Jul 31, 2015 at 3:06 PM, Deepak Vij (A) <deepak.vij(a)huawei.com>
wrote:

Hi Julz & the whole garden team, it is great to know that Garden Container
is moving towards Open-Container-Project (OCP) App-Container
specifications. Great work.

I am hoping that down the road we will also see App Container Pods
(co-locating containers) capabilities enabled as well. A pod is a list of
apps that will be launched together inside a shared execution context (a
single unit of deployment and migration, sharing IP address space, storage,
etc.). Kubernetes also supports a similar Pod concept.

Pod architecture allows me to enable design patterns such as Sidecar,
Ambassador & Adaptor. All of this is really helpful from the standpoint of
refactoring the core telecom capabilities such as vEPC (virtual Evolved
packet Core network) and many more NFV/telecom capabilities - Network
Function Virtualization.

- Deepak Vij

----------------------------------------------------------------------

Message: 1
Date: Fri, 31 Jul 2015 18:49:25 +0100
From: Julz Friedman <julz.friedman(a)gmail.com>
To: "Discussions about Cloud Foundry projects and the system overall."
<cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Garden is Moving!
Message-ID:
<
CAHfHzfOrrdEn_QBZwnoq7qQtXbBW1K2fk-NbbqgLSKaacMcPsw(a)mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi cf-dev, I'd like to discuss some exciting changes the Garden team is
planning to make in Diego's container subsystem, Garden.

Garden? What's that?

Garden is the containerisation layer used by Diego. Garden provides a
platform-neutral, lightweight container abstraction that can be backed by
multiple backends (most importantly, a Linux backend and a Windows
backend). Currently the linux backend is based on our own code which
evolved from Warden and which has been used to power Cloud Foundry for many
years. Garden enables diego to support buildpack apps and docker apps (via
the Linux backend) and windows apps (via the Windows backend).

So: What's changing?

We're planning to use runC [1] as the Linux backend for Garden.

Why?

Garden has always been an unopinionated container system - we like to have
the opinionation happen at the higher levels (i.e. in Diego). Docker, on
the other hand, is quite an opinionated container technology: it tightly
couples containerisation and the user experience (which is one of the
reasons docker is so great to use, I'm not knocking docker here!).
Recently, docker and others (including IBM and Pivotal) have come together
under the Open Container Initiative to spin out an unopinionated common
containerisation runtime, "runC", which gives us a fantastic opportunity to
be part of this community while letting us ensure we can retain the
flexibility required by our broader use cases. RunC is a reference
implementation of the Open Container spec, which means both Docker and
Cloud Foundry will be running the same code, and both Docker and Cloud
Foundry apps will be using Open Containers.

Using runC as the garden backend has two major advantages. Firstly it lets
us reuse some awesome code and be part of the Open Container community.
Secondly it means CF applications will be using not only the same kernel
primitives as docker apps (as they already are today), but also the exact
same runtime container engine. This will minimise incompatibility for our
docker lifecycle and result in a first class experience for users, as well
as letting us reuse and contribute back to a great open-source code base.
We have some remaining features in the Garden Linux backend that we'd like
to see in RunC, but we're excited to engage with the Open Container
community to close these gaps.

What about regular CF buildpack apps and the other nice features of Garden?

Moving to runC gives us all the above advantages without compromising our
ability to also deliver the buildpack-based platform-centric workflows that
make CF great. We will retain the garden abstraction to make it easy for
Diego to support both buildpack apps, windows apps and docker apps, and we
will maintain a small layer above runC to manage the containers, pull down
native warden and docker root filesystems, let us perform live upgrades and
so on.

Why not use the full docker-engine as the backend?

Docker-engine has both more capabilities than we need at the layer Garden
runs and different opinions than Cloud Foundry currently requires. This
means it's harder for us to maintain (because it's larger and does more
stuff), harder for us to contribute to (for similar reasons) and for some
of our use cases (particularly with Diego's more generic lifecycles) we'd
have to actively work around things that would be quite easy to expose if
we use runC directly (for example docker-engine intentionally doesn't
support signalling `docker exec`ed processes, which is required by
Diego[2]).

Most of the reasons you might want to use docker-engine (e.g. being able to
'docker push') make much more sense to expose at the platform level in a
multi-host environment (you want to push to the cluster, not a single host)
or need to be integrated with multi-tenancy (which again should happen at
the platform level - you need access control on storage and networks to
integrate with the rest of a multi-tenant platform). For these reasons
Cloud Foundry prefers to implement many features at the Diego layer whereas
docker-engine implements some of these capabilities at the host layer. As
the capabilities for running distributed applications in containers
continue to evolve, CF prefers the flexibility to implement the opinions of
our developers and community for areas like networking and storage even if
those may differ from other orchestration solutions like docker-engine, and
in turn Garden needs to retain that flexibility.

We also note that many new features have come to runC first (e.g. criu
snapshot restore and - importantly for us - user namespaces were first
available in runC before being added to docker-engine; at the time of
writing these are still not fully available in docker-engine). We'd like to
be able to consume new features as they come out in runC, rather than
waiting for them to make it in to docker-engine. We also hope to be
contributing new features of our own and this is much easier for us to
accomplish against the smaller surface area of runC, and within the open
context of the Open Container Initiative.

When will this happen?

Our first goal is completing the work of improving Garden's security
profile around supporting docker apps in production; we're about two weeks
out from this according to Tracker and plan to do this with the current
code. As soon as we hit this milestone we plan to shift our focus to runC.
We have an initial prototype working and will iterate quickly to bring this
to production quality and switch over when we feel confident.


I'm excited to hear the community's views and input on this, so let us know
what you think!


Thanks!

- Julz, on behalf of the Garden Team

[1]: https://github.com/opencontainers/runc

[2]: https://github.com/docker/docker/pull/9167,
https://github.com/docker/docker/pull/9402

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Thank you,

James Bayer


Php build pack offline version availability

Amishi Shah
 

Hi team,

We are a team at Cisco trying to use the PHP/HTTPD buildpack. We tried a couple of ways:

1. Cloning the git repo, building the buildpack, uploading it, and using it for the app.
2. Downloading the ready-made buildpack available on the GitHub releases pages, uploading it, and using it for the app.

What we are looking for is an offline version; both of these buildpacks connect to the internet while we are deploying the app.

The details are as below.


AMISHISH-M-91EG:webapp ashah$ cf push

Using manifest file /Users/ashah/workspace/Symphony/sample_httpd_app/webapp/manifest.yml


Using stack cflinuxfs2...

OK

Creating app sample_app in org admin / space skyfall as admin...

OK


Using route sample-app.203.35.248.123.xip.io

Binding sample-app.203.35.248.123.xip.io to sample_app...

OK


Uploading sample_app...

Uploading app files from: /Users/ashah/workspace/Symphony/sample_httpd_app/webapp

Uploading 185, 1 files

Done uploading

OK


Starting app sample_app in org admin / space skyfall as admin...

-----> Downloaded app package (4.0K)

-------> Buildpack version 4.0.0

Installing HTTPD

Downloaded [https://pivotal-buildpacks.s3.amazonaws.com/php/binaries/trusty/httpd/2.4.12/httpd-2.4.12.tar.gz] to [/tmp]

Installing PHP

PHP 5.5.27

Downloaded [https://pivotal-buildpacks.s3.amazonaws.com/php/binaries/trusty/php/5.5.27/php-5.5.27.tar.gz] to [/tmp]

Finished: [2015-07-31 21:28:46.871154]


-----> Uploading droplet (51M)


0 of 1 instances running, 1 down

0 of 1 instances running, 1 down

0 of 1 instances running, 1 down

0 of 1 instances running, 1 down

1 of 1 instances running


App started



OK


App sample_app was started using this command `$HOME/.bp/bin/start`


Showing health and status for app sample_app in org admin / space skyfall as admin...

OK


requested state: started

instances: 1/1

usage: 64M x 1 instances

urls: sample-app.203.35.248.123.xip.io

last uploaded: Fri Jul 31 21:28:35 UTC 2015

stack: cflinuxfs2

buildpack: readymade_httpd_buildpack


state since cpu memory disk details

#0 running 2015-07-31 02:29:39 PM 0.0% 40.3M of 64M 136.6M of 1G


Do we have any offline PHP/HTTPD buildpack available? We want a buildpack that is self-contained and works even without internet connectivity.


Your timely response would be much appreciated.


Thanks,

Amishi Shah


Recreating uaadb and ccdb databases

R M
 

Hello, is there a way of recreating uaadb and ccdb databases? I had
corrupted my CF install and in the process deleted both of these
databases. I wonder if there are some scripts that I can run to recreate
these databases.

Thanks.


Re: Garden is Moving!

Deepak Vij
 

Hi Julz & the whole garden team, it is great to know that Garden Container is moving towards Open-Container-Project (OCP) App-Container specifications. Great work.

I am hoping that down the road we will also see App Container Pods (co-locating containers) capabilities enabled as well. A pod is a list of apps that will be launched together inside a shared execution context (a single unit of deployment and migration, sharing IP address space, storage, etc.). Kubernetes also supports a similar Pod concept.

Pod architecture allows me to enable design patterns such as Sidecar, Ambassador & Adaptor. All of this is really helpful from the standpoint of refactoring the core telecom capabilities such as vEPC (virtual Evolved Packet Core network) and many more NFV/telecom capabilities - Network Function Virtualization.

- Deepak Vij

----------------------------------------------------------------------

Message: 1
Date: Fri, 31 Jul 2015 18:49:25 +0100
From: Julz Friedman <julz.friedman(a)gmail.com>
To: "Discussions about Cloud Foundry projects and the system overall."
<cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Garden is Moving!
Message-ID:
<CAHfHzfOrrdEn_QBZwnoq7qQtXbBW1K2fk-NbbqgLSKaacMcPsw(a)mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi cf-dev, I'd like to discuss some exciting changes the Garden team is
planning to make in Diego's container subsystem, Garden.

Garden? What's that?

Garden is the containerisation layer used by Diego. Garden provides a
platform-neutral, lightweight container abstraction that can be backed by
multiple backends (most importantly, a Linux backend and a Windows
backend). Currently the linux backend is based on our own code which
evolved from Warden and which has been used to power Cloud Foundry for many
years. Garden enables diego to support buildpack apps and docker apps (via
the Linux backend) and windows apps (via the Windows backend).

So: What's changing?

We're planning to use runC [1] as the Linux backend for Garden.

Why?

Garden has always been an unopinionated container system - we like to have
the opinionation happen at the higher levels (i.e. in Diego). Docker, on
the other hand, is quite an opinionated container technology: it tightly
couples containerisation and the user experience (which is one of the
reasons docker is so great to use, I'm not knocking docker here!).
Recently, docker and others (including IBM and Pivotal) have come together
under the Open Container Initiative to spin out an unopinionated common
containerisation runtime, "runC", which gives us a fantastic opportunity to
be part of this community while letting us ensure we can retain the
flexibility required by our broader use cases. RunC is a reference
implementation of the Open Container spec, which means both Docker and
Cloud Foundry will be running the same code, and both Docker and Cloud
Foundry apps will be using Open Containers.

Using runC as the garden backend has two major advantages. Firstly it lets
us reuse some awesome code and be part of the Open Container community.
Secondly it means CF applications will be using not only the same kernel
primitives as docker apps (as they already are today), but also the exact
same runtime container engine. This will minimise incompatibility for our
docker lifecycle and result in a first class experience for users, as well
as letting us reuse and contribute back to a great open-source code base.
We have some remaining features in the Garden Linux backend that we'd like
to see in RunC, but we're excited to engage with the Open Container
community to close these gaps.

What about regular CF buildpack apps and the other nice features of Garden?

Moving to runC gives us all the above advantages without compromising our
ability to also deliver the buildpack-based platform-centric workflows that
make CF great. We will retain the garden abstraction to make it easy for
Diego to support both buildpack apps, windows apps and docker apps, and we
will maintain a small layer above runC to manage the containers, pull down
native warden and docker root filesystems, let us perform live upgrades and
so on.

Why not use the full docker-engine as the backend?

Docker-engine has both more capabilities than we need at the layer Garden
runs and different opinions than Cloud Foundry currently requires. This
means it's harder for us to maintain (because it's larger and does more
stuff), harder for us to contribute to (for similar reasons) and for some
of our use cases (particularly with Diego's more generic lifecycles) we'd
have to actively work around things that would be quite easy to expose if
we use runC directly (for example docker-engine intentionally doesn't
support signalling `docker exec`ed processes, which is required by
Diego[2]).

Most of the reasons you might want to use docker-engine (e.g. being able to
'docker push') make much more sense to expose at the platform level in a
multi-host environment (you want to push to the cluster, not a single host)
or need to be integrated with multi-tenancy (which again should happen at
the platform level - you need access control on storage and networks to
integrate with the rest of a multi-tenant platform). For these reasons
Cloud Foundry prefers to implement many features at the Diego layer whereas
docker-engine implements some of these capabilities at the host layer. As
the capabilities for running distributed applications in containers
continue to evolve, CF prefers the flexibility to implement the opinions of
our developers and community for areas like networking and storage even if
those may differ from other orchestration solutions like docker-engine, and
in turn Garden needs to retain that flexibility.

We also note that many new features have come to runC first (e.g. criu
snapshot restore and - importantly for us - user namespaces were first
available in runC before being added to docker-engine; at the time of
writing these are still not fully available in docker-engine). We'd like to
be able to consume new features as they come out in runC, rather than
waiting for them to make it in to docker-engine. We also hope to be
contributing new features of our own and this is much easier for us to
accomplish against the smaller surface area of runC, and within the open
context of the Open Container Initiative.

When will this happen?

Our first goal is completing the work of improving Garden's security
profile around supporting docker apps in production; we're about two weeks
out from this according to Tracker and plan to do this with the current
code. As soon as we hit this milestone we plan to shift our focus to runC.
We have an initial prototype working and will iterate quickly to bring this
to production quality and switch over when we feel confident.


I'm excited to hear the community's views and input on this, so let us know
what you think!


Thanks!

- Julz, on behalf of the Garden Team

[1]: https://github.com/opencontainers/runc

[2]: https://github.com/docker/docker/pull/9167,
https://github.com/docker/docker/pull/9402


CFF Mailman Upgrade - Monday August 3, 2015 at 7 PM PT

Chip Childers <cchilders@...>
 

Hi all,

On Monday, at approximately 7 PM Pacific Time, the Linux Foundation IT team
will be performing a mailing list migration to a newer version of Mailman
and adding the HyperKitty UI for web-based access.

If you experience any issues after the migration, please contact the
support team via helpdesk(a)cloudfoundry.org

Here's what the IT team says subscribers need to know:

-- Migrating list members will happen silently.

-- The email addresses for the lists will be the same.

-- During the migration, it will take some time for DNS changes to
propagate for https://lists.cloudfoundry.org to point at the new mailman
and the new archives.

-- Since there are no DNS changes for the mail side of things, if any new
mail is posted to the list during this time, it will go to the new mailman
and all members of the list will receive it whether DNS has fully
propagated or not.

-- Once DNS is fully propagated, everyone will see the new mailman when
they visit https://lists.cloudfoundry.org. There will be no access to the
old mailman, but all of the archives will be present in the new mailman.

-- In order for folks to change their subscription settings, post comments
directly to the new archives, or moderate the lists, they will need to
create a Linux Foundation ID with the email address they used when they
signed up for the list. Most of the list members do not have a
Linux Foundation ID and will need to do this. You can still send and receive
mail from the list without setting up your LFID.

Essentially, the changeover will be seamless for the users of the list while
it is happening, and they will at some point in the near future need to go
set up an account.

Again, If you experience any issues after the migration, please contact the
support team via helpdesk(a)cloudfoundry.org

Chip Childers | VP Technology | Cloud Foundry Foundation


V3 Rest API

Ethan Vogel <evogel@...>
 

Can anyone answer these questions please:

Are there plans to change the cf command line to use the V3 Rest API? If
so, when?
Where can I find documentation on the V3 API?

Thanks,
Ethan


Re: Garden is Moving!

Christopher B Ferris <chrisfer@...>
 

awesome! thanks

Cheers,

Christopher Ferris
IBM Distinguished Engineer, CTO Open Cloud
IBM Software Group, Open Technologies
email: chrisfer(a)us.ibm.com
twitter: @christo4ferris
blog: http://thoughtsoncloud.com/index.php/author/cferris/
phone: +1 508 667 0402

On Jul 31, 2015, at 1:56 PM, Chip Childers <cchilders(a)cloudfoundry.org> wrote:

This is great news for CF and a good writeup of the reasoning. Thanks Julz!

-chip

Chip Childers | VP Technology | Cloud Foundry Foundation

On Fri, Jul 31, 2015 at 1:49 PM, Julz Friedman <julz.friedman(a)gmail.com> wrote:
Hi cf-dev, I’d like to discuss some exciting changes the Garden team is planning to make in Diego’s container subsystem, Garden.

Garden? What’s that?

Garden is the containerisation layer used by Diego. Garden provides a platform-neutral, lightweight container abstraction that can be backed by multiple backends (most importantly, a Linux backend and a Windows backend). Currently the linux backend is based on our own code which evolved from Warden and which has been used to power Cloud Foundry for many years. Garden enables diego to support buildpack apps and docker apps (via the Linux backend) and windows apps (via the Windows backend).

So: What's changing?

We're planning to use runC [1] as the Linux backend for Garden.

Why?

Garden has always been an unopinionated container system - we like to have the opinionation happen at the higher levels (i.e. in Diego). Docker, on the other hand, is quite an opinionated container technology: it tightly couples containerisation and the user experience (which is one of the reasons docker is so great to use, I’m not knocking docker here!). Recently, docker and others (including IBM and Pivotal) have come together under the Open Container Initiative to spin out an unopinionated common containerisation runtime, “runC”, which gives us a fantastic opportunity to be part of this community while letting us ensure we can retain the flexibility required by our broader use cases. RunC is a reference implementation of the Open Container spec, which means both Docker and Cloud Foundry will be running the same code, and both Docker and Cloud Foundry apps will be using Open Containers.

Using runC as the garden backend has two major advantages. Firstly it lets us reuse some awesome code and be part of the Open Container community. Secondly it means CF applications will be using not only the same kernel primitives as docker apps (as they already are today), but also the exact same runtime container engine. This will minimise incompatibility for our docker lifecycle and result in a first class experience for users, as well as letting us reuse and contribute back to a great open-source code base. We have some remaining features in the Garden Linux backend that we’d like to see in RunC, but we’re excited to engage with the Open Container community to close these gaps.

What about regular CF buildpack apps and the other nice features of Garden?

Moving to runC gives us all the above advantages without compromising our ability to also deliver the buildpack-based platform-centric workflows that make CF great. We will retain the garden abstraction to make it easy for Diego to support both buildpack apps, windows apps and docker apps, and we will maintain a small layer above runC to manage the containers, pull down native warden and docker root filesystems, let us perform live upgrades and so on.

Why not use the full docker-engine as the backend?

Docker-engine has both more capabilities than we need at the layer Garden runs and different opinions than Cloud Foundry currently requires. This means it’s harder for us to maintain (because it’s larger and does more stuff), harder for us to contribute to (for similar reasons) and for some of our use cases (particularly with Diego’s more generic lifecycles) we’d have to actively work around things that would be quite easy to expose if we use runC directly (for example docker-engine intentionally doesn’t support signalling `docker exec`ed processes, which is required by Diego[2]).

Most of the reasons you might want to use docker-engine (e.g. being able to ‘docker push’) make much more sense to expose at the platform level in a multi-host environment (you want to push to the cluster, not a single host) or need to be integrated with multi-tenancy (which again should happen at the platform level - you need access control on storage and networks to integrate with the rest of a multi-tenant platform). For these reasons Cloud Foundry prefers to implement many features at the Diego layer whereas docker-engine implements some of these capabilities at the host layer. As the capabilities for running distributed applications in containers continue to evolve, CF prefers the flexibility to implement the opinions of our developers and community for areas like networking and storage even if those may differ from other orchestration solutions like docker-engine, and in turn Garden needs to retain that flexibility.

We also note that many new features have come to runC first (e.g. criu snapshot restore and - importantly for us - user namespaces were first available in runC before being added to docker-engine; at the time of writing these are still not fully available in docker-engine). We’d like to be able to consume new features as they come out in runC, rather than waiting for them to make it in to docker-engine. We also hope to be contributing new features of our own and this is much easier for us to accomplish against the smaller surface area of runC, and within the open context of the Open Container Initiative.

When will this happen?

Our first goal is completing the work of improving Garden’s security profile around supporting docker apps in production; we're about two weeks out from this according to Tracker and plan to do this with the current code. As soon as we hit this milestone we plan to shift our focus to runC. We have an initial prototype working and will iterate quickly to bring this to production quality and switch over when we feel confident.


I’m excited to hear the community’s views and input on this, so let us know what you think!

Thanks!
- Julz, on behalf of the Garden Team

[1]: https://github.com/opencontainers/runc
[2]: https://github.com/docker/docker/pull/9167, https://github.com/docker/docker/pull/9402


_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: Garden is Moving!

Chip Childers <cchilders@...>
 

This is great news for CF and a good writeup of the reasoning. Thanks Julz!

-chip

Chip Childers | VP Technology | Cloud Foundry Foundation

On Fri, Jul 31, 2015 at 1:49 PM, Julz Friedman <julz.friedman(a)gmail.com>
wrote:

Hi cf-dev, I’d like to discuss some exciting changes the Garden team is
planning to make in Diego’s container subsystem, Garden.

Garden? What’s that?

Garden is the containerisation layer used by Diego. Garden provides a
platform-neutral, lightweight container abstraction that can be backed by
multiple backends (most importantly, a Linux backend and a Windows
backend). Currently the linux backend is based on our own code which
evolved from Warden and which has been used to power Cloud Foundry for many
years. Garden enables diego to support buildpack apps and docker apps (via
the Linux backend) and windows apps (via the Windows backend).

So: What's changing?

We're planning to use runC [1] as the Linux backend for Garden.

Why?

Garden has always been an unopinionated container system - we like to have
the opinionation happen at the higher levels (i.e. in Diego). Docker, on
the other hand, is quite an opinionated container technology: it tightly
couples containerisation and the user experience (which is one of the
reasons docker is so great to use, I’m not knocking docker here!).
Recently, docker and others (including IBM and Pivotal) have come together
under the Open Container Initiative to spin out an unopinionated common
containerisation runtime, “runC”, which gives us a fantastic opportunity to
be part of this community while letting us ensure we can retain the
flexibility required by our broader use cases. RunC is a reference
implementation of the Open Container spec, which means both Docker and
Cloud Foundry will be running the same code, and both Docker and Cloud
Foundry apps will be using Open Containers.

Using runC as the garden backend has two major advantages. Firstly it lets
us reuse some awesome code and be part of the Open Container community.
Secondly it means CF applications will be using not only the same kernel
primitives as docker apps (as they already are today), but also the exact
same runtime container engine. This will minimise incompatibility for our
docker lifecycle and result in a first class experience for users, as well
as letting us reuse and contribute back to a great open-source code base.
We have some remaining features in the Garden Linux backend that we’d like
to see in RunC, but we’re excited to engage with the Open Container
community to close these gaps.

What about regular CF buildpack apps and the other nice features of Garden?

Moving to runC gives us all the above advantages without compromising our
ability to also deliver the buildpack-based platform-centric workflows that
make CF great. We will retain the garden abstraction to make it easy for
Diego to support both buildpack apps, windows apps and docker apps, and we
will maintain a small layer above runC to manage the containers, pull down
native warden and docker root filesystems, let us perform live upgrades and
so on.

Why not use the full docker-engine as the backend?

Docker-engine has both more capabilities than we need at the layer Garden
runs and different opinions than Cloud Foundry currently requires. This
means it’s harder for us to maintain (because it’s larger and does more
stuff), harder for us to contribute to (for similar reasons) and for some
of our use cases (particularly with Diego’s more generic lifecycles) we’d
have to actively work around things that would be quite easy to expose if
we use runC directly (for example docker-engine intentionally doesn’t
support signalling `docker exec`ed processes, which is required by
Diego[2]).

Most of the reasons you might want to use docker-engine (e.g. being able
to ‘docker push’) make much more sense to expose at the platform level in a
multi-host environment (you want to push to the cluster, not a single host)
or need to be integrated with multi-tenancy (which again should happen at
the platform level - you need access control on storage and networks to
integrate with the rest of a multi-tenant platform). For these reasons
Cloud Foundry prefers to implement many features at the Diego layer whereas
docker-engine implements some of these capabilities at the host layer. As
the capabilities for running distributed applications in containers
continue to evolve, CF prefers the flexibility to implement the opinions of
our developers and community for areas like networking and storage even if
those may differ from other orchestration solutions like docker-engine, and
in turn Garden needs to retain that flexibility.

We also note that many new features have come to runC first (e.g. criu
snapshot restore and - importantly for us - user namespaces were first
available in runC before being added to docker-engine; at the time of
writing these are still not fully available in docker-engine). We’d like
to be able to consume new features as they come out in runC, rather than
waiting for them to make it in to docker-engine. We also hope to be
contributing new features of our own and this is much easier for us to
accomplish against the smaller surface area of runC, and within the open
context of the Open Container Initiative.

When will this happen?

Our first goal is completing the work of improving Garden’s security
profile around supporting docker apps in production; we're about two weeks
out from this according to Tracker and plan to do this with the current
code. As soon as we hit this milestone we plan to shift our focus to runC.
We have an initial prototype working and will iterate quickly to bring this
to production quality and switch over when we feel confident.


I’m excited to hear the community’s views and input on this, so let us
know what you think!


Thanks!

- Julz, on behalf of the Garden Team

[1]: https://github.com/opencontainers/runc

[2]: https://github.com/docker/docker/pull/9167,
https://github.com/docker/docker/pull/9402


_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Garden is Moving!

Julz Friedman
 

Hi cf-dev, I’d like to discuss some exciting changes the Garden team is
planning to make in Diego’s container subsystem, Garden.

Garden? What’s that?

Garden is the containerisation layer used by Diego. Garden provides a
platform-neutral, lightweight container abstraction that can be backed by
multiple backends (most importantly, a Linux backend and a Windows
backend). Currently the linux backend is based on our own code which
evolved from Warden and which has been used to power Cloud Foundry for many
years. Garden enables diego to support buildpack apps and docker apps (via
the Linux backend) and windows apps (via the Windows backend).

So: What's changing?

We're planning to use runC [1] as the Linux backend for Garden.
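
(For the curious, a rough sketch of the kind of low-level interface runC
exposes; hedged: the commands follow the current opencontainers/runc README
and may differ from the version Garden will target.)

    # an OCI bundle is a directory containing a rootfs plus a config.json
    mkdir -p mybundle/rootfs        # populate rootfs with a root filesystem
    cd mybundle
    runc spec                       # generate a default config.json
    runc run mycontainer            # run a container from this bundle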

Why?

Garden has always been an unopinionated container system - we like to have
the opinionation happen at the higher levels (i.e. in Diego). Docker, on
the other hand, is quite an opinionated container technology: it tightly
couples containerisation and the user experience (which is one of the
reasons docker is so great to use, I’m not knocking docker here!).
Recently, docker and others (including IBM and Pivotal) have come together
under the Open Container Initiative to spin out an unopinionated common
containerisation runtime, “runC”, which gives us a fantastic opportunity to
be part of this community while letting us ensure we can retain the
flexibility required by our broader use cases. RunC is a reference
implementation of the Open Container spec, which means both Docker and
Cloud Foundry will be running the same code, and both Docker and Cloud
Foundry apps will be using Open Containers.

Using runC as the garden backend has two major advantages. Firstly it lets
us reuse some awesome code and be part of the Open Container community.
Secondly it means CF applications will be using not only the same kernel
primitives as docker apps (as they already are today), but also the exact
same runtime container engine. This will minimise incompatibility for our
docker lifecycle and result in a first class experience for users, as well
as letting us reuse and contribute back to a great open-source code base.
We have some remaining features in the Garden Linux backend that we’d like
to see in RunC, but we’re excited to engage with the Open Container
community to close these gaps.

What about regular CF buildpack apps and the other nice features of Garden?

Moving to runC gives us all the above advantages without compromising our
ability to also deliver the buildpack-based platform-centric workflows that
make CF great. We will retain the garden abstraction to make it easy for
Diego to support both buildpack apps, windows apps and docker apps, and we
will maintain a small layer above runC to manage the containers, pull down
native warden and docker root filesystems, let us perform live upgrades and
so on.

Why not use the full docker-engine as the backend?

Docker-engine has both more capabilities than we need at the layer Garden
runs and different opinions than Cloud Foundry currently requires. This
means it’s harder for us to maintain (because it’s larger and does more
stuff), harder for us to contribute to (for similar reasons) and for some
of our use cases (particularly with Diego’s more generic lifecycles) we’d
have to actively work around things that would be quite easy to expose if
we use runC directly (for example docker-engine intentionally doesn’t
support signalling `docker exec`ed processes, which is required by
Diego[2]).

Most of the reasons you might want to use docker-engine (e.g. being able to
‘docker push’) make much more sense to expose at the platform level in a
multi-host environment (you want to push to the cluster, not a single host)
or need to be integrated with multi-tenancy (which again should happen at
the platform level - you need access control on storage and networks to
integrate with the rest of a multi-tenant platform). For these reasons
Cloud Foundry prefers to implement many features at the Diego layer whereas
docker-engine implements some of these capabilities at the host layer. As
the capabilities for running distributed applications in containers
continue to evolve, CF prefers the flexibility to implement the opinions of
our developers and community for areas like networking and storage even if
those may differ from other orchestration solutions like docker-engine, and
in turn Garden needs to retain that flexibility.

We also note that many new features have come to runC first (e.g. criu
snapshot restore and - importantly for us - user namespaces were first
available in runC before being added to docker-engine; at the time of
writing these are still not fully available in docker-engine). We’d like to
be able to consume new features as they come out in runC, rather than
waiting for them to make it in to docker-engine. We also hope to be
contributing new features of our own and this is much easier for us to
accomplish against the smaller surface area of runC, and within the open
context of the Open Container Initiative.

When will this happen?

Our first goal is completing the work of improving Garden’s security
profile around supporting docker apps in production; we're about two weeks
out from this according to Tracker and plan to do this with the current
code. As soon as we hit this milestone we plan to shift our focus to runC.
We have an initial prototype working and will iterate quickly to bring this
to production quality and switch over when we feel confident.


I’m excited to hear the community’s views and input on this, so let us know
what you think!


Thanks!

- Julz, on behalf of the Garden Team

[1]: https://github.com/opencontainers/runc

[2]: https://github.com/docker/docker/pull/9167,
https://github.com/docker/docker/pull/9402


Re: Troubleshooting tips ...

CF Runtime
 

Set CF_TRACE=true to see the exact request that results in the 404. Most
likely it is a request to api.cf.fxlab.net/v2/info. Double check that that
url does in fact return a 404.

From there, the problem will probably be in either the router or the api
instance. If you ssh onto the api instance, you can try to curl
localhost:9022/v2/info to see if it is working.
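
Concretely, a short sketch of those two checks (hostnames as given in this
thread):

    # from your workstation: trace the exact requests the CLI makes
    CF_TRACE=true cf api api.cf.fxlab.net --skip-ssl-validation

    # on the api VM (e.g. via bosh ssh): query the Cloud Controller directly
    curl -v http://localhost:9022/v2/info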

On Fri, Jul 31, 2015 at 5:55 AM, Vishwanath V <thelinuxguyis(a)yahoo.co.in>
wrote:

Hi Folks,

I see the below error when trying to connect to the api endpoint:

cf api api.cf.fxlab.net --skip-ssl-validation
Setting api endpoint to api.cf.fxlab.net...
FAILED
Server error, status code: 404, error code: 0, message:


This was working yesterday; I already checked that all the cf components
are active and running.

Need pointers on where to start the troubleshooting.

Kindly assist.

Regards,
Vish.

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Re: UAA: How to set client_credentials token grant type to not expire

Filip Hanik
 

Start a local server (./gradlew run --info)

In another console, run the following commands:

1. uaac target http://localhost:8080/uaa
2. uaac token client get admin -s adminsecret
3. uaac client add testclient --authorized_grant_types
client_credentials --access_token_validity 315360000 --authorities openid
-s testclientsecret
4. uaac token client get testclient -s testclientsecret
5. uaac token decode

The output from the last command is
jti: 7397c7c9-de08-4b33-bd6a-0d248fd983b1
sub: testclient
authorities: openid
scope: openid
client_id: testclient
cid: testclient
azp: testclient
grant_type: client_credentials
rev_sig: fbc56677
iat: 1438351964
exp: 1753711964
iss: http://localhost:8080/uaa/oauth/token
zid: uaa
aud: testclient openid

The exp time is 1753711964; that is seconds since Jan 1st, 1970, and
corresponds to July 28, 2025.
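
To double-check the arithmetic (exp = iat + access_token_validity):

    $ echo $(( 1438351964 + 315360000 ))
    1753711964
    $ date -u -d @1753711964    # GNU date; on macOS: date -u -r 1753711964
    Mon Jul 28 14:12:44 UTC 2025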

On Fri, Jul 31, 2015 at 12:57 AM, Kayode Odeyemi <dreyemi(a)gmail.com> wrote:

Filip,

Here's my client config:
useraccount
scope: clients.read oauth.approvals openid password.write tokens.read
tokens.write uaa.admin
resource_ids: none
authorized_grant_types: authorization_code client_credentials password
refresh_token
authorities: scim.read scim.userids uaa.admin uaa.resource
clients.read scim.write cloud_controller.write scim.me clients.secret
password.write clients.write openid cloud_controller.read oauth.approvals
access_token_validity: 315360000
autoapprove: true

Gotten from `uaac clients`

I really do not know what else I might be doing wrongly.

Does `test_Token_Expiry_Time()` also cover the client_credentials grant
type? I tried running the test with
`./gradlew test
-Dtest.single=org/cloudfoundry/identity/uaa/mock/token/TokenMvcMockTests`
and placed debuggers in order to view the generated expiration time.
Nothing was printed in the test results.


On Wed, Jul 29, 2015 at 6:11 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:

exp is expected to be 1753544877 when decoded. Unfortunately, this test
fails, as exp reads 1438228276

most likely your client does not have the access token validity setup
correctly. See the test case I posted that validates my statements

https://github.com/cloudfoundry/uaa/commit/f0c8ba99cf37855fec54b74c07ce19613c51d7e9#diff-f7a9f1a69eec2ce4278914f342d8a160R883


On Wed, Jul 29, 2015 at 9:57 AM, Kayode Odeyemi <dreyemi(a)gmail.com>
wrote:

Good. But my apologies. Assume:

creation time = 1438184877
access token validity (set by me) = 315360000

exp is expected to be 1753544877 when decoded. Unfortunately, this test
fails, as exp reads 1438228276

On Wed, Jul 29, 2015 at 5:43 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:

If I set the access_token_validity to 315569260, I'm expecting the
token when decoded to read exp: 315569260. If this is not, then is it
possible to set the token expiry time?

It's a little bit different.

access_token_validity is how long the token is valid for from the time
of creation. Thus we can derive:

exp (expiration time) = token creation time + access token validity

you don't get to set the expiration time, since that doesn't make sense
as the clock keeps ticking forward.

In your case, having the access token validity be 10 years achieves
exactly what you want.

Filip


On Wed, Jul 29, 2015 at 9:36 AM, Kayode Odeyemi <dreyemi(a)gmail.com>
wrote:

Thanks again Filip.

However, here's what I mean,

If I set the access_token_validity to 315569260, I'm expecting the
token when decoded to read exp: 315569260. If this is not, then is it
possible to set the token expiry time?

line 906 sets the value to 1438209609 when the token is decoded and I
believe that's what the check_token service also checks.
expirationTime*1000l occurs after the token has been decoded (whose exp
value is set to 1438209609)

Now the question is why do you have to do expirationTime*1000l since
the token when decoded originally sets this value to 1438209609
(without * 1000l)

Except I'm completely getting this all wrong?

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


Troubleshooting tips ...

Vish
 

Hi Folks,
I see the below error when trying to connect to the api endpoint:
cf api api.cf.fxlab.net --skip-ssl-validation
Setting api endpoint to api.cf.fxlab.net...
FAILED
Server error, status code: 404, error code: 0, message:


This was working yesterday; I already checked that all the cf components are active and running.
Need pointers on where to start the troubleshooting.

Kindly assist.
Regards,
Vish.


Re: CF-Abacus: incubation and inception meeting coming soon

Guillaume Berche
 

Any chance of having the Google Hangout recorded (screen + voice) so as to
enable offline replays (similar to CFAD)?

Thanks,

Guillaume.

On Fri, Jul 31, 2015 at 6:38 AM, Matt Cowger <matt(a)cowger.us> wrote:

Stormy - I will happily blog it and take pictures...but you don't want my
notes...I was always the one in college that borrowed notes.

On Thu, Jul 30, 2015 at 3:37 PM, Stormy Peters <speters(a)cloudfoundry.org>
wrote:

Will somebody be taking notes? Would somebody be willing to blog about
this afterwards?

Maybe take a group picture and summarize what was discussed?

Thanks,

Stormy


On Thu, Jul 30, 2015 at 3:31 PM, Michael Maximilien <maxim(a)us.ibm.com>
wrote:

Hi, all,

Here is pertinent information for CF-Abacus inception meeting next
week.

Invites to those interested have been sent. If you want to attend
physically, ping me or Devin from CFF on CC:, since we need to add you to
the list for the WeWork building.

---------
*Date:* Wednesday August 5th, 2015

*Time:* 9:30am - 12:30pm PDT

*Location:*
CloudFoundry Foundation Offices @ WeWork SF on Mission

WeWork
535 Mission St., *19th floor*
San Francisco, CA

*Room:* 19B

*Call info:*
IBM AT&T Conference Call
USA 888-426-6840; 215-861-6239 | Participant code: 1985291
All other countries, find number here: http://goo.gl/RnNfc1

*Hangout:* TBD
---------

Best,

------
dr.max
ibm cloud labs
silicon valley, ca
maximilien.org


*Michael Maximilien/Almaden/IBM*

07/29/2015 11:35 AM
To
"cf-dev(a)lists.cloudfoundry.org" <cf-dev(a)lists.cloudfoundry.org>
cc
Subject
Re: CF-Abacus: incubation and inception meeting coming soon




Quick update on inception meeting.

To accommodate our friends and colleagues from Europe who would like to
attend, let's plan to move the meeting to 10a to 12:30p with the option of
lunch afterwards at a nearby location in SF.

Unless I hear any objections I will send the invites to those interested
parties who have already contacted me and confirm details here.

If you want to attend (local or remote) please remember to reply to me
with email so I can add you to invite list.

Best,

dr.max
ibm cloud labs
silicon valley, ca

Sent from my iPhone

On Jul 28, 2015, at 10:15 PM, Michael Maximilien <*maxim(a)us.ibm.com*
<maxim(a)us.ibm.com>> wrote:

Hi, all,

Now that CF-Abacus is officially an incubator under the guidance of the
CFF, here are some quick updates:

1. The project's official GitHub repo moved to:

https://github.com/cloudfoundry-incubator/cf-abacus

2. We are planning an inception next week Wednesday from 2p to 5p in SF.

We invite everyone interested to take a look at the repo, provide
feedback, or better, join us at the inception meeting. The location will be
either CFF, Pivotal, or IBM. All within a few blocks in downtown SF.

We will also have Google hangout and conference call for remote
participants.

If interested, then respond to me directly so I add you to the invite
list.

Thanks and talk next week. Best,

CF-Abacus team

dr.max
ibm cloud labs
silicon valley, ca

Sent from my iPhone


_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
-- Matt

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev

8381 - 8400 of 9429