
Re: Recreating uaadb and ccdb databases

James Bayer
 

joseph, did you mean "bosh recreate" instead of "bosh restart"?

On Sat, Aug 1, 2015 at 5:28 AM, CF Runtime <cfruntime(a)gmail.com> wrote:

[quoted text of CF Runtime's reply omitted; the full message appears below]

--
Thank you,

James Bayer


Re: Recreating uaadb and ccdb databases

CF Runtime
 

If you are using the default postgres job, you should just be able to
"bosh restart postgres_z1/0". This will create both databases, but they
will not have the schemas.

The individual jobs should recreate the schemas, so you'll probably need to
"bosh restart api_z1/0" and "bosh restart uaa_z1/0".

Joseph
OSS Release Integration Team

On Sat, Aug 1, 2015 at 3:36 AM, rmi <rishi.investigate(a)gmail.com> wrote:

[quoted text of rmi's message omitted; the full message appears below]


Re: Recreating uaadb and ccdb databases

R M
 

Hi Amit - since this is a fresh install I am just trying to recreate ccdb
and uaadb from scratch. What is the best way of deleting/redeploying my
environment? Note that I am only able to use the cf_nise installer for
this deployment. This is another reason I wanted to just recreate the dbs
if possible.

Thanks.





Re: Recreating uaadb and ccdb databases

Amit Kumar Gupta
 

Hi Rishi,

Are you trying to recover the data or just recreate the ccdb and uaadb
(starting from scratch with respect to the data)? If you're willing to
start from scratch with respect to the data, it may be simplest to delete
your deployment and redeploy. If you don't want to start from scratch, then
you must have your data backed up somewhere on a persistent volume. Can you
say more about your deployment and how you configured the persistence
aspect of ccdb and uaadb?

Best,
Amit



-----
Amit, CF OSS Release Integration PM
Pivotal Software, Inc.


Re: Garden is Moving!

James Bayer
 

thanks julz for summarizing all of this. i'm very excited that cloud
foundry will be able to use runc and contribute to the open container
initiative. by joining with the other members and working together, we'll
be able to use the same base runtime as docker, coreos and others. we'll
also preserve the flexibility to do the innovations and user experience we
want for CF users above the core container runtime. this seems like a big
win for everyone.

On Fri, Jul 31, 2015 at 3:06 PM, Deepak Vij (A) <deepak.vij(a)huawei.com>
wrote:

Hi Julz & the whole garden team, it is great to know that Garden Container
is moving towards Open-Container-Project (OCP) App-Container
specifications. Great work.

I am hoping that down the road we will also see App Container Pods
(Co-locating Containers) capabilities enabled as well. A pod is a list of
apps that will be launched together inside a shared execution context (
single Unit of Deployment, migration etc. sharing IP address Space, Storage
etc.). Kubernetes also supports similar Pod concept.

Pod architecture allows me to enable design patters such as Sidecar,
Ambassador & Adaptor. All of this is really helpful from the standpoint of
refactoring the core telecom capabilities such as vEPC (virtual Evolved
packet Core network) and many more NFV/telecom capabilities - Network
Function Virtualization.

- Deepak Vij

----------------------------------------------------------------------

Message: 1
Date: Fri, 31 Jul 2015 18:49:25 +0100
From: Julz Friedman <julz.friedman(a)gmail.com>
To: "Discussions about Cloud Foundry projects and the system overall."
<cf-dev(a)lists.cloudfoundry.org>
Subject: [cf-dev] Garden is Moving!
Message-ID:
<
CAHfHzfOrrdEn_QBZwnoq7qQtXbBW1K2fk-NbbqgLSKaacMcPsw(a)mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi cf-dev, I?d like to discuss some exciting changes the Garden team is
planning to make in Diego?s container subsystem, Garden.

Garden? What?s that?

Garden is the containerisation layer used by Diego. Garden provides a
platform-neutral, lightweight container abstraction that can be backed by
multiple backends (most importantly, a Linux backend and a Windows
backend). Currently the linux backend is based on our own code which
evolved from Warden and which has been used to power Cloud Foundry for many
years. Garden enables diego to support buildpack apps and docker apps (via
the Linux backend) and windows apps (via the Windows backend).

So: What's changing?

We're planning to use runC [1] as the Linux backend for Garden.

Why?

Garden has always been an unopinionated container system - we like to have
the opinionation happen at the higher levels (i.e. in Diego). Docker, on
the other hand, is quite an opinionated container technology: it tightly
couples containerisation and the user experience (which is one of the
reasons docker is so great to use, I?m not knocking docker here!).
Recently, docker and others (including IBM and Pivotal) have come together
under the Open Container Initiative to spin out an unopinionated common
containerisation runtime, ?runC?, which gives us a fantastic opportunity to
be part of this community while letting us ensure we can retain the
flexibility required by our broader use cases. RunC is a reference
implementation of the Open Container spec, which means both Docker and
Cloud Foundry will be running the same code, and both Docker and Cloud
Foundry apps will be using Open Containers.

Using runC as the garden backend has two major advantages. Firstly it lets
us reuse some awesome code and be part of the Open Container community.
Secondly it means CF applications will be using not only the same kernel
primitives as docker apps (as they already are today), but also the exact
same runtime container engine. This will minimise incompatibility for our
docker lifecycle and result in a first class experience for users, as well
as letting us reuse and contribute back to a great open-source code base.
We have some remaining features in the Garden Linux backend that we?d like
to see in RunC, but we?re excited to engage with the Open Container
community to close these gaps.

What about regular CF buildpack apps and the other nice features of Garden?

Moving to runC gives us all the above advantages without compromising our
ability to also deliver the buildpack-based platform-centric workflows that
make CF great. We will retain the garden abstraction to make it easy for
Diego to support both buildpack apps, windows apps and docker apps, and we
will maintain a small layer above runC to manage the containers, pull down
native warden and docker root filesystems, let us perform live upgrades and
so on.

Why not use the full docker-engine as the backend?

Docker-engine has both more capabilities than we need at the layer Garden
runs and different opinions than Cloud Foundry currently requires. This
means it?s harder for us to maintain (because it?s larger and does more
stuff), harder for us to contribute to (for similar reasons) and for some
of our use cases (particularly with Diego?s more generic lifecycles) we?d
have to actively work around things that would be quite easy to expose if
we use runC directly (for example docker-engine intentionally doesn?t
support signalling `docker exec`ed processes, which is required by
Diego[2]).

Most of the reasons you might want to use docker-engine (e.g. being able to
?docker push?) make much more sense to expose at the platform level in a
multi-host environment (you want to push to the cluster, not a single host)
or need to be integrated with multi-tenancy (which again should happen at
the platform level - you need access control on storage and networks to
integrate with the rest of a multi-tenant platform). For these reasons
Cloud Foundry prefers to implement many features at the Diego layer whereas
docker-engine implements some of these capabilities at the host layer. As
the capabilities for running distributed applications in containers
continue to evolve, CF prefers the flexibility to implement the opinions of
our developers and community for areas like networking and storage even if
those may differ from other orchestration solutions like docker-engine, and
in turn Garden needs to retain that flexibility.

We also note that many new features have come to runC first (e.g. criu
snapshot restore and - importantly for us - user namespaces were first
available in runC before being added to docker-engine; at the time of
writing these are still not fully available in docker-engine). We?d like to
be able to consume new features as they come out in runC, rather than
waiting for them to make it in to docker-engine. We also hope to be
contributing new features of our own and this is much easier for us to
accomplish against the smaller surface area of runC, and within the open
context of the Open Container Initiative.

When will this happen?

Our first goal is completing the work of improving Garden?s security
profile around supporting docker apps in production, we're about two weeks
out from this according to Tracker and plan to do this with the current
code. As soon as we hit this milestone we plan to shift our focus to runC.
We have an initial prototype working and will iterate quickly to bring this
to production quality and switch over when we feel confident.


I?m excited to hear the community?s views and input on this, so let us know
what you think!


Thanks!

- Julz, on behalf the Garden Team

[1]: https://github.com/opencontainers/runc

[2]: https://github.com/docker/docker/pull/9167,
https://github.com/docker/docker/pull/9402

_______________________________________________
cf-dev mailing list
cf-dev(a)lists.cloudfoundry.org
https://lists.cloudfoundry.org/mailman/listinfo/cf-dev


--
Thank you,

James Bayer


Php build pack offline version availability

Amishi Shah
 

Hi team,

We are a team at Cisco trying to use the buildpack for PHP/HTTPD. We tried a couple of ways:

1. Cloning the git repo, creating the buildpack, uploading it, and using it for the app.
2. Downloading the ready-made buildpack available on the GitHub releases page, uploading it, and using it for the app.

What we are looking for is an offline version; both of these buildpacks connect to the internet while we are deploying the app.

The details are as below.


AMISHISH-M-91EG:webapp ashah$ cf push

Using manifest file /Users/ashah/workspace/Symphony/sample_httpd_app/webapp/manifest.yml

Using stack cflinuxfs2...
OK
Creating app sample_app in org admin / space skyfall as admin...
OK

Using route sample-app.203.35.248.123.xip.io
Binding sample-app.203.35.248.123.xip.io to sample_app...
OK

Uploading sample_app...
Uploading app files from: /Users/ashah/workspace/Symphony/sample_httpd_app/webapp
Uploading 185, 1 files
Done uploading
OK

Starting app sample_app in org admin / space skyfall as admin...
-----> Downloaded app package (4.0K)
-------> Buildpack version 4.0.0
Installing HTTPD
Downloaded [https://pivotal-buildpacks.s3.amazonaws.com/php/binaries/trusty/httpd/2.4.12/httpd-2.4.12.tar.gz] to [/tmp]
Installing PHP
PHP 5.5.27
Downloaded [https://pivotal-buildpacks.s3.amazonaws.com/php/binaries/trusty/php/5.5.27/php-5.5.27.tar.gz] to [/tmp]
Finished: [2015-07-31 21:28:46.871154]

-----> Uploading droplet (51M)

0 of 1 instances running, 1 down
0 of 1 instances running, 1 down
0 of 1 instances running, 1 down
0 of 1 instances running, 1 down
1 of 1 instances running

App started

OK

App sample_app was started using this command `$HOME/.bp/bin/start`

Showing health and status for app sample_app in org admin / space skyfall as admin...
OK

requested state: started
instances: 1/1
usage: 64M x 1 instances
urls: sample-app.203.35.248.123.xip.io
last uploaded: Fri Jul 31 21:28:35 UTC 2015
stack: cflinuxfs2
buildpack: readymade_httpd_buildpack

     state     since                     cpu    memory         disk           details
#0   running   2015-07-31 02:29:39 PM    0.0%   40.3M of 64M   136.6M of 1G


Do we have an offline PHP/HTTPD buildpack available? We want a buildpack that is self-contained and works even without internet connectivity.
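
A hedged sketch of how a cached (offline) buildpack is typically produced:
the php-buildpack repo ships a packager that can bundle every dependency
from its manifest into the zip, so staging needs no internet access. The
exact packager invocation differs between buildpack versions, and the zip
filename and buildpack name below are illustrative:

    git clone https://github.com/cloudfoundry/php-buildpack
    cd php-buildpack
    BUNDLE_GEMFILE=cf.Gemfile bundle install
    BUNDLE_GEMFILE=cf.Gemfile bundle exec buildpack-packager --cached   # bundles dependencies into the zip
    cf create-buildpack php_offline_buildpack php-buildpack-cached-v4.0.0.zip 1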


Your timely response will be really appreciated.


Thanks,

Amishi Shah


Recreating uaadb and ccdb databases

R M
 

Hello, is there a way of recreating uaadb and ccdb databases? I had
corrupted my CF install and in the process deleted both of these
databases. I wonder if there are some scripts that I can run to recreate
these databases.

Thanks.


Re: Garden is Moving!

Deepak Vij
 

Hi Julz & the whole garden team, it is great to know that Garden Container is moving towards Open-Container-Project (OCP) App-Container specifications. Great work.

I am hoping that down the road we will also see App Container Pods (Co-locating Containers) capabilities enabled as well. A pod is a list of apps that will be launched together inside a shared execution context ( single Unit of Deployment, migration etc. sharing IP address Space, Storage etc.). Kubernetes also supports similar Pod concept.

Pod architecture allows me to enable design patters such as Sidecar, Ambassador & Adaptor. All of this is really helpful from the standpoint of refactoring the core telecom capabilities such as vEPC (virtual Evolved packet Core network) and many more NFV/telecom capabilities - Network Function Virtualization.

- Deepak Vij

[digest copy of the original "Garden is Moving!" message omitted; the full message appears below]


CFF Mailman Upgrade - Monday August 3, 2015 at 7 PM PT

Chip Childers <cchilders@...>
 

Hi all,

On Monday, at approximately 7 PM Pacific Time, the Linux Foundation IT team
will be performing a mailing list migration to a newer version of Mailman
and adding the HyperKitty UI for web-based access.

If you experience any issues after the migration, please contact the
support team via helpdesk(a)cloudfoundry.org

Here's what the IT team says subscribers need to know:

-- Migrating list members will happen silently.

-- The email addresses for the lists will be the same.

-- During the migration, it will take some time for DNS changes to
propagate for https://lists.cloudfoundry.org to point at the new mailman
and the new archives.

-- Since there are no DNS changes for the mail side of things, any new mail
posted to the list during this time will go to the new Mailman, and all
members of the list will receive it whether DNS has fully propagated or not.

-- Once DNS is fully propagated, everyone will see the new Mailman when they
visit https://lists.cloudfoundry.org. There will be no access to the old
Mailman, but all of the archives will be present in the new Mailman.

-- In order for folks to change their subscription settings, post comments
directly to the new archives, or moderate the lists, they will need to
create a Linux Foundation ID with the email address they used when they
signed up for the list. Most of the list members do not have a
Linux Foundation ID and will need to do this. You can still send and receive
mail from the list without setting up your LFID.

Essentially, the changeover will be seamless for the users of the list while
it is happening, and they will at some point in the near future need to go
set up an account.

Again, if you experience any issues after the migration, please contact the
support team via helpdesk(a)cloudfoundry.org

Chip Childers | VP Technology | Cloud Foundry Foundation


V3 Rest API

Ethan Vogel <evogel@...>
 

Can anyone answer these questions please:

Are there plans to change the cf command line to use the V3 Rest API? If
so, when?
Where can I find documentation on the V3 API?

Thanks,
Ethan


Re: Garden is Moving!

Christopher B Ferris <chrisfer@...>
 

awesome! thanks

Cheers,

Christopher Ferris
IBM Distinguished Engineer, CTO Open Cloud
IBM Software Group, Open Technologies
email: chrisfer(a)us.ibm.com
twitter: @christo4ferris
blog: http://thoughtsoncloud.com/index.php/author/cferris/
phone: +1 508 667 0402

On Jul 31, 2015, at 1:56 PM, Chip Childers <cchilders(a)cloudfoundry.org> wrote:

[quoted text of Chip Childers' reply and the original "Garden is Moving!" message omitted; both appear in full below]


Re: Garden is Moving!

Chip Childers <cchilders@...>
 

This is great news for CF and a good writeup of the reasoning. Thanks Julz!

-chip

Chip Childers | VP Technology | Cloud Foundry Foundation

On Fri, Jul 31, 2015 at 1:49 PM, Julz Friedman <julz.friedman(a)gmail.com>
wrote:

[quoted text of the original "Garden is Moving!" message omitted; the full message appears below]


Garden is Moving!

Julz Friedman
 

Hi cf-dev, I’d like to discuss some exciting changes the Garden team is
planning to make in Diego’s container subsystem, Garden.

Garden? What’s that?

Garden is the containerisation layer used by Diego. Garden provides a
platform-neutral, lightweight container abstraction that can be backed by
multiple backends (most importantly, a Linux backend and a Windows
backend). Currently the linux backend is based on our own code which
evolved from Warden and which has been used to power Cloud Foundry for many
years. Garden enables diego to support buildpack apps and docker apps (via
the Linux backend) and windows apps (via the Windows backend).

So: What's changing?

We're planning to use runC [1] as the Linux backend for Garden.

Why?

Garden has always been an unopinionated container system - we like to have
the opinionation happen at the higher levels (i.e. in Diego). Docker, on
the other hand, is quite an opinionated container technology: it tightly
couples containerisation and the user experience (which is one of the
reasons docker is so great to use, I’m not knocking docker here!).
Recently, docker and others (including IBM and Pivotal) have come together
under the Open Container Initiative to spin out an unopinionated common
containerisation runtime, “runC”, which gives us a fantastic opportunity to
be part of this community while letting us ensure we can retain the
flexibility required by our broader use cases. RunC is a reference
implementation of the Open Container spec, which means both Docker and
Cloud Foundry will be running the same code, and both Docker and Cloud
Foundry apps will be using Open Containers.
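
For readers new to runC, a minimal sketch of driving it directly; the
subcommand names below follow current runc releases and may differ in
early versions, and "demo" is just an illustrative container name:

    mkdir -p bundle/rootfs && cd bundle
    # populate rootfs/ with any root filesystem, e.g. an exported docker image
    runc spec        # writes a default config.json for this bundle
    runc run demo    # starts a container named "demo" from the bundle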

Using runC as the garden backend has two major advantages. Firstly it lets
us reuse some awesome code and be part of the Open Container community.
Secondly it means CF applications will be using not only the same kernel
primitives as docker apps (as they already are today), but also the exact
same runtime container engine. This will minimise incompatibility for our
docker lifecycle and result in a first class experience for users, as well
as letting us reuse and contribute back to a great open-source code base.
We have some remaining features in the Garden Linux backend that we’d like
to see in RunC, but we’re excited to engage with the Open Container
community to close these gaps.

What about regular CF buildpack apps and the other nice features of Garden?

Moving to runC gives us all the above advantages without compromising our
ability to also deliver the buildpack-based platform-centric workflows that
make CF great. We will retain the garden abstraction to make it easy for
Diego to support buildpack apps, windows apps, and docker apps, and we
will maintain a small layer above runC to manage the containers, pull down
native warden and docker root filesystems, let us perform live upgrades and
so on.

Why not use the full docker-engine as the backend?

Docker-engine has both more capabilities than we need at the layer Garden
runs and different opinions than Cloud Foundry currently requires. This
means it’s harder for us to maintain (because it’s larger and does more
stuff), harder for us to contribute to (for similar reasons) and for some
of our use cases (particularly with Diego’s more generic lifecycles) we’d
have to actively work around things that would be quite easy to expose if
we use runC directly (for example docker-engine intentionally doesn’t
support signalling `docker exec`ed processes, which is required by
Diego[2]).

Most of the reasons you might want to use docker-engine (e.g. being able to
‘docker push’) make much more sense to expose at the platform level in a
multi-host environment (you want to push to the cluster, not a single host)
or need to be integrated with multi-tenancy (which again should happen at
the platform level - you need access control on storage and networks to
integrate with the rest of a multi-tenant platform). For these reasons
Cloud Foundry prefers to implement many features at the Diego layer whereas
docker-engine implements some of these capabilities at the host layer. As
the capabilities for running distributed applications in containers
continue to evolve, CF prefers the flexibility to implement the opinions of
our developers and community for areas like networking and storage even if
those may differ from other orchestration solutions like docker-engine, and
in turn Garden needs to retain that flexibility.

We also note that many new features have come to runC first (e.g. criu
snapshot restore and - importantly for us - user namespaces were first
available in runC before being added to docker-engine; at the time of
writing these are still not fully available in docker-engine). We’d like to
be able to consume new features as they come out in runC, rather than
waiting for them to make it in to docker-engine. We also hope to be
contributing new features of our own and this is much easier for us to
accomplish against the smaller surface area of runC, and within the open
context of the Open Container Initiative.

When will this happen?

Our first goal is completing the work of improving Garden’s security
profile around supporting docker apps in production; we're about two weeks
out from this according to Tracker, and plan to do this with the current
code. As soon as we hit this milestone we plan to shift our focus to runC.
We have an initial prototype working and will iterate quickly to bring this
to production quality and switch over when we feel confident.


I’m excited to hear the community’s views and input on this, so let us know
what you think!


Thanks!

- Julz, on behalf of the Garden Team

[1]: https://github.com/opencontainers/runc

[2]: https://github.com/docker/docker/pull/9167,
https://github.com/docker/docker/pull/9402


Re: Troubleshooting tips ...

CF Runtime
 

Set CF_TRACE=true to see the exact request that results in the 404. Most
likely it is a request to api.cf.fxlab.net/v2/info. Double check that that
url does in fact return a 404.

From there, the problem will probably be in either the router or the api
instance. If you ssh onto the api instance, you can try to curl
localhost:9022/v2/info to see if it is working.
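
A compact sketch of those checks, using the hostname and port from this
thread:

    CF_TRACE=true cf api api.cf.fxlab.net --skip-ssl-validation   # shows the exact failing request
    curl -vk https://api.cf.fxlab.net/v2/info                     # through the router
    # after ssh'ing onto the api instance, bypass the router:
    curl -v http://localhost:9022/v2/info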

On Fri, Jul 31, 2015 at 5:55 AM, Vishwanath V <thelinuxguyis(a)yahoo.co.in>
wrote:

[quoted text of the original "Troubleshooting tips ..." message omitted; the full message appears below]


Re: UAA: How to set client_credentials token grant type to not expire

Filip Hanik
 

Start a local server (./gradlew run --info)

In another console, run the following commands:

1. uaac target http://localhost:8080/uaa
2. uaac token client get admin -s adminsecret
3. uaac client add testclient --authorized_grant_types client_credentials --access_token_validity 315360000 --authorities openid -s testclientsecret
4. uaac token client get testclient -s testclientsecret
5. uaac token decode

The output from the last command is
jti: 7397c7c9-de08-4b33-bd6a-0d248fd983b1
sub: testclient
authorities: openid
scope: openid
client_id: testclient
cid: testclient
azp: testclient
grant_type: client_credentials
rev_sig: fbc56677
iat: 1438351964
exp: 1753711964
iss: http://localhost:8080/uaa/oauth/token
zid: uaa
aud: testclient openid

The exp time is 1753711964; that is seconds since Jan 1st, 1970, and
corresponds to July 28, 2025.
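
A quick check of that arithmetic (exp = iat + access_token_validity) using
the values from the decoded token above:

    echo $(( 1438351964 + 315360000 ))   # prints 1753711964
    date -u -d @1753711964               # GNU date (use `date -u -r` on BSD/macOS); a day in July 2025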

On Fri, Jul 31, 2015 at 12:57 AM, Kayode Odeyemi <dreyemi(a)gmail.com> wrote:

Filip,

Here's my client config:
useraccount
scope: clients.read oauth.approvals openid password.write tokens.read
tokens.write uaa.admin
resource_ids: none
authorized_grant_types: authorization_code client_credentials password
refresh_token
authorities: scim.read scim.userids uaa.admin uaa.resource
clients.read scim.write cloud_controller.write scim.me clients.secret
password.write clients.write openid cloud_controller.read oauth.approvals
access_token_validity: 315360000
autoapprove: true

Gotten from `uaac clients`

I really do not know what else I might be doing wrongly.

Does `test_Token_Expiry_Time()` also cover the client_credentials grant
type? I tried running the test with
`./gradlew test
-Dtest.single=org/cloudfoundry/identity/uaa/mock/token/TokenMvcMockTests`
and placed debuggers in order to view the generated expiration time.
Nothing was printed in the test results.


On Wed, Jul 29, 2015 at 6:11 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:

exp is expected to be 1753544877 when decoded. Unfortunately, this test
fails, as exp reads 1438228276

most likely your client does not have the access token validity setup
correctly. See the test case I posted that validates my statements

https://github.com/cloudfoundry/uaa/commit/f0c8ba99cf37855fec54b74c07ce19613c51d7e9#diff-f7a9f1a69eec2ce4278914f342d8a160R883


On Wed, Jul 29, 2015 at 9:57 AM, Kayode Odeyemi <dreyemi(a)gmail.com>
wrote:

Good. But my apologies. Assume:

creation time = 1438184877
access token validity (set by me) = 315360000

exp is expected to be 1753544877 when decoded. Unfortunately, this test
fails, as exp reads 1438228276

On Wed, Jul 29, 2015 at 5:43 PM, Filip Hanik <fhanik(a)pivotal.io> wrote:

If I set the access_token_validity to 315569260, I'm expecting the
token when decoded to read exp: 315569260. If this is not, then is it
possible to set the token expiry time?

It's a little bit different.

access_token_validity is how long the token is valid for from the time
of creation. thus we can derive

exp (expiration time) = token creation time + access token validity

you don't get to set the expiration time, since that doesn't make sense
as the clock keeps ticking forward.

in your case, having access token validity be 10 years, achieves
exactly what you want

Filip


On Wed, Jul 29, 2015 at 9:36 AM, Kayode Odeyemi <dreyemi(a)gmail.com>
wrote:

Thanks again Filip.

However, here's what I mean,

If I set the access_token_validity to 315569260, I'm expecting the
token when decoded to read exp: 315569260. If this is not, then is it
possible to set the token expiry time?

line 906 sets the value to 1438209609 when the token is decoded and I
believe that's what the check_token service also checks.
expirationTime*1000l occurs after the token has been decoded (whose exp
value is set to 1438209609)

Now the question is why you have to do expirationTime*1000l, since the
token when decoded originally sets this value to 1438209609
(without * 1000l).

Except I'm completely getting this all wrong?



Troubleshooting tips ...

Vish
 

Hi Folks,
I see the below error when trying to connect to the api endpoint:
cf api api.cf.fxlab.net --skip-ssl-validation
Setting api endpoint to api.cf.fxlab.net...
FAILED
Server error, status code: 404, error code: 0, message:


This was working yesterday; I already checked that all the cf components are active and running.
Need pointers on where to start the troubleshooting.

Kindly assist.
Regards,
Vish.


Re: CF-Abacus: incubation and inception meeting coming soon

Guillaume Berche
 

Any chance of having the Google hangout recorded (screen + voice) so as to
enable offline replays (similar to CFAD)?

Thanks,

Guillaume.

On Fri, Jul 31, 2015 at 6:38 AM, Matt Cowger <matt(a)cowger.us> wrote:

Stormy - I will happily blog it and take pictures...but you don't want my
notes...I was always the one in college that borrowed notes.

On Thu, Jul 30, 2015 at 3:37 PM, Stormy Peters <speters(a)cloudfoundry.org>
wrote:

Will somebody be taking notes? Would somebody be willing to blog about
this afterwards?

Maybe take a group picture and summarize what was discussed?

Thanks,

Stormy


On Thu, Jul 30, 2015 at 3:31 PM, Michael Maximilien <maxim(a)us.ibm.com>
wrote:

Hi, all,

Here is pertinent information for CF-Abacus inception meeting next
week.

Invites to those interested have been sent. If you want to attend
physically, ping me or Devin from CFF on CC:, since we need to add you to
the list for the WeWork building.

---------
Date: Wednesday August 5th, 2015

Time: 9:30am - 12:30pm PDT

Location:
CloudFoundry Foundation Offices @ WeWork SF on Mission

WeWork
535 Mission St., 19th floor
San Francisco, CA

Room: 19B

Call info:
IBM AT&T Conference Call
USA 888-426-6840; 215-861-6239 | Participant code: 1985291
All other countries, find number here: http://goo.gl/RnNfc1

Hangout: TBD
---------

Best,

------
dr.max
ibm cloud labs
silicon valley, ca
maximilien.org


Michael Maximilien/Almaden/IBM
07/29/2015 11:35 AM
To: "cf-dev(a)lists.cloudfoundry.org" <cf-dev(a)lists.cloudfoundry.org>
Subject: Re: CF-Abacus: incubation and inception meeting coming soon




Quick update on inception meeting.

To accommodate our friends and colleagues from Europe who would like to
attend, let's plan to move the meeting to 10a to 12:30p with the option of
lunch after at nearby location in SF.

Unless I hear any objections I will send the invites to those interested
parties who have already contacted me and confirm details here.

If you want to attend (local or remote) please remember to reply to me
with email so I can add you to invite list.

Best,

dr.max
ibm cloud labs
silicon valley, ca

Sent from my iPhone

On Jul 28, 2015, at 10:15 PM, Michael Maximilien <maxim(a)us.ibm.com> wrote:

Hi, all,

Now that CF-Abacus is officially an incubator under the guidance of the
CFF, here are some quick updates:

1. The project official github moved to:

https://github.com/cloudfoundry-incubator/cf-abacus

2. We are planning an inception next week Wednesday from 2p to 5p in SF.

We invite everyone interested to take a look at the repo, provide
feedback, or better, join us at the inception meeting. The location will be
either CFF, Pivotal, or IBM. All within a few blocks in downtown SF.

We will also have Google hangout and conference call for remote
participants.

If interested, then respond to me directly so I add you to the invite
list.

Thanks and talk next week. Best,

CF-Abacus team

dr.max
ibm cloud labs
silicon valley, ca

Sent from my iPhone




--
-- Matt



Re: Default cgroup CPU share

Will Pragnell <wpragnell@...>
 

In case it's not clear, shares are not dynamically reallocated to apps when
new apps are deployed. So in the example from the original email, if app #1
initially has N shares, it will still have N shares after app #2 is
deployed (and app #2 will also have N shares, given it has the same amount
of memory). This ties in with Matthew's point that there's no overall limit
to the number of shares.

I'm afraid I'm not quite sure what the absolute share values are and how
they're calculated relative to the memory amount.

On 30 July 2015 at 18:10, Matthew Sykes <matthew.sykes(a)gmail.com> wrote:

The old vcap-dev mailing list had a number of exchanges around this topic
that you might want to look at.

The basic gist is that linux gives processes that are not associated with
a cgroup a cpu share of 1024. That means that the code that runs the DEA
and all of the linux daemons that make things go will get that share.

When applications are placed on a DEA, the containers they run in are
associated with a cpu share that is proportional to the amount of memory
requested. If you request a lot of memory per app instance, you'll have a
high cpu share; if you request a little memory per app instance, you'll
have a low cpu share.

The cpu share values associated with the container cgroups will never be
allowed to exceed 1024 (to prevent applications from adversely impacting
the DEA processes).

These cpu share values really only start to impact things when there's
competition for the cpu. When that happens, processes in a cgroup that is
associated with higher shares will get more cpu than those with lower
shares.

There is no "limit" to the number of shares - they're treated as relative
values when the scheduler needs to make a choice. The goal is that, given
two processes A and B, if process A has a share weight that is twice that
of process B and both processes are cpu bound, process A will get twice as
much cpu as process B.

For a more complete understanding, you should read the documentation in
the linux tree for the scheduler.
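
A back-of-the-envelope sketch of the proportional mapping described above,
assuming the linear memory-to-shares scaling from the docs snippet quoted
below (8G maps to the 1024-share cap, with Diego's floor of 10); the
formula is an inference from that description, not an authoritative
implementation:

    shares() {
      mem_mb=$1
      s=$(( mem_mb * 1024 / 8192 ))   # linear: 8192 MB maps to 1024 shares
      [ "$s" -gt 1024 ] && s=1024     # never exceed the host default of 1024
      [ "$s" -lt 10 ] && s=10         # Diego's per-instance minimum
      echo "$s"
    }
    shares 512    # -> 64
    shares 1024   # -> 128
    shares 8192   # -> 1024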

On Tue, Jul 28, 2015 at 8:11 PM, John Wong <gokoproject(a)gmail.com> wrote:

I am reading
https://docs.cloudfoundry.org/concepts/architecture/warden.html#cpu and
it said:

If B is idle, A may receive up to all the CPU. Shares per cgroup range
from 2 to 1024, with 1024 the default. Both Diego apps and DEA apps scale
the number of allocated shares linearly with the amount of memory, with an
app instance requesting 8G of memory getting the upper limit of 1024
shares. Diego also guarantees a minimum of 10 shares per app instance.

So 1024 is the share every app gets by default?

Say I start with an empty DEA.


APP #1: 1G, shares = 1024?

APP #2 added, 1G: shares =? What happens to APP #1?

APP #3 added, 512MB: shares =? What happens to APP #1 & APP #2?

APP #4 added, 8GB: now what happens?


I am assuming their usage is all nearly idle. What is the total number of
shares for an N-core DEA? Also, are the shares dynamic? In the meantime I
will try to understand how CPU usage is shared in cgroups relative to
other resources.


Thanks.


John



--
Matthew Sykes
matthew.sykes(a)gmail.com



Re: App autosleep support

Guillaume Berche
 

Thanks Gwenn and James for your feedback. Responses inline below

On Fri, Jul 31, 2015 at 7:57 AM, James Bayer wrote:


you should consider rescheduling when moving out of the dormant state
(which may take awhile to send the container image to a new host), then you
have to account for resource reservations on the Cells anyway in case they
all wake up. one possibility is the common case to have the container image
pre-staged on a host and the Cell typically would have enough resources
available. in the case where it doesn't, then you reschedule and hold the
requests (up to a max # of requests for that app) longer. has some
interesting potential for DOS if not constrained.
I would think that autosleep and auto-wakeup only make sense for apps that
can tolerate their traffic being held up for a while (i.e. the time to
schedule the container image on a host), or alternatively can tolerate
their traffic transiently returning a 504 status code when waking up. Apps
that require better availability and performance would be OK to pay for 2
permanent instances.


the use of a route-service for implementation is an interesting idea, but
it does mean that every request for the app needs to go through a route
service even when the app is not sleeping, so i could also see other
alternative designs that stay out of the request path unless the app is
dormant. maybe that's something the system could do (after inactivity
period bind the app to a route-service) and when leaving the dormant state,
unbind the app from the route service.
great idea, thanks! This implies the autosleep service would use some other
way to capture the incoming traffic signal (e.g. consume metron gorouter
metrics for the app) so as to measure inactivity.
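
A purely hypothetical sketch of that bind-on-idle flow, assuming the
in-progress route-service work lands with CLI commands along the lines of
bind-route-service / unbind-route-service (the app, domain, and service
instance names here are illustrative, not an agreed API):

    # once the app has been idle past the threshold:
    cf stop sleepy-app
    cf bind-route-service example.com --hostname sleepy-app autosleep-svc
    # on the first request seen by the route service:
    cf start sleepy-app
    cf unbind-route-service example.com --hostname sleepy-app autosleep-svc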

it's probably most important to define "the what and why" of this feature
first, and then we can ask the routing eng team if they have ideas on how
to implement or if it makes sense as existing points of extension like a
service and the in-progress route service.
Sounds good. Is the autosleep proposal document a good place to start this
"what and why" definition? Should I make the document writable by anyone,
or is responding to incoming write access requests sufficient?

Guillaume.



On Thu, Jul 30, 2015 at 5:58 PM, Gwenn Etourneau wrote:

For the autosleep feature, why not - but again, only for non-prod applications.

In my previous company, for the DEV environment we stopped applications
that had not been updated for a month, with some exceptions. We considered
that DEV is for active development. It was just a batch script looking in
the CCDB and calling the cf CLI to stop apps.






On Thu, Jul 30, 2015 at 7:46 PM, Guillaume Berche wrote:

Hi,

I wonder if there are plans to implement an auto-sleep behavior in
cloudfoundry, in which inactive apps would be automatically stopped after a
max inactivity threshold, and automatically restarted upon arrival of
traffic on their routes. Similar to Google App Engine's default behavior [1].

I did not find mentions of this yet in mailing lists and trackers.

We feel at Orange that such feature can improve the density for some of
our non-prod use-cases (with environmental and financial benefits).

I'd like to know if someone in the community already worked on such
feature or would be interested in collaborating on an opensource
implementation.

I drafted some specs for a java-based implementation we're planning to
work on [2]. I'd love to hear feedbacks and suggestions on this.

Thanks in advance,

Guillaume.

[1] https://cloud.google.com/appengine/docs/java/modules/
[2]
https://docs.google.com/document/d/1tMhIBX3tw7kPEOMCzKhUgmtmr26GVxyXwUTwMO71THI/edit#



--
Thank you,

James Bayer



Re: How long will events be kept?

Gwenn Etourneau
 

Events are purged based on values set in your deployment manifest:

cc.app_events.cutoff_age_in_days:
  description: "How old an app event should stay in cloud controller database before being cleaned up"
  default: 31
cc.app_usage_events.cutoff_age_in_days:
  description: "How old an app usage event should stay in cloud controller database before being cleaned up"
  default: 31
cc.audit_events.cutoff_age_in_days:
  description: "How old an audit event should stay in cloud controller database before being cleaned up"
  default: 31
cc.failed_jobs.cutoff_age_in_days:
  description: "How old a failed job should stay in cloud controller database before being cleaned up"
  default: 31
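
While events are still inside the cutoff window, they can be pulled through
the documented API (see the apidocs link quoted below), for example with cf
curl; the filter values here are illustrative:

    cf curl "/v2/events?q=type:audit.app.update&order-direction=desc&results-per-page=5"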

On Fri, Jul 31, 2015 at 3:54 PM, Guangcai Wang <guangcai.wang(a)gmail.com>
wrote:

Hi all,

Recently, I have been playing with the Events API [1]. I am wondering if
events are kept forever or are purged automatically based on some policy.
Who can share some details?

Meanwhile, I found there is no audit.app.stop event. It was
audit.app.update when I stopped the app. Is it a code bug or a
documentation bug?
{
  "metadata": {
    "guid": "ff81faa1-ce42-415d-a482-defc27524ef4",
    "url": "/v2/events/ff81faa1-ce42-415d-a482-defc27524ef4",
    "created_at": "2015-07-31T05:27:18Z",
    "updated_at": null
  },
  "entity": {
    "type": "audit.app.update",
    "actor": "7d85d3d1-9d23-4ba4-8908-7f634f37d0d4",
    "actor_type": "user",
    "actor_name": "admin",
    "actee": "b4953111-9913-4fbf-835a-a6f618c6a59d",
    "actee_type": "app",
    "actee_name": "simple-java",
    "timestamp": "2015-07-31T05:27:18Z",
    "metadata": {
      "request": {
        "state": "STOPPED"
      }
    },
    "space_guid": "d9e4eb09-6b31-401b-87ea-2305364f7a1a",
    "organization_guid": "79f25032-0f56-4c7c-86cc-d5c6a67ec300"
  }
},




[1] http://apidocs.cloudfoundry.org/214/events/list_app_update_events.html


