
Re: CVE-2015-1834 CC Path Traversal vulnerability

Dieu Cao <dcao@...>
 

Yes, that's the correct commit to cherry-pick for the CC path traversal
vulnerability.

-Dieu
CF Runtime PM

On Tue, May 26, 2015 at 12:30 AM, nota-ja <dev(a)nota.m001.jp> wrote:

I understand the CFF strongly recommends upgrading to v208 or later, but
for those (including us) who cannot immediately upgrade, I want to know
whether there is a workaround for this vulnerability.

I've found a commit which seems related to this vulnerability:

https://github.com/cloudfoundry/cloud_controller_ng/commit/5257a8af6990e71cd1e34ae8978dfe4773b32826

Would cherry-picking this commit be a workaround, or do we need other
commits as well?

Thanks in advance.







Re: CVE-2015-1834 CC Path Traversal vulnerability

Noburou TANIGUCHI
 

I understand the CFF strongly recommends upgrading to v208 or later, but for
those (including us) who cannot immediately upgrade, I want to know whether
there is a workaround for this vulnerability.

I've found a commit which seems related to this vulnerability:
https://github.com/cloudfoundry/cloud_controller_ng/commit/5257a8af6990e71cd1e34ae8978dfe4773b32826

Would cherry-picking this commit be a workaround, or do we need other
commits as well?

Thanks in advance.







Re: Doppler zoning query

Erik Jasiak <ejasiak@...>
 

Hi John,

I'll be working on this with engineering in the morning; thanks for the
details thus far.

This is puzzling: Metrons do not route traffic to dopplers outside
their zone today. If all your app instances are spread evenly, and all are
serving an equal amount of requests, then I would expect no
major variability in Doppler load either.

For completeness, what version of CF are you running? I assume your
configurations for all dopplers are roughly the same? All app instances per
AZ are serving an equal number of requests?

Thanks,
Erik Jasiak

On Monday, May 25, 2015, john mcteague <john.mcteague(a)gmail.com> wrote:

Correct, thanks.

On Mon, May 25, 2015 at 12:01 AM, James Bayer <jbayer(a)pivotal.io> wrote:

ok thanks for the extra detail.

to confirm, during the load test, the http traffic is being routed
through zones 4 and 5 app instances on DEAs in a balanced way. however the
dopplers associated with zone 4 / 5 are getting a very small amount of load
sent their way. is that right?


On Sun, May 24, 2015 at 3:45 PM, john mcteague <john.mcteague(a)gmail.com> wrote:

I am seeing logs from zones 4 and 5 when tailing the logs (cf logs
hello-world | grep App | awk '{ print $2 }'); I see a relatively even
balance between all app instances, yet the dopplers in zones 1-3 consume far
greater CPU resources (15x in some cases) than those in zones 4 and 5.
Generally zones 4 and 5 barely get above 1% utilization.

Running cf curl /v2/apps/guid/stats | grep host | sort shows 30
instances, 6 in each zone, a perfect balance.

Each loggregator is running with 8GB RAM and 4vcpus.


John

On Sat, May 23, 2015 at 11:56 PM, James Bayer <jbayer(a)pivotal.io> wrote:

john,

can you say more about "receiving no load at all"? for example, if you
restart one of the app instances in zone 4 or zone 5 do you see logs with
"cf logs"? you can target a single app instance index to get restarted with
using a "cf curl" command for terminating an app index [1]. you can find
the details with json output from "cf stats" that should show you the
private IPs for the DEAs hosting your app, which should help you figure out
which zone each app index is in.
http://apidocs.cloudfoundry.org/209/apps/terminate_the_running_app_instance_at_the_given_index.html

if you are seeing logs from zone 4 and zone 5, then what might be
happening is that for some reason DEAs in zone 4 or zone 5 are not routable
somewhere along the path. reasons for that could be:
* DEAs in Zone 4 / Zone 5 not getting apps that are hosted there listed
in the routing table
* The routing table may be correct, but for some reason the routers
cannot reach DEAs in zone 4 or zone 5 with outbound traffic and routers
fails over to instances in DEAs 1-3 that it can reach
* some other mystery

On Fri, May 22, 2015 at 2:06 PM, john mcteague <john.mcteague(a)gmail.com> wrote:

We map our DEAs, dopplers and traffic controllers into 5 logical zones
using the various zone properties of doppler, metron_agent and
traffic_controller. This aligns to our physical failure domains in
openstack.

During a recent load test we discovered that zones 4 and 5 were
receiving no load at all, all traffic went to zones 1-3.

What would cause this unbalanced distribution? I have a single app
running 30 instances and have verified it is evenly balanced across all 5
zones (6 instances in each). I have additionally verified that each logical
zone in the bosh yml contains 1 dea, doppler server and traffic controller.

Thanks,
John



--
Thank you,

James Bayer

--
Thank you,

James Bayer


Re: Doppler zoning query

john mcteague <john.mcteague@...>
 

Correct, thanks.

On Mon, May 25, 2015 at 12:01 AM, James Bayer <jbayer(a)pivotal.io> wrote:

ok thanks for the extra detail.

to confirm, during the load test, the http traffic is being routed through
zones 4 and 5 app instances on DEAs in a balanced way. however the dopplers
associated with zone 4 / 5 are getting a very small amount of load sent
their way. is that right?


On Sun, May 24, 2015 at 3:45 PM, john mcteague <john.mcteague(a)gmail.com>
wrote:

I am seeing logs from zones 4 and 5 when tailing the logs (cf logs
hello-world | grep App | awk '{ print $2 }'); I see a relatively even
balance between all app instances, yet the dopplers in zones 1-3 consume far
greater CPU resources (15x in some cases) than those in zones 4 and 5.
Generally zones 4 and 5 barely get above 1% utilization.

Running cf curl /v2/apps/guid/stats | grep host | sort shows 30
instances, 6 in each zone, a perfect balance.

Each loggregator is running with 8GB RAM and 4vcpus.


John

On Sat, May 23, 2015 at 11:56 PM, James Bayer <jbayer(a)pivotal.io> wrote:

john,

can you say more about "receiving no load at all"? for example, if you
restart one of the app instances in zone 4 or zone 5 do you see logs with
"cf logs"? you can target a single app instance index to get restarted with
using a "cf curl" command for terminating an app index [1]. you can find
the details with json output from "cf stats" that should show you the
private IPs for the DEAs hosting your app, which should help you figure out
which zone each app index is in.
http://apidocs.cloudfoundry.org/209/apps/terminate_the_running_app_instance_at_the_given_index.html

if you are seeing logs from zone 4 and zone 5, then what might be
happening is that for some reason DEAs in zone 4 or zone 5 are not routable
somewhere along the path. reasons for that could be:
* DEAs in Zone 4 / Zone 5 not getting apps that are hosted there listed
in the routing table
* The routing table may be correct, but for some reason the routers
cannot reach DEAs in zone 4 or zone 5 with outbound traffic and routers
fails over to instances in DEAs 1-3 that it can reach
* some other mystery

On Fri, May 22, 2015 at 2:06 PM, john mcteague <john.mcteague(a)gmail.com>
wrote:

We map our DEAs, dopplers and traffic controllers into 5 logical zones
using the various zone properties of doppler, metron_agent and
traffic_controller. This aligns to our physical failure domains in
openstack.

During a recent load test we discovered that zones 4 and 5 were
receiving no load at all, all traffic went to zones 1-3.

What would cause this unbalanced distribution? I have a single app
running 30 instances and have verified it is evenly balanced across all 5
zones (6 instances in each). I have additionally verified that each logical
zone in the bosh yml contains 1 dea, doppler server and traffic controller.

Thanks,
John



--
Thank you,

James Bayer

--
Thank you,

James Bayer


Re: scheduler

James Bayer
 

there is ongoing work to support process types using buildpacks, so that
the same application codebase could be used for multiple different types of
processes (web, worker, etc).

once process types and diego tasks are fully available, we expect to
implement a user-facing api for running batch jobs as application processes.

what people do today is run a long-running process application which uses
something like quartz scheduler [1] or ruby clock with a worker system like
resque [2]

[1] http://quartz-scheduler.org/
[2] https://github.com/resque/resque-scheduler
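
As an illustration of that long-running-worker approach, a minimal "clock"
process might look like the following Go sketch (the interval and job body
are placeholders; it would be pushed as an app with no route and one
instance):

```
package main

import (
	"log"
	"time"
)

// runJob stands in for whatever periodic work the app needs
// (report generation, cleanup, polling an external system, ...).
func runJob() {
	log.Println("running scheduled job")
}

func main() {
	ticker := time.NewTicker(1 * time.Hour) // placeholder interval
	defer ticker.Stop()

	runJob() // run once at startup, then again on every tick
	for range ticker.C {
		runJob()
	}
}
```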

On Mon, May 25, 2015 at 6:19 AM, Corentin Dupont <cdupont(a)create-net.org>
wrote:

To complete my request, I'm thinking of something like this in the
manifest.yml:

applications:
- name: virusscan
  memory: 512M
  instances: 1
  schedule:
  - startFrom: a date
    endBefore: a date
    walltime: a duration
    precedence: other application name
    moldable: true/false

What do you think?

On Mon, May 25, 2015 at 11:25 AM, Corentin Dupont <cdupont(a)create-net.org>
wrote:


---------- Forwarded message ----------
From: Corentin Dupont <corentin.dupont(a)create-net.org>
Date: Mon, May 25, 2015 at 11:21 AM
Subject: scheduler
To: cf-dev(a)lists.cloudfoundry.org


Hi guys,
just to know, is there a project to add a job scheduler in Cloud Foundry?
I'm thinking of something like the Heroku scheduler (
https://devcenter.heroku.com/articles/scheduler).
That would be very neat to have regular tasks triggered...
Thanks,
Corentin


--

Corentin Dupont
Researcher @ Create-Net
www.corentindupont.info




--
Thank you,

James Bayer


Re: scheduler

Corentin Dupont <cdupont@...>
 

To complete my request, I'm thinking of something like this in the
manifest.yml:

applications:
- name: virusscan
  memory: 512M
  instances: 1
  schedule:
  - startFrom: a date
    endBefore: a date
    walltime: a duration
    precedence: other application name
    moldable: true/false

What do you think?

On Mon, May 25, 2015 at 11:25 AM, Corentin Dupont <cdupont(a)create-net.org>
wrote:


---------- Forwarded message ----------
From: Corentin Dupont <corentin.dupont(a)create-net.org>
Date: Mon, May 25, 2015 at 11:21 AM
Subject: scheduler
To: cf-dev(a)lists.cloudfoundry.org


Hi guys,
just to know, is there a project to add a job scheduler in Cloud Foundry?
I'm thinking of something like the Heroku scheduler (
https://devcenter.heroku.com/articles/scheduler).
That would be very neat to have regular tasks triggered...
Thanks,
Corentin


--

Corentin Dupont
Researcher @ Create-Net
www.corentindupont.info



scheduler

Corentin Dupont <corentin.dupont@...>
 

Hi guys,
just to know, is there a project to add a job scheduler in Cloud Foundry?
I'm thinking of something like the Heroku scheduler (
https://devcenter.heroku.com/articles/scheduler).
That would be very neat to have regular tasks triggered...
Thanks,
Corentin


--

Corentin Dupont
Researcher @ Create-Net
www.corentindupont.info


Re: Doppler zoning query

James Bayer
 

ok thanks for the extra detail.

to confirm, during the load test, the http traffic is being routed through
zones 4 and 5 app instances on DEAs in a balanced way. however the dopplers
associated with zone 4 / 5 are getting a very small amount of load sent
their way. is that right?

On Sun, May 24, 2015 at 3:45 PM, john mcteague <john.mcteague(a)gmail.com>
wrote:

I am seeing logs from zones 4 and 5 when tailing the logs (cf logs
hello-world | grep App | awk '{ print $2 }'); I see a relatively even
balance between all app instances, yet the dopplers in zones 1-3 consume far
greater CPU resources (15x in some cases) than those in zones 4 and 5.
Generally zones 4 and 5 barely get above 1% utilization.

Running cf curl /v2/apps/guid/stats | grep host | sort shows 30
instances, 6 in each zone, a perfect balance.

Each loggregator is running with 8GB RAM and 4vcpus.


John

On Sat, May 23, 2015 at 11:56 PM, James Bayer <jbayer(a)pivotal.io> wrote:

john,

can you say more about "receiving no load at all"? for example, if you
restart one of the app instances in zone 4 or zone 5 do you see logs with
"cf logs"? you can target a single app instance index to get restarted with
using a "cf curl" command for terminating an app index [1]. you can find
the details with json output from "cf stats" that should show you the
private IPs for the DEAs hosting your app, which should help you figure out
which zone each app index is in.
http://apidocs.cloudfoundry.org/209/apps/terminate_the_running_app_instance_at_the_given_index.html

if you are seeing logs from zone 4 and zone 5, then what might be
happening is that for some reason DEAs in zone 4 or zone 5 are not routable
somewhere along the path. reasons for that could be:
* DEAs in Zone 4 / Zone 5 not getting apps that are hosted there listed
in the routing table
* The routing table may be correct, but for some reason the routers
cannot reach DEAs in zone 4 or zone 5 with outbound traffic and routers
fails over to instances in DEAs 1-3 that it can reach
* some other mystery

On Fri, May 22, 2015 at 2:06 PM, john mcteague <john.mcteague(a)gmail.com>
wrote:

We map our DEAs, dopplers and traffic controllers into 5 logical zones
using the various zone properties of doppler, metron_agent and
traffic_controller. This aligns to our physical failure domains in
openstack.

During a recent load test we discovered that zones 4 and 5 were
receiving no load at all, all traffic went to zones 1-3.

What would cause this unbalanced distribution? I have a single app
running 30 instances and have verified it is evenly balanced across all 5
zones (6 instances in each). I have additionally verified that each logical
zone in the bosh yml contains 1 dea, doppler server and traffic controller.

Thanks,
John



--
Thank you,

James Bayer

--
Thank you,

James Bayer


Re: Doppler zoning query

john mcteague <john.mcteague@...>
 

I am seeing logs from zones 4 and 5 when tailing the logs (cf logs
hello-world | grep App | awk '{ print $2 }'); I see a relatively even
balance between all app instances, yet the dopplers in zones 1-3 consume far
greater CPU resources (15x in some cases) than those in zones 4 and 5.
Generally zones 4 and 5 barely get above 1% utilization.

Running cf curl /v2/apps/guid/stats | grep host | sort shows 30
instances, 6 in each zone, a perfect balance.
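
As a small aid for that kind of check, something like the Go sketch below
tallies instances per DEA IP, which can then be mapped onto zones (the file
name is arbitrary, and the JSON shape of instance index to stats.host is
assumed from the v2 stats endpoint):

```
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// appStats mirrors only the part of the /v2/apps/:guid/stats response used
// here: a map of instance index to an object whose stats.host field is the
// private IP of the DEA running that instance (shape assumed, see above).
type appStats map[string]struct {
	State string `json:"state"`
	Stats struct {
		Host string `json:"host"`
	} `json:"stats"`
}

func main() {
	// Usage: cf curl /v2/apps/<guid>/stats | go run tally.go
	var stats appStats
	if err := json.NewDecoder(os.Stdin).Decode(&stats); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	perHost := map[string]int{}
	for _, inst := range stats {
		perHost[inst.Stats.Host]++
	}
	for host, n := range perHost {
		fmt.Printf("%s\t%d instance(s)\n", host, n)
	}
}
```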

Each loggregator is running with 8GB RAM and 4vcpus.


John

On Sat, May 23, 2015 at 11:56 PM, James Bayer <jbayer(a)pivotal.io> wrote:

john,

can you say more about "receiving no load at all"? for example, if you
restart one of the app instances in zone 4 or zone 5 do you see logs with
"cf logs"? you can target a single app instance index to get restarted with
using a "cf curl" command for terminating an app index [1]. you can find
the details with json output from "cf stats" that should show you the
private IPs for the DEAs hosting your app, which should help you figure out
which zone each app index is in.
http://apidocs.cloudfoundry.org/209/apps/terminate_the_running_app_instance_at_the_given_index.html

if you are seeing logs from zone 4 and zone 5, then what might be
happening is that for some reason DEAs in zone 4 or zone 5 are not routable
somewhere along the path. reasons for that could be:
* DEAs in Zone 4 / Zone 5 not getting apps that are hosted there listed in
the routing table
* The routing table may be correct, but for some reason the routers cannot
reach DEAs in zone 4 or zone 5 with outbound traffic and routers fails over
to instances in DEAs 1-3 that it can reach
* some other mystery

On Fri, May 22, 2015 at 2:06 PM, john mcteague <john.mcteague(a)gmail.com>
wrote:

We map our DEAs, dopplers and traffic controllers into 5 logical zones
using the various zone properties of doppler, metron_agent and
traffic_controller. This aligns to our physical failure domains in
openstack.

During a recent load test we discovered that zones 4 and 5 were receiving
no load at all, all traffic went to zones 1-3.

What would cause this unbalanced distribution? I have a single app
running 30 instances and have verified it is evenly balanced across all 5
zones (6 instances in each). I have additionally verified that each logical
zone in the bosh yml contains 1 dea, doppler server and traffic controller.

Thanks,
John



--
Thank you,

James Bayer


Re: Question about services on Cloud Foundry

James Bayer
 

it simply means that there is a Service Broker that works in conjunction
with the "marketplace", so commands like "cf marketplace", "cf
create-service", "cf bind-service" and related all work with the service.
user-provided services don't show up in the marketplace-related commands
and they don't have service plans, but they still work with bind/unbind.

On Fri, May 22, 2015 at 7:44 AM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:

Hi,

From the architecture point of view I understand that there are no services
explicitly associated with CF.

However, the following doc is very confusing:
http://docs.cloudfoundry.org/devguide/services/managed.html

It would be great if someone could explain the meaning of managed services here.

Thanks,
Kinjal


--
Thank you,

James Bayer


Delivery Status Notification (Failure)

Frank Li <alivedata@...>
 

Hi,

When I run 'bosh deploy', I get the error "Error 400007: `uaa_z1/0' is not
running after update":

*Started preparing configuration > Binding configuration. Done (00:00:04)*

*Started updating job ha_proxy_z1 > ha_proxy_z1/0. Done (00:00:13)*
*Started updating job nats_z1 > nats_z1/0. Done (00:00:27)*
*Started updating job etcd_z1 > etcd_z1/0. Done (00:00:14)*
*Started updating job postgres_z1 > postgres_z1/0. Done (00:00:22)*
*Started updating job uaa_z1 > uaa_z1/0. Failed: `uaa_z1/0' is not running
after update (00:04:02)*

*Error 400007: `uaa_z1/0' is not running after update*





bosh task 132 --debug

*I, [2015-05-22 03:58:56 #2299] [instance_update(uaa_z1/0)] INFO --
DirectorJobRunner: Waiting for 19.88888888888889 seconds to check uaa_z1/0
status*
*D, [2015-05-22 03:58:56 #2299] [] DEBUG -- DirectorJobRunner: Renewing
lock: lock:deployment:cf-warden*
*D, [2015-05-22 03:59:01 #2299] [] DEBUG -- DirectorJobRunner: Renewing
lock: lock:deployment:cf-warden*
*D, [2015-05-22 03:59:06 #2299] [] DEBUG -- DirectorJobRunner: Renewing
lock: lock:deployment:cf-warden*
*D, [2015-05-22 03:59:11 #2299] [] DEBUG -- DirectorJobRunner: Renewing
lock: lock:deployment:cf-warden*
*I, [2015-05-22 03:59:15 #2299] [instance_update(uaa_z1/0)] INFO --
DirectorJobRunner: Checking if uaa_z1/0 has been updated after
19.88888888888889 seconds*
*D, [2015-05-22 03:59:15 #2299] [instance_update(uaa_z1/0)] DEBUG --
DirectorJobRunner: SENT: agent.04446a2b-a103-4a33-9bbe-d8b07d2c6466
{"method":"get_state","arguments":[],"reply_to":"director.2052649d-bafc-4d7a-8184-caa0373ec71f.55816c88-fea4-45cb-a7a9-13d7579b459a"}*
*D, [2015-05-22 03:59:15 #2299] [] DEBUG -- DirectorJobRunner: RECEIVED:
director.2052649d-bafc-4d7a-8184-caa0373ec71f.55816c88-fea4-45cb-a7a9-13d7579b459a
{"value":{"properties":{"logging":{"max_log_file_size":""}},"job":{"name":"uaa_z1","release":"","template":"uaa","version":"e3278da4c650f21c13cfa935814233bc79f156f0","sha1":"c8f3ee66bd955a58f95dbb7c02ca008c5e91ab6a","blobstore_id":"00e2df47-e90f-414d-8965-f97e1ec81b24","templates":[{"name":"uaa","version":"e3278da4c650f21c13cfa935814233bc79f156f0","sha1":"c8f3ee66bd955a58f95dbb7c02ca008c5e91ab6a","blobstore_id":"00e2df47-e90f-414d-8965-f97e1ec81b24"},{"name":"metron_agent","version":"51cf1a4f2e361bc2a2bbd1bee7fa324fe7029589","sha1":"50fccfa5198b0ccd6b39109ec5585f2502011da3","blobstore_id":"beac8dfd-57e9-45c0-8529-56e4c73154bc"},{"name":"consul_agent","version":"6a3b1fe7963fbcc3dea0eab7db337116ba062056","sha1":"54c6a956f7ee1c906e0f8e8aaac13a25584e7d3f","blobstore_id":"aee73914-cf03-4e7c-98a5-a1695cbc2cc5"}]},"packages":{"common":{"name":"common","version":"99c756b71550530632e393f5189220f170a69647.1","sha1":"6da06edd87b2d78e5e0e9848c26cdafe1b3a94eb","blobstore_id":"6783e7af-2366-4142-7199-ac487f359adb"},"consul":{"name":"consul","version":"d828a4735b02229631673bc9cb6aab8e2d56eda5.1","sha1":"15d541d6f0c8708b9af00f045d58d10951755ad6","blobstore_id":"a9256e97-0940-45dc-6003-77141979c976"},"metron_agent":{"name":"metron_agent","version":"122c9dea1f4be749d48bf1203ed0a407b5a2e1ff.1","sha1":"b8241c6482b03f0d010031e5e99cbae4a909ae05","blobstore_id":"8aa07a49-753a-4200-4cbb-cbb554034986"},"ruby-2.1.4":{"name":"ruby-2.1.4","version":"5a4612011cb6b8338d384acc7802367ae5e11003.1","sha1":"032f58346f55ad468c83e015997ff50091a76ef7","blobstore_id":"afaf9c7a-5633-40cc-7a7a-5d285a560b20"},"uaa":{"name":"uaa","version":"05b84acccba5cb31a170d9cad531d22ccb5df8a5.1","sha1":"ae0a7aa73132db192c2800d0094c607a41d56ddb","blobstore_id":"b474ea8d-5c66-4eea-4a7e-689a0cd0de63"}},"configuration_hash":"c1c40387ae387a29bb69124e3d9f741ee50f0d48","networks":{"cf1":{"cloud_properties":{"name":"random"},"default":["dns","gateway"],"dns_record_name":"0.uaa-z1.cf1.cf-warden.bosh","ip":"10.244.0.130","netmask":"255.255.255.252"}},"resource_pool":{"cloud_properties":{"name":"random"},"name":"medium_z1","stemcell":{"name":"bosh-warden-boshlite-ubuntu-lucid-go_agent","version":"64"}},"deployment":"cf-warden","index":0,"persistent_disk":0,"rendered_templates_archive":{"sha1":"2ebf29eac887fb88dab65aeb911a36403c41b1cb","blobstore_id":"38890fbc-f95e-44a9-9f19-859dc42ec381"},"agent_id":"04446a2b-a103-4a33-9bbe-d8b07d2c6466","bosh_protocol":"1","job_state":"failing","vm":{"name":"755410d0-6697-4505-754e-9521d23788ef"},"ntp":{"message":"file
missing"}}}*
*E, [2015-05-22 03:59:15 #2299] [instance_update(uaa_z1/0)] ERROR --
DirectorJobRunner: Error updating instance:
#<Bosh::Director::AgentJobNotRunning: `uaa_z1/0' is not running after
update>*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/instance_updater.rb:85:in
`update'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:94:in
`block (2 levels) in update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_formatter.rb:49:in
`with_thread_name'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:92:in
`block in update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in
`advance_and_track'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:91:in
`update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:85:in
`block (2 levels) in update_instances'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in
`block (2 levels) in create_thread'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in
`loop'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in
`block in create_thread'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in
`block in create_with_logging_context'*
*D, [2015-05-22 03:59:15 #2299] [] DEBUG -- DirectorJobRunner: Worker
thread raised exception: `uaa_z1/0' is not running after update -
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/instance_updater.rb:85:in
`update'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:94:in
`block (2 levels) in update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_formatter.rb:49:in
`with_thread_name'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:92:in
`block in update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in
`advance_and_track'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:91:in
`update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:85:in
`block (2 levels) in update_instances'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in
`block (2 levels) in create_thread'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in
`loop'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in
`block in create_thread'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in
`block in create_with_logging_context'*
*D, [2015-05-22 03:59:16 #2299] [] DEBUG -- DirectorJobRunner: Thread is no
longer needed, cleaning up*
*D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner:
Shutting down pool*
*D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner:
(0.004399s) SELECT "stemcells".* FROM "stemcells" INNER JOIN
"deployments_stemcells" ON (("deployments_stemcells"."stemcell_id" =
"stemcells"."id") AND ("deployments_stemcells"."deployment_id" = 1))*
*D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner:
Deleting lock: lock:deployment:cf-warden*
*D, [2015-05-22 03:59:16 #2299] [] DEBUG -- DirectorJobRunner: Lock renewal
thread exiting*
*D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner:
Deleted lock: lock:deployment:cf-warden*
*I, [2015-05-22 03:59:16 #2299] [task:132] INFO -- DirectorJobRunner:
sending update deployment error event*
*D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner:
SENT: hm.director.alert
{"id":"7245631b-b6b3-43df-bd43-65b19e23f6ae","severity":3,"title":"director
- error during update deployment","summary":"Error during update deployment
for cf-warden against Director c6f166bd-ddac-4f7d-9c57-d11c6ad5133b:
#<Bosh::Director::AgentJobNotRunning: `uaa_z1/0' is not running after
update>","created_at":1432267156}*
*E, [2015-05-22 03:59:16 #2299] [task:132] ERROR -- DirectorJobRunner:
`uaa_z1/0' is not running after update*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/instance_updater.rb:85:in
`update'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:94:in
`block (2 levels) in update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_formatter.rb:49:in
`with_thread_name'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:92:in
`block in update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in
`advance_and_track'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:91:in
`update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:85:in
`block (2 levels) in update_instances'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in
`block (2 levels) in create_thread'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in
`loop'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in
`block in create_thread'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in
`block in create_with_logging_context'*
*D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner:
(0.000396s) BEGIN*
*D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner:
(0.001524s) UPDATE "tasks" SET "state" = 'error', "timestamp" = '2015-05-22
03:59:16.090280+0000', "description" = 'create deployment', "result" =
'`uaa_z1/0'' is not running after update', "output" =
'/var/vcap/store/director/tasks/132', "checkpoint_time" = '2015-05-22
03:58:52.002311+0000', "type" = 'update_deployment', "username" = 'admin'
WHERE ("id" = 132)*
*D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner:
(0.002034s) COMMIT*
*I, [2015-05-22 03:59:16 #2299] [] INFO -- DirectorJobRunner: Task took 5
minutes 55.32297424799998 seconds to process.*





The uaa section in my cf-manifest.yml is as follows:

*uaa:*
*admin:*
*client_secret: admin-secret*
*authentication:*
*policy:*
*countFailuresWithinSeconds: null*
*lockoutAfterFailures: null*
*lockoutPeriodSeconds: null*
*batch:*
*password: batch-password*
*username: batch-username*
*catalina_opts: -Xmx192m -XX:MaxPermSize=128m*
*cc:*
*client_secret: cc-secret*
*clients:*
*app-direct:*
*access-token-validity: 1209600*
*authorities: app_direct_invoice.write*
*authorized-grant-types:
authorization_code,client_credentials,password,refresh_token,implicit*
*override: true*
*redirect-uri: https://console.10.244.0.34.xip.io
<https://console.10.244.0.34.xip.io/>*
*refresh-token-validity: 1209600*
*secret: app-direct-secret*
*cc-service-dashboards:*
*authorities: clients.read,clients.write,clients.admin*
*authorized-grant-types: client_credentials*
*scope: openid,cloud_controller_service_permissions.read*
*secret: cc-broker-secret*
*cloud_controller_username_lookup:*
*authorities: scim.userids*
*authorized-grant-types: client_credentials*
*secret: cloud-controller-username-lookup-secret*
*developer_console:*
*access-token-validity: 1209600*
*authorities:
scim.write,scim.read,cloud_controller.read,cloud_controller.write,password.write,uaa.admin,uaa.resource,cloud_controller.admin,billing.admin*
*authorized-grant-types: authorization_code,client_credentials*
*override: true*
*redirect-uri: https://console.10.244.0.34.xip.io/oauth/callback
<https://console.10.244.0.34.xip.io/oauth/callback>*
*refresh-token-validity: 1209600*
*scope:
openid,cloud_controller.read,cloud_controller.write,password.write,console.admin,console.support*
*secret: console-secret*
*doppler:*
*authorities: uaa.resource*
*override: true*
*secret: doppler-secret*
*gorouter:*
*authorities:
clients.read,clients.write,clients.admin,route.admin,route.advertise*
*authorized-grant-types: client_credentials,refresh_token*
*scope: openid,cloud_controller_service_permissions.read*
*secret: gorouter-secret*
*login:*
*authorities:
oauth.login,scim.write,clients.read,notifications.write,critical_notifications.write,emails.write,scim.userids,password.write*
*authorized-grant-types:
authorization_code,client_credentials,refresh_token*
*override: true*
*redirect-uri: http://login.10.244.0.34.xip.io
<http://login.10.244.0.34.xip.io/>*
*scope: openid,oauth.approvals*
*secret: login-secret*
*notifications:*
*authorities: cloud_controller.admin,scim.read*
*authorized-grant-types: client_credentials*
*secret: notification-secret*
*issuer: https://uaa.10.244.0.34.xip.io <https://uaa.10.244.0.34.xip.io/>*
*jwt:*
*signing_key: |+*
*-----BEGIN RSA PRIVATE KEY-----*
*MIICXAIBAAKBgQDHFr+KICms+tuT1OXJwhCUmR2dKVy7psa8xzElSyzqx7oJyfJ1*
*JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMXqHxf+ZH9BL1gk9Y6kCnbM5R6*
*0gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBugspULZVNRxq7veq/fzwIDAQAB*
*AoGBAJ8dRTQFhIllbHx4GLbpTQsWXJ6w4hZvskJKCLM/o8R4n+0W45pQ1xEiYKdA*
*Z/DRcnjltylRImBD8XuLL8iYOQSZXNMb1h3g5/UGbUXLmCgQLOUUlnYt34QOQm+0*
*KvUqfMSFBbKMsYBAoQmNdTHBaz3dZa8ON9hh/f5TT8u0OWNRAkEA5opzsIXv+52J*
*duc1VGyX3SwlxiE2dStW8wZqGiuLH142n6MKnkLU4ctNLiclw6BZePXFZYIK+AkE*
*xQ+k16je5QJBAN0TIKMPWIbbHVr5rkdUqOyezlFFWYOwnMmw/BKa1d3zp54VP/P8*
*+5aQ2d4sMoKEOfdWH7UqMe3FszfYFvSu5KMCQFMYeFaaEEP7Jn8rGzfQ5HQd44ek*
*lQJqmq6CE2BXbY/i34FuvPcKU70HEEygY6Y9d8J3o6zQ0K9SYNu+pcXt4lkCQA3h*
*jJQQe5uEGJTExqed7jllQ0khFJzLMx0K6tj0NeeIzAaGCQz13oo2sCdeGRHO4aDh*
*HH6Qlq/6UOV5wP8+GAcCQFgRCcB+hrje8hfEEefHcFpyKH+5g1Eu1k0mLrxK2zd+*
*4SlotYRHgPCEubokb2S1zfZDWIXW3HmggnGgM949TlY=*
*-----END RSA PRIVATE KEY-----*
*verification_key: |+*
*-----BEGIN PUBLIC KEY-----*
*MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHFr+KICms+tuT1OXJwhCUmR2d*
*KVy7psa8xzElSyzqx7oJyfJ1JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMX*
*qHxf+ZH9BL1gk9Y6kCnbM5R60gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBug*
*spULZVNRxq7veq/fzwIDAQAB*
*-----END PUBLIC KEY-----*
*ldap: null*
*login: null*
*no_ssl: true*
*restricted_ips_regex:
10\.\d{1,3}\.\d{1,3}\.\d{1,3}|192\.168\.\d{1,3}\.\d{1,3}|169\.254\.\d{1,3}\.\d{1,3}|127\.\d{1,3}\.\d{1,3}\.\d{1,3}|172\.1[6-9]{1}\.\d{1,3}\.\d{1,3}|172\.2[0-9]{1}\.\d{1,3}\.\d{1,3}|172\.3[0-1]{1}\.\d{1,3}\.\d{1,3}*
*scim:*
*external_groups: null*
*userids_enabled: true*
*users:*
*-
admin|admin|scim.write,scim.read,openid,cloud_controller.admin,clients.read,clients.write,doppler.firehose*
*spring_profiles: null*
*url: https://uaa.10.244.0.34.xip.io <https://uaa.10.244.0.34.xip.io/>*
*user: null*
*uaadb:*
*address: 10.244.0.30*
*databases:*
*- citext: true*
*name: uaadb*
*tag: uaa*
*db_scheme: postgresql*
*port: 5524*
*roles:*
*- name: uaaadmin*
*password: admin*
*tag: admin*



Can anyone help me? Thanks!


Best Regards,

Frank


Re: Release Notes for v210

James Bayer
 

On Sat, May 23, 2015 at 9:41 PM, James Bayer <jbayer(a)pivotal.io> wrote:

please note that this release addresses CVE-2015-3202 and CVE-2015-1834
and we strongly recommend upgrading to this release. more details will be
forthcoming after the long united states holiday weekend.

https://github.com/cloudfoundry/cf-release/releases/tag/v210

*https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/v210
<https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/v210>*

--
Thank you,

James Bayer
--
Thank you,

James Bayer


CVE-2015-1834 CC Path Traversal vulnerability

James Bayer
 

Severity: Medium

Vendor: Cloud Foundry Foundation

Vulnerable Versions: Cloud Foundry Runtime Releases prior to 208

CVE References: CVE-2015-1834
Description:

A path traversal vulnerability was identified in the Cloud Foundry
component Cloud Controller. Path traversal is the escape from a given
directory structure through relative file paths in user input. It aims
at accessing files and directories stored outside the web root folder,
allowing disallowed reads or even execution of arbitrary system commands.
An attacker could, for instance, inject "../" sequences into a file path
parameter in order to navigate through the file system. In this
particular case a remote authenticated attacker can exploit the
identified vulnerability to upload arbitrary files to the server running
a Cloud Controller instance, outside the isolated application container.
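
For illustration only, the class of check that blocks this kind of traversal
can be sketched as follows in Go (this is not the actual Cloud Controller
fix, which is the Ruby commit referenced elsewhere in this digest; the base
directory and function name here are made up):

```
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// safeJoin joins a user-supplied relative path onto baseDir and rejects any
// result that escapes baseDir, e.g. via "../" sequences.
func safeJoin(baseDir, userPath string) (string, error) {
	joined := filepath.Join(baseDir, userPath) // Join cleans the result
	if joined != baseDir &&
		!strings.HasPrefix(joined, baseDir+string(filepath.Separator)) {
		return "", fmt.Errorf("path %q escapes base directory %q", userPath, baseDir)
	}
	return joined, nil
}

func main() {
	if _, err := safeJoin("/var/vcap/data/uploads", "../../etc/passwd"); err != nil {
		fmt.Println("rejected:", err) // traversal attempt refused
	}
	ok, _ := safeJoin("/var/vcap/data/uploads", "app/droplet.tgz")
	fmt.Println("accepted:", ok)
}
```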

Affected Products and Versions:

Cloud Foundry Runtime cf-release versions v207 or earlier are susceptible
to the vulnerability

Mitigation:

The Cloud Foundry project recommends that Cloud Foundry Runtime Deployments
running Release v207 or earlier upgrade to v208 or later.

Credit:

This issue was identified by Swisscom / SEC Consult

--
Thank you,

James Bayer


USN-2617-1 and CVE-2015-3202 FUSE vulnerability

James Bayer
 

Severity: High

Vendor: Canonical Ubuntu

Vulnerable Versions: Canonical Ubuntu 10.04 and 14.04

CVE References: USN-2617-1, CVE-2015-3202
Description:

A privilege escalation vulnerability was identified in a component used in
the Cloud Foundry stacks lucid64 and cflinuxfs2. The FUSE package
incorrectly filtered environment variables and could be made to overwrite
files as an administrator, allowing a local attacker to gain administrative
privileges.
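
As a generic illustration of the defensive pattern involved (passing a
minimal, explicit environment to a privileged helper rather than inheriting
the caller's), and not of the FUSE patch itself, a Go sketch might be (the
helper path is a placeholder):

```
package main

import (
	"log"
	"os/exec"
)

// runPrivilegedHelper invokes a privileged helper with a minimal, explicit
// environment so attacker-controlled variables never reach it. The helper
// path below is a placeholder, not a real binary.
func runPrivilegedHelper(args ...string) error {
	cmd := exec.Command("/usr/sbin/example-privileged-helper", args...)
	cmd.Env = []string{"PATH=/usr/sbin:/usr/bin:/sbin:/bin"}
	return cmd.Run()
}

func main() {
	if err := runPrivilegedHelper("--status"); err != nil {
		log.Println("helper failed:", err)
	}
}
```
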
Affected Products and Versions:

- Cloud Foundry Runtime cf-release versions v183 and all releases through v209

Mitigation:

The Cloud Foundry project recommends that Cloud Foundry Runtime Deployments
running Release v209 or earlier upgrade to v210 or later. Note that the
FUSE package has been removed from the lucid64 stack in the v210 release
while it has been patched in the cflinuxfs2 stack (Trusty). Developers
should use the cflinuxfs2 stack in order to use FUSE with v210 and higher.

Credit:

This issue was identified by Tavis Ormandy


--
Thank you,

James Bayer


Release Notes for v210

James Bayer
 

please note that this release addresses CVE-2015-3202 and CVE-2015-1834 and
we strongly recommend upgrading to this release. more details will be
forthcoming after the long united states holiday weekend.

https://github.com/cloudfoundry/cf-release/releases/tag/v210

*https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/v210
<https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/v210>*

--
Thank you,

James Bayer


Re: Doppler zoning query

James Bayer
 

john,

can you say more about "receiving no load at all"? for example, if you
restart one of the app instances in zone 4 or zone 5 do you see logs with
"cf logs"? you can target a single app instance index to get restarted with
using a "cf curl" command for terminating an app index [1]. you can find
the details with json output from "cf stats" that should show you the
private IPs for the DEAs hosting your app, which should help you figure out
which zone each app index is in.
http://apidocs.cloudfoundry.org/209/apps/terminate_the_running_app_instance_at_the_given_index.html

if you are seeing logs from zone 4 and zone 5, then what might be happening
is that for some reason DEAs in zone 4 or zone 5 are not routable somewhere
along the path. reasons for that could be:
* DEAs in Zone 4 / Zone 5 not getting apps that are hosted there listed in
the routing table
* The routing table may be correct, but for some reason the routers cannot
reach DEAs in zone 4 or zone 5 with outbound traffic and routers fails over
to instances in DEAs 1-3 that it can reach
* some other mystery

On Fri, May 22, 2015 at 2:06 PM, john mcteague <john.mcteague(a)gmail.com>
wrote:

We map our DEAs, dopplers and traffic controllers into 5 logical zones
using the various zone properties of doppler, metron_agent and
traffic_controller. This aligns to our physical failure domains in
openstack.

During a recent load test we discovered that zones 4 and 5 were receiving
no load at all, all traffic went to zones 1-3.

What would cause this unbalanced distribution? I have a single app running
30 instances and have verified it is evenly balanced across all 5 zones (6
instances in each). I have additionally verified that each logical zone in
the bosh yml contains 1 dea, doppler server and traffic controller.

Thanks,
John



--
Thank you,

James Bayer


Release Notes for v209

Shannon Coen
 


Doppler zoning query

john mcteague <john.mcteague@...>
 

We map our DEAs, dopplers and traffic controllers into 5 logical zones
using the various zone properties of doppler, metron_agent and
traffic_controller. This aligns to our physical failure domains in
openstack.

During a recent load test we discovered that zones 4 and 5 were receiving
no load at all, all traffic went to zones 1-3.

What would cause this unbalanced distribution? I have a single app running
30 instances and have verified it is evenly balanced across all 5 zones (6
instances in each). I have additionally verified that each logical zone in
the bosh yml contains 1 dea, doppler server and traffic controller.

Thanks,
John


Re: Addressing buildpack size

Daniel Mikusa
 

On Fri, May 8, 2015 at 3:09 PM, Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Hey Dan,


On Tue, May 5, 2015 at 1:33 PM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:

I'm happy to see the size of the build packs dropping, but I have to ask
why do we bundle the build packs with a fixed set of binaries?

The build packs themselves are very small, it's the binaries that are
huge. It seems like it would make sense to handle them as separate
concerns.
You've nailed it. Yes, it makes a ton of sense to handle binaries as
separate concerns, and we're heading in that direction.

At one point very recently, we started doing some planning around how we
might cache buildpack assets in a structured way (like a blob store) and
seamlessly have everything Just Work™.

The first step towards separating these concerns was to extract the use of
dependencies out of the (generally upstream) buildpack code and into a
buildpack manifest file. Having done that, the dependencies are now
first-class artifacts that can be managed by operators.

We stopped there, at least for the time being, as it's not terribly clear
how to jam buildpack asset caching into the current API, CC buildpack
model, and staging process (though, again, the manifest is the best first
step, as it enables us to trap network calls and thus redirect them to a
cache either on disk or over the network).

It's also quite possible that the remaining pain will be further
ameliorated by the proposed Diego feature to attach persistent disk (on
which, presumably, the buildpacks and their assets are cached), which means
we're deferring further work until we've got more user feedback and data.

This sounds cool. Can't wait to see what you guys come up with here. I've
been thinking about the subject a bit, but haven't come up with any great
ideas.

The first thought that came to mind was a transparent network proxy, like
Squid, which would just automatically cache the files as they're accessed.
It's nice and simple; nothing in the build pack would need to change to take
advantage of it, but I'm not sure how that would work in a completely offline
environment, as I'm not sure how you'd seed the cache.

Another thought was for the DEA to provide some additional hints to the
build packs about how they could locate binaries. Perhaps a special
environment variable like CF_BP_REPO=http://repo.system.domain/. The build
pack could then take that and use it to generate URLs to its binary
resources. A variation on that would be to check this repo first, and then
fall back to some global external repo if available (i.e. most recent stuff
is on CF_BP_REPO, older stuff needs Internet access to download). Yet
another variation would be for the CF_BP_REPO to start small and grow as
things are requested. For example, if you request a file that doesn't
exist CF_BP_REPO would try to download it from the Internet, cache it and
stream it back to the app.
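
A rough sketch of the URL-rewriting half of that idea might look like the Go
snippet below (CF_BP_REPO is hypothetical and does not exist in today's
buildpacks; the flat mirror layout is an assumption):

```
package main

import (
	"fmt"
	"net/url"
	"os"
	"path"
)

// resolveDependency points an upstream dependency URL at the hypothetical
// CF_BP_REPO mirror when that variable is set, and otherwise returns the
// upstream URL unchanged (the "fall back to the internet" case).
func resolveDependency(upstream string) (string, error) {
	repo := os.Getenv("CF_BP_REPO")
	if repo == "" {
		return upstream, nil
	}
	u, err := url.Parse(upstream)
	if err != nil {
		return "", err
	}
	base, err := url.Parse(repo)
	if err != nil {
		return "", err
	}
	// Keep only the file name so the mirror can use a flat layout.
	base.Path = path.Join(base.Path, path.Base(u.Path))
	return base.String(), nil
}

func main() {
	os.Setenv("CF_BP_REPO", "http://repo.system.domain/")
	rewritten, _ := resolveDependency("https://storage.googleapis.com/golang/go1.4.2.linux-amd64.tar.gz")
	fmt.Println(rewritten) // http://repo.system.domain/go1.4.2.linux-amd64.tar.gz
}
```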

Anyway, I'm just thinking out loud now. Thanks for the update!

Dan






I don't want to come off too harsh, but in addition to the size of the
build packs when bundled with binaries, there are some other disadvantages
to doing things this way.

- Binaries and build packs are updated at different rates. Binaries
are usually updated often, to pick up new runtimes versions & security
fixes; build packs are generally changed at a slower pace, as features or
bug fixes for them are needed. Bundling the two together, requires an
operator to update the build packs more often, just to get updated
binaries. It's been my experience that users don't (or forget) to update
build packs which means they're likely running with older, possibly
insecure runtimes.

- It's difficult to bundle a set of runtime binaries that suit
everyone's needs; different users will update at different rates and will
want different sets of binaries. If build packs and binaries are packaged
together, users will end up needing to find a specific build pack bundle
that contains the runtime they want or users will need to build their own
custom bundles. If build packs and binaries are handled separately, there
will be more flexibility in what binaries a build pack has available as an
operator can manage binaries independently. Wayne's post seems to hit on
this point.

- At some point, I think this has already happened (jruby & java),
build packs are going to start having overlapping sets of binaries. If the
binaries are bundled with the build pack, there's no way that build packs
could ever share binaries.

My personal preference would be to see build packs bundled without
binaries and some other solution, which probably merits a separate thread,
for managing the binaries.

I'm curious to hear what others think or if I've missed something and
bundling build packs and binaries is clearly the way to go.

Dan

PS. If this is something that came up in the PMC, I apologize. I
skimmed the notes, but may have missed it.



On Mon, May 4, 2015 at 2:10 PM, Wayne E. Seguin <
wayneeseguin(a)starkandwayne.com> wrote:

Because of very good compatibility between versions (post 1.X) I would
like to make a motion to do the following:

Split the buildpack:

have the default golang buildpack track the latest golang version

Then handle older versions in one of two ways, either:

a) have a large secondary for older versions

or

b) have multiple, one for each version of golang, users can specify a
specific URL if they care about specific versions.

This would improve space/time considerations for operations. Personally
I would prefer b) because it allows you to enable supporting older go
versions out of the box by design but still keeping each golang buildpack
small.

~Wayne

Wayne E. Seguin <wayneeseguin(a)starkandwayne.com>
CTO ; Stark & Wayne, LLC

On May 4, 2015, at 12:40 , Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Hi Wayne,

On Fri, May 1, 2015 at 1:29 PM, Wayne E. Seguin <
wayneeseguin(a)starkandwayne.com> wrote:

What an incredible step in the right direction, Awesome!!!

Out of curiosity, why is the go buildpack still quite so large?
Thanks for asking this question.

Currently we're including the following binary dependencies in
`go-buildpack`:

```
cache $ ls -lSh *_go*
-rw-r--r-- 1 flavorjones flavorjones 60M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.4.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 60M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.4.1.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 54M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.2.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 54M 2015-05-04 12:36
http___go.googlecode.com_files_go1.2.1.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 51M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.3.3.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 51M 2015-05-04 12:36
https___storage.googleapis.com_golang_go1.3.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 40M 2015-05-04 12:36
http___go.googlecode.com_files_go1.1.2.linux-amd64.tar.gz
-rw-r--r-- 1 flavorjones flavorjones 40M 2015-05-04 12:36
http___go.googlecode.com_files_go1.1.1.linux-amd64.tar.gz
```

One question we should ask, I think, is: should we still be supporting
golang 1.1 and 1.2? Dropping those versions would cut the size of the
buildpack in (approximately) half.





On May 1, 2015, at 11:54 , Mike Dalessio <mdalessio(a)pivotal.io> wrote:

Skinny buildpacks have been cut for go, nodejs, php, python and ruby
buildpacks.

| buildpack | current | previous |
|-----------+---------+----------|
| go        | 442MB   | 633MB    |
| nodejs    | 69MB    | 417MB    |
| php       | 804MB   | 1.1GB    |
| python    | 454MB   | 654MB    |
| ruby      | 365MB   | 1.3GB    |
|-----------+---------+----------|
| total     | 2.1GB   | 4.1GB    |

for an aggregate 51% reduction in size. Details follow.
Next Steps

I recognize that every cloud operator may have a different policy on
what versions of interpreters and libraries they want to support, based on
the specific requirements of their users.

These buildpacks reflect a "bare minimum" policy for a cloud to be
operable, and I do not expect these buildpacks to be adopted as-is by many
operators.

These buildpacks have not yet been added to cf-release, specifically
so that the community can prepare their own buildpacks if necessary.

Over the next few days, the buildpacks core team will ship
documentation and tooling to assist you in packaging specific dependencies
for your instance of CF. I'll start a new thread on this list early next
week to communicate this information.
Call to Action

In the meantime, please think about whether the policy implemented in
these buildpacks ("last two patches (or teenies) on all supported
major.minor releases") is suitable for your users; and if not, think about
what dependencies you'll ideally be supporting.
go-buildpack v1.3.0

Release notes are here
<https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.3.0>.

Size reduced 30% from 633MB
<https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.2.0> to
442MB
<https://github.com/cloudfoundry/go-buildpack/releases/tag/v1.3.0>.

Supports (full manifest here
<https://github.com/cloudfoundry/go-buildpack/blob/v1.3.0/manifest.yml>
):

- golang 1.4.{1,2}
- golang 1.3.{2,3}
- golang 1.2.{1,2}
- golang 1.1.{1,2}

nodejs-buildpack v1.3.0

Full release notes are here
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.3.0>.

Size reduced 83% from 417MB
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.2.1>
to 69MB
<https://github.com/cloudfoundry/nodejs-buildpack/releases/tag/v1.3.0>.

Supports (full manifest here
<https://github.com/cloudfoundry/nodejs-buildpack/blob/v1.3.0/manifest.yml>
):

- 0.8.{27,28}
- 0.9.{11,12}
- 0.10.{37,38}
- 0.11.{15,16}
- 0.12.{1,2}

php-buildpack v3.2.0

Full release notes are here
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.2.0>.

Size reduced 27% from 1.1GB
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.1.1> to
803MB
<https://github.com/cloudfoundry/php-buildpack/releases/tag/v3.2.0>.

Supports: (full manifest here
<https://github.com/cloudfoundry/php-buildpack/blob/v3.2.0/manifest.yml>
)

*PHP*:

- 5.6.{6,7}
- 5.5.{22,23}
- 5.4.{38,39}

*HHVM* (lucid64 stack):

- 3.2.0

*HHVM* (cflinuxfs2 stack):

- 3.5.{0,1}
- 3.6.{0,1}

*Apache HTTPD*:

- 2.4.12

*nginx*:

- 1.7.10
- 1.6.2
- 1.5.13

python-buildpack v1.3.0

Full release notes are here
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.3.0>.

Size reduced 30% from 654MB
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.2.0>
to 454MB
<https://github.com/cloudfoundry/python-buildpack/releases/tag/v1.3.0>.

Supports: (full manifest here
<https://github.com/cloudfoundry/python-buildpack/blob/v1.3.0/manifest.yml>
)

- 2.7.{8,9}
- 3.2.{4,5}
- 3.3.{5,6}
- 3.4.{2,3}

ruby-buildpack v1.4.0

Release notes are here
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.4.0>.

Size reduced 71% from 1.3GB
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.3.1>
to 365MB
<https://github.com/cloudfoundry/ruby-buildpack/releases/tag/v1.4.0>.

Supports: (full manifest here
<https://github.com/cloudfoundry/ruby-buildpack/blob/v1.4.0/manifest.yml>
)

*MRI*:

- 2.2.{1,2}
- 2.1.{5,6}
- 2.0.0p645

*JRuby*:

- ruby-1.9.3-jruby-1.7.19
- ruby-2.0.0-jruby-1.7.19
- ruby-2.2.0-jruby-9.0.0.0.pre1


---------- Forwarded message ----------
From: Mike Dalessio <mdalessio(a)pivotal.io>
Date: Wed, Apr 8, 2015 at 11:10 AM
Subject: Addressing buildpack size
To: vcap-dev(a)cloudfoundry.org


Hello vcap-dev!

This email details a proposed change to how Cloud Foundry buildpacks
are packaged, with respect to the ever-increasing number of binary
dependencies being cached within them.

This proposal's permanent residence is here:

https://github.com/cloudfoundry-incubator/buildpack-packager/issues/4

Feel free to comment there or reply to this email.
------------------------------
Buildpack Sizes

Where we are today

Many of you have seen, and possibly been challenged by, the enormous
sizes of some of the buildpacks that are currently shipping with cf-release.

Here's the state of the world right now, as of v205:

php-buildpack: 1.1G
ruby-buildpack: 922M
go-buildpack: 675M
python-buildpack: 654M
nodejs-buildpack: 403M
----------------------
total: 3.7G

These enormous sizes are the result of the current policy of packaging
every-version-of-everything-ever-supported ("EVOEES") within the buildpack.

Most recently, this problem was exacerbated by the fact that buildpacks
now contain binaries for two rootfses.
Why this is a problem

If continued, buildpacks will only continue to increase in size,
leading to longer and longer build and deploy times, longer test times,
slacker feedback loops, and therefore less frequent buildpack releases.

Additionally, this also means that we're shipping versions of
interpreters, web servers, and libraries that are deprecated, insecure, or
both. Feedback from CF users has made it clear that many companies view
this as an unnecessary security risk.

This policy is clearly unsustainable.
What we can do about it

There are many things being discussed to ameliorate the impact that
buildpack size is having on the operations of CF.

Notably, Onsi has proposed a change to buildpack caching, to improve
Diego staging times (link to proposal
<https://github.com/pivotal-cf-experimental/diego-dev-notes/blob/master/proposals/better-buildpack-caching.md>
).

However, there is an immediate solution available, which addresses both
the size concerns as well as the security concern: packaging fewer binary
dependencies within the buildpack.
The proposal

I'm proposing that we reduce the binary dependencies in each buildpack
in a very specific way.

Aside on terms I'll use below:

- Versions of the form "1.2.3" are broken down as:
MAJOR.MINOR.TEENY. Many language ecosystems refer to the "TEENY" as "PATCH"
interchangeably, but we're going to use "TEENY" in this proposal.
- We'll assume that TEENY gets bumped for API/ABI compatible
changes.
- We'll assume that MINOR and MAJOR get bumped when there are
API/ABI *incompatible* changes.

I'd like to move forward soon with the following changes:

1. For language interpreters/compilers, we'll package the two
most-recent TEENY versions on each MAJOR.MINOR release.
2. For all other dependencies, we'll package only the single
most-recent TEENY version on each MAJOR.MINOR release.
3. We will discontinue packaging versions of dependencies that have
been deprecated.
4. We will no longer provide "EVOEES" buildpack releases.
5. We will no longer provide "online" buildpack releases, which
download dependencies from the public internet.
6. We will document the process, and provide tooling, for CF
operators to build their own buildpacks, choosing the dependencies that
their organization wants to support or creating "online" buildpacks at
operators' discretion.

An example for #1 is that we'll go from packaging 34 versions of node v0.10.x
to only packaging two: 0.10.37 and 0.10.38.

An example for #2 is that we'll go from packaging 3 versions of nginx 1.5
in the PHP buildpack to only packaging one: 1.5.12.

An example for #3 is that we'll discontinue packaging ruby 1.9.3 in the
ruby-buildpack, which reached end-of-life in February 2015.
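
To make the retention rule concrete, the selection logic could be sketched
roughly as below (illustrative only; the real packaging is driven by each
buildpack's manifest.yml, and non-numeric teenies such as "0p645" are simply
skipped here):

```
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// keepLatestTeenies returns, for each MAJOR.MINOR line, the `keep` highest
// TEENY versions from the input (keep=2 for interpreters, keep=1 otherwise
// under the policy described above). Non-numeric teenies are skipped.
func keepLatestTeenies(versions []string, keep int) []string {
	byLine := map[string][]int{}
	for _, v := range versions {
		parts := strings.SplitN(v, ".", 3)
		if len(parts) != 3 {
			continue
		}
		teeny, err := strconv.Atoi(parts[2])
		if err != nil {
			continue
		}
		line := parts[0] + "." + parts[1]
		byLine[line] = append(byLine[line], teeny)
	}
	var kept []string
	for line, teenies := range byLine {
		sort.Sort(sort.Reverse(sort.IntSlice(teenies)))
		for i := 0; i < keep && i < len(teenies); i++ {
			kept = append(kept, fmt.Sprintf("%s.%d", line, teenies[i]))
		}
	}
	sort.Strings(kept)
	return kept
}

func main() {
	node := []string{"0.10.35", "0.10.36", "0.10.37", "0.10.38"}
	fmt.Println(keepLatestTeenies(node, 2)) // [0.10.37 0.10.38]
}
```
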
Outcomes

With these changes, the total buildpack size will be reduced greatly.
As an example, we expect the ruby-buildpack size to go from 922M to 338M.

We also want to set the expectation that, as new interpreter versions
are released, either for new features or (more urgently) for security
fixes, we'll release new buildpacks much more quickly than we do today. My
hope is that we'll be able to do it within 24 hours of a new release.
Planning

These changes will be relatively easy to make, since all the buildpacks
are now using a manifest.yml file to declare what's being packaged. We
expect to be able to complete this work within the next two weeks.

Stories are in the Tracker backlog under the Epic named
"skinny-buildpacks", which you can see here:

https://www.pivotaltracker.com/epic/show/1747328

------------------------------

Please let me know how these changes will impact you and your
organizations, and let me know of any counter-proposals or variations you'd
like to consider.

Thanks,

-mike








Runtime PMC: 2015-05-19 Notes

Eric Malm <emalm@...>
 

Hi, all,

The Runtime PMC met on Tuesday, 2015-05-19. Permanent notes are available
at:

https://github.com/cloudfoundry/pmc-notes/blob/master/Runtime/2015-05-19-runtime.md

and are included below.

Best,
Eric

---

*# Runtime PMC Meeting 2015-05-19*

*## Agenda*

1. Current Backlog and Priorities
2. PMC Lifecycle Activities
3. Open Discussion


*## Attendees*

* Chip Childers, Cloud Foundry Foundation
* Matt Sykes, IBM
* Atul Kshirsagar, GE
* Erik Jasiak, Pivotal
* Sree Tummidi, Pivotal
* Eric Malm, Pivotal
* Shannon Coen, Pivotal
* Will Pragnell, Pivotal
* Marco Nicosia, Pivotal


*## Current Backlog and Priorities*

*### Runtime*

* Shannon filling in for Dieu this week
* support for context-based routing; delivered
* investigating query performance
* addressing outstanding pull requests
* bump to UAA
* issues with loggregator in acceptance environment, blocker to cutting
stabilization release for collector


*### Diego*

* ssh access largely done, currently working on routing ssh traffic to the proxy
* performance breadth: completed 50 cell test, investigating bulk
processing in jobs that do so
* refining CI to improve recording compatible versions of Diego and CF
* processing of PRs from Garden and Lattice are prioritized
* Stories queued up to investigate securing identified gaps in Diego


*### UAA*

* 2.2.6, 2.3.0 releases, notes available
* upgraded Spring versions
* update to JRE expected in v210 of cf-release
* more LDAP work, chaining in identity zone: both LDAP and internal
authentication can work simultaneously
* support for New Relic instrumentation, will appear after v209
* upcoming:
* risk assessment of persistent token storage: understand performance
implications
* starting work on password policy: multi-tenant for default zone and
additional zones
* OAuth client groups: authorization to manage clients
* SAML support
* question from Matt Sykes:
* would like to discuss IBM PR for UAA DB migration strategy with the team


*### Garden*

* investigating management of disk quotas
* replacing C/Bash code with Go to enable instrumentation, security, and
maintainability
* planning to remove default VCAP user in Garden


*### Lattice*

* nearly done with last stories before releasing 0.2.5
* Cisco contributed openstack support
* baking deployment automation into published images on some providers
* improved documentation for how to install lattice on VMs
* next work planned is support for CF-like app lifecycle management
(pushing code in addition to docker)


*### TCP Router*

* building out icebox to reflect inception
* question from Matt Sykes:
* how to incorporate new project into PMC? IBM parties surprised with
announcement at Summit
* Chip: inconsistent policy so far; maybe this belongs alongside gorouter
in Runtime PMC
* working on process for review, discussion of incubating project
* Shannon: first step will be to produce proposal, discuss with community


*### LAMB*

* big rewind project on datadog firehose nozzle: limitation in doppler
about size of messages, dropping messages
* working to resolve those problems: improving number of concurrent reads,
marshaling efficiency
* seeing increases in message loss in Runtime environments: may be other
source of contention, working with them to resolve
* Datadog nozzle work:
* looking at developing a Graphite nozzle from community work
* will investigate community interest in Graphite support
* naming alignment from loggregator to doppler
* instrumentation of statsd for larger message sizes, work to phase out
collector and NATS in CF
* goal is to stream metrics directly to firehose
* question from Matt Sykes: story about protobuf protocol proposal
* best way to support vm tagging in log messages: distinguish between types
of data in log messages
* goal would be to improve the implementation: more generic API for message
data; understand implications of this change


*### Greenhouse*

* Accepted code from HP
* will get support from Microsoft with regard to interest in entire
Microsoft stack


*## PMC Lifecycle Activities*

None to report.

*## Open Discussion*

None to report.
