Re: List Reply-To behavior

Chip Childers <cchilders@...>
 

I've asked the admin team to make this adjustment. Thanks for pointing this
out!

Chip Childers | Technology Chief of Staff | Cloud Foundry Foundation

On Fri, May 22, 2015 at 10:06 AM, James Bayer <jbayer(a)pivotal.io> wrote:

yes, this has affected me

On Fri, May 22, 2015 at 4:33 AM, Daniel Mikusa <dmikusa(a)pivotal.io> wrote:



On Fri, May 22, 2015 at 6:22 AM, Matthew Sykes <matthew.sykes(a)gmail.com>
wrote:

The vcap-dev list used to use a Reply-To header pointing back to the
list such that replying to a post would automatically go back to the list.
The current mailman configuration for cf-dev does not set a Reply-To header
and the default behavior is to reply to the author.

While I understand the pros and cons of setting the Reply-To header,
this new behavior has bitten me several times: I've found myself having to
re-post a response to the list after it initially went only to the author.

I'm interested in knowing if anyone else has been bitten by this
behavior and would like a Reply-To header added back...
+1 and +1

Dan



Thanks.

--
Matthew Sykes
matthew.sykes(a)gmail.com



--
Thank you,

James Bayer



Re: CVE-2015-1834 CC Path Traversal vulnerability

Noburou TANIGUCHI
 

Thank you for the quick response, Dieu!





Re: scheduler

Corentin Dupont <cdupont@...>
 

Hi James, thanks for the answer!
We are interested in implementing a job scheduler for CF. Do you think it
would be interesting to have?

We are working on a project called DC4Cities (http://www.dc4cities.eu) where
the objective is to make data centres use more renewable energy.
We want to use PaaS frameworks such as Cloud Foundry to achieve this goal.
The idea is to schedule some PaaS tasks for the moments when more renewable
energy is available (when the sun is shining).

That's why I had the idea to implement a job scheduler for batch jobs in
CF. For example, one could state "I need this task to run for 2 hours per
day" and the scheduler could choose when to run it.

Another possibility is to have application-oriented SLAs implemented at the
CF level. For example, if some KPIs of the application are getting too low,
CF would spawn a new container. If the SLA is defined with some flexibility,
it could also be used to schedule around renewable energy availability. For
example, in our trial scenarios we have an application that converts images.
Its SLA says that it needs to convert 1000 images per day, but it is free to
produce them whenever it wants, i.e. when renewable energy is available...

On Mon, May 25, 2015 at 7:29 PM, James Bayer <jbayer(a)pivotal.io> wrote:

there is ongoing work to support process types using buildpacks, so that
the same application codebase could be used for multiple different types of
processes (web, worker, etc).

once process types and diego tasks are fully available, we expect to
implement a user-facing api for running batch jobs as application processes.

what people do today is run a long-running process application which uses
something like the quartz scheduler [1], or a ruby clock process with a
worker system like resque [2].

[1] http://quartz-scheduler.org/
[2] https://github.com/resque/resque-scheduler

On Mon, May 25, 2015 at 6:19 AM, Corentin Dupont <cdupont(a)create-net.org>
wrote:

To complete my request, I'm thinking of something like this in the
manifest.yml:

applications:
- name: virusscan
  memory: 512M
  instances: 1
  schedule:
  - startFrom: a date
    endBefore: a date
    walltime: a duration
    precedence: other application name
    moldable: true/false

What do you think?

On Mon, May 25, 2015 at 11:25 AM, Corentin Dupont <cdupont(a)create-net.org> wrote:

---------- Forwarded message ----------
From: Corentin Dupont <corentin.dupont(a)create-net.org>
Date: Mon, May 25, 2015 at 11:21 AM
Subject: scheduler
To: cf-dev(a)lists.cloudfoundry.org


Hi guys,
just wondering: is there a project to add a job scheduler to Cloud
Foundry?
I'm thinking of something like the Heroku scheduler (
https://devcenter.heroku.com/articles/scheduler).
It would be very neat to have regular tasks triggered automatically...
Thanks,
Corentin


--

Corentin Dupont
Researcher @ Create-Net
www.corentindupont.info




--
Thank you,

James Bayer


Re: CVE-2015-1834 CC Path Traversal vulnerability

Dieu Cao <dcao@...>
 

Yes, that's the correct commit to cherry-pick for the CC path traversal
vulnerability.

-Dieu
CF Runtime PM

On Tue, May 26, 2015 at 12:30 AM, nota-ja <dev(a)nota.m001.jp> wrote:

I understand the CFF strongly recommends upgrading to v208 or later, but for
those (including us) who cannot immediately upgrade, I want to know if there
is a workaround against this vulnerability.

I've found a commit which seems related to this vulnerability:

https://github.com/cloudfoundry/cloud_controller_ng/commit/5257a8af6990e71cd1e34ae8978dfe4773b32826

Would cherry-picking this commit be a workaround? Or do we need other
commits to cherry-pick?

Thanks in advance.







Re: CVE-2015-1834 CC Path Traversal vulnerability

Noburou TANIGUCHI
 

I understand the CFF strongly recommends upgrading to v208 or later, but for
those (including us) who cannot immediately upgrade, I want to know if there
is a workaround against this vulnerability.

I've found a commit which seems related to this vulnerability:
https://github.com/cloudfoundry/cloud_controller_ng/commit/5257a8af6990e71cd1e34ae8978dfe4773b32826

Would cherry-picking this commit be a workaround? Or do we need other
commits to cherry-pick?

Thanks in advance.







Re: Doppler zoning query

Erik Jasiak <ejasiak@...>
 

Hi John,

I'll be working on this with engineering in the morning; thanks for the
details thus far.

This is puzzling: Metrons do not route traffic to dopplers outside
their zone today. If all your app instances are spread evenly, and all are
serving an equal number of requests, then I would expect no
major variability in Doppler load either.

For completeness, what version of CF are you running? I assume your
configurations for all dopplers are roughly the same? All app instances per
AZ are serving an equal number of requests?

Thanks,
Erik Jasiak

On Monday, May 25, 2015, john mcteague <john.mcteague(a)gmail.com> wrote:

Correct, thanks.

On Mon, May 25, 2015 at 12:01 AM, James Bayer <jbayer(a)pivotal.io> wrote:

ok thanks for the extra detail.

to confirm, during the load test, the http traffic is being routed
through zones 4 and 5 app instances on DEAs in a balanced way. however the
dopplers associated with zone 4 / 5 are getting a very small amount of load
sent their way. is that right?


On Sun, May 24, 2015 at 3:45 PM, john mcteague <john.mcteague(a)gmail.com> wrote:

I am seeing logs from zones 4 and 5 when tailing the logs (cf logs
hello-world | grep App | awk '{ print $2 }'), and I see a relatively even
balance between all app instances, yet the dopplers in zones 1-3 consume far
greater CPU resources (15x in some cases) than those in zones 4 and 5.
Generally zones 4 and 5 barely get above 1% utilization.

Running cf curl /v2/apps/guid/stats | grep host | sort shows 30 instances,
6 in each zone, a perfect balance.

Each loggregator is running with 8GB RAM and 4 vCPUs.


John

On Sat, May 23, 2015 at 11:56 PM, James Bayer <jbayer(a)pivotal.io> wrote:

john,

can you say more about "receiving no load at all"? for example, if you
restart one of the app instances in zone 4 or zone 5, do you see logs with
"cf logs"? you can target a single app instance index to get restarted using
a "cf curl" command for terminating an app index [1]. you can find the
details in the json output from "cf stats", which should show you the
private IPs of the DEAs hosting your app and help you figure out which zone
each app index is in.
[1] http://apidocs.cloudfoundry.org/209/apps/terminate_the_running_app_instance_at_the_given_index.html

if you are seeing logs from zone 4 and zone 5, then what might be
happening is that for some reason DEAs in zone 4 or zone 5 are not routable
somewhere along the path. reasons for that could be:
* DEAs in Zone 4 / Zone 5 not getting apps that are hosted there listed
in the routing table
* The routing table may be correct, but for some reason the routers cannot
reach DEAs in zone 4 or zone 5 with outbound traffic, and the routers fail
over to instances on DEAs 1-3 that they can reach
* some other mystery

On Fri, May 22, 2015 at 2:06 PM, john mcteague <john.mcteague(a)gmail.com> wrote:

We map our DEAs, dopplers and traffic controllers into 5 logical zones
using the various zone properties of doppler, metron_agent and
traffic_controller. This aligns with our physical failure domains in
OpenStack.

During a recent load test we discovered that zones 4 and 5 were receiving
no load at all; all traffic went to zones 1-3.

What would cause this unbalanced distribution? I have a single app running
30 instances and have verified it is evenly balanced across all 5 zones
(6 instances in each). I have additionally verified that each logical zone
in the bosh yml contains 1 DEA, doppler server and traffic controller.

Thanks,
John



--
Thank you,

James Bayer

--
Thank you,

James Bayer


Re: Doppler zoning query

john mcteague <john.mcteague@...>
 

Correct, thanks.

On Mon, May 25, 2015 at 12:01 AM, James Bayer <jbayer(a)pivotal.io> wrote:

ok thanks for the extra detail.

to confirm, during the load test, the http traffic is being routed through
zones 4 and 5 app instances on DEAs in a balanced way. however the dopplers
associated with zone 4 / 5 are getting a very small amount of load sent
their way. is that right?


On Sun, May 24, 2015 at 3:45 PM, john mcteague <john.mcteague(a)gmail.com>
wrote:

I am seeing logs from zones 4 and 5 when tailing the logs (cf logs
hello-world | grep App | awk '{ print $2 }'), and I see a relatively even
balance between all app instances, yet the dopplers in zones 1-3 consume far
greater CPU resources (15x in some cases) than those in zones 4 and 5.
Generally zones 4 and 5 barely get above 1% utilization.

Running cf curl /v2/apps/guid/stats | grep host | sort shows 30 instances,
6 in each zone, a perfect balance.

Each loggregator is running with 8GB RAM and 4 vCPUs.


John

On Sat, May 23, 2015 at 11:56 PM, James Bayer <jbayer(a)pivotal.io> wrote:

john,

can you say more about "receiving no load at all"? for example, if you
restart one of the app instances in zone 4 or zone 5, do you see logs with
"cf logs"? you can target a single app instance index to get restarted using
a "cf curl" command for terminating an app index [1]. you can find the
details in the json output from "cf stats", which should show you the
private IPs of the DEAs hosting your app and help you figure out which zone
each app index is in.
[1] http://apidocs.cloudfoundry.org/209/apps/terminate_the_running_app_instance_at_the_given_index.html

if you are seeing logs from zone 4 and zone 5, then what might be
happening is that for some reason DEAs in zone 4 or zone 5 are not routable
somewhere along the path. reasons for that could be:
* DEAs in Zone 4 / Zone 5 not getting apps that are hosted there listed
in the routing table
* The routing table may be correct, but for some reason the routers cannot
reach DEAs in zone 4 or zone 5 with outbound traffic, and the routers fail
over to instances on DEAs 1-3 that they can reach
* some other mystery

On Fri, May 22, 2015 at 2:06 PM, john mcteague <john.mcteague(a)gmail.com>
wrote:

We map our DEAs, dopplers and traffic controllers into 5 logical zones
using the various zone properties of doppler, metron_agent and
traffic_controller. This aligns with our physical failure domains in
OpenStack.

During a recent load test we discovered that zones 4 and 5 were receiving
no load at all; all traffic went to zones 1-3.

What would cause this unbalanced distribution? I have a single app running
30 instances and have verified it is evenly balanced across all 5 zones
(6 instances in each). I have additionally verified that each logical zone
in the bosh yml contains 1 DEA, doppler server and traffic controller.

Thanks,
John



--
Thank you,

James Bayer

--
Thank you,

James Bayer


Re: scheduler

James Bayer
 

there is ongoing work to support process types using buildpacks, so that
the same application codebase could be used for multiple different types of
processes (web, worker, etc).

once process types and diego tasks are fully available, we expect to
implement a user-facing api for running batch jobs as application processes.

what people do today is run a long-running process application which uses
something like the quartz scheduler [1], or a ruby clock process with a
worker system like resque [2].

[1] http://quartz-scheduler.org/
[2] https://github.com/resque/resque-scheduler
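
as a rough illustration of that last approach (a sketch, not a CF feature):
a long-running worker app can carry a schedule for resque-scheduler, which
enqueues jobs for resque workers. the gems are real; ConvertImagesJob, the
queue name and the cron expression below are invented placeholders, and
resque additionally needs a bound redis instance to work.

    # clock.rb - sketch of a schedule definition loaded by resque-scheduler
    # (typically run via `rake resque:scheduler`, with separate
    # `rake resque:work QUEUE=batch` worker processes).
    require 'resque'
    require 'resque-scheduler'

    # Hypothetical job class; resque calls ConvertImagesJob.perform for each run.
    class ConvertImagesJob
      @queue = :batch
      def self.perform
        # convert a batch of images here
      end
    end

    Resque.schedule = {
      'convert_images' => {
        'cron'  => '0 2 * * *',        # run nightly at 02:00
        'class' => 'ConvertImagesJob',
        'queue' => 'batch'
      }
    }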

On Mon, May 25, 2015 at 6:19 AM, Corentin Dupont <cdupont(a)create-net.org>
wrote:

To complete my request, I'm thinking of something like this in the
manifest.yml:

applications:
- name: virusscan
  memory: 512M
  instances: 1
  schedule:
  - startFrom: a date
    endBefore: a date
    walltime: a duration
    precedence: other application name
    moldable: true/false

What do you think?

On Mon, May 25, 2015 at 11:25 AM, Corentin Dupont <cdupont(a)create-net.org>
wrote:


---------- Forwarded message ----------
From: Corentin Dupont <corentin.dupont(a)create-net.org>
Date: Mon, May 25, 2015 at 11:21 AM
Subject: scheduler
To: cf-dev(a)lists.cloudfoundry.org


Hi guys,
just wondering: is there a project to add a job scheduler to Cloud Foundry?
I'm thinking of something like the Heroku scheduler (
https://devcenter.heroku.com/articles/scheduler).
It would be very neat to have regular tasks triggered automatically...
Thanks,
Corentin


--

Corentin Dupont
Researcher @ Create-Net
www.corentindupont.info




--
Thank you,

James Bayer


Re: scheduler

Corentin Dupont <cdupont@...>
 

To complete my request, I'm thinking of something like this in the
manifest.yml:

applications:
- name: virusscan
  memory: 512M
  instances: 1
  schedule:
  - startFrom: a date
    endBefore: a date
    walltime: a duration
    precedence: other application name
    moldable: true/false

What do you think?

On Mon, May 25, 2015 at 11:25 AM, Corentin Dupont <cdupont(a)create-net.org>
wrote:


---------- Forwarded message ----------
From: Corentin Dupont <corentin.dupont(a)create-net.org>
Date: Mon, May 25, 2015 at 11:21 AM
Subject: scheduler
To: cf-dev(a)lists.cloudfoundry.org


Hi guys,
just wondering: is there a project to add a job scheduler to Cloud Foundry?
I'm thinking of something like the Heroku scheduler (
https://devcenter.heroku.com/articles/scheduler).
It would be very neat to have regular tasks triggered automatically...
Thanks,
Corentin


--

Corentin Dupont
Researcher @ Create-Net
www.corentindupont.info



scheduler

Corentin Dupont <corentin.dupont@...>
 

Hi guys,
just wondering: is there a project to add a job scheduler to Cloud Foundry?
I'm thinking of something like the Heroku scheduler (
https://devcenter.heroku.com/articles/scheduler).
It would be very neat to have regular tasks triggered automatically...
Thanks,
Corentin


--

Corentin Dupont
Researcher @ Create-Net
www.corentindupont.info


Re: Doppler zoning query

James Bayer
 

ok thanks for the extra detail.

to confirm, during the load test, the http traffic is being routed through
zones 4 and 5 app instances on DEAs in a balanced way. however the dopplers
associated with zone 4 / 5 are getting a very small amount of load sent
their way. is that right?

On Sun, May 24, 2015 at 3:45 PM, john mcteague <john.mcteague(a)gmail.com>
wrote:

I am seeing logs from zones 4 and 5 when tailing the logs (cf logs
hello-world | grep App | awk '{ print $2 }'), and I see a relatively even
balance between all app instances, yet the dopplers in zones 1-3 consume far
greater CPU resources (15x in some cases) than those in zones 4 and 5.
Generally zones 4 and 5 barely get above 1% utilization.

Running cf curl /v2/apps/guid/stats | grep host | sort shows 30 instances,
6 in each zone, a perfect balance.

Each loggregator is running with 8GB RAM and 4 vCPUs.


John

On Sat, May 23, 2015 at 11:56 PM, James Bayer <jbayer(a)pivotal.io> wrote:

john,

can you say more about "receiving no load at all"? for example, if you
restart one of the app instances in zone 4 or zone 5, do you see logs with
"cf logs"? you can target a single app instance index to get restarted using
a "cf curl" command for terminating an app index [1]. you can find the
details in the json output from "cf stats", which should show you the
private IPs of the DEAs hosting your app and help you figure out which zone
each app index is in.
[1] http://apidocs.cloudfoundry.org/209/apps/terminate_the_running_app_instance_at_the_given_index.html

if you are seeing logs from zone 4 and zone 5, then what might be
happening is that for some reason DEAs in zone 4 or zone 5 are not routable
somewhere along the path. reasons for that could be:
* DEAs in Zone 4 / Zone 5 not getting apps that are hosted there listed
in the routing table
* The routing table may be correct, but for some reason the routers cannot
reach DEAs in zone 4 or zone 5 with outbound traffic, and the routers fail
over to instances on DEAs 1-3 that they can reach
* some other mystery

On Fri, May 22, 2015 at 2:06 PM, john mcteague <john.mcteague(a)gmail.com>
wrote:

We map our DEAs, dopplers and traffic controllers into 5 logical zones
using the various zone properties of doppler, metron_agent and
traffic_controller. This aligns with our physical failure domains in
OpenStack.

During a recent load test we discovered that zones 4 and 5 were receiving
no load at all; all traffic went to zones 1-3.

What would cause this unbalanced distribution? I have a single app running
30 instances and have verified it is evenly balanced across all 5 zones
(6 instances in each). I have additionally verified that each logical zone
in the bosh yml contains 1 DEA, doppler server and traffic controller.

Thanks,
John



--
Thank you,

James Bayer

--
Thank you,

James Bayer


Re: Doppler zoning query

john mcteague <john.mcteague@...>
 

I am seeing logs from zones 4 and 5 when tailing the logs (cf logs
hello-world | grep App | awk '{ print $2 }'), and I see a relatively even
balance between all app instances, yet the dopplers in zones 1-3 consume far
greater CPU resources (15x in some cases) than those in zones 4 and 5.
Generally zones 4 and 5 barely get above 1% utilization.

Running cf curl /v2/apps/guid/stats | grep host | sort shows 30 instances,
6 in each zone, a perfect balance.

Each loggregator is running with 8GB RAM and 4 vCPUs.


John

On Sat, May 23, 2015 at 11:56 PM, James Bayer <jbayer(a)pivotal.io> wrote:

john,

can you say more about "receiving no load at all"? for example, if you
restart one of the app instances in zone 4 or zone 5, do you see logs with
"cf logs"? you can target a single app instance index to get restarted using
a "cf curl" command for terminating an app index [1]. you can find the
details in the json output from "cf stats", which should show you the
private IPs of the DEAs hosting your app and help you figure out which zone
each app index is in.
[1] http://apidocs.cloudfoundry.org/209/apps/terminate_the_running_app_instance_at_the_given_index.html

if you are seeing logs from zone 4 and zone 5, then what might be
happening is that for some reason DEAs in zone 4 or zone 5 are not routable
somewhere along the path. reasons for that could be:
* DEAs in Zone 4 / Zone 5 not getting apps that are hosted there listed in
the routing table
* The routing table may be correct, but for some reason the routers cannot
reach DEAs in zone 4 or zone 5 with outbound traffic, and the routers fail
over to instances on DEAs 1-3 that they can reach
* some other mystery

On Fri, May 22, 2015 at 2:06 PM, john mcteague <john.mcteague(a)gmail.com>
wrote:

We map our DEAs, dopplers and traffic controllers into 5 logical zones
using the various zone properties of doppler, metron_agent and
traffic_controller. This aligns with our physical failure domains in
OpenStack.

During a recent load test we discovered that zones 4 and 5 were receiving
no load at all; all traffic went to zones 1-3.

What would cause this unbalanced distribution? I have a single app running
30 instances and have verified it is evenly balanced across all 5 zones
(6 instances in each). I have additionally verified that each logical zone
in the bosh yml contains 1 DEA, doppler server and traffic controller.

Thanks,
John



--
Thank you,

James Bayer


Re: Question about services on Cloud Foundry

James Bayer
 

it simply means that there is a Service Broker, and it works in conjunction
with the "marketplace", so commands like "cf marketplace", "cf
create-service", "cf bind-service" and related ones all work with the
service. user-provided services don't show up in the marketplace-related
commands and they don't have service plans, but they still work with
bind/unbind.
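
to make "there is a Service Broker" a bit more concrete, here is a rough
sketch (not the implementation of any existing broker) of the v2 catalog
endpoint a broker exposes; this is what populates "cf marketplace". the
service and plan names and ids below are invented:

    # broker.rb - minimal sketch of a service broker catalog endpoint.
    # A managed service is one backed by a broker like this; user-provided
    # services skip the broker entirely, so they have no plans and never
    # appear in the marketplace, but binding works the same way.
    require 'sinatra'
    require 'json'

    get '/v2/catalog' do
      content_type :json
      {
        services: [{
          id:          'example-service-id',      # invented identifiers
          name:        'example-db',
          description: 'An example managed service',
          bindable:    true,
          plans: [{
            id:          'example-plan-id',
            name:        'small',
            description: 'A small example plan'
          }]
        }]
      }.to_json
    end

a real broker also implements the provision, bind, unbind and deprovision
endpoints described in the Service Broker API documentation.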

On Fri, May 22, 2015 at 7:44 AM, Kinjal Doshi <kindoshi(a)gmail.com> wrote:

Hi,

From the architecture point of view I understand that there are no services
explicitly associated with CF.

However, the following doc is very confusing:
http://docs.cloudfoundry.org/devguide/services/managed.html

It would be great if someone could explain the meaning of managed services
here.

Thanks,
Kinjal


--
Thank you,

James Bayer


Delivery Status Notification (Failure)

Frank Li <alivedata@...>
 

Hi,

When I run 'bosh deploy', I get the error "Error 400007: `uaa_z1/0' is not
running after update":

Started preparing configuration > Binding configuration. Done (00:00:04)

Started updating job ha_proxy_z1 > ha_proxy_z1/0. Done (00:00:13)
Started updating job nats_z1 > nats_z1/0. Done (00:00:27)
Started updating job etcd_z1 > etcd_z1/0. Done (00:00:14)
Started updating job postgres_z1 > postgres_z1/0. Done (00:00:22)
Started updating job uaa_z1 > uaa_z1/0. Failed: `uaa_z1/0' is not running
after update (00:04:02)

Error 400007: `uaa_z1/0' is not running after update





bosh task 132 --debug

*I, [2015-05-22 03:58:56 #2299] [instance_update(uaa_z1/0)] INFO --
DirectorJobRunner: Waiting for 19.88888888888889 seconds to check uaa_z1/0
status*
*D, [2015-05-22 03:58:56 #2299] [] DEBUG -- DirectorJobRunner: Renewing
lock: lock:deployment:cf-warden*
*D, [2015-05-22 03:59:01 #2299] [] DEBUG -- DirectorJobRunner: Renewing
lock: lock:deployment:cf-warden*
*D, [2015-05-22 03:59:06 #2299] [] DEBUG -- DirectorJobRunner: Renewing
lock: lock:deployment:cf-warden*
*D, [2015-05-22 03:59:11 #2299] [] DEBUG -- DirectorJobRunner: Renewing
lock: lock:deployment:cf-warden*
*I, [2015-05-22 03:59:15 #2299] [instance_update(uaa_z1/0)] INFO --
DirectorJobRunner: Checking if uaa_z1/0 has been updated after
19.88888888888889 seconds*
*D, [2015-05-22 03:59:15 #2299] [instance_update(uaa_z1/0)] DEBUG --
DirectorJobRunner: SENT: agent.04446a2b-a103-4a33-9bbe-d8b07d2c6466
{"method":"get_state","arguments":[],"reply_to":"director.2052649d-bafc-4d7a-8184-caa0373ec71f.55816c88-fea4-45cb-a7a9-13d7579b459a"}*
*D, [2015-05-22 03:59:15 #2299] [] DEBUG -- DirectorJobRunner: RECEIVED:
director.2052649d-bafc-4d7a-8184-caa0373ec71f.55816c88-fea4-45cb-a7a9-13d7579b459a
{"value":{"properties":{"logging":{"max_log_file_size":""}},"job":{"name":"uaa_z1","release":"","template":"uaa","version":"e3278da4c650f21c13cfa935814233bc79f156f0","sha1":"c8f3ee66bd955a58f95dbb7c02ca008c5e91ab6a","blobstore_id":"00e2df47-e90f-414d-8965-f97e1ec81b24","templates":[{"name":"uaa","version":"e3278da4c650f21c13cfa935814233bc79f156f0","sha1":"c8f3ee66bd955a58f95dbb7c02ca008c5e91ab6a","blobstore_id":"00e2df47-e90f-414d-8965-f97e1ec81b24"},{"name":"metron_agent","version":"51cf1a4f2e361bc2a2bbd1bee7fa324fe7029589","sha1":"50fccfa5198b0ccd6b39109ec5585f2502011da3","blobstore_id":"beac8dfd-57e9-45c0-8529-56e4c73154bc"},{"name":"consul_agent","version":"6a3b1fe7963fbcc3dea0eab7db337116ba062056","sha1":"54c6a956f7ee1c906e0f8e8aaac13a25584e7d3f","blobstore_id":"aee73914-cf03-4e7c-98a5-a1695cbc2cc5"}]},"packages":{"common":{"name":"common","version":"99c756b71550530632e393f5189220f170a69647.1","sha1":"6da06edd87b2d78e5e0e9848c26cdafe1b3a94eb","blobstore_id":"6783e7af-2366-4142-7199-ac487f359adb"},"consul":{"name":"consul","version":"d828a4735b02229631673bc9cb6aab8e2d56eda5.1","sha1":"15d541d6f0c8708b9af00f045d58d10951755ad6","blobstore_id":"a9256e97-0940-45dc-6003-77141979c976"},"metron_agent":{"name":"metron_agent","version":"122c9dea1f4be749d48bf1203ed0a407b5a2e1ff.1","sha1":"b8241c6482b03f0d010031e5e99cbae4a909ae05","blobstore_id":"8aa07a49-753a-4200-4cbb-cbb554034986"},"ruby-2.1.4":{"name":"ruby-2.1.4","version":"5a4612011cb6b8338d384acc7802367ae5e11003.1","sha1":"032f58346f55ad468c83e015997ff50091a76ef7","blobstore_id":"afaf9c7a-5633-40cc-7a7a-5d285a560b20"},"uaa":{"name":"uaa","version":"05b84acccba5cb31a170d9cad531d22ccb5df8a5.1","sha1":"ae0a7aa73132db192c2800d0094c607a41d56ddb","blobstore_id":"b474ea8d-5c66-4eea-4a7e-689a0cd0de63"}},"configuration_hash":"c1c40387ae387a29bb69124e3d9f741ee50f0d48","networks":{"cf1":{"cloud_properties":{"name":"random"},"default":["dns","gateway"],"dns_record_name":"0.uaa-z1.cf1.cf-warden.bosh","ip":"10.244.0.130","netmask":"255.255.255.252"}},"resource_pool":{"cloud_properties":{"name":"random"},"name":"medium_z1","stemcell":{"name":"bosh-warden-boshlite-ubuntu-lucid-go_agent","version":"64"}},"deployment":"cf-warden","index":0,"persistent_disk":0,"rendered_templates_archive":{"sha1":"2ebf29eac887fb88dab65aeb911a36403c41b1cb","blobstore_id":"38890fbc-f95e-44a9-9f19-859dc42ec381"},"agent_id":"04446a2b-a103-4a33-9bbe-d8b07d2c6466","bosh_protocol":"1","job_state":"failing","vm":{"name":"755410d0-6697-4505-754e-9521d23788ef"},"ntp":{"message":"file
missing"}}}*
*E, [2015-05-22 03:59:15 #2299] [instance_update(uaa_z1/0)] ERROR --
DirectorJobRunner: Error updating instance:
#<Bosh::Director::AgentJobNotRunning: `uaa_z1/0' is not running after
update>*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/instance_updater.rb:85:in
`update'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:94:in
`block (2 levels) in update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_formatter.rb:49:in
`with_thread_name'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:92:in
`block in update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in
`advance_and_track'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:91:in
`update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:85:in
`block (2 levels) in update_instances'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in
`block (2 levels) in create_thread'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in
`loop'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in
`block in create_thread'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in
`block in create_with_logging_context'*
*D, [2015-05-22 03:59:15 #2299] [] DEBUG -- DirectorJobRunner: Worker
thread raised exception: `uaa_z1/0' is not running after update -
/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/instance_updater.rb:85:in
`update'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:94:in
`block (2 levels) in update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_formatter.rb:49:in
`with_thread_name'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:92:in
`block in update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in
`advance_and_track'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:91:in
`update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:85:in
`block (2 levels) in update_instances'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in
`block (2 levels) in create_thread'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in
`loop'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in
`block in create_thread'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in
`block in create_with_logging_context'*
*D, [2015-05-22 03:59:16 #2299] [] DEBUG -- DirectorJobRunner: Thread is no
longer needed, cleaning up*
*D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner:
Shutting down pool*
*D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner:
(0.004399s) SELECT "stemcells".* FROM "stemcells" INNER JOIN
"deployments_stemcells" ON (("deployments_stemcells"."stemcell_id" =
"stemcells"."id") AND ("deployments_stemcells"."deployment_id" = 1))*
*D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner:
Deleting lock: lock:deployment:cf-warden*
*D, [2015-05-22 03:59:16 #2299] [] DEBUG -- DirectorJobRunner: Lock renewal
thread exiting*
*D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner:
Deleted lock: lock:deployment:cf-warden*
*I, [2015-05-22 03:59:16 #2299] [task:132] INFO -- DirectorJobRunner:
sending update deployment error event*
*D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner:
SENT: hm.director.alert
{"id":"7245631b-b6b3-43df-bd43-65b19e23f6ae","severity":3,"title":"director
- error during update deployment","summary":"Error during update deployment
for cf-warden against Director c6f166bd-ddac-4f7d-9c57-d11c6ad5133b:
#<Bosh::Director::AgentJobNotRunning: `uaa_z1/0' is not running after
update>","created_at":1432267156}*
*E, [2015-05-22 03:59:16 #2299] [task:132] ERROR -- DirectorJobRunner:
`uaa_z1/0' is not running after update*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/instance_updater.rb:85:in
`update'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:94:in
`block (2 levels) in update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_formatter.rb:49:in
`with_thread_name'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:92:in
`block in update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/event_log.rb:97:in
`advance_and_track'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:91:in
`update_instance'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh-director-1.2811.0/lib/bosh/director/job_updater.rb:85:in
`block (2 levels) in update_instances'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:77:in
`block (2 levels) in create_thread'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in
`loop'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/bosh_common-1.2811.0/lib/common/thread_pool.rb:63:in
`block in create_thread'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in
`call'*
*/var/vcap/packages/director/gem_home/ruby/2.1.0/gems/logging-1.8.2/lib/logging/diagnostic_context.rb:323:in
`block in create_with_logging_context'*
*D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner:
(0.000396s) BEGIN*
*D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner:
(0.001524s) UPDATE "tasks" SET "state" = 'error', "timestamp" = '2015-05-22
03:59:16.090280+0000', "description" = 'create deployment', "result" =
'`uaa_z1/0'' is not running after update', "output" =
'/var/vcap/store/director/tasks/132', "checkpoint_time" = '2015-05-22
03:58:52.002311+0000', "type" = 'update_deployment', "username" = 'admin'
WHERE ("id" = 132)*
*D, [2015-05-22 03:59:16 #2299] [task:132] DEBUG -- DirectorJobRunner:
(0.002034s) COMMIT*
*I, [2015-05-22 03:59:16 #2299] [] INFO -- DirectorJobRunner: Task took 5
minutes 55.32297424799998 seconds to process.*





The uaa section in cf-manifest.yml is as follows:

uaa:
  admin:
    client_secret: admin-secret
  authentication:
    policy:
      countFailuresWithinSeconds: null
      lockoutAfterFailures: null
      lockoutPeriodSeconds: null
  batch:
    password: batch-password
    username: batch-username
  catalina_opts: -Xmx192m -XX:MaxPermSize=128m
  cc:
    client_secret: cc-secret
  clients:
    app-direct:
      access-token-validity: 1209600
      authorities: app_direct_invoice.write
      authorized-grant-types: authorization_code,client_credentials,password,refresh_token,implicit
      override: true
      redirect-uri: https://console.10.244.0.34.xip.io
      refresh-token-validity: 1209600
      secret: app-direct-secret
    cc-service-dashboards:
      authorities: clients.read,clients.write,clients.admin
      authorized-grant-types: client_credentials
      scope: openid,cloud_controller_service_permissions.read
      secret: cc-broker-secret
    cloud_controller_username_lookup:
      authorities: scim.userids
      authorized-grant-types: client_credentials
      secret: cloud-controller-username-lookup-secret
    developer_console:
      access-token-validity: 1209600
      authorities: scim.write,scim.read,cloud_controller.read,cloud_controller.write,password.write,uaa.admin,uaa.resource,cloud_controller.admin,billing.admin
      authorized-grant-types: authorization_code,client_credentials
      override: true
      redirect-uri: https://console.10.244.0.34.xip.io/oauth/callback
      refresh-token-validity: 1209600
      scope: openid,cloud_controller.read,cloud_controller.write,password.write,console.admin,console.support
      secret: console-secret
    doppler:
      authorities: uaa.resource
      override: true
      secret: doppler-secret
    gorouter:
      authorities: clients.read,clients.write,clients.admin,route.admin,route.advertise
      authorized-grant-types: client_credentials,refresh_token
      scope: openid,cloud_controller_service_permissions.read
      secret: gorouter-secret
    login:
      authorities: oauth.login,scim.write,clients.read,notifications.write,critical_notifications.write,emails.write,scim.userids,password.write
      authorized-grant-types: authorization_code,client_credentials,refresh_token
      override: true
      redirect-uri: http://login.10.244.0.34.xip.io
      scope: openid,oauth.approvals
      secret: login-secret
    notifications:
      authorities: cloud_controller.admin,scim.read
      authorized-grant-types: client_credentials
      secret: notification-secret
  issuer: https://uaa.10.244.0.34.xip.io
  jwt:
    signing_key: |+
      -----BEGIN RSA PRIVATE KEY-----
      MIICXAIBAAKBgQDHFr+KICms+tuT1OXJwhCUmR2dKVy7psa8xzElSyzqx7oJyfJ1
      JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMXqHxf+ZH9BL1gk9Y6kCnbM5R6
      0gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBugspULZVNRxq7veq/fzwIDAQAB
      AoGBAJ8dRTQFhIllbHx4GLbpTQsWXJ6w4hZvskJKCLM/o8R4n+0W45pQ1xEiYKdA
      Z/DRcnjltylRImBD8XuLL8iYOQSZXNMb1h3g5/UGbUXLmCgQLOUUlnYt34QOQm+0
      KvUqfMSFBbKMsYBAoQmNdTHBaz3dZa8ON9hh/f5TT8u0OWNRAkEA5opzsIXv+52J
      duc1VGyX3SwlxiE2dStW8wZqGiuLH142n6MKnkLU4ctNLiclw6BZePXFZYIK+AkE
      xQ+k16je5QJBAN0TIKMPWIbbHVr5rkdUqOyezlFFWYOwnMmw/BKa1d3zp54VP/P8
      +5aQ2d4sMoKEOfdWH7UqMe3FszfYFvSu5KMCQFMYeFaaEEP7Jn8rGzfQ5HQd44ek
      lQJqmq6CE2BXbY/i34FuvPcKU70HEEygY6Y9d8J3o6zQ0K9SYNu+pcXt4lkCQA3h
      jJQQe5uEGJTExqed7jllQ0khFJzLMx0K6tj0NeeIzAaGCQz13oo2sCdeGRHO4aDh
      HH6Qlq/6UOV5wP8+GAcCQFgRCcB+hrje8hfEEefHcFpyKH+5g1Eu1k0mLrxK2zd+
      4SlotYRHgPCEubokb2S1zfZDWIXW3HmggnGgM949TlY=
      -----END RSA PRIVATE KEY-----
    verification_key: |+
      -----BEGIN PUBLIC KEY-----
      MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHFr+KICms+tuT1OXJwhCUmR2d
      KVy7psa8xzElSyzqx7oJyfJ1JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMX
      qHxf+ZH9BL1gk9Y6kCnbM5R60gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBug
      spULZVNRxq7veq/fzwIDAQAB
      -----END PUBLIC KEY-----
  ldap: null
  login: null
  no_ssl: true
  restricted_ips_regex: 10\.\d{1,3}\.\d{1,3}\.\d{1,3}|192\.168\.\d{1,3}\.\d{1,3}|169\.254\.\d{1,3}\.\d{1,3}|127\.\d{1,3}\.\d{1,3}\.\d{1,3}|172\.1[6-9]{1}\.\d{1,3}\.\d{1,3}|172\.2[0-9]{1}\.\d{1,3}\.\d{1,3}|172\.3[0-1]{1}\.\d{1,3}\.\d{1,3}
  scim:
    external_groups: null
    userids_enabled: true
    users:
    - admin|admin|scim.write,scim.read,openid,cloud_controller.admin,clients.read,clients.write,doppler.firehose
  spring_profiles: null
  url: https://uaa.10.244.0.34.xip.io
  user: null
uaadb:
  address: 10.244.0.30
  databases:
  - citext: true
    name: uaadb
    tag: uaa
  db_scheme: postgresql
  port: 5524
  roles:
  - name: uaaadmin
    password: admin
    tag: admin



Can anyone help me? Thanks!


Best Regards,

Frank


Re: Release Notes for v210

James Bayer
 

On Sat, May 23, 2015 at 9:41 PM, James Bayer <jbayer(a)pivotal.io> wrote:

please note that this release addresses CVE-2015-3202 and CVE-2015-1834
and we strongly recommend upgrading to this release. more details will be
forthcoming after the long United States holiday weekend.

https://github.com/cloudfoundry/cf-release/releases/tag/v210

https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/v210

--
Thank you,

James Bayer
--
Thank you,

James Bayer


CVE-2015-1834 CC Path Traversal vulnerability

James Bayer
 

Severity: Medium

Vendor: Cloud Foundry Foundation

Vulnerable Versions: Cloud Foundry Runtime Releases prior to 208

CVE References: CVE-2015-1834
Description:

A path traversal vulnerability was identified in the Cloud Foundry
component Cloud Controller. Path traversal is the "breaking out" of a given
directory structure through relative file paths in user input. It aims at
accessing files and directories that are stored outside the web root
folder, allowing disallowed reads or even execution of arbitrary system
commands. An attacker could, for instance, use a file path parameter to
inject "../" sequences in order to navigate through the file system. In
this particular case a remote authenticated attacker can exploit the
identified vulnerability in order to upload arbitrary files to the server
running a Cloud Controller instance – outside the isolated application
container.
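
To illustrate the class of bug in general terms (this is a generic sketch of
the kind of check that defeats "../" sequences, not the actual Cloud
Controller patch):

    # Resolve a user-supplied relative path against a base directory and
    # refuse anything that escapes it ("../../etc/passwd" style input).
    def resolve_within(base_dir, user_path)
      base     = File.expand_path(base_dir)
      resolved = File.expand_path(user_path, base)
      unless resolved == base || resolved.start_with?(base + File::SEPARATOR)
        raise ArgumentError, "path escapes #{base}"
      end
      resolved
    end

    resolve_within('/var/vcap/data/uploads', 'droplet/app.zip')   # allowed
    resolve_within('/var/vcap/data/uploads', '../../etc/passwd')  # raises ArgumentError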

Affected Products and Versions:

Cloud Foundry Runtime cf-release versions v207 or earlier are susceptible
to the vulnerability

Mitigation:

The Cloud Foundry project recommends that Cloud Foundry Runtime Deployments
running Release v207 or earlier upgrade to v208 or later.

Credit:

This issue was identified by Swisscom / SEC Consult

--
Thank you,

James Bayer


USN-2617-1 and CVE-2015-3202 FUSE vulnerability

James Bayer
 

Severity: High

Vendor: Canonical Ubuntu

Vulnerable Versions: Canonical Ubuntu 10.04 and 14.04

CVE References: USN-2617-1, CVE-2015-3202
Description:

A privilege escalation vulnerability was identified in a component used in
the Cloud Foundry stacks lucid64 and cflinuxfs2. The FUSE package
incorrectly filtered environment variables and could be made to overwrite
files as an administrator, allowing a local attacker to gain administrative
privileges.
Affected Products and Versions:

- Cloud Foundry Runtime cf-release versions v183 and all releases through
  v209

Mitigation:

The Cloud Foundry project recommends that Cloud Foundry Runtime Deployments
running Release v209 or earlier upgrade to v210 or later. Note that the
FUSE package has been removed from the lucid64 stack in the v210 release
while it has been patched in the cflinuxfs2 stack (Trusty). Developers
should use the cflinuxfs2 stack in order to use FUSE with v210 and higher.

Credit:

This issue was identified by Tavis Ormandy


--
Thank you,

James Bayer


Release Notes for v210

James Bayer
 

please note that this release addresses CVE-2015-3202 and CVE-2015-1834 and
we strongly recommend upgrading to this release. more details will be
forthcoming after the long United States holiday weekend.

https://github.com/cloudfoundry/cf-release/releases/tag/v210

https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/v210

--
Thank you,

James Bayer


Re: Doppler zoning query

James Bayer
 

john,

can you say more about "receiving no load at all"? for example, if you
restart one of the app instances in zone 4 or zone 5, do you see logs with
"cf logs"? you can target a single app instance index to get restarted using
a "cf curl" command for terminating an app index [1]. you can find the
details in the json output from "cf stats", which should show you the
private IPs of the DEAs hosting your app and help you figure out which zone
each app index is in.
[1] http://apidocs.cloudfoundry.org/209/apps/terminate_the_running_app_instance_at_the_given_index.html
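
as a rough sketch of that mapping step (shelling out to the cf CLI and
grouping instance indexes by DEA host; the app name "hello-world" follows
the example in this thread, and everything else is an assumption about your
environment rather than a CF-provided tool):

    # map_instances.rb - print which DEA host serves each app instance, using
    # the /v2/apps/:guid/stats endpoint. To restart a single index, use the
    # terminate endpoint referenced in [1], e.g.:
    #   cf curl -X DELETE /v2/apps/<guid>/instances/<index>
    require 'json'

    app   = ARGV[0] || 'hello-world'
    guid  = `cf app #{app} --guid`.strip
    stats = JSON.parse(`cf curl /v2/apps/#{guid}/stats`)

    stats.sort_by { |index, _| index.to_i }.each do |index, info|
      host = (info['stats'] || {})['host']
      puts "instance #{index} -> #{host}"
    end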

if you are seeing logs from zone 4 and zone 5, then what might be happening
is that for some reason DEAs in zone 4 or zone 5 are not routable somewhere
along the path. reasons for that could be:
* DEAs in Zone 4 / Zone 5 not getting apps that are hosted there listed in
the routing table
* The routing table may be correct, but for some reason the routers cannot
reach DEAs in zone 4 or zone 5 with outbound traffic, and the routers fail
over to instances on DEAs 1-3 that they can reach
* some other mystery

On Fri, May 22, 2015 at 2:06 PM, john mcteague <john.mcteague(a)gmail.com>
wrote:

We map our DEAs, dopplers and traffic controllers into 5 logical zones
using the various zone properties of doppler, metron_agent and
traffic_controller. This aligns with our physical failure domains in
OpenStack.

During a recent load test we discovered that zones 4 and 5 were receiving
no load at all; all traffic went to zones 1-3.

What would cause this unbalanced distribution? I have a single app running
30 instances and have verified it is evenly balanced across all 5 zones
(6 instances in each). I have additionally verified that each logical zone
in the bosh yml contains 1 DEA, doppler server and traffic controller.

Thanks,
John



--
Thank you,

James Bayer


Release Notes for v209

Shannon Coen