
Re: Release Notes for v210

Guillaume Berche
 

Joseph,

Double-checking after a good night of sleep, my last check was wrong:
reviewing the diff, I had checked for the presence of the job properties, and
missed, at the bottom of the file, the presence of
``metron_agent.deployment`` within the top-level ``properties``.

So the root cause of my issue was indeed a lack of "git submodule
update", which had left cf-release/templates/cf-lamb.yml outdated.

Sorry for the noise and extra work involved reviewing this. Thanks again
for your help and your prompt merge of the nfs template issue.

Guillaume.

On Thu, Jun 4, 2015 at 1:42 AM, CF Runtime <cfruntime(a)gmail.com> wrote:

Guillaume,

We run the pipelines using the Docker image built from
cf-release/pipeline-image/Dockerfile, which checks out the spiff repo and
builds it, so it should be 1.0.6 since that seems to be where master is
currently.

Which SHA do you have checked out for cf-release/src/loggregator?

Do you see:

metron_agent:
  deployment: (( meta.environment ))

at the bottom of cf-release/templates/cf-lamb.yml?

Joseph Palermo
CF Runtime Team

On Wed, Jun 3, 2015 at 1:17 PM, Guillaume Berche <bercheg(a)gmail.com>
wrote:

Joseph,

I just checked, and I indeed still reproduce the issue against the
cf-release v210 branch with the submodule properly updated (including
loggregator).

What other info could be useful to diagnose the root cause and the
environment difference with the CF Runtime pipeline? Are the pipelines
indeed using the latest released spiff version (1.0.6 [8])?

Guillaume.

[8] https://github.com/cloudfoundry-incubator/spiff/releases/tag/v1.0.6


On Wed, Jun 3, 2015 at 9:46 PM, Guillaume Berche <bercheg(a)gmail.com>
wrote:

Hi Joseph,

Thanks for your prompt response and the details about the current
infrastructures covered by the runtime pipelines. Great to hear the nfs
template will be merged soon, thanks!

I'm indeed using the generate_deployment_manifest from cf-release, and
was still experiencing the issue described in [5], until I patched both
cf-release/templates/cf-lamb.yml (which happens to belong to the loggregator
repo) and cf-jobs.yml as in [2].

I'll double check tomorrow whether I could have been caught by a transient
lack of "git submodule update", which would explain the problem on my
side. If this is the case, then I'm sorry for the noise and the extra
associated work.

Regards,

Guillaume.

[2] https://github.com/cloudfoundry/cf-release/pull/696
[5] https://github.com/cloudfoundry/cf-release/issues/690
[7] https://github.com/cloudfoundry/bosh-lite/issues/265

On Wed, Jun 3, 2015 at 7:50 PM, CF Runtime <cfruntime(a)gmail.com> wrote:

Hi Guillaume,

The metron_agent.deployment default can be found in
cf-release/templates/cf-lamb.yml which should get merged automatically if
using the generate_deployment_manifest script in cf-release.

We do currently have pipelines for all supported environments (AWS,
vSphere, OpenStack, and BoshLite).

Spiff templates are still the recommended way of deploying cf-release,
and I would expect the nfs template change to be merged today as it is near
the top of our backlog.

Joseph Palermo
CF Runtime Team

On Wed, Jun 3, 2015 at 7:32 AM, Guillaume Berche <bercheg(a)gmail.com>
wrote:

Hi,

Thanks for the v210 announcement and the associated release note. It
seems that the v209-announced introduction of a new mandatory
metron_agent.deployment property did not make it into the default spiff
templates [5]. Note I tried updating v209 release note formatting to make
this more explicit [6].

I'm wondering whether the Pivotal runtime/release team has a
cf-release pipeline for the vSphere infrastructure (I suspect the AWS-based
pipelines were fine)? Is such a pipeline using the spiff templates
in cf-release/templates [4], or has it moved to something else such as
cf-boshworkspace [3]?

If the spiff templates in cf-release/templates are still the recommended
way of deploying CF, is there a way to prioritize the merge of PRs for
known issues in v211, such as [1] and [2], so as to avoid the need for the
cf community to maintain its own fork of cf-release/templates?

Thanks in advance,

Guillaume.

[1] https://github.com/cloudfoundry/cf-release/pull/689
[2] https://github.com/cloudfoundry/cf-release/pull/696
[3] https://github.com/cloudfoundry-community/cf-boshworkspace
[4] https://github.com/cloudfoundry/cf-release/tree/master/templates
[5] https://github.com/cloudfoundry/cf-release/issues/690
[6] https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/v209/fdae17795c61691f96f90cc9fd7be90945252937



On Wed, May 27, 2015 at 7:59 AM, Dieu Cao <dcao(a)pivotal.io> wrote:

The cf-release v210 was released on May 23rd, 2015

Runtime

- Addressed USN-2617-1 <http://www.ubuntu.com/usn/usn-2617-1/> CVE-2015-3202
  <http://people.canonical.com/~ubuntu-security/cve/2015/CVE-2015-3202.html>
  FUSE vulnerabilities
  - Removed fuse binaries from lucid64 rootfs. Apps running on the lucid64
    stack requiring fuse should switch to cflinuxfs2. details
    <https://www.pivotaltracker.com/story/show/95186578>
  - fuse binaries updated on cflinuxfs2 rootfs. details
    <https://www.pivotaltracker.com/story/show/95177810>
- [Experimental] Work continues on support for Asynchronous Service Instance
  Operations. details <https://www.pivotaltracker.com/epic/show/1561148>
  - Support for configurable max polling duration
- [Experimental] Work continues on /v3 and Application Process Types.
  details <https://www.pivotaltracker.com/epic/show/1334418>
- [Experimental] Work continues on Route API. details
  <https://www.pivotaltracker.com/epic/show/1590160>
- [Experimental] Work continues on Context Path Routes. details
  <https://www.pivotaltracker.com/epic/show/1808212>
- Work continues on support for Service Keys. details
  <https://www.pivotaltracker.com/epic/show/1743366>
- Upgrade etcd server to 2.0.1. details
  <https://www.pivotaltracker.com/story/show/91070214>
  - Should be run as 1 node (for small deployments) or 3 nodes spread
    across zones (for HA)
  - Also upgrades hm9k dependencies. LAMB client to be upgraded in a
    subsequent release. Older client is compatible.
- cloudfoundry/cf-release #670
  <https://github.com/cloudfoundry/cf-release/pull/670>: Be able to specify
  timeouts for acceptance tests without defaults in the spec. details
  <https://www.pivotaltracker.com/story/show/93914198>
- Fix bug where ssl-enabled routers were not draining properly. details
  <https://www.pivotaltracker.com/story/show/94718480>
- cloudfoundry/cloud_controller_ng #378
  <https://github.com/cloudfoundry/cf-release/pull/378>: current usage
  against the org quota. details
  <https://www.pivotaltracker.com/story/show/94171010>

UAA

- Bumped to UAA 2.3.0. details
  <https://github.com/cloudfoundry/uaa/releases/tag/2.3.0>

Used Configuration

- BOSH Version: 152
- Stemcell Version: 2889
- CC API Version: 2.27.0

Commit summary
<http://htmlpreview.github.io/?https://github.com/cloudfoundry-community/cf-docs-contrib/blob/master/release_notes/cf-210-whats-in-the-deploy.html>

Compatible Diego Version

- final release 0.1247.0 commit
  <https://github.com/cloudfoundry-incubator/diego-release/commit/a122a78eeb344bbfc90b7bcd0fa987d08ef1a5d1>

Manifest and Job Spec Changes

- properties.acceptance_tests.skip_regex added
- properties.app_ssh.host_key_fingerprint added
- properties.app_ssh.port defaults to 2222
- properties.uaa.newrelic added
- properties.login.logout.redirect.parameter.whitelist


On Sat, May 23, 2015 at 9:50 PM, James Bayer <jbayer(a)pivotal.io>
wrote:

CVE-2015-3202 details:
http://lists.cloudfoundry.org/pipermail/cf-dev/2015-May/000194.html

CVE-2015-1834 details:
http://lists.cloudfoundry.org/pipermail/cf-dev/2015-May/000195.html

On Sat, May 23, 2015 at 9:41 PM, James Bayer <jbayer(a)pivotal.io>
wrote:

Please note that this release addresses CVE-2015-3202 and
CVE-2015-1834, and we strongly recommend upgrading to this release. More
details will be forthcoming after the long United States holiday weekend.

https://github.com/cloudfoundry/cf-release/releases/tag/v210

https://github.com/cloudfoundry-community/cf-docs-contrib/wiki/v210

--
Thank you,

James Bayer


--
Thank you,

James Bayer



Re: Staging error: no available stagers (status code: 400, error code: 170001)

iamflying
 

I got the nats message on 'staging.advertise'. It shows enough resources,
but something still seems not right, and it does not explain the error:
Server error, status code: 400, error code: 170001, message: Staging error:
no available stagers.

[#41] Received on [staging.advertise] : '{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
[#42] Received on [staging.advertise] : '{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
[#43] Received on [staging.advertise] : '{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'
[#44] Received on [staging.advertise] : '{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}'


+------------------------------------+---------+---------------+---------------+
| Job/index                          | State   | Resource Pool | IPs           |
+------------------------------------+---------+---------------+---------------+
| api_worker_z1/0                    | running | small_z1      | 100.64.0.23   |
| api_z1/0                           | running | medium_z1     | 100.64.0.21   |
| clock_global/0                     | running | medium_z1     | 100.64.0.22   |
| etcd_z1/0                          | running | medium_z1     | 100.64.1.8    |
| ha_proxy_z1/0                      | running | router_z1     | 100.64.1.0    |
|                                    |         |               | 137.172.74.90 |
| hm9000_z1/0                        | running | medium_z1     | 100.64.0.24   |
| loggregator_trafficcontroller_z1/0 | running | small_z1      | 100.64.0.27   |
| loggregator_z1/0                   | running | medium_z1     | 100.64.0.26   |
| login_z1/0                         | running | medium_z1     | 100.64.0.20   |
| nats_z1/0                          | running | medium_z1     | 100.64.1.2    |
| nfs_z1/0                           | running | medium_z1     | 100.64.1.3    |
| postgres_z1/0                      | running | medium_z1     | 100.64.1.4    |
| router_z1/0                        | running | router_z1     | 100.64.1.5    |
| runner_z1/0                        | running | runner_z1     | 100.64.0.25   |
| stats_z1/0                         | running | small_z1      | 100.64.0.18   |
| uaa_z1/0                           | running | medium_z1     | 100.64.0.19   |
+------------------------------------+---------+---------------+---------------+


- 100.64.0.25

m1.large | 8GB RAM | 4 VCPU | 20.0GB Disk

92cf66ec-f2e1-4505-bd25-28c02e991535 | m1.large | 8192 | 20 | 20 | | 4 | 1.0 | True


On Thu, Jun 4, 2015 at 11:57 AM, Guangcai Wang <guangcai.wang(a)gmail.com>
wrote:


From the source code at
/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/app_stager_task.rb:26,
it seems there is not enough memory or disk.

def stage(&completion_callback)
  @stager_id = @stager_pool.find_stager(@app.stack.name, staging_task_memory_mb, staging_task_disk_mb)
  raise Errors::ApiError.new_from_details('StagingError', 'no available stagers') unless @stager_id
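
The raise fires when find_stager returns nil, i.e. when no DEA advertisement offers the requested stack together with enough advertised memory and disk for the staging task. As a rough illustration of that check only (a sketch, not the actual Cloud Controller logic; the JSON field names come from the staging.advertise messages above, and the 1024/4096 MB requirements are made-up example values):

package main

import (
    "encoding/json"
    "fmt"
)

// StagingAdvertisement mirrors the fields in the staging.advertise JSON above.
type StagingAdvertisement struct {
    ID              string   `json:"id"`
    Stacks          []string `json:"stacks"`
    AvailableMemory int      `json:"available_memory"`
    AvailableDisk   int      `json:"available_disk"`
}

// canStage sketches the check find_stager performs: the DEA must advertise
// the requested stack and enough memory and disk for the staging task.
// This is an illustration, not the actual Cloud Controller code.
func canStage(ad StagingAdvertisement, stack string, memMB, diskMB int) bool {
    stackOK := false
    for _, s := range ad.Stacks {
        if s == stack {
            stackOK = true
        }
    }
    return stackOK && ad.AvailableMemory >= memMB && ad.AvailableDisk >= diskMB
}

func main() {
    raw := `{"id":"0-05b732df21c54f9cab3ac42869b4be64","stacks":["lucid64","cflinuxfs2"],"available_memory":3072,"available_disk":4096}`
    var ad StagingAdvertisement
    if err := json.Unmarshal([]byte(raw), &ad); err != nil {
        panic(err)
    }
    // Example staging requirements only; the real values come from the
    // Cloud Controller configuration (staging_task_memory_mb / _disk_mb).
    fmt.Println(canStage(ad, "cflinuxfs2", 1024, 2048)) // true: fits in 3072/4096
    fmt.Println(canStage(ad, "cflinuxfs2", 4096, 2048)) // false: "no available stagers"
}

If the configured staging memory or disk requirement exceeds what the DEA advertises (3072 MB memory / 4096 MB disk above), every advertisement is rejected and the API returns the "no available stagers" error.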


However, this is my first app and it should be light. The DEA is using
m1.large, which is:
m1.large | 4096 | 20

Has anyone seen the same error? Any suggestions on the manifest or debugging tips?

Another question: I want to add more debug information to
cloud_controller_ng.log. I tried to add some code in
/var/vcap/packages/cloud_controller_ng/cloud_controller_ng/lib/cloud_controller/dea/app_stager_task.rb,
but it did not show up in the log. How can I do this?


On Thu, Jun 4, 2015 at 10:14 AM, Guangcai Wang <guangcai.wang(a)gmail.com>
wrote:

attached the deployment manifest. This is generated by spiff and then I
modified it.

On Thu, Jun 4, 2015 at 12:47 AM, Takeshi Morikawa <moog0814(a)gmail.com>
wrote:

Please check the 'staging.advertise' nats message:
https://github.com/cloudfoundry/dea_ng#staging

sample command:
bundle exec nats-sub -s nats://[nats.user]:[nats.password]@[nats_ipaddress]:[nats.port] 'staging.advertise'


I have one additional request: can you share your bosh deployment manifest?



Re: getting the cf version from the api

Dieu Cao <dcao@...>
 

There's not currently a way to determine this exactly.
As Takeshi suggests, the api version is the closest thing, but it does not
get bumped with every cf-release.

-Dieu
CF Runtime PM

On Wed, Jun 3, 2015 at 5:38 PM, Takeshi Morikawa <moog0814(a)gmail.com> wrote:

I was just thinking of the same thing.

http://cf-api-checker.mybluemix.net/cf/version/
http://cf-api-checker.mybluemix.net/api/version/

I tried to check in the following ways:

STEP1: check api version
https://api.cf-domain/v2/info

STEP2: check cf-release tag & submodule ccng source code

https://github.com/cloudfoundry/cf-release/tree/v210/src

https://github.com/cloudfoundry/cloud_controller_ng/blob/9cc1df0eb8c3039f19fc6e74bd243a342560490b/lib/cloud_controller/constants.rb

However, the relationship between api_version and cf-release is not one-to-one.
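
For reference, the api version from STEP1 can be read from the unauthenticated /v2/info endpoint; a minimal Go sketch (api.example.com is a placeholder for your own API endpoint):

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

// info holds only the field we care about from the /v2/info response.
type info struct {
    APIVersion string `json:"api_version"`
}

func main() {
    // Placeholder endpoint; replace with your own system domain.
    resp, err := http.Get("https://api.example.com/v2/info")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var i info
    if err := json.NewDecoder(resp.Body).Decode(&i); err != nil {
        panic(err)
    }
    fmt.Println(i.APIVersion) // e.g. "2.27.0" for cf-release v210, per the release notes above
}

As noted, though, several cf-release versions can ship the same api_version, so this identifies the CC API level rather than the exact cf-release.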



2015-06-04 9:21 GMT+09:00 Naga Rakesh <nagarakesh4(a)gmail.com>:

Hello,

I wanted to know if there is a way to determine the CF version dynamically
from the API.

Thanks,
Venkata



Service offering testService not found

Supraja Yasoda <ykmsupraja@...>
 

Hi,

I deleted a service broker after removing its service instances. I created
it again, but now I am unable to enable-service-access to the service; I get
the error "Service offering testService not found".
When I do a GET, I see the catalog has the service id and name under the
service definition. Could someone suggest what might be wrong?
--

Regards,


Re: Syslog Drain to Logstash Problems

Josh Ghiloni
 

We’ll check that, thanks!

Josh Ghiloni
Senior Consultant
303.932.2202 o | 303.590.5427 m | 303.565.2794 f
jghiloni(a)ecsteam.com

ECS Team
Technology Solutions Delivered
ECSTeam.com

On Jun 3, 2015, at 15:41, John Tuley <jtuley(a)pivotal.io> wrote:

Steve,

Until recently (cf-release v198), binding a syslog service required restarting the app. If you're post-v198, it should Just Work.

However, one of the things that could be in your way is network security. In order to forward logs to your drain, your loggregator servers must be able to access that server. This is the most common cause we see of systems failing to forward to syslog drains.

Please let us know if you have more questions.

– John Tuley

On Wed, Jun 3, 2015 at 12:37 PM, Steve Wall <steve.wall(a)primetimesoftware.com> wrote:
Hello,
We are having problems draining log messages to Logstash. The drain is setup as a user provided service.

cf cups logstash-drain -l syslog://xx.xx.xx.xx:5000

And then bound to the service.

cf bind-service myapp logstash-drain

But no log messages are coming through to Logstash. Or more specifically, we are using ELK and the messages aren't seen through Kibana.

We were able to log into the DEA and using netcat (nc), messages were successfully submitted to the ELK stack.

nc -w0 -u xx.xx.xx.xx 5000 <<< "logging from remote"

Any suggestions on how to debug this further?
-Steve
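
One way to narrow this down is to run a throwaway listener on the drain port of the ELK host and watch whether anything arrives when the app emits logs. A minimal Go sketch, assuming a TCP drain on port 5000 to match the syslog:// URL above (the nc test above used UDP, so a packet listener would be needed for that case):

package main

import (
    "bufio"
    "fmt"
    "log"
    "net"
)

// A throwaway TCP listener that prints every line it receives, to confirm
// that drain traffic from the loggregator VMs actually reaches port 5000.
// (Switch to net.ListenPacket("udp", ":5000") to test a UDP drain instead.)
func main() {
    ln, err := net.Listen("tcp", ":5000")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("listening on :5000")
    for {
        conn, err := ln.Accept()
        if err != nil {
            log.Fatal(err)
        }
        go func(c net.Conn) {
            defer c.Close()
            scanner := bufio.NewScanner(c)
            for scanner.Scan() {
                fmt.Printf("%s <- %s\n", c.RemoteAddr(), scanner.Text())
            }
        }(conn)
    }
}

If lines show up here but not in Kibana, the problem is on the ELK side; if nothing arrives, it points at the network path from the loggregator servers, as John describes above.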




Re: Staging error: no available stagers (status code: 400, error code: 170001)

iamflying
 

attached the deployment manifest. cf-deployment-single-az.yml
<http://cf-dev.70369.x6.nabble.com/file/n290/cf-deployment-single-az.yml>





Re: Announcing Experimental support for Asynchronous Service Operations

Onsi Fakhouri <ofakhouri@...>
 

Well done Services API! This is an awesome milestone!

On Wed, Jun 3, 2015 at 5:04 PM, Chip Childers <cchilders(a)cloudfoundry.org>
wrote:

Awesome news! Long time coming, and it opens up a whole world of
additional capabilities for users.

Nice work everyone!



On Jun 4, 2015, at 9:00 AM, Shannon Coen <scoen(a)pivotal.io> wrote:

On behalf of the Services API team, including Dojo participants from IBM
and SAP, I'm pleased to announce experimental availability and published
documentation for this much-anticipated feature.

As of cf-release v208 and CLI v6.11.1, Cloud Foundry now supports an
enhanced service broker integration in support of long-running
provisioning, update, and delete operations. This significantly broadens
the supported use cases for Cloud Foundry Marketplace Services, and I can't
wait to hear what creative things the ecosystem does with it. Provision
VMs, orchestrate clusters, install software, move data... yes, your broker
can even open support tickets to have those things done manually!

This feature is currently considered experimental, as we'd like you all to
review our docs, try out the feature, and give us feedback. We are very
interested to hear about any confusion in the docs or the UX, and any
sticky issues you encounter in implementation. Our goal is for our docs to
enable a painless, intuitive (can we hope for joyful?) implementation
experience.

We have not bumped the broker API yet for this feature. You'll notice that
our documentation for the feature is separate from the stable API docs at
this point. Once we're confident in the design (we're relying on your
feedback!), we'll bump the broker API version, move the docs for
asynchronous operations into the stable docs, AND implement support for
asynchronous bind/create-key and unbind/delete-key.

Documentation:
- http://docs.cloudfoundry.org/services/asynchronous-operations.html
- http://docs.cloudfoundry.org/services/api.html
Example broker for AWS (contributed by IBM):
- http://docs.cloudfoundry.org/services/examples.html
- https://github.com/cloudfoundry-samples/go_service_broker
Demo of the feature presented at CF Summit 2015:
- https://youtu.be/Ij5KSKrAq9Q

tl;dr

Cloud Foundry expects broker responses within 60 seconds. Now a broker can
return an immediate response indicating that a provision, update, or delete
operation is in progress. Cloud Foundry then returns a similar response to
the client, and begins polling the broker for the status of the operation.
Users, via API clients, can discover the status of the operation ("in
progress", "succeeded", or "failed"), and brokers can provide user-facing
messages in response to each poll which are exposed to users (e.g. "VMs
provisioned, installing software, 30% complete").
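
As a rough sketch of the broker side of this flow, based on the documentation linked above (the paths, the accepts_incomplete parameter, and the last_operation fields follow that documentation, but treat the details here, including the port and handler layout, as illustrative rather than authoritative):

package main

import (
    "encoding/json"
    "log"
    "net/http"
    "strings"
)

// lastOperation matches the polling response described above: state is one of
// "in progress", "succeeded" or "failed", and description is the user-facing
// message surfaced to API clients.
type lastOperation struct {
    State       string `json:"state"`
    Description string `json:"description"`
}

func main() {
    http.HandleFunc("/v2/service_instances/", func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")

        // Polling endpoint: GET /v2/service_instances/<guid>/last_operation
        if r.Method == http.MethodGet && strings.HasSuffix(r.URL.Path, "/last_operation") {
            // A real broker would look up the actual state of the instance here.
            json.NewEncoder(w).Encode(lastOperation{
                State:       "in progress",
                Description: "VMs provisioned, installing software, 30% complete",
            })
            return
        }

        // Provision/update/delete: kick off the long-running work elsewhere and
        // answer 202 Accepted so Cloud Foundry starts polling last_operation.
        if r.URL.Query().Get("accepts_incomplete") == "true" {
            w.WriteHeader(http.StatusAccepted)
            w.Write([]byte("{}"))
            return
        }

        // The caller cannot handle an asynchronous response for this operation.
        w.WriteHeader(http.StatusUnprocessableEntity)
        w.Write([]byte(`{"error":"AsyncRequired"}`))
    })

    log.Fatal(http.ListenAndServe(":8080", nil))
}

Cloud Foundry keeps polling last_operation until the broker reports "succeeded" or "failed", surfacing the description to users along the way.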

Thank you,

Shannon Coen
Product Manager, Cloud Foundry
Pivotal, Inc.



Re: Backwards-incompatible NOAA library change

Long Nguyen
 

Thanks for letting us know!

Long

On 3 Jun 2015 18:28, "Erik Jasiak" <ejasiak(a)pivotal.io> wrote:

Hi all,

On Thursday, June 4th at ~noon MDT, the Loggregator NOAA[1] Library will
introduce a backwards-incompatible change, after feedback from other teams
and the community. The NOAA library is used for consuming Cloud Foundry
Loggregator data, including the firehose. Any update via “go get” will
pull down these changes and will break compilations.

Details on the change:

NOAA is changing how it closes socket connections on requests.
Previously, the Close() function in consumer.go [2] did not behave as
expected - a client was required to close stopChan separately. Calling
Close() on a noaa.Consumer that was not stopped or in a retry loop would do
nothing.

Now calling Close() stops the consumer, and none of the APIs take a
stopChan. This is a much cleaner design that also works more in-line with
client expectations.
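
Schematically, the change looks like this (a stand-in type for illustration only; these are not the actual noaa types or method signatures):

package main

import (
    "fmt"
    "time"
)

// consumer is a stand-in for the pattern described above, not the noaa API.
type consumer struct {
    done chan struct{}
}

// Stream emits values until Close is called. Before the change, callers had
// to pass and close a separate stopChan; now Close() alone stops the consumer.
func (c *consumer) Stream(out chan<- string) {
    for i := 0; ; i++ {
        select {
        case <-c.done:
            close(out)
            return
        case out <- fmt.Sprintf("envelope %d", i):
            time.Sleep(10 * time.Millisecond)
        }
    }
}

func (c *consumer) Close() { close(c.done) }

func main() {
    c := &consumer{done: make(chan struct{})}
    out := make(chan string)
    go c.Stream(out)

    go func() {
        time.Sleep(50 * time.Millisecond)
        c.Close() // replaces closing a separate stopChan
    }()

    for msg := range out {
        fmt.Println(msg)
    }
}

In other words, the stop signal now lives inside the consumer, and Close() is the single way to stop it, rather than a stopChan argument passed to each call.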

Other info:
The Go language maintainers have taken a position that a repository should
“never make backwards incompatible changes” [3]. We recognize that, while
Go may have taken this position, for anyone using “go get” this makes
iterating on a community API difficult. We will explore with the Cloud
Foundry and Go communities how to better handle major API changes in the
future.

Many thanks,
Erik Jasiak
PM - Loggregator, Logging Analytics Metrics Boulder

[1] https://github.com/cloudfoundry/noaa
[2] https://github.com/cloudfoundry/noaa/blob/master/consumer.go#L55
[3] http://golang.org/doc/faq#get_version




Re: UAA : Is anyone utilizing the Password Score Feature

Winkler, Steve (GE Digital) <steve.winkler@...>
 

+1


From: Nicholas Calugar <ncalugar(a)pivotal.io>
Reply-To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org>
Date: Wednesday, June 3, 2015 at 12:20 PM
To: "Discussions about Cloud Foundry projects and the system overall." <cf-dev(a)lists.cloudfoundry.org>
Cc: CF Developers Mailing List <cf-dev(a)lists.cloudfoundry.org>
Subject: Re: [cf-dev] UAA : Is anyone utilizing the Password Score Feature

Hi Sree,

Not sure if this is possible, but maybe instead of requireAtLeastOneSpecialCharacter boolean, you could do minSpecialCharacters int (0-n)? This would allow more rigorous password policies.


Nick


Nicholas Calugar

On Wed, Jun 3, 2015 at 12:00 PM, Sree Tummidi <stummidi(a)pivotal.io> wrote:

Hi All,

The UAA team is in the process of implementing the Password Policy feature <https://www.pivotaltracker.com/story/show/82182984> for users stored in UAA.
The following properties around password strength will be exposed in the YML configuration.

#passwordPolicy:
#  minLength: 8
#  requireAtLeastOneSpecialCharacter: true
#  requireAtLeastOneUppercaseCharacter: true
#  requireAtLeastOneLowercaseCharacter: true
#  requireAtLeastOneDigit: true

The Password Policy feature is being implemented to support multi-tenant UAA. Each Tenant/Identity Zone will get its own password policy. The password policy for the default zone will be configurable via YML.
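
For illustration only, here is a small sketch of what such a policy check could look like, combining the YML properties above with the minSpecialCharacters generalization suggested earlier in this thread (UAA itself is Java; all names below are made up and are not UAA's implementation):

package main

import (
    "fmt"
    "unicode"
)

// policy mirrors the YML properties above, with the special-character rule
// generalized to a count, as suggested earlier in the thread.
type policy struct {
    MinLength            int
    MinSpecialCharacters int
    RequireUppercase     bool
    RequireLowercase     bool
    RequireDigit         bool
}

func (p policy) check(password string) error {
    var upper, lower, digit, special int
    for _, r := range password {
        switch {
        case unicode.IsUpper(r):
            upper++
        case unicode.IsLower(r):
            lower++
        case unicode.IsDigit(r):
            digit++
        default:
            special++
        }
    }
    switch {
    case len([]rune(password)) < p.MinLength:
        return fmt.Errorf("password must be at least %d characters", p.MinLength)
    case special < p.MinSpecialCharacters:
        return fmt.Errorf("password needs at least %d special characters", p.MinSpecialCharacters)
    case p.RequireUppercase && upper == 0:
        return fmt.Errorf("password needs an uppercase character")
    case p.RequireLowercase && lower == 0:
        return fmt.Errorf("password needs a lowercase character")
    case p.RequireDigit && digit == 0:
        return fmt.Errorf("password needs a digit")
    }
    return nil
}

func main() {
    p := policy{MinLength: 8, MinSpecialCharacters: 1, RequireUppercase: true, RequireLowercase: true, RequireDigit: true}
    fmt.Println(p.check("Passw0rd!")) // <nil>
    fmt.Println(p.check("password"))  // fails the length/character rules
}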


UAA currently supports the zxcvbn <https://blogs.dropbox.com/tech/2012/04/zxcvbn-realistic-password-strength-estimation/> style password score. This is currently exposed via the following properties in the YML configuration file. There is an end point <https://github.com/cloudfoundry/uaa/blob/master/docs/UAA-APIs.rst#query-the-strength-of-a-password-post-password-score> for querying the status of the same.

password-policy:
  required-score: <int>

We would like to understand if this password score feature is being utilized at all. We don't plan on making this feature multi-tenant and would like to drop it in favor of the new approach, which is much more granular and supports multi-tenancy.

Thanks,
Sree Tummidi
Sr. Product Manager
Identity - Pivotal Cloud Foundry